by Sarah K. White

AI in hiring might do more harm than good

Feature
Sep 17, 2021 | 10 mins
Artificial Intelligence, Diversity and Inclusion, Hiring

AI hiring tools claim to reduce bias by incorporating machine-based decisions, but at least in these early stages, AI hiring strategies have the potential to hurt DEI in your organization.


The use of artificial intelligence in the hiring process has increased in recent years with companies turning to automated assessments, digital interviews, and data analytics to parse through resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than help if companies aren’t strategic and thoughtful about how they implement the technology.

“The bias usually comes from the data. If you don’t have a representative data set, or any number of characteristics that you decide on, then of course you’re not going to be properly finding and evaluating applicants,” says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering.

The chief issue with AI’s use in hiring is that, in an industry that has been predominantly male and white for decades, the historical data on which AI hiring systems are built will ultimately have an inherent bias. Without diverse historical data sets to train AI algorithms, AI hiring tools are very likely to carry the same biases that have existed in tech hiring since the 1980s. Still, used effectively, AI can help create a more efficient and fair hiring process, experts say.

The dangers of bias in AI

Because AI algorithms are typically trained on past data, bias with AI is always a concern. In data science, bias is defined as an error that arises from faulty assumptions in the learning algorithm. Train your algorithms with data that doesn’t reflect the current landscape, and you will derive erroneous results. With hiring, especially in an industry like IT that has had historical issues with diversity, training an algorithm on historical hiring data can be a big mistake.
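To make that mechanism concrete, here is a minimal, purely synthetic sketch in Python, not drawn from any real vendor's product: the group labels, skill feature, and thresholds are all illustrative assumptions. A model fit to historically skewed hiring decisions ends up scoring two equally qualified applicants very differently.

```python
# Minimal synthetic sketch: a model trained on skewed historical hiring
# decisions reproduces that skew. All data here is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 0 = historically excluded, 1 = favored
skill = rng.normal(0, 1, n)          # true qualification, same for both groups

# Past hiring decisions depended on skill AND group membership (the bias).
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

# Train on the biased history, with group membership available as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in group membership.
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])
# The favored-group applicant scores far higher despite identical skill.
```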

“It’s really hard to ensure a piece of AI software isn’t inherently biased or has biased effects,” says Ben Winters, an AI and human rights fellow at the Electronic Privacy Information Center. While steps can be taken to avoid this, he adds, “many systems have been shown to have biased effects based on race and disability.”

If you don’t have appreciable diversity in your data set, then it’s impossible for an algorithm to know how individuals from underrepresented groups would have performed in the past. Instead, your algorithm will be biased toward what your data set represents and will compare all future candidates to that archetype, says Kovačević.

“For example, if Black people were systematically excluded from the past, and if you had no women in the pipeline in the past, and you create an algorithm based on that, there is no way the future will be properly predicted. If you hire only from ‘Ivy League schools,’ then you really don’t know how an applicant from a lesser-known school will perform, so there are several layers of bias,” she says.

Wendy Rentschler, head of corporate social responsibility, diversity, equity, and inclusion at BMC Software, is keenly aware of the potential negatives that AI can bring to the hiring process. She points to an infamous case of Amazon’s attempt at developing an AI recruiting tool as a prime example: The company had to shut the project down because the algorithm discriminated against women.

“If the largest and greatest software company can’t do it, I give great pause to all the HR tech and their claims of being able to do it,” says Rentschler.

Some AI hiring software companies make big claims, but whether their software can help determine the right candidate remains to be seen. The technology can help companies streamline the hiring process and find new ways of identifying qualified candidates using AI, but it’s important not to let lofty claims cloud judgment.

If you’re trying to improve DEI in your organization, AI can seem like a quick fix or magic bullet, but if you’re not strategic about your use of AI in the hiring process, it can backfire. The key is to ensure your hiring process and the tools you’re using aren’t excluding traditionally underrepresented groups.

Discrimination with AI

It’s up to companies to ensure they’re using AI in the hiring process as ethically as possible and not falling victim to overblown claims of what the tools can do. Matthew Scherer, senior policy counsel for worker privacy at the Center for Democracy & Technology, points out that, since the HR department doesn’t generate revenue and is usually labeled as an expense, leaders are sometimes eager to bring in automation technology that can help cut costs. That eagerness, however, can cause companies to overlook potential negatives of the software they’re using. Scherer also notes that a lot of the claims made by AI hiring software companies are often overblown, if not completely false.

“Particularly tools that claim to do things like analyze people’s facial expressions, their tone of voice, anything that measures aspects of personality — that’s snake oil,” he says.

At best, tools that claim to measure tone of voice, expressions, and other aspects of a candidate’s personality in, for example, a video interview are “measuring how culturally ‘normal’ a person is,” which can ultimately exclude candidates with disabilities or any candidate who doesn’t fit what the algorithm determines is a typical candidate. These tools can also put disabled candidates in the uncomfortable position of having to decide whether to disclose a disability before the interview process. They may worry that if they don’t disclose, they won’t get the accommodations needed for the automated assessment, but they might not be comfortable disclosing a disability that early in the hiring process, or at all.

And as Rentschler points out, BIPOC, women, and candidates with disabilities are often accustomed to the practice of “code switching” in interviews — which is when underrepresented groups adjust the way they speak, appear, or behave in order to make others more comfortable. AI systems might pick up on this and incorrectly identify their behavior as inauthentic or dishonest, turning away potentially strong candidates.

Scherer says discrimination laws fall into two categories: disparate impact, which is unintentional discrimination; and disparate treatment, which is intentional discrimination. It’s difficult to design a tool that can avoid disparate impact “without explicitly favoring candidates from particular groups, which would constitute disparate treatment under federal law.”

Regulations in AI hiring

AI is a relatively new technology, and oversight remains scant when it comes to legislation, policies, and laws around privacy and trade practices. Winters points to a 2019 FTC complaint filed by EPIC alleging HireVue was using deceptive business practices related to the use of facial recognition in its hiring software.

HireVue claimed to offer software that “tracks and analyzes the speech and facial movements of candidates to be able to analyze fit, emotional intelligence, communication skills, cognitive ability, problem solving ability, and more.” HireVue ultimately pulled back on its facial recognition claims and the use of the technology in its software.

But there’s similar technology out there that uses games to “purportedly measure subjective behavioral attributes and match with organizational fit” or that will use AI to “crawl the internet for publicly available information about statements by a candidate then analyze it for potential red flags or fit,” according to Winters.

There are also concerns about the amount of data that AI can collect on a candidate while analyzing video interviews, assessments, resumes, LinkedIn profiles, or other public social media profiles. Oftentimes, candidates might not even know they’re being analyzed by AI tools in the interview process, and there are few regulations on how that data is managed.

“Overall, there is currently very little oversight for AI hiring tools. Several state or local bills have been introduced. However, many of these bills have significant loopholes — namely not applying to government agencies and offering significant workarounds. The future of regulation in AI-supported hiring should require significant transparency, controls on the application of these tools, strict data collection, use, and retention limits, and independent third-party testing that is published freely,” says Winters.

Responsible use of AI in hiring

Rentschler and her team at BMC have focused on finding ways to use AI to help the company’s “human capital be more strategic.” They’ve implemented tools that screen candidates quickly using skills-based assessments for the role they’re applying to. BMC has also used AI to identify problematic language in its job descriptions, ensuring they’re gender-neutral and inclusive, and has employed the software to connect new hires with their benefits and internal organizational information during onboarding. Rentschler’s objective is to find ways to implement AI and automation that help the humans on her team do their jobs more effectively, rather than replace them.

While AI algorithms can carry inherent bias based on historical hiring data, one way to avoid this is to focus on skills-based hiring. Rentschler’s team uses AI tools only to identify candidates who have the specific skill sets it is looking to add to the workforce, ignoring identifiers such as education, gender, names, and other potentially identifying information that might have historically excluded a candidate from the process. By doing this, she has hired candidates from unexpected backgrounds, Rentschler says, including a Syrian refugee who was originally a dentist but also had some coding experience. Because the system was looking only for candidates with coding skills, the former dentist made it past the filter and was hired by the company.
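A skills-first screen of this kind can be thought of as matching on required skills while stripping identifying fields before anyone reviews a record. The sketch below is hypothetical, not BMC's system; the field names, required skills, and candidate records are all assumptions for illustration.

```python
# Hypothetical sketch of skills-first screening: match only on required skills
# and drop identifying fields. Field names and requirements are illustrative.
from typing import Optional

REQUIRED_SKILLS = {"python", "sql"}              # assumed role requirements
IGNORED_FIELDS = {"name", "gender", "school"}    # deliberately not considered

def screen(candidate: dict) -> Optional[dict]:
    """Return a redacted record if the candidate has every required skill."""
    skills = {s.lower() for s in candidate.get("skills", [])}
    if not REQUIRED_SKILLS <= skills:
        return None
    return {k: v for k, v in candidate.items() if k not in IGNORED_FIELDS}

candidates = [
    {"name": "A", "gender": "F", "school": "Unknown State", "skills": ["Python", "SQL"]},
    {"name": "B", "gender": "M", "school": "Ivy U", "skills": ["Excel"]},
]
shortlist = [record for record in (screen(c) for c in candidates) if record]
print(shortlist)  # only the skills match survives, with identifiers stripped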

Other ethical strategies include having checks and balances in place. Scherer consulted with a company that designed a tool to send potential candidates to a recruiter, who would then review their resumes and decide whether they were a good fit for the job. Even if that recruiter rejected a resume, it would be run through the algorithm again, and if it was flagged as a good potential match, it would be sent to another recruiter who wouldn’t know it had already been reviewed by someone else on the team. This ensures that resumes are double-checked by humans, that the company isn’t relying solely on AI to determine qualified candidates, and that recruiters aren’t overlooking qualified candidates.
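For illustration only, the routing logic of that double-review pattern might look like the following sketch; the scoring threshold and recruiter functions are stand-ins, not the consulted company's actual system.

```python
# Hypothetical sketch of the double-review pattern described above: a single
# human rejection of an algorithm-flagged resume triggers a second, blind
# human review. Threshold and recruiter decisions are illustrative stand-ins.
import random

def algorithm_flags(resume: dict) -> bool:
    """Stand-in for the screening model's 'promising candidate' signal."""
    return resume.get("score", 0.0) >= 0.7       # assumed threshold

def human_review(resume: dict, recruiter: str) -> bool:
    """Stand-in for a recruiter's judgment (randomized here for illustration)."""
    print(f"{recruiter} reviews resume {resume['id']}")
    return random.random() > 0.5

def route(resume: dict) -> bool:
    if not algorithm_flags(resume):
        return False                              # never reaches a human
    if human_review(resume, "recruiter_1"):
        return True                               # advanced on first review
    # Rejected once but still algorithm-flagged: route to a second recruiter
    # who doesn't know the resume was already reviewed.
    return human_review(resume, "recruiter_2")

print(route({"id": "r-42", "score": 0.85}))
```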

“It’s important that the human retains the judgment and doesn’t just rely on what the machine says. And that’s the thing that is hard to train for, because the easiest thing for a human recruiter to do will always be to just say, ‘I’m going to just go with whatever the machine tells me if the company is expecting me to use that tool,’” says Scherer.