Automated Hiring Tools

The role of AI-driven algorithms is expanding at almost every stage of the hiring process. Automated hiring tools now routinely include software that scrapes social media profiles and draws conclusions about an applicant’s personality, tools that scan resumes, software that scores applicants based on their facial movements, word choices, and the sound of their voice — and even tools that gather data, such as right and wrong answers and reaction time, as applicants play a simple video game.

As many as 83 percent of U.S. employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen or rank candidates for hire, according to government and industry estimates. These tools, which are used on millions of job applications across a wide array of positions at many levels, are now generally referred to as Automated Employment Decision Tools (AEDTs). Major firms surveyed by researchers say this widespread adoption has made hiring faster and dramatically more scalable, and has resulted in a greater number of qualified candidates finding better jobs.

But there is serious concern that many of these tools reproduce, and sometimes amplify, biases and human errors they are supposed to eliminate. “AI systems may produce biased results because of limitations in their training data or errors in their programming, with significant legal risks in the hiring and HR context,” warns the American Bar Association. Nearly half of employed U.S. job seekers (49%) believe AI tools used in job recruiting are more biased than their human counterparts, according to the American Staffing Association. Alex Engler, Assistant Director for AI Policy at the White House Office of Science and Technology Policy, says, “This transition to an algorithm-dominated hiring process is happening faster than firms, individuals, or governments are able to evaluate its effects.”

The rise of algorithms making high-stakes decisions, including hiring, is one of the most important civil rights issues of our time because discrimination could be rampant in these automated systems.
— Hilke Schellmann | Investigative journalist

The impact of automated hiring tools on low-wage workers has been the subject of two reports by Upturn, a nonprofit research organization, which found that while those tools “rarely make affirmative hiring decisions, they often automate rejections.” In 2021, Upturn conducted an empirical study of the technologies that applicants for low-wage hourly jobs encounter by submitting online applications to 15 large hourly employers for entry-level positions — including cashier, retail, and warehouse jobs — in the Washington, D.C. metro area. Among its recommendations, Upturn urged employers to discontinue the use of “personality tests” for such positions. “By purporting to assess applicants’ personalities against some norm, personality tests reflect ableist assumptions about the type of person that makes a good job candidate,” the report said. “When deployed at scale, these tests can systematically lock people out of employment if they don’t fit the ‘norm.’”

People with disabilities are especially vulnerable to bias in automated hiring tools. The Equal Employment Opportunity Commission notes, for example, that a job applicant who has limited manual dexterity because of a disability may have difficulty taking a knowledge test that requires the use of a keyboard, trackpad, or other manual input device. Video interviewing software that analyzes applicants’ speech patterns in order to reach conclusions about their ability to solve problems is not likely to score an applicant fairly if the applicant has a speech impediment that significantly alters those patterns. “Without careful forethought, the tools can reject applicants simply because of a disability — unbeknownst to the applicants and even to the employer,” warns a report by the Center for Democracy & Technology.

Investigative journalist Hilke Schellmann argues in a new book that “the rise of algorithms making high-stakes decisions, including hiring, is one of the most important civil rights issues of our time because discrimination could be rampant in these automated systems.” Schellmann notes that if a biased AI system is used to screen all job candidates at a large corporation, hundreds of thousands of people could be affected.

The emerging suite of data-driven technologies in the workplace raises critical questions. Will these technologies be used to benefit and empower workers, help them thrive in their jobs, and bring greater equity to the workplace? Or will they be used to deskill workers, extract ever more labor, increase race and gender inequality, and suppress the right to organize? Who is going to be at the table when these decisions are made, and in particular what role will workers themselves have? In other words, who is going to govern technology? And what values will we as a society choose to prioritize in that governance?
— UC Berkeley Labor Center

There have already been some notable missteps. In 2018, Amazon scrapped an AI recruiting engine that had been in development for years because it showed bias against women. And in 2021, HireVue, a major vendor of AI-based hiring tools, said it would stop relying on “facial analysis” to assess job candidates. The move followed a Federal Trade Commission complaint by an advocacy group that argued that HireVue’s AI tools — which the company claimed could measure the “cognitive ability,” “psychological traits,” “emotional intelligence,” and “social aptitudes” of job candidates — were unproven, invasive, and prone to bias. At the time, HireVue acknowledged the public outcry over its use of facial analysis and said the technology “wasn’t worth the concern.”

In May 2024, the American Civil Liberties Union alleged in a complaint to the Federal Trade Commission that Aon Consulting (NYSE: AON), a large consulting firm that sells a mix of applicant screening software to Fortune 500 firms, has made false or misleading claims that its tools are “fair,” are free of bias, and can “increase diversity.” According to the complaint, Aon’s algorithmically driven personality test, ADEPT-15, relies on questions that adversely impact autistic and neurodivergent people, as well as people with mental health disabilities. Aon also offers an AI-infused video interviewing system and a gamified cognitive assessment service that are likely to discriminate based on race and disability, according to the complaint.

One of the most dominant global providers of worker-related technology today is Workday Inc. (NASDAQ: WDAY), a software company whose products are used by more than half of Fortune 500 companies to hire, onboard, and pay their employees and to administer their benefits. Workday says that by “embedding AI into the core of our cloud-based platform, we can rapidly deliver new AI capabilities into our products,” including a service that “uses AI to gain insight into an organization’s current skills and identify skills needed for the future, allowing for smarter talent decisions across the company.” With more than 10,000 customers globally, Workday had a market capitalization of about $55 billion as of June 1, 2024.

Workday faces a class action lawsuit filed in 2024 by a job seeker who argues that the company’s algorithmic decision-making tools “provide a ready mechanism for discrimination.” Workday has denied wrongdoing, arguing that it is not an employer and is not liable for how clients use its products. The lawsuit highlights the legal risks companies take on when they develop or use hiring tools that may produce discriminatory outcomes.

There will almost certainly be more litigation alleging bias in AEDTs. In 2023, a China-based tutoring company, iTutorGroup, settled a suit brought by the U.S. EEOC, which claimed the company used hiring software powered by artificial intelligence to illegally weed out older job applicants. iTutorGroup agreed to pay $365,000 to more than 200 job applicants allegedly passed over because of their age. The EEOC had alleged that iTutorGroup programmed its online recruitment software to screen out women aged 55 or older and men aged 60 or older.

There is also likely to be more regulation and legislation of AEDTs. A groundbreaking New York City law enacted in 2021 prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates. But so far, disclosures required by the law are rare. Only 18 of nearly 400 employers analyzed in a study had posted the information as of May 2024, according to researchers at Cornell University.
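
The bias audits the law requires center on a straightforward calculation: the rate at which each demographic category is selected or favorably scored by a tool, and each category’s impact ratio relative to the most-selected category. As a rough illustration only, using hypothetical data and an invented function name rather than the methodology of any particular auditor, the calculation looks something like this:

from collections import defaultdict

def impact_ratios(applicants):
    """applicants: iterable of (category, selected) pairs, e.g. ("female", True)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for category, was_selected in applicants:
        total[category] += 1
        selected[category] += int(was_selected)

    # Selection rate per category, then the ratio to the most-selected category.
    rates = {c: selected[c] / total[c] for c in total}
    top_rate = max(rates.values())
    return {c: rate / top_rate for c, rate in rates.items()}

# Hypothetical example: 30 of 100 women selected vs. 45 of 100 men selected.
sample = ([("female", True)] * 30 + [("female", False)] * 70
          + [("male", True)] * 45 + [("male", False)] * 55)
print(impact_ratios(sample))  # roughly {'female': 0.67, 'male': 1.0}

Under the New York City law, auditors report such ratios by sex and by race and ethnicity categories; by long-standing convention (the EEOC’s four-fifths rule), an impact ratio below roughly 0.8 is treated as a signal of possible adverse impact.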

Illinois and Maryland have passed laws regulating the use of AI tools in hiring. California, New Jersey, New York, Vermont, and Washington, D.C., have introduced bills similar to New York City’s law, while a Massachusetts bill seeks to prevent “dystopian work environments.”

Matthew Scherer, Senior Policy Counsel at the Center for Democracy & Technology (CDT), notes that, unlike with traditional decision-making processes, “people frequently do not know when AI is evaluating them, much less how they will be evaluated.” CDT recently issued a report laying out a “legislative roadmap” for preventing automated electronic decision systems from perpetuating discrimination in employment decisions. Its recommendations included comprehensive workplace technology legislation as well as legislation establishing robust disclosure requirements that could address many key discrimination risks. “When consequential decisions are left to automated systems, consumers and workers should have the right to access the information upon which the decision was based, to obtain an explanation as to the reasons for the decision itself, and to opt out of automated decision-making and request human review,” Scherer says.