To staff the new U.S. Space Force with 800 personnel in 2021, the military selected an artificial intelligence interviewing program called HireVue to assess and chat with prospective space guardians, contracting papers show. Around the same time, the Veterans Affairs Department (VA) used algorithm-driven chat service Paradox AI to recruit candidates during the COVID-19 pandemic. The chatbot asked VA job seekers a few qualification questions before granting (or denying) access to a VA Sunshine Healthcare Network virtual job fair, VA press secretary Terrence Hayes told the author of this article.
These are just two instances of the nation’s largest employer, the U.S. government, rushing to buy hiring formulas that rate candidates or predict job performance as automated assessments and AI upend the workplace. Much of the private sector is on board with AI-aided recruiting, too.
Generating Bias?
While perhaps a boon for ramping up a workforce, these recruiting aids tend to “screen out” people with little-known developmental or learning disabilities, rejecting job seekers or flagging employees for firing. The machines do not understand, for example, the eye movements of people with Autism Spectrum Disorder, the tics of those with Tourette’s Syndrome, or the clumsy fingers and hyperfocus of those with Nonverbal Learning Disability, including vice presidential nominee Tim Walz’s son. Further, some test formats block people with these “neuro-divergent” disabilities from even opening an application.
What can valuable workers with these invisible disabilities, employers, product developers, and governments do to stop math from tagging top-notch talent as bad apples? Not much right now, according to attorneys, computer researchers, and the European Union (EU), which banned employers from using emotion-detection algorithms last month due to potential discrimination.
“Neuro-divergence is made up of many diagnoses, and two people can present diagnoses very differently from each other,” Aaron Konopasky, senior attorney-advisor at the U.S. Equal Employment Opportunity Commission (EEOC), told the author. Consequently, developers do not have a “neuro-divergence” demographic to test for disparate impacts. “So, it’s almost impossible for test designers to design something that’s perfectly fair.”
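For readers wondering what “testing for disparate impact” looks like in practice, here is a minimal sketch with invented numbers; the function, the groups, and the 0.80 rule-of-thumb threshold are illustrative, not drawn from any vendor’s audit. The point is that the arithmetic presupposes a single, coherent group label for each applicant, which is precisely what a catch-all “neuro-divergence” category cannot supply.

```python
# Illustrative sketch only (invented numbers): the kind of disparate-impact
# check a developer can run when a demographic group is well defined.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who pass the screen."""
    return selected / applicants

# Hypothetical screening results for two well-defined groups.
group_a = selection_rate(selected=60, applicants=100)   # 0.60
group_b = selection_rate(selected=42, applicants=100)   # 0.42

# Adverse-impact ratio: the lower group's rate over the highest rate.
impact_ratio = group_b / group_a                         # 0.70

# A common rule of thumb flags ratios below 0.80 for closer review.
print(f"impact ratio = {impact_ratio:.2f}",
      "flag for review" if impact_ratio < 0.80 else "no flag")
```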
The Americans with Disabilities Act (ADA) has yet to protect such employees because neither employers nor test-takers know how a neuro-divergent condition will affect scores. No national laws or regulations target algorithmic bias against neuro-divergence in employment. Further exacerbating matters: AI developers do not publicize the limitations of their products.
At first, VA officials said the department had not been using the AI components of Paradox AI. When later shown the contracts, Hayes said that the VA used Paradox only during the COVID-19 crisis. An Air Force spokesperson said that the branch currently uses HireVue to coordinate interviews and that “other capabilities are under review.”
Meanwhile, the $578 million AI recruitment tech market is expected to nearly double to $1 billion in less than a decade. The Army, the U.S. Space Force, and the intelligence community, among other large federal entities, have purchased algorithmic hiring tools, as have more than half of corporations. HireVue claims more than 1,150 global clients. Federal employers, including intelligence agencies, have signed contracts worth nearly $40 million for subscriptions to the company’s platform. Automated hiring vendor Harver, the owner of soft-skills evaluator Pymetrics, reports more than 1,300 customers, while Paradox AI reports 1,000 clients worldwide.
Well-Meaning?
The development of these hiring robots started with seemingly good intentions. Algorithm-driven video interviews, assessments, and conversational AI aimed to reduce human biases tied to visible characteristics such as race, gender, and physical disabilities by focusing decisions purely on aptitude. But that thinking overlooked neuro-divergent conditions that prevent some people from displaying aptitude, or even applying for a job, according to federal attorneys. Both oversights can violate the ADA.
For example, a chatbot that cuts off a candidate who replies “No” to a binary yes/no question about a job requirement (“Do you have a high school education?”) can violate the ADA, Konopasky said. “The whole point of the ADA is that there can be exceptions,” within reason, he added. If the chatbot’s script, or the employer who wrote it, treats a high school diploma as a proxy for responsibility, and a candidate “didn’t get a high school diploma because of their disability, can do the job, and is perfectly responsible,” cutting off the conversation at that point can be illegal.
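To make the mechanics concrete, here is a minimal sketch, in no way drawn from Paradox’s or any other vendor’s actual code, contrasting a hard knockout question with a flow that leaves room for the exceptions Konopasky describes. Every function name and answer field below is hypothetical.

```python
# Hypothetical sketch: how a "knockout" question can screen out a qualified
# candidate, and one way to route the conversation to human review instead.

def screen_candidate_hard_cutoff(answers: dict) -> str:
    # The problematic pattern: a single "No" ends the conversation.
    if answers.get("has_high_school_diploma") is False:
        return "rejected"              # candidate never gets to explain
    return "advance_to_recruiter"

def screen_candidate_with_exception_path(answers: dict) -> str:
    # An ADA-friendlier pattern: a "No" triggers follow-up and human review
    # rather than an automatic cutoff.
    if answers.get("has_high_school_diploma") is False:
        if answers.get("requests_alternative_consideration"):
            return "flag_for_human_review"   # allow an exception within reason
        return "offer_alternative_path"      # e.g., describe equivalent experience
    return "advance_to_recruiter"

if __name__ == "__main__":
    candidate = {"has_high_school_diploma": False,
                 "requests_alternative_consideration": True}
    print(screen_candidate_hard_cutoff(candidate))          # rejected
    print(screen_candidate_with_exception_path(candidate))  # flag_for_human_review
```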
The VA’s Hayes said that the Paradox AI chatbot asked three to five basic eligibility questions, based on job requirements concerning education and standard conditions of federal employment, among other things. Konopasky said he was unaware of VA’s use of conversational AI for recruiting and was unable to comment.
Besides blocking neuro-divergent applicants from applying, some screening tools do not measure the skills necessary for a job. For instance, autistic individuals tend to score below non-autistic individuals on tests that require matching pictures of faces to emotions. That is an illegal selection criterion when the ability to identify specific emotions is not needed for the position and does not affect job performance, Konopasky said. As such, a Pymetrics test that requires analyzing static photos of faces can be wildly off base, a Center for Democracy and Technology paper explains.
Ezra Awumey, a Carnegie Mellon University researcher, has found that emotion-recognition algorithms in particular may disproportionately disadvantage neuro-divergent personnel. “Emotion AI” scans voices, facial movements, and other bodily signals, purportedly to measure worker competency. Many researchers discredit the technology’s premise, a theory that humans exhibit certain universal emotions, as incompatible with the emotional cues of people with autism. After public outcry, HireVue in 2021 abandoned a feature that analyzed an applicant’s facial expressions to measure emotional intelligence.
Neuro-Talent Has Not Circumvented the System – Yet
Neuro-divergent individuals are on their own to brave this uncharted territory because case law on algorithmic discrimination is nearly nil, and pursuing ADA claims all but requires X-ray vision into the algorithms.
To secure protection, a job applicant who wants an alternative to algorithmic screening as an accommodation must ask for it. To do that, they must have enough insight into the algorithm’s mechanics to realize they need an accommodation. Then they must disclose an otherwise invisible disability to an employer who may have biases of their own. If a candidate with such a disability stays silent, their troubles double: one, they have already lost the position; and two, for recourse, they must untangle the algorithm and the hiring process to prove their dyslexia, autism, or other disability screened them out of contention.
Disclosure of a neuro-divergent disability “is a very, very tricky calculus,” the EEOC’s Konopasky acknowledged.
On one side, “if the employer is evil, then – you don’t want to tell them information that could presumably move you down the ladder,” he said. On the other, “if you disclose and request an accommodation and they say no” because of bias or unreasonableness, “you have grounds for a legal claim.”
Brian Dimmick, senior staff attorney for the American Civil Liberties Union (ACLU) disability rights program, told the author that “this is a new and developing area of law, and there is not a lot of case law out there.”
The one court to hear such an ADA case rejected allegations that an algorithm-driven test cost a neuro-divergent individual a job. In Beale v. Clearwater, cybersecurity firm Clearwater Compliance terminated a dyslexic employee who read slowly on an algorithm-based test measuring his motivation, personality, and other traits, a test used to select future Clearwater staff. A 2021 ruling in the case found that the firm did not deny accommodations because the staffer never asked for an alternative to the test. The court decided that the worker was fired for poor performance, not for the test results, which did not measure performance. Clearwater declined to comment. The employee could not be reached for comment.
The ACLU has one EEOC complaint in the pipeline. On behalf of a biracial autistic job applicant and a similarly situated class, the ACLU alleges that hiring technology vendor Aon and an employer that uses Aon’s assessments discriminated on the basis of race and disability. In May, the ACLU also tried tapping Federal Trade Commission authority to bar Aon from marketing its online hiring tests as “bias-free.”
Artificial Interviews of Prospective Federal Workers
While the private sector’s deployment of hiring robots is widely known, the pervasiveness of the technology in the federal government has received little attention.
Besides the U.S. Space Force and the VA contracting for bots to chat with applicants, as discussed above, the Army has spent at least $5 million on game-based algorithms to determine each soldier’s fitness for assignments and to build teams. Procurement papers indicate the Army still subscribes to the platform, a Pymetrics system. Also, around 2020, HireVue began recording video interviews with West Point cadets vying for branch positions, along with their responses to coding challenges. The Army did not respond to requests for comment.
Around the same time, the National Geospatial-Intelligence Agency inked a potentially four-year, $28 million contract for a HireVue subscription to help the whole intelligence community conduct faster interviews and “predict top performers.” The deal followed a pilot program with several intelligence agencies that yielded positive results, according to contracting documents.
In late 2023, the Air Force announced a planned purchase of HireVue to deploy AI features that assess attributes “predictive” of performance on the job. Contract papers in 2021 show that the scope of work included conversational AI.
Meanwhile, documents state the VA piloted HireVue interviewing and “emotional intelligence” software for “rating and ranking” job candidates in 2023. The VA’s Hayes referred questions about the software’s technical specifications to HireVue. Hayes said that the program did not directly control hiring decisions and is not in use at this time.
A 2022 product solicitation reveals that the Food and Drug Administration (FDA) had already developed a “recruitment plan to leverage artificial intelligence” for a congressionally mandated hiring spree. The FDA chose HireVue to measure the “behavioral and performance-based attributes” of candidates vying for about 950 vacant positions. The software package also offered “predictive analytics functionality to assist in decision making.” FDA spokesperson Jeremy Kahn declined to comment on the contract, telling the author only that the agency is not currently using the AI portions of HireVue.
Also in 2020, the Homeland Security Department’s Cybersecurity and Infrastructure Security Agency (CISA) announced a contract to procure HireVue for conducting interviews and administering “game-based and/or video-based” assessments. A CISA spokesperson told the author that the agency is not presently using HireVue’s AI components.
No one can be sure HireVue’s algorithms are not operational at some of these agencies now, because of the product’s opaque, “black box” workings, say researchers and attorneys. “It’s really concerning,” the ACLU’s Dimmick said. Federal employers “read the marketing, buy the software, start using it, and don’t really even think about the implications” or necessarily “understand what the AI is and is not part of.”
He added, “I don’t know that a lot of these federal employers necessarily mean to discriminate,” but until they “realize that there can be consequences” for using these algorithms, “I don’t think they’re going to change their practices.”
The problem of neuro-divergent bias in algorithmic job screening extends beyond employers to the companies that develop the algorithms.
HireVue branding claims the company “works to prevent and mitigate bias,” but “no one knows how HireVue builds its proprietary software,” Pranav Narayanan Venkit, a Penn State doctoral student researching algorithmic bias against people with language disabilities, told the author. HireVue declined to comment for this article.
The Paradox website does not mention disability inclusion at all. A company spokesperson admitted Paradox has “done a poor job… telling a complete story around how we mitigate bias.” The spokesperson partly blamed employers who do not use disability-friendly options, such as allowing candidates to voice rather than type chat responses.
Pymetrics did not respond to requests for comment.
A Legal No Man’s Land
So far, the U.S. federal government and Congress have not banned AI from aiding in hiring and firing.
In March, the White House Office of Management and Budget (OMB) issued requirements to reduce the risk of AI discrimination in systems that directly control, among other decisions, pre-employment screening outcomes and the hiring or termination of federal employees.
The OMB memorandum attempts to address the risk by requiring impact assessments and consultation with populations whose rights are at stake, but it does not ban the use of the systems. Agencies must publicly post their current uses annually. However, this requirement does not apply to the largest federal users of AI recruiters: the military and the intelligence community. Also, civilian agencies can obtain waivers in certain circumstances. In October, the White House issued separate guidelines instructing the Defense Department and the intelligence community to notify individuals, after the fact, if AI informed a decision that disadvantaged their employment eligibility or their access to classified information.
In a similar move two years ago, the EEOC, which enforces disability discrimination laws in the private and federal sectors, and the Justice Department, which does the same in state and local governments, sketched guidelines on algorithmic discrimination. The guidelines are not mandatory. Justice and the Commission declined to comment on whether an update or binding rules are forthcoming.
Further, with Congress generally at a standstill until a government shutdown looms, lesser priorities such as perpetually floated AI bills fall by the wayside.
By contrast, state and local lawmakers have not wasted time. Illinois, Maryland, and New York City mandate that employers notify employees or job candidates about when and how an AI system is operating, sometimes going so far as to require an employer to obtain consent before using the system and to conduct audits for algorithmic bias. Still, independent evaluators of New York City’s auditing law, the first of its kind, identified failures, including industry lobbying that narrowed the definition of “automated employment decisionmaking technology” and difficulty accessing the data needed to audit.
The United States lags behind the EU in protecting individuals with disabilities in the robo-hiring process. The 2024 European Union AI Act, which took effect on Aug. 1, outright bans emotion-detection algorithms in workplace settings. The EU law recognized research showing that emotions vary across cultures and situations, and even within a single individual, and therefore anticipated the need to block discrimination against certain populations. The law also strictly regulates the use of any algorithm to hire, promote, terminate, or evaluate job applicants and staff, since such technology may “perpetuate historical patterns of discrimination,” including against people with disabilities.
A European Commission (EC) spokesperson told the author, “Without a clear legal framework in place, the use of AI in workplaces could lead to increased inequality, posing risks of discrimination, for instance, related to hiring and firing decisions.” The EC spokesperson added, “AI must serve humans, not the other way around.”