Before AI comes for your job, it wants your job applications

In late 2017, Hilke Schellmann was closing out a conference in Washington, DC, when she hailed a ride to the train station. The filmmaker and New York University journalism professor hopped in her Lyft, asked the driver how he was doing, and was met with a pause. It had been a strange day, he answered. He’d applied for a job as a baggage handler at the local airport, and that afternoon he’d been called up to interview with a robot.

Schellmann was intrigued. By the following April, she’d attended her first HR tech conference, where she watched a company called HireVue present a new kind of video interview: one that used AI to analyze candidates’ facial movements and tone of voice to determine how well they matched a role. That analysis could be used to make—or deny—a job offer. “It seemed like magic,” Schellmann recalls. But when she began to ask questions about the science behind the analysis, she says, she realized there wasn’t any.

Now a range of HR software promises that AI can help companies make better hires than humans. AI is already coming for our job applications: More than 80% of employers use it to make hiring decisions, US Equal Employment Opportunity Commission chair Charlotte Burrows estimated in 2023. Today robots screen our resumes and record our first interviews to recommend the best hires. But they don’t stop there: Some ask us to play AI video games, where pumping a virtual balloon supposedly sheds light on your professional aptitudes. Some listen in on our interview calls, evaluating our words to predict our soft skills. Still others scan our social media in a flash, compiling Cambridge Analytica–style personality profiles for our future employers. Plenty don’t need our permission to get started—and we’ll often never know we were evaluated by an algorithm.

In her new book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now, Schellmann peers into the black box deciding whether or not we get a job—and finds that the machines are just as flawed as the people who build them. Posing as a candidate, she uncovers their failings firsthand: Transcription tools give her high marks in English after she speaks to them in German; social media screeners spit out opposing personality profiles based on whether they look at her Twitter or her LinkedIn.

Meanwhile, Schellmann talks with more than 200 people—employment lawyers, organizational psychologists, regulators, recruiters, candidates, and the machine makers themselves—to uncover how these tools not only replicate human biases, but produce entirely new ways to discriminate.

Quartz spoke with Schellmann about how hiring came to involve fewer humans and more computers, along with what job candidates can do to take back some control. This interview has been edited and condensed for length and clarity.

The vast majority of job hunters encounter some kind of AI as they look for open roles. (All of the large job platforms—like LinkedIn, Indeed, ZipRecruiter, and Monster—confirm that they use AI, although they’re not required to disclose exactly where or how it works.) Why do companies buy additional AI tools from vendors?

The advent of job boards like LinkedIn and Monster [has] been wonderful for applicants—you can send your resumes to lots and lots of people and jobs every day. On the other hand, that has led to companies feeling they’re getting [deluged], and they can’t read them all. For example, Google says they get about 3 million applicants every year. There’s no way that human recruiters can go through all of these resumes or applications, and so these companies need a technological solution.

That’s what AI vendors cater to. They say, “Hey, we have a great solution. It’s efficient, it will save you money, and it will find the most qualified candidates for the job without any bias.” We’ve seen proof that [the technology] is very efficient and saves a lot of money. We haven’t found a lot of evidence to prove that it finds the most qualified candidates, or that there is less bias.

AI tools are built on human-based data—in the case of resume screeners, for example, the AI is trained on resumes of current employees and taught to look for patterns among them. In some cases, that can reflect existing disparities back at us; in more than one case, AI trained on data from a male-dominated team learned to downrank women. In others, it can produce entirely new biases. How do those flaws get caught?

Every so often, companies bring in outside counsel and outside lawyers to evaluate these tools. [Former employment lawyer] Matthew Scheier told me that none of the tools he looked at when he was an employment lawyer were ready for prime time. [Software auditor] John Scott, the COO of [HR consulting firm] APTMetrics, looked at five resume screeners and found problems in all five. [Whistleblower and former employment lawyer] Ken Willner said he found problematic variables in about a quarter of them. It’s not a random fluke—it’s actually a pattern that things go wrong. There’s bias, and possible discrimination and harm, that these tools cause.

Willner was really concerned when he looked at one of the resume screeners and found that one of the variables the tool predicted upon was the word “Africa,” [like in] “African” and “African American.” That would constitute race discrimination. Our skin color should have nothing to do with whether we’re chosen or rejected for a job. [Another evaluator] found that the word “Thomas” in one resume screener was predictive. My apologies to all the Thomases out there, but the name Thomas doesn’t qualify you for any job.

Something else that shocked me was that instances of bias in the AI tools were never discovered by the vendors themselves, according to Willner. They were found exclusively when a company using the tool brought in a third-party auditor.

A lot of [what I call predictive AI tools] use machine learning, and they often use deep neural networks. So the developers themselves often don’t know exactly what the tools actually predict [or] how they reach their conclusions. I think that should all worry us.
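To make that mechanism concrete, here is a deliberately tiny sketch of the kind of resume screener described above: a classifier trained on past hiring decisions, built with scikit-learn on invented data. Nothing here is drawn from any vendor’s actual system; the resumes, labels, and tokens are made up for illustration. The point is that the model weights whatever words happen to correlate with past decisions, which is roughly how proxy variables like “Thomas” or “Africa” can become predictors, and how an auditor inspecting the learned weights might spot them.

```python
# Toy illustration only (not any vendor's actual system): train a "screener"
# on past hiring decisions, then inspect which words it learned to weight.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: past applicants' resumes and whether they were hired.
resumes = [
    "software engineer python rugby club captain thomas",
    "software engineer java chess club thomas",
    "data analyst python womens soccer team captain",
    "data analyst sql volunteered in africa student union",
    "software engineer python lacrosse team treasurer",
    "data analyst python debate club womens chess society",
]
hired = [1, 1, 0, 0, 1, 0]  # labels reflect past (possibly biased) human decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The kind of check a third-party audit might run: which tokens push the
# score up or down? Irrelevant proxies ("thomas", "womens") can rank right
# alongside genuine skills.
weights = sorted(zip(model.coef_[0], vectorizer.get_feature_names_out()), reverse=True)
for weight, token in weights:
    print(f"{token:>12s}  {weight:+.2f}")
```

In this toy example the model is only echoing whatever happened to separate past hires from past rejections; at the scale of real training data, that same dynamic is what lets irrelevant or discriminatory proxies slip in unnoticed.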

You also write about how these tools leave a tremendous amount of space to discriminate against people with disabilities—and seem to fly under the legal radar while doing it.

People with disabilities are a huge part of the population—about 10 to 20% in the US, maybe even more. Disabilities can be visible, invisible, physical, mental; there’s all kinds of variation, [and] disability might express itself very differently. So even if I’m autistic, and my data is being fed into an AI system, it doesn’t actually mean that people with autism are adequately represented in the training data. There’s an individual expression of disabilities that cannot be adequately represented in a system that looks for statistically relevant patterns.

I think a lot of folks on the hiring side say, “Well, the law says that folks who have a disability can have a reasonable accommodation”—for example, if you encounter a one-way video interview and are deaf or hard of hearing, or you have a speech impairment, maybe the company would put a human on the other end. But what I’ve learned speaking to vocational counselors who work with people with disabilities, tragically, is that every time they’ve asked for a reasonable accommodation, which is the law, they’ve never heard back. I think it’s getting harder and harder because we have more automatic screens in the hiring pipeline.

So how can candidates regain some agency, or feel like they can do something to better equip themselves for AI to read their job application?

There are some lessons to be learned here for job seekers. I want to preface this by saying I don’t know everything about every tool that is in use.

We used to tell people, “Oh, make your resume stand out; make it eye-catching.” Now it’s the opposite advice: Make your resume machine-readable. Not two columns, just one column; clear text; short, crisp sentences. Use easily quantifiable information. If you have a license—for example, a nursing license—put that on there. Maybe even put the licensing numbers there [so] a computer can look up that you are licensed to practice, or something like that.
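A rough way to picture why that advice helps: many applicant tracking systems reduce a resume to plain text and then scan it for keywords and verifiable identifiers. The sketch below is a toy approximation of that step, not any real system’s parser; the keyword list and the license-number format are invented for illustration.

```python
# Toy approximation of an ATS-style keyword scan (real systems vary widely).
# Plain, single-column text like this parses cleanly; decorative two-column
# layouts often come out scrambled before a check like this even runs.
import re

resume_text = """
Jane Doe
Registered Nurse, ICU
RN License: RN-1234567 (State of New York)
Skills: patient assessment, IV therapy, EHR charting
"""

required_keywords = {"registered nurse", "patient assessment", "iv therapy"}
license_pattern = re.compile(r"\bRN-\d{5,8}\b")  # invented license-number format

text = resume_text.lower()
found = sorted(kw for kw in required_keywords if kw in text)
license_match = license_pattern.search(resume_text)

print("keywords matched:", found)
print("license number:", license_match.group(0) if license_match else "not found")
```

The single-column, plainly worded resume survives this kind of extraction; a heavily designed layout can garble the text before any keyword or license check is applied, which is why the machine-readable advice above matters.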

I think for a lot of people, it’s really empowering to use ChatGPT and other generative AI to proofread their resumes [or draft] cover letters. People joke on employment platforms, like, “Oh yeah, let the better AI win…my cover letter’s written by AI, and the companies all use AI [to read it].” I think that feels empowering to some job applicants. I think generative AI has changed the power balance just a little bit.

Honestly, reading about some of this AI software gives me a very dystopian feeling—but it’s nice to know that those public-facing tools have democratized it, if just a little.

I do think we’re just at the beginning and only now pulling out the errors. Maybe we’re pulling back this curtain of secrecy a little bit to show, “Hey, this is what’s already happening. We see all of these problems.” Let’s push for some changes—like, let’s push for more transparency, possibly more regulation. But let’s also put pressure on companies to do the right thing.

While I was writing the book, I was like, “I think there need to be [some] huge civil society organizations that test these tools, but also build tools in the public interest.” So maybe, you know, someone or an organization could build a tool like a resume screener that is not biased. Maybe we can put that in the public interest, into the public domain, and push companies to use the tools that are not discriminatory.
