
This article was published on November 30, 2021

Dear CEOs, you’re getting ripped off by legal AI scams

CC: HR, IT


What if I told you I was selling a set of computer programs that could automagically solve all of your hiring, diversity, and management problems overnight? You’d be stupid not to at least listen to the rest of the offer, right?

Of course no such system exists. The vast majority of AI products purported to predict social outcomes are blatant scams. The fact that most of them are legal doesn’t stop them from being snake oil.

Typically, the following AI systems fall under the “legal snake oil” category:

  • AI that predicts recidivism
  • AI that predicts job success
  • Predictive policing
  • AI that predicts whether an individual will become a criminal or terrorist
  • AI that predicts outcomes for children

The reason for this is simple: AI cannot do anything a human (given enough time and resources) could not themselves do. Artificial intelligence is not psychic and it cannot predict social outcomes.

As Arvind Narayanan, associate professor of computer science at Princeton University, said in a recent series of lectures on snake oil AI:

These problems are hard because we can’t predict the future. That should be common sense. But we seem to have decided to suspend common sense when AI is involved.

Think about it: have you ever heard of a big business that’s never made a single hiring mistake?

These systems work on the same principle as the magic beans from Jack and the Beanstalk. You have to install the systems, pay for them, and then use them for an extended period of time before you can evaluate their effectiveness.

That means you’re being sold on statistics up front. And, when it comes to benchmarking black box AI systems, you may as well be measuring how much mana it takes to cast a fireball spell or counting how many angels can dance on the head of a pin: there’s no science to be done.

Take HireVue, one of the most popular AI-hiring system vendors in the world. Its platform can purportedly measure everything from “leadership potential” to “personality” and “work style” from a combination of video interviews and games.

That sounds pretty fancy, and HireVue’s statistical claims all seem quite impressive. But the bottom line is that AI can’t do any of those things.

The AI doesn’t measure candidate quality; it measures a candidate’s adherence to an arbitrary set of rules decided on by the platform’s developers.
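To make that point concrete, here’s a deliberately crude sketch of what “scoring” a candidate against arbitrary developer-chosen rules amounts to. Every feature name and weight below is invented for illustration; this does not reflect any vendor’s actual code:

```python
# Toy illustration: a "candidate scorer" that rewards arbitrary,
# developer-chosen behaviors. Features and weights are made up.
ARBITRARY_WEIGHTS = {
    "smiles_per_minute": 2.0,   # why 2.0? because the developers said so
    "eye_contact_ratio": 3.0,
    "speech_energy": 1.5,
}

def score_candidate(features: dict) -> float:
    """Weighted sum of observed behaviors -- this measures
    rule-adherence, not job performance."""
    return sum(ARBITRARY_WEIGHTS[k] * features.get(k, 0.0)
               for k in ARBITRARY_WEIGHTS)

# Two equally capable candidates; one can't maintain eye contact
# (e.g. due to a neurological condition) and scores lower anyway.
typical = score_candidate(
    {"smiles_per_minute": 1, "eye_contact_ratio": 0.9, "speech_energy": 0.8})
atypical = score_candidate(
    {"smiles_per_minute": 1, "eye_contact_ratio": 0.1, "speech_energy": 0.8})
print(typical > atypical)  # True -- same ability, different score
```

The punchline is that nothing in the model connects the weighted behaviors to actual job performance; the ranking is entirely an artifact of the weights someone typed in.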

Here’s a snippet from a recent article by the Financial Times’ Sarah O’Connor that explains how silly the video interview process really is:

While it’s hard to communicate naturally in such an unnatural situation, the platforms simultaneously urge jobseekers to “be authentic” to have the best chance of success. “Get excited and share your energy with the camera, letting your personality shine,” HireVue advises.

Unless you’re being hired to be a TV news anchorperson, this is ridiculous.

“Energy” and “personality” are subjective ideas that can’t possibly be measured, as is “authenticity” when it comes to humans.

HireVue’s systems, like all AI purported to predict social outcomes, are nothing more than arbitrary discriminators. 

If the only “good” candidates are those who smile, maintain eye contact, and exhibit the right “authenticity” and “energy,” then candidates with muscular, neurological, or nervous system disorders who can’t do those things are instantly excluded. Candidates who don’t present as neurotypical on camera are excluded. And candidates who are culturally diverse from the creators of the software are excluded. 

So why do CEOs and HR leaders still insist on using AI-powered hiring solutions? There are two simple reasons:

  1. They’re gullible enough to believe the vendor’s claims
  2. They recognize the value in being able to blame the algorithm

Here are some other scientific and well-sourced journalistic resources explaining why AI purported to predict social outcomes is almost always a scam:
