After about the fifth or sixth time you read an article about a Black man being wrongfully arrested due to faulty facial recognition AI, you start to wonder why nobody seems to be doing anything to stop this from happening.
Sure, whenever something goes wrong, the company behind the software is always working to improve results, and the law enforcement agency using the software is always reviewing procedures to ensure this doesn't happen again.

But it does. It seems like a day doesn't go by without a law enforcement agency being exposed for misusing facial recognition or predictive-policing systems.

And it's not just the government. Big business, small business, and everything in between are caught up in the AI snake oil craze.
Hiring algorithms that judge human emotion, honesty, or sentiment are inherently unethical and biased. AI systems that purport to predict human behavior before it happens are almost always scams.
Yet hundreds, perhaps even thousands, of companies that specialize in BS AI are thriving. Why?

The short answer: money. It's money. It's always money.
Must be the money
Most AI companies and organizations are in pursuit of useful technology. But the ones we're focused on in this article are those who know they're pushing snake oil and rely on hyperbole, "human in the loop" BS, and fuzzy statistics to obscure what their products can do.

This article is mostly about startups; we'll get to big tech's and academia's role in the crapshow that is the AI world in future articles.

The reason companies use scammy hiring algorithms that clearly discriminate against Black applicants, or police departments don't mind using scheduling software that makes mathematically impossible claims about predicting crime, is that everyone involved in the entire process gets paid.

Imagine you're a business person who's interested in AI and you come up with a really cool idea; we'll use predictive policing AI as an example:

Wouldn't it be cool if we could predict where crime was going to happen?
You're not an AI expert, but it seems like this should be possible using modern technology. After all, isn't there AI that can tell if someone's gay (no), determine if someone's a terrorist by looking at their face (hell no), and AI that can fool humans into thinking the things it writes were written by humans (also, hell no)?

Luckily for you, there are plenty of AI developers who definitely think they can create algorithms capable of predicting, based on historical data, where police presence will be most needed in a given area. With enough data, you can predict anything, right (no)?

Now you just need funding. VCs will fund anything so long as there's a market for it, it's not explicitly illegal, and there's money to be made.
Once the product is funded, developed, and packaged, it's up to the sales and marketing teams to figure out the rest.

If you're the police officer responsible for purchasing new software for your department, and someone tells you they've got research demonstrating their system is better at predicting crime than your current method, it sounds like you might be getting a good deal.

At no point from inception to implementation is anyone involved obliged to wonder if it's ethical to use this software, because anyone who actually believes a machine can predict crime is in no position to opine on its ethical implementation, and everyone else involved is in on the scam.

Basically, the good apples believe systems such as hiring algorithms, predictive policing, and facial recognition will take the human bias out of situations where it can be a problem, and the bad apples know it'll do the opposite, but they don't care so long as there's money to be made.
The founders make money up front. The VCs get theirs later, and the organizations implementing scammy AI can typically replace several useful systems and humans with a so-called all-in-one package, or, in the case of government orgs, justify budget increases with the AI's output.
Human in the loop
You'd think there'd be somebody somewhere with the power to say, "Hey, I'm versed in computer science 101 and basic mathematics, and your research papers are a joke. We shouldn't create/sell/purchase/use this product."
But youâd be wrong.
CEOs often don't understand the finer details of their products. If your AI head says they can build a system to predict crime, who are you to tell them they're lying?
Often the AI person isn't lying; they just have a myopic view of what "predicting crime" means, because they're in no position to understand the actual nature of crime, something criminology and sociology experts spend their entire lives studying.

Your average tech startup doesn't tend to hire IT talent based on their sociology credits.
And when it comes to sales teams, marketers, and purchasing agents: it's in everybody's best interest to believe the hype, and nobody's qualified to dispute it.

The average PR representative or HR manager is not going to read an AI startup's research papers and suddenly exclaim, "Hey, wait, these statistics were run against a survey with no ground truth. This math doesn't add up; we're being scammed!"
Unfortunately, the media tends to make things worse. The sheer number of reporters who take press releases as gospel when reporting on these companies is staggering.

It's another case where many journalists simply don't know as much about the topic as the companies and developers they're interviewing and quoting.
The Blue Fairy
And, finally, the main reason the BS AI ecosystem seems to thrive is that almost everybody wants to believe the products it produces are real.
Everybody but criminals wants to believe an AI could predict crime. Everybody should want to believe that an AI hiring algorithm could solve the problem of bigoted hiring practices.
A lot of people want to believe that facial recognition can accurately identify people, sentencing algorithms can be fair, and computer vision can determine if someone's gay, a terrorist, or being sincere.

And there are a lot more marketing and PR agents in the world than there are journalists who know what they're talking about or AI experts willing to publicly call out BS when they see it.

Until those things change, we'll continue to be treated to a never-ending series of quiet reports demonstrating how flawed these AI systems are, followed by very loud articles detailing how the companies responsible are working diligently to "improve" their systems.