
This article was published on May 28, 2020

AI algorithms are puzzled by our online behavior during the coronavirus pandemic



At some point, every one of us has had the feeling that online applications like YouTube, Amazon, and Spotify seem to know us better than we know ourselves, recommending content we like before we even ask for it. At the heart of these platforms’ success are artificial intelligence algorithms—or more precisely, machine learning models—that can find intricate patterns in huge sets of data.

Corporations in different sectors leverage the power of machine learning, along with the availability of big data and compute resources, to bring remarkable improvements to all sorts of operations, including content recommendation, inventory management, sales forecasting, and fraud detection. Yet despite their seemingly magical behavior, current AI algorithms are, at bottom, efficient statistical engines that can predict outcomes only as long as those outcomes don’t deviate too much from the norm.

But during the coronavirus pandemic, things are anything but normal. We’re working and studying from home, commuting less, shopping more online and less from brick-and-mortar stores, Zooming instead of meeting in person, and doing anything we can to stop the spread of COVID-19.



The coronavirus lockdown has broken many things, including the AI algorithms that seemed to be working so smoothly before. Warehouses that depended on machine learning to keep their stock filled at all times can no longer predict which items need to be replenished. Fraud detection systems that home in on anomalous behavior are confused by new shopping and spending habits. And shopping recommendations just aren’t as good as they used to be.

How AI algorithms see the world


To better understand why unusual events confound AI algorithms, consider an example. Suppose you run a bottled water factory and have several vending machines in different locations. Every day, you distribute the bottles you produce among your vending machines. Your goal is to avoid a situation where one machine sits full of unsold water while others stand empty.

At first, you distribute your water evenly among the machines. But you notice that some machines run out of bottled water faster than others, so you readjust the quotas, allocating more to the machines that sell more and less to those that sell less.
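This manual rule is simple enough to write down directly. Here’s a minimal sketch, in Python, of splitting a day’s production in proportion to each machine’s recent sales; all the numbers are made up:

```python
# A minimal sketch of the manual reallocation rule: split today's
# production among machines in proportion to their recent sales.
recent_sales = {"museum": 120, "park": 60, "high_school": 20}
production = 400  # bottles produced today

total = sum(recent_sales.values())  # 200 bottles sold recently
quotas = {loc: round(production * sold / total)
          for loc, sold in recent_sales.items()}
print(quotas)  # {'museum': 240, 'park': 120, 'high_school': 40}
```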

To better manage the distribution of your water bottles, you decide to create a machine learning algorithm that predicts the sales of each vending machine. You train the AI algorithm on the date, location, and sales of each vending machine. With enough data, the machine learning algorithm will create a formula that can predict how many bottles each of your vending machines will sell on a given day of the year.
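What might such a model look like in practice? Here’s a hedged sketch using pandas and scikit-learn, with an entirely hypothetical sales table; the choice of algorithm is an assumption too, with a random forest regressor standing in:

```python
# A hedged sketch of the two-feature model described above.
# The sales data and figures are entirely hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

sales = pd.DataFrame({
    "day_of_year":  [15, 15, 196, 196, 280, 280],
    "location":     ["museum", "park", "museum", "park", "museum", "park"],
    "bottles_sold": [12, 5, 85, 60, 20, 14],
})

# One-hot encode the categorical location column.
X = pd.get_dummies(sales[["day_of_year", "location"]])
y = sales["bottles_sold"]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict mid-July sales for the museum machine.
query = pd.DataFrame({"day_of_year": [196],
                      "location_museum": [True], "location_park": [False]})
print(model.predict(query))
```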

This is a very simple machine learning model, and it sees the world through just two variables: date and location. You soon realize that its predictions are not very accurate and that it makes a lot of errors. After all, many factors can affect water consumption at any given location.

To improve the model’s performance, you start adding more variables to your data table—or features, in machine learning jargon—including columns for temperature, weather forecast, holiday, workday, school day, and others.
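A sketch of what that wider data table might look like; all column names and values here are hypothetical:

```python
# A sketch of the richer feature table described above.
import pandas as pd

sales = pd.DataFrame({
    "day_of_year":   [15, 196, 280],
    "location":      ["high_school", "park", "museum"],
    "temperature_c": [4, 31, 12],      # forecast daily high
    "is_holiday":    [0, 1, 0],
    "is_school_day": [1, 0, 0],
    "bottles_sold":  [22, 64, 18],     # the target to predict
})

# Features are everything except the target; retraining on this wider
# table lets the model pick up weather- and calendar-driven patterns.
X = pd.get_dummies(sales.drop(columns="bottles_sold"))
y = sales["bottles_sold"]
```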

As you retrain the machine learning model, patterns emerge: The vending machine at the museum sells more during the summer holidays and less during the rest of the year. The machine at the high school is busy during the academic year and idle during the summer. The vending machine at the park sells more on sunny spring and summer days. And the library machine sells more during final-exam season.

This new AI algorithm is much more flexible and more resilient to change, and it can predict sales more accurately than the simple machine learning model that was limited to date and location. With this new model, not only are you able to efficiently distribute your produced bottles across your vending machines, but you now have enough surplus to set up a new machine at the mall and another one at the cinema.

This is a very simple description, but most machine learning algorithms, including deep neural networks, basically share the same core concept: a mapping of features to outcomes. But the artificial intelligence algorithms that power the platforms of tech giants use many more features and are trained on huge amounts of data.
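Stripped to a toy, that core concept looks like this; the weights below are made up, whereas a real model learns them from data:

```python
# The core concept in miniature: a trained model is a learned function
# that maps a feature vector to an outcome. The weights are hypothetical.
import numpy as np

weights = np.array([0.1, 1.8, 6.0])  # made-up "learned" parameters

def predict(features):
    # features: e.g. [day_of_year / 365, temperature_c, is_holiday]
    return float(weights @ features)  # features -> outcome

print(predict(np.array([0.54, 30.0, 1.0])))  # a predicted bottle count
```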

For instance, the AI algorithm powering Google’s ad platform takes your browsing history, search queries, mouse hovers, pauses on ads, clicks, and dozens (or maybe hundreds) of other features to serve you ads that you are more likely to click on. Facebook’s AI uses tons of personal information about you, your friends, your browsing habits, and your past interactions to serve “engaging content” (a euphemism for stuff that keeps you glued to your News Feed so it can show more ads and maximize its revenue). Amazon uses tons of data on shopping habits to predict what else you might like to buy when you’re viewing a pair of sneakers.

How AI algorithms don’t see the world


As fascinating as today’s artificial intelligence algorithms are, they certainly don’t see or understand the world as we do. More importantly, while they can dig out correlations between variables, machine learning models don’t understand causation.

For instance, we humans can come up with a causal explanation for why the vending machine at the park sells more bottled water during warm, sunny days: People tend to go to parks when it’s warm and sunny, and that is why they buy more bottled water from the vending machine. Our AI, however, knows nothing about people and outdoor activities. Its entire world is made up of the few variables it has been trained on, so all it can find is a positive correlation between temperature and sales at the park.
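A minimal illustration of that limitation; the numbers are hypothetical:

```python
# From the data alone, the model can only see that temperature and
# park sales rise together. The figures are made up.
import numpy as np

temperature_c = np.array([5, 12, 18, 24, 30, 33])
park_sales    = np.array([3, 8, 15, 28, 52, 60])

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(temperature_c, park_sales)[0, 1]
print(f"correlation: {r:.2f}")  # strongly positive

# The statistic says nothing about *why* the two move together; the
# causal story (warm weather -> park visits -> water purchases) exists
# only in our heads, not in the model.
```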

This doesn’t pose a problem as long as nothing unusual happens. But here’s where it becomes problematic: Suppose the ceiling of the museum caves in during the tourism season, and the museum closes for maintenance. Obviously, people will stop visiting until the ceiling is repaired, and no one will purchase water from your vending machine. But according to your AI model, it is mid-July and you should be refilling the machine every day.

A ceiling collapse is not a major event, and the effect it has on your operations is minimal. But what happens when the coronavirus pandemic strikes? The museum, cinema, school, and mall are closed. And very few people dare to defy quarantine rules and go to the park. In this case, none of the predictions of your machine learning model turn out to be correct, because it knows nothing about the single factor that overrides all the features it has been trained on.
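Here’s a hedged sketch of that failure mode, with made-up numbers: a model fitted to pre-pandemic July sales keeps forecasting peak demand, because nothing in its features encodes the closures:

```python
# The decisive variable ("the museum is closed") was never among the
# model's features, so its predictions can't reflect it.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical pre-pandemic July data: day of year vs. museum sales.
day_of_year  = np.array([[180], [185], [190], [195], [200]])
bottles_sold = np.array([78, 82, 85, 88, 90])

model = LinearRegression().fit(day_of_year, bottles_sold)

# Mid-July during the lockdown: the model still predicts peak-season
# sales, because nothing in its inputs encodes the closure.
print(model.predict(np.array([[196]])))  # high 80s; actual sales: 0
```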

How humans deal with unusual events

Unlike the unfortunate incident at the museum, the coronavirus pandemic is what many call a black swan event: a very unusual incident with a huge and unpredictable impact across all sectors. And narrow AI systems, the kind we have today, are very bad at dealing with the unpredictable and unusual. Your AI is not the only one failing. Fraud detection systems, spam and content moderation systems, inventory management, automated trading, and every other machine learning model trained on our usual life patterns are breaking.

We humans, too, are confounded when faced with unusual events. But we have been blessed with intelligence that extends way beyond pattern recognition and rule matching. We have all sorts of cognitive abilities that enable us to invent and to adapt to our ever-changing world.

Back to our bottled water business. Realizing that your precious machine learning algorithm won’t help you during the coronavirus lockdown, you scrap it and rely on your own world knowledge and common sense to find a solution.

You know that people won’t stop drinking water when they stay at home. So you pivot from vending machines to selling bottled water online and delivering it to customers at their homes. Soon, the orders start coming in, and your business is booming again.

While AI failed you when the coronavirus pandemic struck, you know that it’s not useless and can still be of much help. As your online business grows, you decide to create a new machine learning model that predicts how much water each district will consume on a daily basis. Good luck!

For the moment, what we have are AI systems that can perform specific tasks in limited environments. One day, maybe, we will achieve artificial general intelligence (AGI), computer software that has the general problem-solving capabilities of the human mind. That’s the kind of AI that can innovate and quickly find solutions to pandemics and other black swan events.

Until then, as the coronavirus pandemic has highlighted, artificial intelligence will be about machines complementing human efforts, not replacing them.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here
