
This article was published on February 19, 2021

AI’s not all bad — here are 4 good things it may do by 2030


Image by: Pixabay

For decades, artificial intelligence has been depicted as a sinister force in science fiction. Think of HAL 9000, the main antagonist in Arthur C. Clarke’s Space Odyssey series. But while applications of AI and machine learning are indeed sophisticated and carry the potential to be dangerous, my own view is that over the course of this decade, the most frequent encounters people have with these technologies will seem both ordinary and positive. There is, however, one important area of algorithmic use that will require real work.

First, the benign uses. I am thinking here of areas in which prototypes already exist and that are likely to become normal by the end of this decade: conversational commerce, home technical support, and autonomous vehicles. A fourth, institutional decision-making, has few satisfactory prototypes at this time and will be harder to get right.

Conversational commerce

This refers to voice-driven sales activity in which the natural voice is the customer’s, interacting with an AI-driven bot voice at the vendor’s end. It is different from today’s e-commerce pattern, where the customer goes through a sequence of steps: visiting the vendor’s website, reviewing a series of pictures, entering their choice, keying in delivery instructions, providing credit card information, and then confirming the purchase. Instead, the customer would start by either visiting the website or talking to their smart speaker. A bot would greet the person and ask how it could help, drawing on knowledge of previous searches and purchases. All of it would take place in natural language. Over time, the AI bot could even initiate contact, offering suggestions for gifts, reorders, or special deals. I expect that half of all commerce will shift to voice technology by the middle to late part of the decade.
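To make that flow concrete, here is a minimal sketch, in Python, of what a single turn of such an exchange might look like once the customer’s speech has been transcribed. The data model, function names, and matching logic are illustrative assumptions, not a description of any vendor’s actual system.

```python
# Illustrative sketch of one turn of a conversational-commerce exchange.
# The purchase-history model and matching logic are hypothetical.

from dataclasses import dataclass

@dataclass
class PastPurchase:
    item: str
    category: str

def suggest_reply(utterance: str, history: list[PastPurchase]) -> str:
    """Turn a transcribed customer utterance into a bot reply,
    using prior purchases to personalize the suggestion."""
    text = utterance.lower()
    # Naive intent check: is the customer asking to reorder something?
    if "reorder" in text or "again" in text:
        if history:
            return f"Would you like me to reorder your usual {history[-1].item}?"
        return "What would you like to order?"
    # Otherwise, look for a past category the request seems related to.
    for purchase in history:
        if purchase.category in text:
            return (f"Last time you bought {purchase.item}. "
                    f"Shall I show similar {purchase.category} options?")
    return "Sure, I can help with that. Can you tell me a bit more?"

if __name__ == "__main__":
    history = [PastPurchase("espresso beans", "coffee")]
    print(suggest_reply("I need more coffee", history))
```

A production system would replace the keyword matching with a trained intent model, but the shape of the exchange, a request interpreted in light of past purchases, stays the same.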


Home technical support

Today, seeking help with a home appliance issue typically begins with a call to the manufacturer’s customer service desk or a local service center. The customer describes the problem, a technician is dispatched to the home, and the problem is addressed on site. Depending on the issue, it can take days to resolve. Within the next few years, however, that initial call will be answered by a 24/7 bot. You will be directed to point your phone’s camera toward the model number tag, the control settings, the installation details, and the problem itself. You will be asked a series of questions to narrow the diagnosis and identify replacement parts. You will then be shown a tutorial video, enhanced by augmented reality, enabling you to do much of the servicing yourself. Should that fail, your call will be routed to a human technician, whose advice will also be absorbed by the AI system and used to improve future service calls.
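As a rough illustration of the diagnostic step, here is a small Python sketch of how a bot might map a reported symptom and a follow-up answer to a suggested fix or an escalation. The symptom tree, part numbers, and messages are hypothetical placeholders, not any real manufacturer’s support flow.

```python
# Illustrative sketch of a guided appliance-diagnosis flow.
# The symptom tree and part numbers are hypothetical placeholders.

DIAGNOSIS_TREE = {
    "won't start": {
        "display is blank": ("replace control board", "PART-0042"),
        "display shows error E3": ("replace door latch", "PART-0117"),
    },
    "leaking water": {
        "leak at the front": ("replace door gasket", "PART-0203"),
        "leak underneath": ("escalate to human technician", None),
    },
}

def diagnose(symptom: str, detail: str) -> str:
    """Map a reported symptom and a follow-up detail to a next step."""
    options = DIAGNOSIS_TREE.get(symptom, {})
    step, part = options.get(detail, ("escalate to human technician", None))
    if part:
        return f"Suggested fix: {step} (order {part}); tutorial video to follow."
    return f"Next step: {step}."

if __name__ == "__main__":
    print(diagnose("won't start", "display shows error E3"))
```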

Autonomous vehicles

Fully autonomous cars and trucks are already in the pipeline, but human intervention is still necessary for their safe operation. By the end of this decade, that will no longer be the case. Self-driving cars and trucks will have learned to respond appropriately to problematic roadway situations, particularly those involving construction, road hazards, hand signals, and reckless drivers. That will free commuters to do other things while underway, alleviate commercial driver shortages, and change the landscape of product delivery. Their on-demand capabilities are also likely to affect patterns of private vehicle ownership.

Institutional decision-making

The most challenging applications of AI are not those embedded in digital devices; they are the ones embedded in the policymaking machinery of public and private institutions, where they are used to make decisions about human services: getting a loan, securing insurance, setting interest rates, eligibility for government benefits, criminal sentencing, suitability for bail, the likelihood of success at work, and qualification to receive healthcare, among many others. Yet private developers guard those algorithms jealously, and government agencies rarely divulge how theirs work. Moreover, since algorithms change constantly as more data is ingested, even experts struggle to fully understand them, much less defend them in court. Most damaging of all, when the datasets on which an AI system has been trained are unwittingly biased against a minority group, as some claim is the case with police data, the system can effectively automate discrimination.

Algorithms are not guided by ethics. While they can learn to model good behavior, they can just as easily learn bad behavior from data whose biases skew their conclusions. Yet the speed, comprehensiveness, and cost savings of AI technology are far too valuable to simply write off. As a result, I foresee that, during the remainder of this decade, new forms of transparency will evolve. They will enable ordinary citizens and their advocates to better understand and, where necessary, challenge flawed algorithms, providing a meaningful check against the potential for harm from defective AI systems.
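One simple form such transparency could take is an outside audit of an algorithm’s decisions. The sketch below, in Python, compares approval rates across two groups and flags a large gap, a basic disparate-impact style check. The decision records, group labels, and the 0.8 threshold are illustrative assumptions, not a reference to any specific system or regulation.

```python
# Illustrative sketch of a transparency check: compare an algorithm's
# approval rates across groups and flag a large gap.
# The records and the 0.8 threshold are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, approved?) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flagged(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag if the lowest group's rate falls below `threshold` times the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi < threshold

if __name__ == "__main__":
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 50 + [("group_b", False)] * 50)
    rates = approval_rates(sample)
    print(rates, "flagged:", disparate_impact_flagged(rates))
```

Checks of this kind do not require access to an algorithm’s inner workings, only to its decisions, which is why they are a plausible starting point for the citizen-level scrutiny described above.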

This article was originally published by Gautam Goswami on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
