This article was published on April 4, 2022

Why we need human-centered AI

This expert believes we can create AI systems that offer both high levels of automation and human control


Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

There are two contrasting but equally disturbing images of artificial intelligence. One warns of a future in which a runaway intelligence becomes smarter than humanity, creates mass unemployment, and enslaves humans in a Matrix-like world or destroys them a la Skynet. A more contemporary image is one in which dumb AI algorithms are entrusted with sensitive decisions and cause severe harm when they go wrong.

What both visions have in common is the absence of human control. Much of the narrative surrounding AI is based on the belief that automation and human control are mutually exclusive.

An alternative view, called “human-centered AI,” aims to reduce fears of existential threats and increase benefits for users and society by putting humans at the center of AI efforts.


“A human-centered approach will reduce the out-of-control technologies, calm fears of robot-led unemployment, and give users the rewarding sense of mastery and accomplishment,” writes Ben Shneiderman, computer science professor at the University of Maryland and author of Human-Centered AI, a book that explores how AI can amplify, augment, empower, and enhance human performance.

Shneiderman believes that with the right framework, design metaphors, and governance structures, we can create AI systems that offer both high levels of automation and human control.

The HCAI framework

“Human-Centered AI” by Ben Shneiderman (book cover)

“The idea of levels of automation that range from full human control to full machine autonomy keeps alive the misguided idea that it is a zero-sum game,” Shneiderman told TechTalks. “However, through careful design, as in cellphone cameras and navigation, designers can combine high levels of automation for some tasks, while preserving high levels of human control for creative and personal preference tasks.”

To create this balance, Shneiderman suggests the Human-Centered AI (HCAI) framework, a set of guidelines that keeps humans at the center of highly automated systems. HCAI is founded on three key ideas. First, designers of AI systems should aim to increase automation in a way that amplifies human performance. Second, they must carefully examine and define situations in which full human control or full computer control is necessary. And third, they should understand and avoid the dangers of excessive human or computer control.

With AI systems becoming highly accurate at various tasks, there’s a tendency to omit features that allow humans to control and override automated decisions. The proponents of decreasing human control claim that first, humans make a lot of mistakes, and second, few users will ever learn or bother using the controls.

However, Shneiderman argues that these concerns can be addressed by designing the right user interface and experience elements for AI-powered products. In fact, he contends, experience shows that user controls to activate, operate, and override automation can make for more reliable, safe, and trustworthy systems.

“Designers who adopt the HCAI mindset will emphasize strategies for enabling diverse users to steer, operate, and control their highly automated devices, while inviting users to exercise their creativity to refine designs,” he writes in Human-Centered AI.

The balance between human and computer control

Mature technologies such as elevators, cameras, home appliances, and medical devices that have been in use for decades owe their success to finding the right balance between automation and human control.

With advances in AI driving the integration of machine learning and deep learning into applications, design paradigms are changing.

For example, previously, the graphical user interfaces of applications left very little room for user error. But today, the impressive performance of large language models sometimes creates the illusion that current AI systems can be trusted with open-ended conversations without the need for user controls. Likewise, advances in computer vision create the illusion that AI systems can perfectly classify images without the need for human intervention.

But various studies and incidents have shown that machine learning systems can fail in unexpected ways, making them unreliable in critical applications. Not every application is affected in the same way by these failures. For example, a wrong product or content recommendation might have a minor impact. But a declined loan or job application can be much more damaging, and a wrong medical decision can prove to be fatal.

Evidently, today’s applications need to make the best use of advances in machine learning without sacrificing safety and robustness.

“Finding the design principles that combine human control and computer automation is the current grand challenge, especially for life-critical tasks in transportation and medical care,” Shneiderman said.

Recent years have seen some practical developments for addressing the challenges of integrating machine learning into real-world applications. For example, explainable AI (XAI) is a growing area of research focused on developing tools that provide visibility into, and control over, how complex machine learning models make their decisions.

XAI tools can highlight areas in an image or words in a text excerpt that have contributed the most to a deep neural network’s output. Such features can be integrated into AI-powered applications such as medical imaging tools to help human experts decide whether they can trust the decisions made by their AI assistants.
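
To make this concrete, here is a minimal sketch of one common XAI technique, gradient-based saliency, which scores each input pixel by how strongly it influences the model's top prediction. It assumes PyTorch and torchvision; the choice of ResNet-18 and the function name are illustrative, not taken from the book:

```python
import torch
from torchvision import models

# Any differentiable image classifier works; ResNet-18 is just an example.
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def saliency_map(image):
    """Per-pixel importance scores for the model's top prediction.

    `image` is a 3 x H x W tensor, already normalized for the model.
    """
    x = image.unsqueeze(0).requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    # Gradient of the winning logit with respect to the input pixels
    logits[0, top_class].backward()
    # Collapse color channels: strongest gradient per pixel -> H x W heat map
    return x.grad.abs().max(dim=1).values.squeeze(0)
```

Overlaying such a map on the original image lets a radiologist, for instance, check whether the model focused on the lesion or on an irrelevant artifact.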

Even simple features such as displaying confidence scores, providing multiple output suggestions, and adding slider controls to the user interface can go a long way toward mitigating some of the challenges that current AI systems face. For example, users of recommendation systems can be given tools to understand and control what type of content they are shown, as YouTube has recently done. This can provide a much better experience than opaque algorithms that optimize content for factors that don’t necessarily contribute to users’ wellbeing.
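
In spirit, these features can be a thin presentation layer over the model's raw output. The sketch below (the function name and the 0.8 deferral threshold are assumptions for illustration) displays confidence scores for the top suggestions and flags low-confidence cases for human review:

```python
def present_prediction(labels, probs, k=3, defer_below=0.8):
    """Show the top-k suggestions with confidence scores, and flag
    low-confidence cases for a human decision instead of auto-acting."""
    ranked = sorted(zip(labels, probs), key=lambda pair: -pair[1])[:k]
    for label, p in ranked:
        print(f"  {label}: {p:.0%}")
    if ranked[0][1] < defer_below:
        print("Confidence is low -- deferring to the human operator.")

# Example: a classifier's output over three candidate decisions
present_prediction(["approve", "decline", "refer"], [0.55, 0.30, 0.15])
```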

In Human-Centered AI, Shneiderman provides guidelines covering visual design, previews of expected actions, audit trails, near-miss and failure reviews, and others that can help ensure reliability, safety, and trustworthiness. Basically, by acknowledging the limits of both human and artificial intelligence, designers and developers of automated products can find the right division of labor between humans and AI.
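
As one small illustration of those guidelines, an audit trail can be as lightweight as an append-only log that records each automated decision and what the human operator did with it, so near-misses and failures can be reviewed later. A sketch, with field names as assumptions:

```python
import json
import time

def log_decision(trail_path, inputs, model_output, confidence, operator_action):
    """Append one automated decision to an audit trail for retrospective
    review of near-misses and failures. Field names are illustrative."""
    record = {
        "timestamp": time.time(),
        "inputs": inputs,
        "model_output": model_output,
        "confidence": confidence,
        "operator_action": operator_action,  # e.g. "accepted" or "overridden"
    }
    with open(trail_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```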

“There is a lot of research to be done, but awareness that combined solutions are possible and desirable is the first step,” Shneiderman said.

Putting HCAI to practical use

In Human-Centered AI, Shneiderman provides concrete examples and frameworks to bring HCAI to real-world applications, including four design metaphors for creating safe and reliable HCAI systems:

Supertools use combinations of AI with HCAI thinking to improve the value and acceptance of products and services. Examples include giving users control elements to operate their AI-guided recommender systems, such as sliders to choose music or checkboxes to narrow e-commerce searches (a minimal sketch of this idea follows the list).

Telebots acknowledge that “computers are not people and people are not computers.” Telebots are designed to embrace these differences and create synergies that amplify the strengths of both. Instead of trying to replicate elements of human intelligence, designers of telebots leverage unique computer features, including sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors. At the same time, they provide features that enable humans to make high-level, sensitive, and critical decisions. We can see this kind of design in surgical robots, financial market software, and teleoperated robots.

The control center metaphor suggests that trustworthy autonomy requires human supervision. Control centers enable human oversight, support continuous situation awareness, and offer a clear model of what is happening and what will happen next. Control centers provide information-abundant control panels, extensive feedback for each action, and an audit trail to enable retrospective investigations. “For many applications control centers may provide more opportunities for human oversight. When rapid response necessitates autonomous activity, great care and constant review of performance will help make for safer operation,” Shneiderman writes.

The active appliance metaphor suggests that instead of chasing anthropomorphic designs, AI systems should be optimized to respond to genuine human needs. Consider ATMs, which do not look like bank tellers but are very efficient at solving user problems. Accordingly, advances in AI and robotics research should keep us on the path of solving problems in the most efficient way possible. An interesting example is Boston Dynamics, which is trying to find the right balance between scientific research and real-world applications. The company has poured much energy and many resources into overcoming the challenges of humanoid robots. At the same time, its latest commercial product, Stretch, looks nothing like a human worker but can lift and move crates and boxes more efficiently than a humanoid design could.
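
To illustrate the supertool metaphor mentioned above, here is a minimal sketch of slider-driven control over a recommender: the user's slider positions re-weight the model's relevance scores rather than replacing them. All names, fields, and weights here are illustrative assumptions:

```python
def rerank(items, slider_weights):
    """Blend each item's model relevance score with user-set slider
    weights over content attributes, keeping the user in control."""
    def adjusted(item):
        boost = sum(slider_weights.get(tag, 0.0) for tag in item["tags"])
        return item["score"] + boost
    return sorted(items, key=adjusted, reverse=True)

songs = [
    {"title": "Track A", "score": 0.9, "tags": ["upbeat"]},
    {"title": "Track B", "score": 0.7, "tags": ["calm", "instrumental"]},
]
# Slider positions: the listener nudges the feed toward calm music
for song in rerank(songs, {"calm": 0.3, "upbeat": -0.2}):
    print(song["title"])
```

The model still does the heavy lifting of scoring, but the final ordering remains visibly under the user's control.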

“HCAI thinking reveals ways to design new technologies that limit the dangers and guide business leaders in creating safety cultures in which successful products and services are the norm,” Shneiderman said. “Remember, the goals are more than commercial success; we want to promote human creativity, responsibility, sustainability, and social connectedness. Beyond that we want to increase self-efficacy, bring joy, spread compassion, and respect human dignity.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
