
This article was published on November 17, 2020

Researchers developed ‘explainable’ AI to help diagnose and treat at-risk children



A pair of researchers from the Oak Ridge National Laboratory have developed an “explainable” AI system designed to aid medical professionals in diagnosing and treating children and adults who’ve experienced childhood adversity. While this is a decidedly narrow use case, the nuts and bolts behind this AI have particularly interesting implications for the machine learning field as a whole.

Plus, it represents the first real data-driven solution to the outstanding problem of empowering general medical practitioners with expert-level domain diagnostic skills – an impressive feat in itself.

Let’s start with some background. Adverse childhood experiences (ACEs) are medically relevant environmental factors whose effects on people, especially those in minority communities, have been thoroughly researched across the entire lifespan.

While the resulting symptoms and outcomes are often difficult to diagnose and predict, the most common interventions are usually easy to employ. Basically: in most cases we know what to do for people suffering from or living in adverse environmental conditions during childhood, but we often don’t have the resources to take these individuals all the way through the diagnosis-to-treatment pipeline.

Enter Nariman Ammar and Arash Shaban-Nejad, two medical researchers from the University of Tennessee’s Oak Ridge National Laboratory. Today they published a preprint paper outlining the development and testing of a novel AI framework designed to aid in the diagnosis and treatment of individuals meeting the ACEs criteria.

Unlike a broken bone, ACEs aren’t diagnosed through physical examinations. They require a caretaker or medical professional with training and expertise in childhood adversity to diagnose. And while the general gist of diagnosing these cases involves asking patients questions, it’s not as simple as going down a checklist.

Medical professionals may not suspect ACEs until the “right” questions are asked, and even then the follow-up questions are often more insightful. Depending on the particular nuances of an individual case, there could be tens of thousands of potential parameters (combinations of questions and answers) affecting the recommendations for intervention a healthcare provider may need to make.

And, perhaps worse, once interventions are made – meaning appointments are set with medical, psychiatric, or local/government agencies that can aid the patient – there’s no guarantee that the next person in the long line of healthcare and government workers a patient encounters will understand ACEs as well as the last one.

The Oak Ridge team’s work is, in itself, an intervention. It’s designed to work much like a tech support chatbot: you input patient information and it recommends and schedules interventions based on the various databases it’s trained on.

This may sound like a regular chatbot, but this AI makes a lot of inferences. It processes plain-language requests such as “my home has no heating” into inferences about childhood adversity (housing issues), then searches through what’s essentially a computer-readable version of a medical textbook on ACEs and decides on the best course of action to recommend to a medical professional.
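The paper doesn’t include code, but the two-step idea (classify a free-text statement into an ACE category, then look up candidate interventions in a machine-readable knowledge base) can be roughly sketched in Python. Everything below, from the keyword matching to names like ACE_KNOWLEDGE_BASE, is a hypothetical stand-in for the real NLU and ontology components described in the paper:

```python
# Rough sketch only: the keyword "classifier" and knowledge base below are
# hypothetical stand-ins, not the system's actual components.

# A toy, machine-readable slice of ACE domain knowledge: category -> interventions.
ACE_KNOWLEDGE_BASE = {
    "housing_insecurity": ["refer_to_housing_agency", "schedule_social_work_visit"],
    "food_insecurity": ["refer_to_food_bank", "screen_for_nutrition_support"],
}

# Very crude intent detection; a real system would use a trained NLU model.
KEYWORD_TO_CATEGORY = {
    "heating": "housing_insecurity",
    "eviction": "housing_insecurity",
    "hungry": "food_insecurity",
}

def classify_utterance(utterance: str) -> str | None:
    """Map a plain-language statement to an ACE category, if one is recognized."""
    text = utterance.lower()
    for keyword, category in KEYWORD_TO_CATEGORY.items():
        if keyword in text:
            return category
    return None

def recommend_interventions(utterance: str) -> list[str]:
    """Look up candidate interventions for the inferred ACE category."""
    category = classify_utterance(utterance)
    return ACE_KNOWLEDGE_BASE.get(category, [])

print(recommend_interventions("my home has no heating"))
# ['refer_to_housing_agency', 'schedule_social_work_visit']
```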

The Q&A isn’t a pre-scripted checklist, but instead a dynamic conversation system based on “Fulfillments” and webhooks that, according to the paper, “enable the agent to invoke external service endpoints and send dynamic responses based on user expressions as opposed to hard-coding those responses.”
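“Fulfillments” and webhooks are the standard conversational-agent pattern in which the bot posts each recognized intent to an external endpoint and lets that service compose the reply, rather than returning a canned answer. As a rough illustration only (the endpoint name and field layout are assumptions modeled on a Dialogflow-style payload, not the paper’s actual service), a minimal webhook handler could look like this:

```python
# Minimal sketch of a fulfillment webhook; route and payload fields are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json(force=True)
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"].get("parameters", {})

    # Build the response dynamically, e.g. by calling an external
    # scheduling or referral service, instead of hard-coding it.
    if intent == "report_housing_issue":
        reply = schedule_referral("housing agency", params)
    else:
        reply = "Can you tell me more about your situation?"

    return jsonify({"fulfillmentText": reply})

def schedule_referral(agency: str, params: dict) -> str:
    # Placeholder for a call to an external service endpoint.
    return f"I've requested an appointment with the {agency} for you."

if __name__ == "__main__":
    app.run(port=8080)
```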

Using its own inferences, it decides which questions to ask based on context from previously answered ones. The goal is to save time and make the process as frictionless as possible, extracting the most useful information in the fewest questions.
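As a hypothetical illustration of that context-driven questioning (the question flow and field names below are invented for the sketch), the agent can skip whole branches of questions whose prerequisites aren’t met by earlier answers:

```python
# Each question is asked only if its prerequisites match earlier answers,
# so the agent follows relevant branches instead of a fixed checklist.
QUESTION_FLOW = [
    {"id": "housing_status", "text": "Do you have stable housing?", "requires": {}},
    {"id": "heating", "text": "Does your home have working heat?", "requires": {"housing_status": "yes"}},
    {"id": "shelter_access", "text": "Do you have somewhere safe to stay tonight?", "requires": {"housing_status": "no"}},
]

def next_question(answers: dict) -> dict | None:
    """Pick the first unanswered question whose prerequisites match prior answers."""
    for q in QUESTION_FLOW:
        if q["id"] in answers:
            continue
        if all(answers.get(k) == v for k, v in q["requires"].items()):
            return q
    return None

print(next_question({"housing_status": "no"})["text"])
# Do you have somewhere safe to stay tonight?
```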

Coupled with its scheduling abilities, this could end up being a one-stop shop for helping people who might otherwise continue living in an environment that could cause permanent, lifelong damage to their health and well-being.

The best part about this AI system is that it’s fully explainable. It converts those fulfillments and webhooks into actionable items by attaching them to the relevant snippets of data it used to reach its end results. This, according to the research, allows for an open-box, fully traceable system that – barring any eventual UI and connectivity issues – should be usable by anyone.
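A minimal sketch of what that traceability could look like in practice (the data structure is illustrative, not the paper’s schema): each recommendation carries both the patient statement that triggered it and the knowledge-base rule that justified it, so a clinician can follow the reasoning back to its source.

```python
# Illustrative "open-box" provenance: every recommendation keeps a pointer
# to the answer and the rule that produced it.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "refer_to_housing_agency"
    triggered_by: str    # the patient answer that led here
    evidence: str        # the knowledge-base rule that justified it

def explain(recommendations: list[Recommendation]) -> None:
    for rec in recommendations:
        print(f"Recommend {rec.action}")
        print(f"  because the patient said: {rec.triggered_by!r}")
        print(f"  supported by rule: {rec.evidence}")

explain([
    Recommendation(
        action="refer_to_housing_agency",
        triggered_by="my home has no heating",
        evidence="housing_insecurity -> refer_to_housing_agency",
    )
])
```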

If this methodology can be applied to other domains – for example, making it less painful to deal with just about every other chatbot on the planet – it could be a game changer for the already booming service bot industry.

As always, keep in mind that arXiv papers are preprints that haven’t been peer-reviewed and are subject to change or retraction. You can read more about the Oak Ridge team’s new AI framework here.
