AI can’t explain its decision making and that’s a problem

Cody DeBos
Dec 10, 2019 · 5 min read


Artificial intelligence (AI) is currently revolutionizing the technology world. From cars capable of driving on their own to algorithms that can crunch through mountains of data in seconds, AI systems have a wide array of uses. They are also being utilized more frequently in important sectors like healthcare and defense to make decisions and enhance efficiency.

In fact, without AI, today’s world would look a lot different. Despite only catching on and booming into the mainstream in recent years, artificial intelligence has already had a massive impact on virtually every facet of the digital world.

However, there's one looming problem that must be solved if AI is to continue being used in high-stakes situations. Although these algorithms can make accurate decisions at remarkable speed, researchers don't really know how they arrive at them. If humans are ever to fully trust artificial intelligence, finding a way to uncover the reasoning behind its decisions will be crucial.

Black Box Algorithms

Any web developer worth their salt should be able to peer into a website’s code and interpret what is happening line-by-line. However, trying to do so with an AI algorithm is nearly impossible. Therein lies the problem with understanding the workings of artificial intelligence.

In the past decade, researchers have leaned on the so-called "black box" approach when developing AI systems. Essentially, this means they can build and test a fully functional artificial intelligence without ever understanding the internal logic it learns along the way. Allowing AI to teach itself through deep learning in this way, by simply presenting it with tremendous amounts of data and an objective to reach, has proven highly successful.

However, that method severely restricts the ability of researchers to understand why an AI system makes a certain decision. Moving forward, that will prove to be a difficult hurdle to overcome.

Levels of Understanding

Understanding how AI works even on a basic level is difficult enough. Neural networks allow artificial intelligence to loosely mimic the densely interconnected structure of the human brain: simple processing units are wired together through thousands, possibly even millions, of connections. Then, using machine learning, these networks can be trained to identify patterns that are imperceptible to the human eye.

That's what makes AI so valuable. It can spot early signs of cancer in patient medical records or flag emails sent by a terrorist far faster than any human doctor or agent could.

Yet, it doesn’t stop there. After making a decision, AI systems go on to teach themselves from the experience and adapt to make better choices in the future.

All of this is accomplished thanks to a multi-layered network of artificial neurons that get progressively more complex as you move upwards. For example, the first layer of an image recognition algorithm might only detect lines and colors. The second layer uses that information to detect textures like hair, fur, or feathers. The next layer then pulls together all of its previous conclusions to determine what the image is.
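To make that layered progression concrete, here is a minimal sketch (my own illustration, not code from any system discussed here) of a tiny image classifier in PyTorch. The layer sizes and class count are arbitrary assumptions; the point is only that the prediction emerges from stacked layers whose intermediate activations carry no human-readable rationale.

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Illustrative three-stage network: lines/colors -> textures -> object."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # First layer: small filters that respond to simple lines and color blobs.
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Second layer: combines those edges into texture-like patterns.
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Final stage: pulls the previous conclusions together into class scores.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.layer2(self.layer1(x)))

# A 32x32 RGB image goes in, one score per class comes out; nothing in between
# is expressed in terms a human could read off as an explanation.
scores = TinyImageClassifier()(torch.randn(1, 3, 32, 32))
print(scores.shape)  # torch.Size([1, 10])
```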

Although this strategy is very effective, it also contributes to the complexity of AI. Due to the astronomical number of connections within and between layers, even a simple three-layer network is nearly impossible to unravel. That leaves researchers with sound decisions and no rationale to back them up. And that makes people nervous.

Establishing Priorities

AI is currently in use across almost every sector as a tool for enhanced efficiency and safety. Yet, none of these applications feature AI systems that can provide an explanation for their decision-making.

Imagine being automatically denied a loan while the human banker sitting across from you has no better explanation than, "The algorithm says no." Likewise, how will society deal with an autonomous car making a potentially life-threatening decision on the road without being able to explain why it turned into oncoming traffic?

Yet, some experts in the computing world suggest that this isn't a problem in every instance. Adrian Weller is the program director for AI at The Alan Turing Institute. He says, "If we could be sure that a system was working reliably, without discrimination, and safely, sometimes those issues might be more important than whether we can understand exactly how it's operating."

He has a point. In situations like the analysis of medical records, a computer program will almost certainly be more accurate than a human. There are simply too many variables for the brain to account for, while an AI system can weigh them all at once, producing better outcomes and saving more lives. So long as the results are consistently reliable, that arguably matters more than understanding how the conclusions are reached.

By contrast, in other fields a rationale for an AI's decisions might well be worth the performance cost of building it in. Military applications, autonomous cars, and use by law enforcement all come to mind. These fields depend heavily on sound decision-making and carry a high propensity for false alarms.

Ultimately, these options will have to be weighed on a case-by-case basis. The developers of individual AI systems will need to sort through the legal and ethical implications of working interpretability into their network's architecture.
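What "working interpretability in" looks like varies, but as one hypothetical illustration (not something described in this article), developers often start with a model-agnostic probe such as permutation importance: shuffle one input feature at a time and measure how much the model's accuracy suffers. The sketch below assumes scikit-learn and its bundled breast cancer dataset purely for demonstration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then ask which inputs its decisions actually depend on.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: scramble one feature at a time and measure how much the
# model's test accuracy drops. A large drop means the decisions leaned on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)[:5]
for name, drop in top5:
    print(f"{name}: accuracy drop {drop:.3f}")
```

A probe like this doesn't open the black box, but it at least surfaces which inputs a decision leaned on, which is a first step toward the kind of explanation a loan applicant or regulator might expect.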

Working the Problem

Revealing the secrets of AI decision-making won't happen overnight, but researchers are working on it. In 2015, a team from Google altered an image recognition algorithm to operate in reverse.

The results were both stunning and bizarre.

Known as Deep Dream, the project produced hallucinogenic images: photos of clouds sprouted alien-like creatures, and the algorithm's rendering of a dumbbell included the arm holding it. Obviously, these results told researchers that AI isn't perfect, and neither are its intuitions. However, the experiment was telling. It helped researchers understand that AI thinks not like a human but as its own kind of entity.
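The core trick behind running a recognition network "in reverse" is roughly gradient ascent on the input image: hold the network's weights fixed and repeatedly nudge the pixels so that a chosen layer's activations grow stronger. The sketch below is a loose reconstruction of that idea, assuming PyTorch and a pretrained VGG16; the layer index and step size are arbitrary assumptions, and this is not Google's actual Deep Dream code.

```python
import torch
import torchvision.models as models

# Loose sketch of the "run it in reverse" idea behind Deep Dream-style
# visualization: optimize the image, not the network.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise (or a photo)
layer_of_interest = 20  # arbitrary mid-level layer, chosen only for illustration

for _ in range(50):
    x = image
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_of_interest:
            break
    loss = x.norm()  # "how strongly does this layer respond to the image?"
    loss.backward()
    with torch.no_grad():
        # Gradient ascent on the pixels, normalized so the step size stays stable.
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0, 1)
        image.grad.zero_()

# The final image exaggerates whatever patterns that layer has learned to detect,
# which is where the alien clouds and phantom arms come from.
```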

Still, this research is in the preliminary stages. It will likely be many years, perhaps even decades, before we uncover the “why” behind the decisions of AI.

Carlos Guestrin, a University of Washington professor, says, “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain. We’re a long way from having truly interpretable AI.”

Until that day arrives, it is perhaps better to avoid introducing AI into important areas where lives and safety hang in the balance. At the very least, developers (and consumers) should take a cautious approach with this unpredictable technology until its mysterious inner workings are finally understood.

Originally published at https://www.theburnin.com on December 10, 2019.
