AI Systems Can Be Wrong and Not Admit It

Artificial intelligence systems are being built to help diagnose diseases, but before we can trust them with life-and-death responsibilities, AI will need to develop a very human trait: admitting mistakes.

And the truth is: they can’t do that … yet.

Today's AI is often better at producing a correct answer to a problem than at recognizing when it has made a mistake, according to researchers from the University of Cambridge and the University of Oslo.

This fundamental flaw, they report, is rooted in a math problem.

Some mathematical statements can be neither proved nor disproved. For example, the ordinary arithmetic most of us learned in school, which answers both simple and tricky questions, cannot be used to prove its own consistency, that is, to show it will never produce contradictory answers.

Perhaps we gave the right answer, perhaps we didn't; either way, the work needs checking. And verifying their own answers is something most computer algorithms still cannot do.

It is a paradox first identified by mathematicians Kurt Gödel and Alan Turing in the 1930s: some mathematical statements can never be proved, and some problems can never be decided by any algorithm.
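Turing's side of the argument can be sketched in a few lines of code. The sketch below is illustrative only: the function halts() is hypothetical, and the point of the argument is that no such function can ever actually be written.

    def halts(f, x):
        """Assumed perfect halt-checker: returns True if f(x) would
        eventually finish running. No such function can exist."""
        ...

    def paradox(f):
        if halts(f, f):   # if f(f) would halt...
            while True:   # ...loop forever instead
                pass
        # ...otherwise, stop immediately

    # Does paradox(paradox) halt? It halts exactly when halts() says
    # it doesn't -- a contradiction. So no correct halts() can exist,
    # and no general algorithm can check every algorithm's work.
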

Mathematician Stephen Smale later included the limits of intelligence, both human and artificial, in his list of 18 unsolved mathematical problems for the 21st century.

Building on this paradox, investigators led by Matthew Colbrook, PhD, of the University of Cambridge Department of Applied Mathematics and Theoretical Physics proposed a new way to classify AI's problem areas.

In the Proceedings of the National Academy of Sciences, the researchers map out situations in which AI neural networks, modeled after the human brain's network of neurons, can actually be trained to produce more reliable results.
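To see the kind of silent failure at stake, consider a toy one-layer "network." The weights and inputs below are invented purely for illustration and do not come from the paper; they show how a tiny nudge to the input can flip a model's answer while the model reports nothing unusual.

    import numpy as np

    # Invented weights for a toy linear classifier (not from the paper).
    w = np.array([20.0, -20.0])

    def classify(x):
        # Returns +1 or -1; the model offers no measure of confidence.
        return int(np.sign(w @ x))

    x = np.array([0.501, 0.500])              # classified as +1
    x_nudged = x + np.array([-0.002, 0.002])  # a nudge of 0.002 per component

    print(classify(x), classify(x_nudged))    # prints: 1 -1

The oversized weights make the output hypersensitive near the decision boundary, and nothing in the model flags the flipped answer as any less trustworthy than the first, which is the sort of unreliability the researchers' classification is meant to map.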

It is early but important groundwork for building smarter, safer AI systems.

Source

Proceedings of the National Academy of Sciences: “The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale’s 18th problem.”
