Finally, a peek inside the ‘black box’ of machine learning systems

Neural networks are machine learning systems that learn to solve problems without being explicitly programmed. A new mathematical approach seeks to error-check their logic.

A new way to verify the accuracy of computerized neural networks treats their structures like the branches of a tree. | iStock/trekandshoot

Many astounding feats of computer intelligence, from automated language translation to self-driving cars, are based on neural networks: machine learning systems that figure out how to solve problems with minimal human guidance. But that makes the inner workings of neural networks like a black box, opaque even to the engineers who initiate the machine learning process.

“If you look at the neural network, there won’t be any logical flow that a human can understand, which is very different from traditional software,” explains Guy Katz, a postdoctoral research fellow in computer science at Stanford.

That opacity can be worrisome when it comes to using neural networks in safety-critical applications, like preventing aircraft collisions. Running hundreds of thousands of successful simulations isn’t enough to inspire full confidence, because even one system failure in a million can spell disaster.

“We know we can put in some inputs and get some outputs, and the networks seem to work wondrously well,” says Clark Barrett, an associate professor (research) of computer science at Stanford, “but how do we ensure that these magical boxes work for every given set of circumstances?”

To answer this question, researchers have devised a new tool that mathematically tests the validity of neural networks. The tool takes advantage of the fact that, in a simplified sense, the set of possible configurations of a neural network resembles the branches of a tree. The tree as a whole represents all the possible ways that the network can solve problems. But just as not every branch on a tree bears fruit, not every branch in this tree of possibilities is used in practice.

As Katz explains, neural networks solve problems by relying on a type of switch called an activation function. Depending on the input to the network, the switch is positively or negatively activated. A positive activation takes the network down a particular branch in the tree of possibilities, while a negative activation takes the network down a different branch. This activation process is repeated, over and over, through each layer in the network.
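To make the switch picture concrete, the short sketch below shows, in plain Python rather than the researchers' code, how the common ReLU (rectified linear unit) activation behaves as that switch: a positive value passes through unchanged, a negative value is clamped to zero, and every such switch in the network is one more branch point in the tree.

```python
# A minimal sketch of the "switch" idea, assuming ReLU activations.
# This is illustrative Python, not the researchers' verification code.

def relu(x):
    """The switch: positive inputs pass through, negative inputs become zero."""
    return x if x > 0.0 else 0.0

def layer(inputs, weights, biases):
    """One fully connected layer; each node's ReLU is one branch point."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        pre_activation = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(relu(pre_activation))  # positive or negative phase chosen here
    return outputs

# Two nodes, so already 2 * 2 = 4 possible activation patterns.
print(layer([2.0, -1.0], [[0.5, 0.3], [-0.4, 0.8]], [0.1, 0.0]))  # [0.8, 0.0]
```

In the example call, the first switch fires positively and the second negatively; a different input could flip either one, sending the network down a different branch.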

The problem is the sheer number of branches in this tree of possibilities. For example, a 6-layer network with 50 nodes per layer has 300 nodes. Yet those 300 nodes generate 2³⁰⁰ possibilities – more than there are atoms in the universe. Current error-checking tools rely on a technique called linear programming (LP) that was invented in the 1940s. LP techniques must follow each and every branch in the tree of possibilities – even though vast sections may in fact be dead branches, never actually used in practice.
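The arithmetic behind that comparison is easy to check. The snippet below is only an illustration, counting the activation patterns and comparing the total to the commonly cited order-of-magnitude estimate of 10⁸⁰ atoms in the observable universe.

```python
# Counting branches: 300 two-way switches give 2**300 activation patterns.
branches = 2 ** 300
atoms_in_universe = 10 ** 80            # rough order-of-magnitude estimate

print(f"{branches:.3e}")                # about 2.037e+90
print(branches > atoms_in_universe)     # True
```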

The computer scientists worked with Mykel Kochenderfer, an assistant professor of aeronautics and astronautics at Stanford. His team has been developing neural networks as part of a next-generation air traffic control system.

Together the Stanford researchers have created a new error-checking tool. Essentially, the tool explores possible network configurations, periodically detecting and pruning dead branches, so that only a small portion of the tree is fully explored. The time savings are huge. When they applied their tool to Kochenderfer’s 300-node network, it only had to evaluate 2²⁰ cases, or about a million possibilities. “A million is not a tough number for a computer to deal with,” Barrett points out. “That’s a huge reduction in the number of cases.”
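The actual procedure in the researchers' paper is more involved, but the pruning idea can be sketched in a few lines: before splitting on a node's switch, check whether simple bounds on that node already force it one way; only genuinely undecided switches create new branches. The node names and bounds below are made up for illustration.

```python
# A hedged sketch of the pruning idea, not the actual Reluplex algorithm.
# If bounds on a node's value already force its switch one way, that branch
# point disappears and the "dead" alternative is never explored.

def count_live_branches(nodes, bounds):
    """Count the branches that actually need exploring, skipping forced switches."""
    if not nodes:
        return 1                                 # one fully determined branch
    node, rest = nodes[0], nodes[1:]
    lower, upper = bounds[node]
    if lower >= 0.0 or upper <= 0.0:             # switch is forced: no split here
        return count_live_branches(rest, bounds)
    # genuinely undecided: both the positive and the negative phase must be explored
    return 2 * count_live_branches(rest, bounds)

# Toy example: of three nodes, only n2 can go either way, so 2 branches remain.
bounds = {"n1": (0.2, 1.0), "n2": (-1.0, 1.0), "n3": (-0.5, -0.1)}
print(count_live_branches(["n1", "n2", "n3"], bounds))   # prints 2
```

If reasoning of this sort settles all but 20 of the 300 switches, only 2²⁰ branches – about a million – remain, the scale of reduction the researchers report.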

Users of the new tool can ask specific questions about the properties of the system the network is used for. For example, for the aircraft-collision system, one question would be: If there’s an intruder aircraft coming from the right, is it possible that the system won’t alert you? If the tool comes back with a No (and only then), the system is guaranteed to be safe for that property. The researchers were able to prove several of these properties, demonstrating that the neural network behaves as expected in various situations. They will be presenting this work later this year at the International Conference on Computer-Aided Verification (CAV).
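The interface sketched below is purely hypothetical and is meant only to show the shape of such a query: the property is phrased as a search for a counterexample inside an input region, and a “none found” answer constitutes the proof. None of the names or ranges correspond to the real Reluplex interface or the real collision-avoidance inputs.

```python
# Hypothetical sketch of posing the property "an intruder approaching from
# the right always triggers an alert"; all names and ranges are illustrative.

# Inputs describing "intruder coming from the right" (made-up ranges).
input_region = {
    "intruder_bearing_deg": (0.0, 90.0),
    "intruder_range_m": (100.0, 3000.0),
}

# The bad behavior we hope is impossible: the network stays silent.
bad_output = "advisory == CLEAR_OF_CONFLICT"

# counterexample = verifier.find_violation(network, input_region, bad_output)
# if counterexample is None:
#     print("No such input exists: the property holds over the whole region.")
# else:
#     print("Unsafe input found:", counterexample)
```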

The researchers say that their tool, called Reluplex, works on much larger networks than existing alternatives, but it’s still not capable of testing networks with millions of nodes. And Reluplex itself still needs to be independently verified. Nonetheless, the researchers anticipate many possible uses. For example, their tool can help to simplify neural networks and to make networks more robust against “adversarial cases,” or malicious attempts to fool neural networks. “You can use this tool to prove properties of a network – but the tool is very general,” Barrett says.