Google and Uber Are Building AI Systems That Doubt Themselves

Google and Uber are introducing a sense of uncertainty into their deep-learning models, so that AI programs can measure their confidence in predictions and decisions, according to MIT Technology Review.

The aim is to create AI programs that are less error-prone and make better decisions in critical scenarios that require good judgment, such as those involving self-driving cars. Beyond self-driving, Uber uses machine learning for everything from matching riders with drivers and planning routes to assigning UberPool rides and setting surge pricing.

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, a member of Google’s AI team. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

Self-doubt, probability, and uncertainty would make for smarter deep-learning programs, says Noah Goodman, a Stanford professor affiliated with Uber's AI Lab. For example, it would allow programs to recognize objects from only a few examples, which would also make it easier to develop complex deep-learning systems.
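To make the idea of a "measure of how certain it is" concrete, here is a minimal sketch of one common approach: asking an ensemble of models the same question and treating their disagreement as an uncertainty estimate. The weights and the `predict_with_uncertainty` function are illustrative assumptions, not anything from Google's or Uber's systems.

```python
import statistics

# Hypothetical stand-in for an ensemble: five simple linear models whose
# weights differ slightly, as if each were trained on a different data
# subset. These values are assumed purely for illustration.
ENSEMBLE_WEIGHTS = [0.95, 1.02, 0.98, 1.10, 0.90]

def predict_with_uncertainty(x):
    """Return the ensemble's mean prediction and its spread.

    A large spread means the models disagree, i.e. the system is
    less certain about this prediction.
    """
    preds = [w * x for w in ENSEMBLE_WEIGHTS]
    mean = statistics.mean(preds)
    spread = statistics.stdev(preds)
    return mean, spread

mean, spread = predict_with_uncertainty(10.0)
```

A system built this way can flag low-confidence predictions (high spread) and defer to a fallback, which is exactly the behavior Tran describes for a self-driving car that "doesn't know its level of uncertainty."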

To that end, Uber recently released Pyro, a probabilistic programming language that combines probabilistic modeling with deep learning, allowing developers to build prior knowledge into their models.

“In cases where you have prior knowledge you want to build into the model, probabilistic programming is especially useful,” Goodman says.
