Charles Stanhope

Charles Stanhope at

"Neural networks are turning up everywhere these days, including in safety-critical systems, such as autonomous driving and flight control systems. When these systems fail, human lives are at risk. But it’s hard to provide formal guarantees about the behavior of neural networks — how can we know for sure that they won’t steer us the wrong way?" from Proving that safety-critical neural networks do what they're supposed to.

clacke@libranet.de, Christopher Allan Webber, Tyng-Ruey Chuang, Mike Linksvayer like this.

Sarah Elkins, clacke@libranet.de and 5 others shared this.

Part 2 of Proving that safety-critical neural networks do what they're supposed to.

Charles Stanhope at 2017-06-01T23:59:34Z

Sarah Elkins likes this.

"If you think you have a network that we should be trying to verify properties of and that you think our solver (or the next version of it) should be able to handle, please get in touch!"

Sarah Elkins at 2017-06-04T20:27:28Z

(my quote above is from Part 2)

Sarah Elkins at 2017-06-04T20:30:53Z