News

Software and systems engineering

Trusted AI: new advances in the formal validation of neural networks


Researchers from CEA-List, a CEA Tech institute, have trialed a new approach to the formal validation of neural networks applied to image recognition. Their goal is to improve the safety of features such as pedestrian detection.

Published on 22 September 2020

When it comes to artificial intelligence, proving that a neural network has genuinely learned to perform a task such as recognizing pedestrians is a challenge. The formal validation of neural networks applied to image recognition, essential in critical use cases, has long been a significant technical hurdle. Before a vehicle's ability to avoid all pedestrians can be verified, for instance, what constitutes a pedestrian must first be specified unambiguously.

Since a pedestrian cannot be described in purely mathematical terms, CEA-List researchers came up with the idea of using image generators (also known as simulators) to "train" neural networks. This avenue is all the more promising given that simulators are already in widespread use, particularly in the automotive industry, to compensate for the scarcity of real-world training data. The specification formalism the researchers developed places the simulator at the heart of the validation process, where it serves to formally specify the properties the neural network must satisfy. Conventional formal analysis methods, one of CEA-List's areas of expertise, can then be applied to the network to verify that it complies with them.
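
To give a concrete, simplified picture of the idea (an illustrative sketch only, not the CAMUS framework or its API), the property to verify can be stated over the simulator's low-dimensional parameters rather than over raw pixels. In the hypothetical Python example below, render_scene stands in for the simulator, pedestrian_detector for the network under analysis, and the parameter bounds are invented; a formal verification tool would prove the property over the entire continuous parameter region, whereas this sketch merely samples it.

```python
import numpy as np

def render_scene(pedestrian_x: float, brightness: float) -> np.ndarray:
    """Toy stand-in for the simulator: renders a 16x16 grayscale image whose
    'pedestrian' is a bright vertical bar at horizontal position pedestrian_x."""
    image = np.full((16, 16), 0.2 * brightness)
    column = int(round(pedestrian_x * 15))
    image[4:14, column] = brightness
    return image

def pedestrian_detector(image: np.ndarray) -> bool:
    """Toy stand-in for the network under analysis: reports a pedestrian
    whenever some image column is sufficiently bright."""
    return bool((image.max(axis=0) > 0.5).any())

# Property, expressed over simulator parameters rather than raw pixels:
# "for every scene with pedestrian_x in [0, 1] and brightness in [0.6, 1.0],
#  the detector must answer True."
# A formal analysis tool would prove this over the whole continuous region;
# the grid sampling below can only search for counterexamples.
counterexamples = [
    (x, b)
    for x in np.linspace(0.0, 1.0, 50)
    for b in np.linspace(0.6, 1.0, 20)
    if not pedestrian_detector(render_scene(x, b))
]
print("violations found:",
      counterexamples[:5] if counterexamples else "none in the sampled grid")
```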

The theoretical results were published in the proceedings of the European Conference on Artificial Intelligence (ECAI 2020) [1], while proof-of-concept testing was carried out on a scaled-down simulator designed specifically for the purpose. The next stage will be to validate the approach on a full-scale system, but this pioneering advance is already a significant, tangible step towards trusted AI.



[1] Girard-Satabin, J., Charpiat, G., Chihani, Z., and Schoenauer, M. "CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators." ECAI 2020.
