
News

Software cybersecurity

Towards neural networks that can withstand attacks


At a time when artificial intelligence is making inroads into our everyday lives, List, a CEA Tech institute, is driving advances in cybersecurity that could result in more robust neural networks. CES 2020 provided an opportunity to showcase two demonstrator systems.

Published on 28 May 2020

From autonomous vehicles to video surveillance, the potential uses for AI in our everyday lives are vast. Hackers, however, are rapidly coming up with attacks on these new applications of AI. Most attacks exploit the vulnerability of deep learning systems by disrupting the input signal (image, sound) to "trick" the AI or, in some cases, influence its decisions. List, a CEA Tech institute, develops trustworthy AI, and recently came up with some effective ways to fend off such attacks.
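The article does not describe any specific attack, but a typical example of this kind of signal perturbation is the fast gradient sign method (FGSM). The sketch below, in PyTorch, is purely illustrative: the model, inputs and step size `epsilon` are placeholders, not part of the CEA List work.

```python
import torch

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example by nudging each input value in the
    direction that most increases the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A small, often imperceptible shift of the signal can flip the decision.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```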

Specifically, the researchers intentionally introduce random modifications of the neural activations from the earliest stages of the network's design. The goal is to scramble the network during the learning phase as well as during operation, so that the machine retains only the relevant parts of the incoming information and is not fooled by an attack. For existing systems that need additional protection, an alternative is to inject the noise directly into the incoming signal. These modifications are made using an "overlayer" that mitigates or neutralizes the effects of an attack.
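The article does not give implementation details, so the following PyTorch sketch is only a rough illustration of the idea: a layer that adds random noise to activations at training time and at inference time, plus the input-level "overlayer" variant for existing models. The layer sizes and noise levels are assumptions, not CEA List's actual design.

```python
import torch
import torch.nn as nn

class NoisyActivation(nn.Module):
    """Add Gaussian noise to activations, during training and during operation."""
    def __init__(self, std=0.1):
        super().__init__()
        self.std = std

    def forward(self, x):
        # Unlike dropout, the noise stays active at inference time,
        # which keeps the decision hard to steer with a crafted input.
        return x + self.std * torch.randn_like(x)

# Hypothetical small classifier with noise injected after the hidden layer.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), NoisyActivation(std=0.1),
    nn.Linear(256, 10),
)

# "Overlayer" variant for an existing model: perturb the incoming signal itself.
protected = nn.Sequential(NoisyActivation(std=0.05), model)
```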

The slight loss of performance that occurs when the noise is introduced is offset by the fact that the system is more robust and can better withstand attacks. A demonstrator presented at CES 2020 was well received, and an article* was recently published at Neural Information Processing Systems (NeurIPS), a major scientific conference in the field of AI.

*Pinot, R., Meunier, L., Araujo, A., Kashima, H., Yger, F., Gouy-Pailler, C., and Atif, J. (2019). Theoretical evidence for adversarial robustness through randomization. In Advances in Neural Information Processing Systems 32, pp. 11838–11848.
