Neural networks power most deep learning applications, but they are poorly suited to incremental learning: when a network learns a new piece of information, it tends to overwrite what it learned before. A solution to this "catastrophic forgetting" would make neural networks far more usable in autonomous systems that operate in constantly changing environments.
Researchers at Leti, a CEA Tech institute, teamed up with cognitive neuropsychology lab LPNC, which has been developing a human memory model since the 1990s, and fellow CEA Tech institute List, which is developing an artificial neural network simulation tool called N2D2, to come up with a model that could be a game changer. The model re-learns old and new information together using two neural networks, so old information never has to be saved to an external memory, an approach that would otherwise drastically increase memory requirements.
Here's how it works: the first network is alternately presented with true examples, corresponding to the new information being learned, and "pseudo-examples" generated by the second network. These pseudo-examples represent what has already been learned and are used to "refresh" the first network's memory, so to speak. The primary advantages of the method are that it does not limit network plasticity and does not require additional memory.
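The alternation described above can be sketched in code. The snippet below is a minimal illustration of pseudo-rehearsal, not CEA's actual implementation (which uses the N2D2 tool): a tiny two-layer network learns one task, a frozen copy then plays the role of the "second network" by labelling random inputs to produce pseudo-examples, and training on a new task alternates true batches with those pseudo batches. All network sizes, tasks, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(d_in=1, d_h=16, d_out=1):
    """Tiny 1-hidden-layer network (stand-in for the article's networks)."""
    return [rng.normal(0, 0.5, (d_in, d_h)), rng.normal(0, 0.5, (d_h, d_out))]

def forward(w, x):
    h = np.tanh(x @ w[0])
    return h, h @ w[1]

def sgd_step(w, x, y, lr=0.05):
    """One gradient step on mean squared error, updating w in place."""
    h, out = forward(w, x)
    err = (out - y) / len(x)            # dL/dout
    dh = (err @ w[1].T) * (1.0 - h * h)  # backprop through tanh
    w[1] -= lr * (h.T @ err)
    w[0] -= lr * (x.T @ dh)

def pseudo_batch(frozen, n=32):
    """Pseudo-examples: random inputs labelled by the frozen 'second network',
    standing in for the training data of the old task."""
    x = rng.uniform(-1, 1, (n, 1))
    _, y = forward(frozen, x)
    return x, y

# Old task: fit sin(pi*x) on [-1, 0]. New, conflicting task on [0, 1].
xa = np.linspace(-1, 0, 64).reshape(-1, 1); ya = np.sin(np.pi * xa)
xb = np.linspace(0, 1, 64).reshape(-1, 1);  yb = -np.sin(np.pi * xb)

net = init_net()
for _ in range(3000):
    sgd_step(net, xa, ya)               # learn the old task first

frozen = [w.copy() for w in net]        # snapshot = the "second network"
naive = [w.copy() for w in net]         # will learn the new task only
rehearse = [w.copy() for w in net]      # will alternate true/pseudo batches

for _ in range(3000):
    sgd_step(naive, xb, yb)             # no rehearsal: old task is overwritten
    sgd_step(rehearse, xb, yb)          # true examples (new information)
    px, py = pseudo_batch(frozen)
    sgd_step(rehearse, px, py)          # pseudo-examples (memory refresh)

def drift(w):
    """MSE between a network and the frozen snapshot on old-task inputs."""
    return float(np.mean((forward(w, xa)[1] - forward(frozen, xa)[1]) ** 2))
```

After training, `drift(naive)` is large (catastrophic forgetting) while `drift(rehearse)` stays small: the pseudo-examples keep pulling the first network back toward its earlier behaviour without any stored copy of the old training data.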
CEA is a French government-funded technological research organisation active in four main areas: low-carbon energies, defence and security, information technologies and health technologies. A prominent player in the European Research Area, it is involved in setting up collaborative projects with many partners around the world.