

Resistive memories for spike-based neuromorphic circuits

Published on 29 March 2018
Authors
Vianello E., Werner T., Bichler O., Valentian A., Molas G., Yvert B., De Salvo B., Perniola L.
Year
2017
Source-Title
2017 IEEE 9th International Memory Workshop, IMW 2017
Affiliations
CEA, LETI, Minatec Campus, Grenoble, France; CEA, LIST, Saclay, France; INSERM, UA01, France
Abstract
In the last decade, machine learning algorithms have demonstrated unprecedented performance on many real-world detection and classification tasks, for example in image or speech recognition. Despite these advances, some deficits remain. First, these algorithms require significant memory access, which rules out implementations on standard platforms (e.g. GPUs, FPGAs) for embedded applications. Second, most machine learning algorithms need to be trained with huge data sets (supervised learning). Resistive memories (RRAM) have been shown to be a promising candidate for overcoming both of these constraints. RRAM arrays can act as a dot-product accelerator, which is one of the main building blocks in neuromorphic computing systems. This approach could provide improvements in power and speed with respect to GPU-based networks. Moreover, RRAM devices are promising candidates to emulate synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the basis of learning and memory in the brain. Neural systems exhibit various types and time scales of plasticity; synaptic modifications can last anywhere from seconds to days or months. In this work we propose an architecture that implements both short-term and long-term plasticity rules (STP and LTP) using RRAM arrays. We show the benefits of utilizing both kinds of plasticity with two different applications: visual pattern extraction and decoding of neural signals. LTP allows the neural network to learn patterns without a training data set (unsupervised learning), and STP makes the learning process very robust against environmental noise. © 2017 IEEE.
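To make the dot-product claim concrete, the sketch below shows the operation an RRAM crossbar computes in the analog domain: with read voltages applied to the rows and one programmable conductance per device, Ohm's law per device and Kirchhoff's current law per column yield the column currents as a vector-matrix product. This is a minimal NumPy illustration; the array size, read voltage, and conductance range are assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Minimal sketch of an RRAM crossbar used as a dot-product accelerator.
# Sizes, voltages, and the conductance range are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_out = 16, 4
G = rng.uniform(1e-6, 1e-4, size=(n_in, n_out))  # device conductances (S)

def crossbar_currents(V, G):
    """Column output currents of a crossbar.

    Each device obeys Ohm's law (I = G_ij * V_i), and Kirchhoff's current
    law sums currents along a column: I_j = sum_i V_i * G_ij, i.e. a
    vector-matrix product computed in the analog domain.
    """
    return V @ G

# Spike-coded input: active rows carry a 0.1 V read pulse.
spikes = (rng.random(n_in) < 0.3).astype(float)
I = crossbar_currents(0.1 * spikes, G)  # column currents (A)
print(I)
```

In a spiking front end, each column current would charge an integrate-and-fire neuron rather than be read out directly; the key point is that the multiply-accumulate happens in the memory array itself, avoiding the memory traffic of a GPU implementation.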
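The combination of volatile and non-volatile plasticity can likewise be sketched in a few lines. The rule below is a generic, simplified STDP-style update in which a decaying short-term state gates the size of long-term conductance changes; it is a hypothetical illustration of the STP/LTP interplay, not the paper's exact mechanism, and all names and parameters (on_output_spike, ltp_step, facilitation, and so on) are invented for this sketch.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed programmable conductance window (S)

def on_output_spike(pre_active, j, G, stp,
                    ltp_step=2e-6, ltd_step=5e-7,
                    facilitation=1.5, decay=0.9):
    """Apply plasticity when output neuron j fires.

    pre_active: (n_in,) boolean array, True for inputs that spiked recently.
    G:          (n_in, n_out) non-volatile conductances (long-term weights).
    stp:        (n_in,) volatile facilitation factors (short-term state).
    """
    # LTP/LTD: potentiate devices whose input was active at the output
    # spike, scaled by the short-term state, and depress the rest. This
    # unsupervised update is what lets the network extract patterns
    # without a labeled training set.
    G[pre_active, j] = np.minimum(
        G[pre_active, j] + ltp_step * stp[pre_active], G_MAX)
    G[~pre_active, j] = np.maximum(G[~pre_active, j] - ltd_step, G_MIN)
    # STP: facilitate recently active inputs, then let every state decay.
    # Inputs that fire only sporadically (noise) never build up a large
    # short-term factor, so their long-term updates stay small.
    stp[pre_active] *= facilitation
    stp *= decay

# Usage: random weights and one plasticity event on output neuron 2.
rng = np.random.default_rng(1)
n_in, n_out = 16, 4
G = rng.uniform(G_MIN, G_MAX, size=(n_in, n_out))
stp = np.ones(n_in)
pre = rng.random(n_in) < 0.3
on_output_spike(pre, j=2, G=G, stp=stp)
```

Under this kind of rule, noise robustness falls out of the gating: an input line that fires only occasionally never accumulates a large short-term factor, so its long-term weight barely moves.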
Author-Keywords
Artificial synapses, Component: RRAM, Long-Term plasticity, Short-Term plasticity, Spiking neural networks, Unsupervised learning
Index-Keywords
Education, Learning systems, Memory architecture, Neural networks, Program processors, Random access storage, RRAM, Speech recognition, Timing circuits, Unsupervised learning, Artificial synapses, Embedded application, Neuromorphic circuits, Neuromorphic computing, Resistive memory (rram), Short term plasticity, Spiking neural networks, Synaptic modification, Learning algorithms
ISSN 
