Reference: PsD-DRT-23-0047



Published on 7 December 2023
Research Field: Emerging materials and processes for nanotechnologies and microelectronics
Domain: Electronics and microelectronics - Optoelectronics
Theme: Technological challenges
Sub-theme: Engineering sciences

Design of in-memory high-dimensional-computing system
Conventional von Neumann architectures struggle to handle data-intensive artificial-intelligence tasks efficiently because of the huge amounts of data that must move between physically separated computing and storage units. Computing-in-memory (CIM) architectures process and store data in the same place and can therefore be far more energy-efficient than state-of-the-art von Neumann designs. Compared with CIM systems built on other memory technologies, resistive random-access memory (RRAM)-based CIM systems can consume much less power and area when processing the same amount of data. This makes RRAM very attractive for both in-memory and neuromorphic computing applications.

In machine learning, convolutional neural networks (CNNs) are now widely used for artificial-intelligence applications thanks to their strong performance. Nevertheless, for many tasks they require large amounts of data and can be computationally very expensive and time-consuming to train, with well-known issues such as overfitting, exploding gradients, and class imbalance. Among alternative brain-inspired computing paradigms, high-dimensional computing (HDC), based on random distributed representations, offers a promising approach to learning tasks. Unlike conventional computing, HDC computes with (pseudo-)random hypervectors of dimension D. This brings significant advantages: a simple algorithm with a well-defined set of arithmetic operations, and fast, single-pass learning that can benefit from a memory-centric architecture (highly energy-efficient and fast thanks to a high degree of parallelism).
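As a rough illustration of the "well-defined set of arithmetic operations" mentioned above, the sketch below implements the three core HDC primitives (binding, bundling, similarity) with bipolar hypervectors in NumPy. The dimension D = 10,000 and the key/value record are illustrative assumptions, not taken from the project description:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # assumed hypervector dimensionality, typical for HDC

def random_hv():
    """Random bipolar hypervector with components in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiply): result is dissimilar to both inputs."""
    return a * b

def bundle(vectors):
    """Bundling (majority sum): result is similar to every input."""
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    """Normalized dot product; near 0 for unrelated hypervectors."""
    return a @ b / D

# Encode two hypothetical key-value pairs and superpose them in one hypervector.
keys = {"shape": random_hv(), "color": random_hv()}
vals = {"round": random_hv(), "red": random_hv()}
memory = bundle([bind(keys["shape"], vals["round"]),
                 bind(keys["color"], vals["red"])])

# Unbinding with the "color" key recovers a vector close to "red" only.
query = bind(memory, keys["color"])
print(similarity(query, vals["red"]))    # clearly positive
print(similarity(query, vals["round"]))  # near 0
```

Because binding is its own inverse for bipolar vectors, querying the superposed memory with a key yields a noisy copy of the associated value, which a nearest-neighbor search over item hypervectors can then identify; this single-pass encode-and-compare flow is what maps naturally onto a memory-centric (e.g. RRAM-based) architecture.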
Département Composants Silicium (LETI), Service des Composants pour le Calcul et la Connectivité, Laboratoire de Composants Mémoires
Contact: BARRAUD Sylvain, CEA DRT/DCOS//LDMC, CEA/Grenoble, 17 rue des martyrs, 38054, 04 38 78 98 45
Start date: 1/2/2023
