RRAM Neuromorphic AI chip performance improved for edge computing

Date: 18/08/2022
A group of international researchers has improved an RRAM-based compute-in-memory (CIM) neuromorphic processor by combining several techniques: voltage-mode sensing of the RRAM weights, interleaved placement of CMOS neuron circuits with the RRAM weights, and hardware-algorithm co-optimisation, together improving the reliability and power consumption of the AI neuromorphic processor.

To go into further detail: voltage-mode sensing helps achieve high parallelism and low power consumption, because all rows and columns of the RRAM array are activated in a single computing cycle.
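As a rough numerical illustration (a Python sketch, not the chip's actual circuit; the array size, conductance range and voltage levels are assumed values), voltage-mode sensing can be pictured as every row of a crossbar being driven at once while each column settles to a conductance-weighted average of the input voltages:

import numpy as np

rng = np.random.default_rng(0)

n_rows, n_cols = 256, 256                      # crossbar dimensions (assumed)
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))  # RRAM conductances in siemens
v_in = rng.uniform(-0.2, 0.2, n_rows)          # input voltages on the rows

# Voltage-mode sensing: each column output is the voltage divider formed by
# all devices on that column, i.e. sum(G_i * V_i) / sum(G_i), computed for
# every column at once rather than sensed column by column.
v_out = (G.T @ v_in) / G.sum(axis=0)

print(v_out.shape)   # (256,) - one output per column from a single cycle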
Although they have not disclosed using three-dimensional semiconductor fabrication, they were able to place CMOS neuron circuits interleaved with the RRAM weights, in what they call a transposable neurosynaptic array architecture, and they applied non-ideality-aware model training and fine-tuning techniques.
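The non-ideality-aware training idea can be sketched as follows. This is a hedged Python illustration, not the authors' training code; the quantisation levels, noise magnitude and layer sizes are assumptions. The forward pass uses weights perturbed the way programmed RRAM conductances would be, so the model learns weights that keep their accuracy once mapped onto imperfect devices.

import numpy as np

rng = np.random.default_rng(1)

def to_device_weights(w, n_levels=16, noise_std=0.02):
    # Quantise weights to a few programmable conductance levels and add
    # write noise, mimicking RRAM non-idealities (values are assumptions).
    w_max = np.abs(w).max() + 1e-12
    step = 2 * w_max / (n_levels - 1)
    w_q = np.round(w / step) * step
    return w_q + rng.normal(0.0, noise_std * w_max, w.shape)

# One noisy forward pass through a small fully connected layer; training
# would repeatedly backpropagate through passes like this so the learned
# weights stay accurate once programmed onto real devices.
w = rng.normal(0.0, 0.1, (64, 10))
x = rng.normal(0.0, 1.0, (8, 64))
y = x @ to_device_weights(w)
print(y.shape)   # (8, 10)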

To achieve high versatility, they claim to have co-optimised every level of the chip design, from hardware-aware algorithms through architecture down to circuits and devices. Performance and accuracy were measured directly on the hardware rather than by simulating the device in software.

NeuRRAM distributes processing in parallel across 48 RRAM-CIM neurosynaptic cores to achieve high versatility and efficiency. The chip achieves data parallelism by mapping a layer of the neural network model onto multiple cores for parallel inference on multiple data. It can also exploit model parallelism by mapping different layers of the model onto different cores of the chip and running inference in a pipelined fashion.
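The two mapping styles can be pictured with a small toy example in Python (NumPy); this is an illustrative sketch, not the NeuRRAM mapping toolchain, and the layer sizes and core counts are made up. Data parallelism splits a batch across cores that hold copies of the same layer, while model parallelism places consecutive layers on different cores; on the chip those cores run concurrently, giving pipelined inference.

import numpy as np

rng = np.random.default_rng(2)
layers = [rng.normal(size=(32, 32)) for _ in range(3)]   # toy 3-layer model
batch = rng.normal(size=(6, 32))                          # 6 input vectors

# Data parallelism: the same layer weights are mapped onto two cores and
# each core processes half of the batch during the same cycle.
half_a, half_b = np.array_split(batch, 2)
out_data_parallel = np.vstack([half_a @ layers[0], half_b @ layers[0]])

# Model parallelism: each layer lives on its own core; on the chip the cores
# run concurrently, so successive inputs occupy successive pipeline stages.
# Here the core-to-core hand-off is simply evaluated in order.
def pipelined_inference(samples):
    for x in samples:
        for core_weights in layers:      # hand the activation core to core
            x = np.tanh(x @ core_weights)
        yield x

out_model_parallel = np.stack(list(pipelined_inference(batch)))
print(out_data_parallel.shape, out_model_parallel.shape)   # (6, 32) (6, 32)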

Claimed to consume very little power compared with similar chips, the NeuRRAM chip was tested on a wide range of AI workloads, such as voice recognition, image recognition and classification, and handwriting recognition and text conversion, all without connecting to the cloud. It is part of a project called Visual Cortex on Silicon. Edge devices such as wearable health monitors, smart-factory and industrial-automation sensors, and other devices that need on-device AI processing can use this chip.

Comments by researchers:
“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering.

"Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said. “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms."

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

To know more, visit: https://ucsdnews.ucsd.edu/pressrelease/Nature_bioengineering_2022

Author: Srinivasa Reddy N