RRAM-based analog computing system rapidly solves matrix equations with high precision

Conceptual diagram of our high-precision analog matrix inversion solver. Credit: Zhong Sun, Peking University.

Analog computers are systems that perform computations by manipulating physical quantities, such as electrical currents, that map directly onto mathematical variables, rather than representing information with discrete binary values (i.e., 0 or 1), as digital computers do.

While analog computing systems can perform some computations rapidly and efficiently, they are known to be susceptible to noise (i.e., background or external interference) and tend to be less precise than digital devices.

Researchers at Peking University and the Beijing Advanced Innovation Center for Integrated Circuits have developed a scalable analog computing device that can solve so-called matrix equations with remarkable precision. This new system, introduced in a paper published in Nature Electronics, was built using tiny non-volatile memory devices known as resistive random-access memory (RRAM) chips.

“I have been working on analog computing since 2017,” Zhong Sun, assistant professor at Peking University and senior author of the paper, told Tech Xplore.

“We refer to our approach as modern analog computing, as it focuses on solving matrix equations—rather than differential equations as in traditional analog computing—using nonvolatile resistive memory arrays instead of conventional CMOS circuits.”

Over the past decade, Sun and his colleagues developed a wide range of analog computing systems. Most of these systems, however, were found to be significantly less precise than digital computers in performing desired operations, which limited their potential for real-world applications.

“Around 2022, we began addressing this issue directly, aiming to achieve high-precision analog computing comparable to modern digital systems,” said Sun.

“In our recent paper, we demonstrate fully analog matrix equation solving with 24-bit fixed-point precision (comparable to FP32) by combining a low-precision matrix inversion circuit (first designed in 2019) with high-precision matrix–vector multiplication using bit slicing across multiple resistive memory arrays.”
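The article does not include code, but the bit-slicing idea can be illustrated with a short numerical sketch. The function below is purely hypothetical: it assumes non-negative fixed-point matrix entries, splits them into low-precision slices (as separate RRAM arrays with only a few conductance levels might store them), and recombines the partial products with digital shift-and-add; analog non-idealities are ignored.

```python
import numpy as np

def bit_sliced_matvec(A_fixed, x, n_bits=8, slice_bits=2):
    """Illustrative bit-sliced matrix-vector product.

    A_fixed holds non-negative integers representable with n_bits bits
    (a stand-in for values programmed into RRAM arrays). Each
    slice_bits-wide slice of A is treated as its own low-precision array;
    the partial products are then shifted and summed digitally.
    """
    n_slices = n_bits // slice_bits
    y = np.zeros(x.shape, dtype=np.int64)
    for s in range(n_slices):
        # One low-precision slice of the matrix (what a single array with
        # limited conductance levels could hold).
        A_slice = (A_fixed >> (s * slice_bits)) & ((1 << slice_bits) - 1)
        # Analog arrays would compute this product via Ohm's and
        # Kirchhoff's laws; here we simply multiply numerically.
        partial = A_slice @ x
        # Shift-and-add recombines the slices into a high-precision result.
        y += partial << (s * slice_bits)
    return y

# Example: an 8-bit matrix split into four 2-bit slices reproduces the
# exact full-precision product.
rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(4, 4))
x = rng.integers(0, 16, size=4)
assert np.array_equal(bit_sliced_matvec(A, x), A @ x)
```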

The new analog matrix equation solver introduced by the team builds on a circuit developed by Sun and other researchers in 2019, when he was a post-doctoral researcher at Politecnico di Milano. While that circuit can solve matrix equations of the form Ax = b in a single step, it was found to be less precise than digital systems.

“As part of our new study, we combined this low-precision solver with high-precision matrix-vector multiplication using a conventional bit-slicing technique, enabling iterative refinement of the solution,” explained Sun.

“In each iteration, the low-precision inversion circuit provides an approximate result, and the high-precision operation refines it by indicating the correction direction and magnitude. This hybrid approach converges rapidly—significantly faster than conventional gradient-descent-based algorithms.”
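The correction loop Sun describes is, in spirit, classical iterative refinement: a cheap, imprecise solve proposes an update, and a high-precision residual steers the correction. The following sketch models that behavior numerically; the analog inversion circuit is stood in for by a solve against a randomly perturbed copy of A, and the noise level and stopping rule are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def analog_style_iterative_refinement(A, b, noise=1e-2, tol=1e-7, max_iters=50):
    """Minimal sketch of iterative refinement with a low-precision solver.

    A_noisy stands in for the analog inversion circuit: it only ever
    returns an approximate solution. The residual b - A @ x, evaluated at
    high precision (float64 here), indicates the direction and magnitude
    of the next correction.
    """
    rng = np.random.default_rng(1)
    # Model the imprecise analog solver as an exact solve with a perturbed matrix.
    A_noisy = A * (1.0 + noise * rng.standard_normal(A.shape))
    x = np.zeros_like(b)
    for k in range(max_iters):
        r = b - A @ x                      # high-precision residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        dx = np.linalg.solve(A_noisy, r)   # low-precision correction step
        x = x + dx
    return x, k

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = analog_style_iterative_refinement(A, b)
print(iters, np.allclose(A @ x, b, atol=1e-6))
```

Because each correction is applied against the true residual, the error shrinks geometrically as long as the imprecise solver is roughly right, which is why such hybrid schemes converge in a handful of iterations.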

To demonstrate the scalability of their analog computing method, the researchers fabricated an 8×8 array-based circuit and tested its ability to solve various matrix equations. They found that the circuit could solve 16×16 matrix equations and could then be extended progressively to larger problems (e.g., 32×32 matrix equations).
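The article does not spell out how a fixed 8×8 array tackles larger problems. Purely as an illustration of the general idea, the sketch below uses textbook block elimination (a Schur complement) so that only 8×8 solves are needed for a 16×16 system; the partitioning and function names are assumptions, not the authors' on-chip scheme.

```python
import numpy as np

def solve_by_2x2_blocks(A, b, block=8):
    """Illustrative block elimination for a (2*block) x (2*block) system.

    Only block x block solves are required, mimicking a setting where the
    hardware handles a fixed array size (e.g., 8x8). This is a generic
    textbook decomposition, not necessarily the method used on chip.
    """
    n = block
    A11, A12 = A[:n, :n], A[:n, n:]
    A21, A22 = A[n:, :n], A[n:, n:]
    b1, b2 = b[:n], b[n:]
    # Each np.linalg.solve call below involves only an n x n system,
    # i.e. something a fixed-size analog inversion block could handle.
    Y = np.linalg.solve(A11, A12)          # A11^{-1} A12
    y1 = np.linalg.solve(A11, b1)          # A11^{-1} b1
    S = A22 - A21 @ Y                      # Schur complement of A11
    x2 = np.linalg.solve(S, b2 - A21 @ y1)
    x1 = y1 - Y @ x2
    return np.concatenate([x1, x2])

rng = np.random.default_rng(2)
A = rng.standard_normal((16, 16)) + 16 * np.eye(16)  # well-conditioned 16x16
b = rng.standard_normal(16)
x = solve_by_2x2_blocks(A, b, block=8)
print(np.allclose(A @ x, b))
```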

The matrix equation solver they developed could be improved further and might inspire the development of other precise analog computing systems. In the future, it could prove useful for advancing various technologies, ranging from wireless communications to artificial intelligence (AI).

“The most notable contribution is our demonstration that fully analog matrix computing can achieve high precision comparable to floating-point digital systems, while also addressing scalability,” added Sun.

“Our next goal is to scale up the system by building larger array-based circuits and integrating all components on chip, embedding both matrix inversion and matrix-vector multiplication functionalities in a single, chip-level platform.”

Written by Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.

More information:
Pushen Zuo et al, Precise and scalable analogue matrix equation solving using resistive random-access memory chips, Nature Electronics (2025). DOI: 10.1038/s41928-025-01477-0.

© 2025 Science X Network

Citation:
RRAM-based analog computing system rapidly solves matrix equations with high precision (2025, October 30)
retrieved 31 October 2025
from https://techxplore.com/news/2025-10-rram-based-analog-rapidly-matrix.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
