Keywords: resistor network, nonlinear resistive network, deep resistive network, convex quadratic programming, block coordinate descent, self-learning machine, analog computing, in-memory computing, equilibrium propagation
TL;DR: We formulate the simulation of nonlinear resistive networks as a convex quadratic programming (QP) problem with linear inequality constraints, which we solve using an exact coordinate descent algorithm.
Abstract: Analog electrical networks are being explored as energy-efficient platforms for machine learning. In particular, resistor networks have recently gained attention for their ability to learn using local rules such as equilibrium propagation. However, simulating these networks has been challenging due to their reliance on slow circuit simulators such as SPICE. Assuming ideal circuit elements, we introduce a fast simulation approach for nonlinear resistive networks, framing the computation of their steady state as a convex quadratic programming (QP) problem with linear inequality constraints. Our algorithm significantly outperforms prior approaches, enabling the training of networks 327 times larger at speeds 160 times faster.
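To illustrate the kind of solver the abstract describes, here is a minimal sketch of exact coordinate descent on a convex QP, using box constraints as the simplest instance of linear inequality constraints. The function name, problem setup, and update rule shown are illustrative assumptions, not the paper's actual formulation of the resistive-network problem.

```python
import numpy as np

def coordinate_descent_qp(A, b, lo, hi, n_iters=100):
    """Exact coordinate descent for the box-constrained convex QP
        minimize 0.5 * x^T A x + b^T x   subject to  lo <= x <= hi,
    with A symmetric positive definite. Each coordinate update solves its
    one-dimensional subproblem in closed form, then clips the result to the
    feasible interval (the exact minimizer under a box constraint)."""
    n = len(b)
    x = np.clip(np.zeros(n), lo, hi)  # feasible starting point
    for _ in range(n_iters):
        for i in range(n):
            # Gradient contribution from all coordinates except i
            r = b[i] + A[i] @ x - A[i, i] * x[i]
            # Exact 1-D minimizer, projected onto [lo_i, hi_i]
            x[i] = np.clip(-r / A[i, i], lo[i], hi[i])
    return x

# Hypothetical example: a small coupled QP whose unconstrained minimizer
# lies inside the box, so coordinate descent converges to it.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x = coordinate_descent_qp(A, b, lo=np.zeros(2), hi=np.ones(2))
# x approaches A^{-1} @ (-b) = [1/11, 7/11]
```

Each inner update is exact rather than a gradient step, which is what makes the sweep cheap and monotonically decreasing in the objective; this is one common design for QP coordinate descent, not necessarily the block scheme used in the paper.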
Submission Number: 4