June 30th, 2024

An Analog Network of Resistors Promises Machine Learning Without a Processor

Researchers at the University of Pennsylvania created an analog resistor network for machine learning, targeting energy-efficient computation without a digital processor. The network, supervised by an Arduino Due, shows promise in diverse tasks.

Researchers from the University of Pennsylvania have developed an analog network of resistors for machine learning tasks, aiming to reduce power consumption and increase efficiency compared to traditional processors. The approach is a non-linear learning metamaterial built from resistive elements, able to perform computations beyond the reach of linear systems. The network, supervised by an Arduino Due, has shown promise in tasks like image classification, non-linear regression, and XOR operations. Although it currently consumes more power than digital accelerators, the researchers expect that, as the technology scales, it will become more energy-efficient and eliminate the need for external memory components. The analog network is robust, retrainable in seconds, and operates with minimal energy consumption, making it suitable for edge systems such as sensors, robotic controllers, and medical devices. The team's work has been published as a preprint on Cornell's arXiv server, showcasing the potential for fast, low-power computing in a range of fields.
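
For intuition about what "learning in a resistor network" even means, here is a minimal toy simulation. This is not the paper's method: the real hardware uses transistors as adjustable resistive elements and its own learning procedure, while the explicit gradient and all constants below are made up for illustration.

```python
import numpy as np

# Toy sketch: one free node connected to N input terminals through
# conductances g[i]. Kirchhoff's current law makes the steady-state node
# voltage a conductance-weighted average of the input voltages:
#   V = sum(g[i] * V_in[i]) / sum(g[i])
# "Learning" here means adjusting the conductances so V tracks a target.

rng = np.random.default_rng(0)
N = 4
g = rng.uniform(0.5, 1.5, N)               # conductances = learnable weights
w_true = np.array([0.1, 0.2, 0.3, 0.4])    # target mixing weights (sum to 1)

for _ in range(5000):
    V_in = rng.uniform(0.0, 1.0, N)        # random input voltages
    V = (g @ V_in) / g.sum()               # what the physics computes
    err = V - w_true @ V_in
    grad = err * (V_in - V) / g.sum()      # d(0.5*err**2)/dg for this model
    g = np.clip(g - 0.5 * grad, 1e-3, None)  # conductances stay positive

print(g / g.sum())                          # approaches w_true after training
```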

Related

Finnish startup says it can speed up any CPU by 100x

A Finnish startup, Flow Computing, has introduced the Parallel Processing Unit (PPU), a chip promising a 100x CPU performance boost for AI and autonomous vehicles. Despite skepticism, CEO Timo Valtonen is optimistic about partnerships and industry adoption.

Researchers run high-performing LLM on the energy needed to power a lightbulb

Researchers at UC Santa Cruz developed an energy-efficient method for running large language models. By using custom hardware and ternary numbers, they achieved high performance with minimal power consumption, potentially revolutionizing the power efficiency of such models.

Researchers upend AI status quo by eliminating matrix multiplication in LLMs

Researchers have upended the AI status quo by eliminating matrix multiplication from language models, improving efficiency. Their MatMul-free method reduces power consumption and cost, challenging the assumption that matrix multiplication is necessary for high-performing models.

Mechanical computer relies on kirigami cubes, not electronics

Researchers at North Carolina State University created a mechanical computer based on kirigami, using polymer cubes for data storage. The system offers reversible data editing and complex computing capabilities, with potential applications in encryption and data display.

Hardware FPGA DPS-8M Mainframe and FNP Project

A new project led by Dean S. Anderson aims to implement the DPS‑8/M mainframe architecture on FPGAs in order to run the Multics OS. Progress so far includes implementing the FNP component, with the software being transitioned gradually; development updates are posted on an ongoing basis.

14 comments
By @rsfern - 5 months
This is really cool, but I was confused by the framing as a resistor network, since I think a pure resistor network should be linear (to first order? I’m not an EE).

What they actually have is a transistor network, with all the transistors constrained to the ohmic regime, so the resistance of an individual element can be a nonlinear function of its inputs. That's really cool: it's like detuning transistors to do analog computation instead of digital.

Here’s the preprint: https://arxiv.org/abs/2311.00537
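
A quick numeric sketch of the linear-vs-nonlinear distinction described above (the constants k and v_th are arbitrary illustrative values, not figures from the paper):

```python
# A fixed resistor obeys I = G * V, so any network of fixed resistors
# computes only linear functions of its inputs: superposition holds.
def resistor_current(v, g=1e-3):
    return g * v

# A MOSFET held in the ohmic (triode) region acts like a resistor whose
# conductance is set by the gate voltage; to first order, for small V_DS:
#   I_D ~= k * (V_GS - V_th) * V_DS
# Once the gate is itself driven by a signal, the element is nonlinear.
def transistor_current(v_ds, v_gs, k=2e-4, v_th=0.6):
    return k * max(v_gs - v_th, 0.0) * v_ds

a, b = 0.3, 0.5
# Superposition holds for the resistor...
print(resistor_current(a + b), resistor_current(a) + resistor_current(b))
# ...but fails when the same signal also drives the gate.
print(transistor_current(a + b, a + b),
      transistor_current(a, a) + transistor_current(b, b))
```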

By @superlopuh - 5 months
Here's a keynote talk by Andrea Liu, the lead of the project; it's a much better resource on one of the most exciting things going on in ML right now:

https://youtu.be/7hz4cs-hGew?si=64O3Q7g-qeRQ0Td4

By @linsomniac - 5 months
A couple of years ago Veritasium did a video on analog computers, which included a segment on Mythic AI, which uses NAND flash cells, kind of "undervolted", as an analog computer to run neural networks.

https://youtu.be/GVsUOuSjvcg?si=GGsEWELZyjb0TQfG&t=898

https://mythic.ai/
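
The core trick such analog accelerators exploit is that Ohm's law and Kirchhoff's current law compute dot products physically. A minimal simulation of that idea (toy numbers, not Mythic's actual architecture):

```python
import numpy as np

# Weights live on the chip as cell conductances G; inputs arrive as
# voltages V. Ohm's law gives per-cell currents V[i] * G[i, j], and
# Kirchhoff's current law sums them on each output line:
#   I_out[j] = sum_i V[i] * G[i, j]

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1e-4, size=(8, 3))   # conductance matrix = stored weights
V = rng.uniform(0.0, 1.0, size=8)         # input voltages = activations

I_out = V @ G   # the physics performs this matrix-vector product "for free"
print(I_out)
```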

By @vessenes - 5 months
Finding analog architectures for something that is largely continuous, but currently quantized for digital circuitry (gradient descent), is pretty appealing. I’d love to see a toy network built out; I wonder how physically large this breadboard setup would have to be to get good results on MNIST, for instance.

By @quantum_state - 5 months
When the memristor came onto the scene, I thought it would be a reasonable substrate for network-based learning systems … does anyone know more about what happened after?

By @shrubble - 5 months
The traditional problem analog computers faced was that voltages could vary from run to run and thus give different results, to the point that analog computer manufacturers made their own power supplies and capacitors to extremely high tolerances.

It's not clear from the paper whether this problem was addressed, or whether the rapid retraining it allows meant that in practice they never hit this issue.

By @riedel - 5 months
IMHO this can become really cool if combined with mass customisation, e.g. by using printed electronics. My colleague, who will shortly defend his PhD, is working on this [1].

Edit: downloadable link [2]

[1] https://ieeexplore.ieee.org/abstract/document/10323917/

[2] https://publikationen.bibliothek.kit.edu/1000161182/15110728...

By @alexpotato - 5 months
This post reminded me of the following post (from 2007!):

https://www.damninteresting.com/on-the-origin-of-circuits/

Summary:

- Researcher took an FPGA

- used genetic algorithms to have the FPGA first identify tones, and then more complex audio sequences

- there was no clock or timer used

- when they found a good solution, they tried to copy over the FPGA "configuration" to another identical FPGA.

- that didn't work!

- they assumed this was because the genetic algorithm, with no timer, had found a quirk of that specific FPGA unit and exploited it to improve the processing quality

By @pmorici - 5 months
Is this similar to the Extropic approach, but with a different mechanism?

By @osigurdson - 5 months
Removing entropy from transistors is expensive: computers use just two states separated by large voltage differences. In AI, entropy isn't such a problem, as we don't care about exactly repeatable results. So why not use more of the linear, or even nonlinear, range of the transistor for this purpose?

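A quick numeric sanity check of that intuition (a toy sketch with random stand-in weights rather than a real model; the 2% noise level is an assumed device variation):

```python
import numpy as np

# Perturb a weight matrix with multiplicative noise, as an analog substrate
# might, and count how often the argmax decision actually changes.
rng = np.random.default_rng(2)
W = rng.normal(size=(10, 64))   # stand-in for trained classifier weights
x = rng.normal(size=64)         # stand-in input features

clean_decision = np.argmax(W @ x)
flips = sum(
    np.argmax((W * (1 + 0.02 * rng.normal(size=W.shape))) @ x) != clean_decision
    for _ in range(1000)
)
print(f"decision changed in {flips}/1000 noisy evaluations")
```
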
By @beedeebeedee - 5 months
There won't be any energy-efficiency improvements until they are able to make analog VLSI chips like Carver Mead's. Nice to see this idea getting more recognition; the potential has been there for a long time, but the business went digital.

By @grantmuller - 5 months
I read George Dyson’s “Analogia” with a bit of skepticism a few years ago; now, all of a sudden, it feels relevant (if you can make it past all the chapters on kayak-building).

By @klysm - 5 months
Conceptually, I expect compiling NNs to hardware will first be done at a small scale. Imagine you have a relatively simple task with a fairly small NN that is being used in a low-latency, mass-production application. If you could compile that network into a passive component array, that could be massive.

By @hulitu - 5 months
> An Analog Network of Resistors Promises Machine Learning Without a Processor

Everybody promises "Machine Learning", but the machines never learn. /s