An Analog Network of Resistors Promises Machine Learning Without a Processor
Researchers at the University of Pennsylvania have built an analog resistor network for machine learning, aiming at energy-efficient, processor-free computation. Supervised by an Arduino Due, the network shows promise in tasks from XOR to image classification.
Researchers from the University of Pennsylvania have developed an analog network of resistors for machine learning tasks, aiming to cut power consumption and increase efficiency compared with traditional processors. The approach is a nonlinear learning metamaterial built from resistive elements, capable of computations beyond the reach of linear systems. The network, supervised by an Arduino Due, has shown promise in tasks such as image classification, nonlinear regression, and XOR. Although it currently consumes more power than digital accelerators, the researchers expect that as the technology scales it will become more energy-efficient and eliminate the need for external memory components. The network is robust and retrainable in seconds, making it suitable for edge systems such as sensors, robotic controllers, and medical devices. The team's work has been published as a preprint on arXiv, showcasing the potential for fast, low-power computing across a range of fields.
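The mechanism behind processor-free training, as described in the group's preprint, is a contrastive local learning rule: the network is measured in a "free" state and in a "clamped" state where the outputs are nudged toward their targets, and each edge updates its own conductance from its local voltage drops alone. Below is a minimal toy sketch of that rule on a purely linear resistor network; the real device uses nonlinear transistor-based resistors and performs the updates in analog hardware, and the topology, task, and constants here are illustrative assumptions, not values from the paper.

```python
# Toy sketch of contrastive "coupled learning" on a linear resistor network.
# All network details (size, wiring, task, learning constants) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 8
edges = [(i, (i + 1) % N) for i in range(N)]          # ring keeps the graph connected
edges += [(i, j) for i in range(N) for j in range(i + 2, N) if rng.random() < 0.3]
k = rng.uniform(0.5, 1.5, size=len(edges))            # edge conductances: the learning DOF

IN, OUT, GND = (0, 1), 4, 6                           # arbitrary input/output/ground nodes

def solve(k, fixed):
    """Node voltages with `fixed` = {node: voltage} clamped; free nodes obey
    Kirchhoff's current law, i.e. the weighted-Laplacian system L @ V = 0."""
    L = np.zeros((N, N))
    for (i, j), kij in zip(edges, k):
        L[i, i] += kij; L[j, j] += kij
        L[i, j] -= kij; L[j, i] -= kij
    clamped = list(fixed)
    free = [n for n in range(N) if n not in fixed]
    b = -L[np.ix_(free, clamped)] @ np.array([fixed[n] for n in clamped])
    V = np.zeros(N)
    for n in clamped:
        V[n] = fixed[n]
    V[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return V

def drops(V):
    return np.array([V[i] - V[j] for i, j in edges])

def target(x):                        # a linear target; a linear network cannot
    return 0.3 * x[0] + 0.5 * x[1]    # do XOR, hence the nonlinear elements in hardware

eta, alpha = 0.2, 0.05                # nudge amplitude and learning rate (assumed)
for step in range(3000):
    x = rng.uniform(0.0, 1.0, size=2)
    inputs = {IN[0]: x[0], IN[1]: x[1], GND: 0.0}
    V_free = solve(k, inputs)
    y_free = V_free[OUT]
    y_clamp = y_free + eta * (target(x) - y_free)     # nudge output toward target
    V_clamp = solve(k, {**inputs, OUT: y_clamp})
    # Local contrastive rule: each edge compares its own drop in the two states.
    k += (alpha / eta) * (drops(V_free) ** 2 - drops(V_clamp) ** 2)
    k = np.clip(k, 0.05, None)                        # keep conductances positive

x = np.array([0.2, 0.9])
V = solve(k, {IN[0]: x[0], IN[1]: x[1], GND: 0.0})
print(f"learned output {V[OUT]:.3f} vs target {target(x):.3f}")
```

Because each update depends only on the voltage drops across that one edge in the two states, the rule is fully local: no gradient has to be computed or shipped around by a central processor, which is the core of the energy-efficiency argument.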
Related
Finnish startup says it can speed up any CPU by 100x
A Finnish startup, Flow Computing, introduces the Parallel Processing Unit (PPU) chip promising 100x CPU performance boost for AI and autonomous vehicles. Despite skepticism, CEO Timo Valtonen is optimistic about partnerships and industry adoption.
Researchers run high-performing LLM on the energy needed to power a lightbulb
Researchers at UC Santa Cruz developed an energy-efficient method for large language models. By using custom hardware and ternary numbers, they achieved high performance with minimal power consumption, potentially revolutionizing model power efficiency.
Researchers upend AI status quo by eliminating matrix multiplication in LLMs
Researchers innovate AI language models by eliminating matrix multiplication, enhancing efficiency. A MatMul-free method reduces power consumption, costs, and challenges the necessity of matrix multiplication in high-performing models.
Mechanical computer relies on kirigami cubes, not electronics
Researchers at North Carolina State University created a mechanical computer based on kirigami, using polymer cubes for data storage. The system offers reversible data editing and complex computing capabilities, with potential applications in encryption and data display.
Hardware FPGA DPS-8M Mainframe and FNP Project
A new project led by Dean S. Anderson aims to implement the DPS‑8/M mainframe architecture using FPGAs to run Multics OS. Progress includes FNP component implementation and transitioning software gradually. Ongoing development updates available.
What they have is a transistor network in which all the transistors are constrained to the ohmic regime, so the resistance of an individual transistor can be a nonlinear function of its inputs. Which is really cool: detuning transistors to do analog computation instead of digital.
Here’s the preprint: https://arxiv.org/abs/2311.00537
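For intuition on the ohmic-regime point: in the triode (ohmic) region of the standard square-law MOSFET model, drain current is roughly proportional to V_DS, so the device behaves like a resistor whose value is set by the gate voltage. A quick back-of-the-envelope sketch; the device parameters below are arbitrary assumptions, not taken from the paper:

```python
# Why an ohmic-regime MOSFET acts as a voltage-tunable resistor
# (standard square-law triode model; parameter values are illustrative).
import numpy as np

KP, VTH = 2e-3, 0.7   # transconductance parameter (A/V^2) and threshold (V), assumed

def triode_current(v_gs, v_ds):
    """Drain current in the triode (ohmic) region, valid while v_ds < v_gs - VTH."""
    return KP * ((v_gs - VTH) * v_ds - 0.5 * v_ds**2)

def effective_conductance(v_gs, v_ds=0.05):
    """Small-signal conductance I/V, approximately KP*(v_gs - VTH) for small v_ds."""
    return triode_current(v_gs, v_ds) / v_ds

for v_gs in np.linspace(1.0, 3.0, 5):
    print(f"V_GS = {v_gs:.1f} V  ->  R ~ {1 / effective_conductance(v_gs):.0f} ohm")
```

That gate-voltage knob is what makes each "resistor" tunable, and the residual V_DS-squared term is one source of the nonlinearity the comment refers to.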
It's not clear in the paper if this problem was addressed or if the rapid training possible meant that in practice they never had this issue.
Edit: downloadable link [2]
[1] https://ieeexplore.ieee.org/abstract/document/10323917/
[2] https://publikationen.bibliothek.kit.edu/1000161182/15110728...
https://www.damninteresting.com/on-the-origin-of-circuits/
Summary:
- A researcher took an FPGA
- used a genetic algorithm to evolve configurations that let the FPGA recognize first simple tones, then more complex audio sequences
- no clock or timer was used
- when a good solution was found, they tried to copy the FPGA configuration over to a second, identical FPGA
- that didn't work!
- the conclusion was that the genetic algorithm, lacking a timer, had found quirks in that specific FPGA unit and exploited them to improve the processing quality (a rough sketch of the evolutionary loop is below)
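For reference, a bare-bones sketch of the evolutionary loop in that story. The genome is the raw FPGA configuration bitstring (Thompson's evolved region was on the order of 1800 bits; treated here as a plain bit list), and measure_fitness is a stand-in for programming the physical chip and scoring how well its output separates the test tones. That measurement step is hardware rather than software, which is exactly why evolution could latch onto the analog quirks of one specific unit.

```python
# Minimal genetic-algorithm loop in the spirit of Thompson's experiment.
# Population size, mutation rate, and genome length are assumptions;
# measure_fitness is a placeholder for the physical bench setup.
import random

GENOME_BITS = 1800                 # order of magnitude from the experiment, assumed
POP, GENS, MUT = 50, 100, 1.0 / GENOME_BITS

def measure_fitness(genome):
    """Placeholder: load `genome` onto the FPGA, play the test tones, and
    return how strongly the chip's output distinguishes them."""
    return random.random()         # stub; the real score comes from hardware

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(g):
    return [bit ^ (random.random() < MUT) for bit in g]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(POP)]
for gen in range(GENS):
    scored = sorted(population, key=measure_fitness, reverse=True)
    elite = scored[: POP // 5]     # keep the best fifth unchanged
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(POP - len(elite))
    ]
# No clock, no timer: the only selection pressure is the measured analog
# behaviour, so evolution is free to exploit device-specific physics.
```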
Everybody promises "Machine Learning", but the machines never learn. /s