August 4th, 2024

What Is Analog Computing?

Analog computing performs calculations with continuous physical quantities rather than the binary digits of digital computing. Researchers are revisiting it as a more energy-efficient approach, especially for AI, as computational demands and energy consumption climb.


Analog computing, which utilizes continuous physical systems to perform calculations, contrasts sharply with the prevalent digital computing that relies on binary digits (0s and 1s). Historically, analog devices like the Antikythera mechanism and slide rules have been used to model natural phenomena, demonstrating their effectiveness in handling continuous data. The differential analyzer, developed by Vannevar Bush in 1931, exemplified the peak of analog computing, capable of solving complex differential equations but requiring manual reconfiguration for different equations. Although digital computing emerged as a more user-friendly and accurate alternative, it has led to significant energy consumption, particularly with the rise of artificial intelligence, exemplified by a planned $100 billion data center by Microsoft and OpenAI that would consume about 5 gigawatts of power. In light of these challenges, researchers are revisiting analog computing as a potential solution for more energy-efficient processing, especially in AI applications, where analog circuits could model operations using electrical signals rather than traditional binary methods. This exploration of analog computing may provide a sustainable path forward in the face of growing computational demands.

- Analog computing uses continuous physical systems, unlike digital computing's binary approach.

- Historical analog devices effectively modeled natural phenomena and complex equations.

- Digital computing, while user-friendly, has led to high energy consumption, especially in AI.

- Researchers are revisiting analog computing for its potential energy efficiency in modern applications.

- The exploration of analog methods may offer sustainable solutions to current computational challenges.
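
The summary's point that analog circuits could model AI operations with electrical signals instead of binary arithmetic is easiest to see with a dot product: if weights are stored as conductances and inputs are applied as voltages, the output current is already the weighted sum, by Ohm's and Kirchhoff's laws. A minimal numerical sketch of that idea, with made-up noise figures (none of the names or values below come from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # A neural-network "layer" is mostly a dot product:
    # weights (conductances) times inputs (voltages).
    weights = rng.normal(size=(4, 8))   # conductances on a hypothetical crossbar
    inputs = rng.normal(size=8)         # input voltages

    # Digital computation: exact up to float64 rounding.
    digital_out = weights @ inputs

    # Analog computation: every device and readout carries some noise.
    device_noise = 0.01                 # assume ~1% component variation
    noisy_w = weights * (1 + rng.normal(scale=device_noise, size=weights.shape))
    noisy_x = inputs * (1 + rng.normal(scale=device_noise, size=inputs.shape))
    analog_out = noisy_w @ noisy_x

    print("digital:", np.round(digital_out, 4))
    print("analog :", np.round(analog_out, 4))
    print("error  :", np.abs(analog_out - digital_out))

The attraction is that the multiply-accumulate happens "for free" in the physics; the cost is the percent-level error visible in the last line, which is often tolerable for neural-network inference.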

10 comments
By @HarHarVeryFunny - 6 months
It's interesting that analog computers embody both meanings of "analog" - that of continuous quantities vs discrete/digital ones, and also of one thing being comparable to ("an analog of", cf analogy) something else.

The article seems to put a bit of a silly spin on an interesting subject by wanting to present analog computers as a potential alternative to digital ones, when in fact that is only rarely going to be possible.

Digital computers support programming - solving problems by describing the process by which they are to be solved - whereas analog computers instead support solving a limited class of problems by building an analog of the problem (which you could think of as "describing" the problem rather than how to solve it), either with analog electronics (a patch-panel analog computer) or by building a physical analog of it, such as the Antikythera mechanism, which models the motions of the planets.
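
To make that distinction concrete: on a differential analyzer you would solve something like x'' = -x not by writing an algorithm but by wiring two integrators into a feedback loop, and a digital program can mimic that wiring directly. A rough sketch (the equation, step size, and end time are my own illustration, not from the comment):

    # Solve x'' = -x the way a differential analyzer is patched:
    # two integrators in a feedback loop, here as simple discrete steps.
    dt = 0.001
    x, v = 1.0, 0.0          # initial position and velocity

    for _ in range(int(10.0 / dt)):
        a = -x               # the "wiring": acceleration fed back from position
        v += a * dt          # first integrator: velocity
        x += v * dt          # second integrator: position

    print(x)                 # after t = 10, close to cos(10) ≈ -0.839

The program describes the problem's structure (what feeds into what), which is essentially what the patch panel does; the digital machine just happens to step through it in time.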

The article oddly omits any mention of quantum computers which (unless I'm totally misguided!) are essentially a type of analog computer - one that is no replacement for digital computers and general problem solving since it can't be programmed, but rather only works when one can configure it to be an analog of the problem to be solved. As with any other type of analog computer, the problem is then solved by letting the system run and the dynamics play out.

By @nbingham - 6 months
The reason we ended up with digital logic is noise. Hysteresis from digital gates was the only way to make such large systems from such small devices without all of our signaling turning into a garbled mess. Analog processing has its niches, and I suspect the biggest niche will be where that noise is a feature rather than a hindrance, something like continuous-time neural networks.
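
A toy way to see why level restoration won: push a value through many stages that each add a little noise. The analog chain accumulates the noise; the digital chain snaps back to a clean 0 or 1 at every gate. (The stage count and noise level below are arbitrary assumptions.)

    import numpy as np

    rng = np.random.default_rng(1)
    stages, sigma = 1000, 0.02    # 1000 cascaded stages, 2% noise per stage (assumed)

    # Analog chain: the continuous value just picks up noise at every stage.
    analog = 0.7
    for _ in range(stages):
        analog += rng.normal(scale=sigma)

    # Digital chain: a 1-bit value is restored to a clean level at each stage.
    digital = 1.0
    for _ in range(stages):
        digital = 1.0 if digital + rng.normal(scale=sigma) > 0.5 else 0.0

    print(f"analog value drifted from 0.7 to {analog:.3f}")  # typically off by ~0.6
    print(f"digital value is still {digital}")               # still exactly 1.0

With per-stage noise that small, the digital bit essentially never flips, while the analog value's drift grows roughly with the square root of the number of stages.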
By @AlexDragusin - 6 months
The article ends flat and abruptly. I was settling in for a long, enjoyable journey in the style of Quanta Magazine - what happened?
By @adrianN - 6 months
The internet archive has this interesting educational film about mechanical computers on ships: https://archive.org/details/27794FireControlComputersPt1
By @nickpsecurity - 6 months
Here’s an example of a general purpose, analog computer:

https://museum.syssrc.com/artifact/exhibits/251/

I’ll note that there’s been plenty of work on general-purpose analog computing. Analog’s competitiveness also lives on in mixed-signal ASICs. The problem with new products, though, is that analog requires fully custom circuit design. Outside some niche uses, you can’t just synthesize it from higher-level specs the way you can digital designs with standard cells.

That puts analog at a severe disadvantage on NRE cost, especially as both design rules and mask costs increase with every node shrink.

By @_glass - 6 months
I would say an analog synthesizer is an analog computer. The inputs are, for example, voltages representing notes, and the computational output is a waveform. I prefer analog because digital synthesizers, at least for subtractive synthesis, are only a simulation of the real instrument. The downsides are not unlike those of other analog computers: noise and temperature dependence distort the output.
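
For what it's worth, the digital "simulation" being described is usually just subtractive synthesis in discrete time: generate a harmonically rich waveform, then filter it. A bare-bones sketch (sample rate, pitch, and filter coefficient are example values of mine, not anything from the comment):

    import numpy as np

    sample_rate = 44100
    freq = 220.0                      # the "control voltage": a note at 220 Hz
    t = np.arange(sample_rate) / sample_rate

    # Oscillator: a sawtooth, rich in harmonics.
    saw = 2.0 * (t * freq % 1.0) - 1.0

    # Filter: a one-pole lowpass standing in for the analog VCF.
    alpha = 0.05                      # assumed smoothing coefficient
    out = np.zeros_like(saw)
    acc = 0.0
    for i, s in enumerate(saw):
        acc += alpha * (s - acc)      # discrete-time counterpart of an RC lowpass
        out[i] = acc

    print(out[:5])                    # first few samples of the filtered waveform

The analog machine computes the same filter with an actual capacitor charging and discharging, which is exactly the "analog of the problem" framing from the comment above.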
By @sholladay - 6 months
Veritasium has a good video about analog computers. It covers how they work, what they are used for, pros and cons compared to digital, what they look like today, a bit of the history and their possible future.

https://youtu.be/GVsUOuSjvcg

By @jerf - 6 months
This is one of those things where I don't get why people go so gaga over it. The article starts with:

"But it’s not obvious why a system that operates using discrete chunks of information would be good at modeling our continuous, analog world."

OK, fine. But, if my "discrete" modelling is using 64 bits of precision, or, in decimal, about 19 digits of precision, and your analog system on a good day has about 3, how could your analog system even "tell" mine is discrete? It can't "see" that finely in the first place.

Oh, you don't like how it's excessively precise? Carrying precision through calculations is off-the-shelf numerical tech. It isn't something every programmer knows, but it's been in books for years.

Oh, by the way, it is also off-the-shelf numerical tech to estimate errors using digital computers. That's going to be a hard trick to match in your analog system. Also, digital systems offer many tradeoffs with regard to error levels and computation time. Another thing your analog system is going to have a hard time with.

"But floating point numbers have lots of problems with instability and such." Most of those problems will manifest in your analog computers, too. Many of those aren't so much about floating point per se as just doing inadvisable things with numbers in general, e.g., if you set up an analog computer to subtract two quantities that come out "close to zero" and then divide by that quantity, your analog computer is going to behave in much the same way that a digital computer would. Ye Olde Classic "add a whole bunch of small numbers together only for them all to disappear because of the large number" is also the sort of problem that you can only get because digital is good enough to reveal it in the first place; accurate adding together millions of very tiny numbers is something your analog computer almost certainly just plain can't do at all.

A carefully crafted analog system might be able to outpace a digital system, but it's not going to be the norm. Normally the digital system will outpace the analog system by absurd amounts.

Now, are there niche cases where analog may be interesting? Yes, absolutely. Neural nets are an interesting possibility. Obviously a big possibility. There's a handful of others. But it's a niche within a niche. There's no problem with that. There's plenty of interesting niches within niches. You can make careers in a niche within a niche. But I don't know what it is about this one that convinces so many people that there's some sort of revolution budding and we're on the cusp of an amazing new era of analog computing that will do so many amazing things for everyone... as opposed to a niche within a niche.

We had that revolution. It was digital computing. It is the single largest revolution in the history of man as measured by orders of magnitude of improvement from start to now, and the digital revolution is still not done. Analog is not going to overtake that. It doesn't have the headroom. It doesn't have the orders of magnitude of improvement possible. That's why it was left behind in the first place. The delta between my kid 3D printing a clock mechanism and the finest Swiss watchmaker who ever lived is still a small single-digit number of orders of magnitude in terms of their mechanical precision, and in the digital world we still expect that degree of progress about every 5-10 years or so... and that's slower than it was for many decades.

By @cduzz - 6 months
Why do computers have clocks?

So they don't have to do everything at once.

(the joke is that analog computers don't have clocks... Do they?)

By @dragontamer - 6 months
Op-amps are still among the most common chips used today, and among the most obvious analog computers in use.

And yet not even one paragraph in this article?