Ask HN: Weirdest Computer Architecture?
The traditional computing stack layers software abstractions over hardware, but alternative paradigms like quantum and neuromorphic computing introduce new computational models, changing the stack's components from the Turing machine upward.
The concept of "the stack" in computing typically refers to the layers of abstraction that define how hardware and software interact. While the traditional stack includes physical components like electronics and logical constructs such as Turing machines, there are alternative computing paradigms that can modify or replace some of these components.
For instance, quantum computing introduces a different computational model that does not rely on classical Turing machines. Instead, it uses quantum bits (qubits) and quantum gates, which fundamentally change the way computation is performed. Similarly, neuromorphic computing mimics the neural structure of the human brain, using spiking neural networks instead of traditional logic gates and transistors.
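To make the contrast concrete, here is a minimal classical simulation (a toy sketch using numpy, not how quantum hardware is actually programmed) of a Hadamard gate putting a single qubit into superposition:

    # Toy single-qubit simulation: a state is a 2-vector of complex
    # amplitudes; a gate is a 2x2 unitary matrix. Illustration only.
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)        # the |0> state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0            # equal superposition of |0> and |1>
    print(np.abs(state) ** 2)   # measurement probabilities -> [0.5 0.5]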
In terms of software architecture, while Unix and similar systems dominate, there are other operating systems and architectures that can be employed, such as those designed for specific hardware like embedded systems or real-time operating systems.
Moreover, user interfaces can vary widely, with alternatives to traditional screens and input devices emerging, such as virtual reality interfaces or brain-computer interfaces.
In summary, while the traditional stack is well-defined, there are indeed alternative stacks that utilize different computational theories and architectures, particularly in the realms of quantum and neuromorphic computing, which can change the components from Turing machines upwards.
Related
HyperCard Simulator
A HyperCard simulator replicates HyperCard stacks and archives, detailing stack components, properties, fields, styles, and alignment. It covers account management, scripting language grammar, interface tools, and stack management options.
Extreme Measures Needed to Scale Chips
The July 2024 IEEE Spectrum issue discusses scaling compute power for AI, exploring solutions like EUV lithography, linear accelerators, and chip stacking. Industry innovates to overcome challenges and inspire talent.
The Software Crisis
The software crisis, coined in 1968, highlights challenges in managing software complexity. Despite advancements, issues persist, emphasizing responsible construction, user agency, and sustainable development practices through constrained abstractions and user empowerment.
Run Functions in Another Stack with Zig
Alessio Marchetti discusses running functions in a separate stack using Zig, covering stack pointer manipulation in assembly code, optimized code generation, and a detailed Fibonacci function example, emphasizing learning through experimentation without optimization flags.
Devs need system design tools, not diagramming tools
The New Stack website provides resources on software engineering, emphasizing system design tools for developers. It covers various topics like AI, security, and trends in software development, offering insights and resources for the community.
1. Transmeta: https://en.wikipedia.org/wiki/Transmeta
2. Cell processor: https://en.wikipedia.org/wiki/Cell_(processor)
3. VAX: https://en.wikipedia.org/wiki/VAX (Was unusual for its time, but many concepts have since been adopted)
4. IBM zArchitecture: https://en.wikipedia.org/wiki/Z/Architecture (This stuff is completely unlike conventional computing, particularly the "self-healing" features.)
5. IBM TrueNorth processor: https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-n... (Cognitive/neuromorphic computing)
"computational model": finite state machine, Turing machine, Petri nets, data-flow, stored program (a.k.a. Von Neumann, or Princeton), dual memory (a.k.a. Harvard), cellular automata, neural networks, quantum computers, analog computers for differential equations
"instruction set architecture": ARM, x86, RISC-V, IBM 360
"instruction set style": CISC, RISC, VLIW, MOVE (a.k.a TTA - Transport Triggered Architecture), Vector
"number of addresses": 0 (stack machine), 1 (accumulator machine), 2 (most CISCs), 3 (most RISCs), 4 (popular with sequential memory machines like Turing's ACE or the Bendix G-15)
"micro-architecture": single cycle, multi-cycle, pipelines, super-pipelined, out-of-order
"system organization": distributed memory, shared memory, non uniform memory, homogeneous, heterogeneous
With these different dimensions for "computer architecture" you will have different answers for which was the weirdest one.
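To make the "number of addresses" axis concrete, here is a sketch of d = a + b * c in each style (the mnemonics are hypothetical), together with a toy 0-address stack-machine evaluator in Python:

    # Hypothetical mnemonics for d = a + b*c in each addressing style:
    #
    #   3-address (most RISCs):  MUL t, b, c      ADD d, a, t
    #   2-address (most CISCs):  MOV t, b   MUL t, c   ADD t, a   MOV d, t
    #   1-address (accumulator): LOAD b   MUL c   ADD a   STORE d
    #   0-address (stack):       PUSH a  PUSH b  PUSH c  MUL  ADD  POP d
    #
    # Toy 0-address (stack machine) evaluator:
    def run(program, mem):
        stack = []
        for op, *arg in program:
            if op == "PUSH":
                stack.append(mem[arg[0]])
            elif op == "POP":
                mem[arg[0]] = stack.pop()
            elif op == "MUL":
                y, x = stack.pop(), stack.pop()
                stack.append(x * y)
            elif op == "ADD":
                y, x = stack.pop(), stack.pop()
                stack.append(x + y)

    mem = {"a": 1, "b": 2, "c": 3}
    run([("PUSH", "a"), ("PUSH", "b"), ("PUSH", "c"),
         ("MUL",), ("ADD",), ("POP", "d")], mem)
    print(mem["d"])  # -> 7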
Not 'weird', but any architecture that doesn't have an 8-bit byte causes questions and discussion.
E.g. the Texas Instruments DSP chip family for digital signal processing: they're all about deeply pipelined FFT computations with floats and doubles, not piddling about with 8-bit ASCII. There are no hardware-level bit operations to speak of, and the smallest addressable memory size is either 32 or 64 bits.
It's the response to the observation that most of the transistors in a computer are idle at any given instant.
There's a full rabbit hole's worth of advantages to this architecture once you really dig into it.
Description https://esolangs.org/wiki/Bitgrid
Emulator https://github.com/mikewarot/Bitgrid
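A minimal sketch of a single BitGrid cell, as I read the esolangs description (the bit encoding here is my own assumption): four one-bit inputs, four one-bit outputs, each output driven by its own 16-entry lookup table, with all cells updating in lockstep so no part of the grid sits idle:

    # One BitGrid cell: 4 input bits index into four 16-bit LUTs,
    # producing 4 output bits. Every cell computes on every tick.
    def cell_step(luts, n, e, s, w):
        index = (n << 3) | (e << 2) | (s << 1) | w   # 4 bits -> 0..15
        return tuple((lut >> index) & 1 for lut in luts)

    # Example: program one output as XOR of north and west inputs.
    xor_nw = sum(1 << i for i in range(16)
                 if ((i >> 3) & 1) ^ (i & 1))        # truth table as bits
    luts = (xor_nw, 0, 0, 0)
    print(cell_step(luts, n=1, e=0, s=0, w=1))       # -> (0, 0, 0, 0)
    print(cell_step(luts, n=1, e=0, s=0, w=0))       # -> (1, 0, 0, 0)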
> The name, from "transistor" and "computer", was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks in a larger integrated system, just as transistors had been used in earlier designs.
Let us also not forget the Itanic.
Mill CPU (so far only patent-ware but interesting nevertheless) : https://millcomputing.com/
zk-STARK virtual machines:
https://github.com/TritonVM/triton-vm
https://github.com/risc0/risc0
They're "just" bounded Turing machines with extra cryptography. The VM architectures have been optimized for certain cryptographic primitives so that you can prove properties of arbitrary programs, including the cryptographic verification itself. This lets you e.g. play turn-based games where you commit to make a move/action without revealing it (cryptographic fog-of-war):
https://www.ingonyama.com/blog/cryptographic-fog-of-war
The reason why this requires a specialised architecture is that in order to prove something about the execution of an arbitrary program, you need to arithmetize the entire machine (create a set of equations that are true when the machine performs a valid step, where these equations also hold for certain derivatives of those steps).
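A toy illustration of that idea (my own sketch, not Triton VM's or RISC Zero's actual constraint system): take a machine whose only instruction increments a counter, and express "valid step" as an equation over adjacent rows of the execution trace:

    # Toy arithmetization: a run is valid iff every adjacent pair of
    # trace rows satisfies  next - cur - 1 = 0  (over a finite field
    # in a real STARK; a tiny prime here for illustration).
    P = 97

    def transition_ok(trace):
        return all((nxt - cur - 1) % P == 0
                   for cur, nxt in zip(trace, trace[1:]))

    print(transition_ok([3, 4, 5, 6]))   # -> True  (valid execution)
    print(transition_ok([3, 4, 9, 10]))  # -> False (invalid step 4 -> 9)
    # A STARK prover commits to the trace as polynomial evaluations and
    # proves these equations hold everywhere, without revealing the trace.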
The basic limit is the Curie point of the (magnetic) cores, and the source of clock drive signals.
It was quite a while ago and my memory is hazy tbh, but I put some quick notes here at the time: https://liza.io/alife-2020-soft-alife-with-splat-and-ulam/
Lightmatter: matrix multiply via optical interferometers
Parametron: coupled oscillator phase logic
Rapid single flux quantum (RSFQ) logic: high-speed pulse logic
Asynchronous logic
https://beza1e1.tuxen.de/articles/accidentally_turing_comple...
https://en.wikipedia.org/wiki/Burroughs_Large_Systems
They had an attached scientific processor to do vector and array computations.
https://news.ycombinator.com/item?id=11425533 https://www.cs.virginia.edu/~robins/Computing_Without_Clocks...
There is also Soap Bubble Computing, and various forms of annealing computing (like quantum annealing or adiabatic quantum computation), where you set up a physical system whose optimal (minimum-energy) state encodes the answer to your computation.
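A classical simulated-annealing sketch of the same principle (software analogy only; annealing hardware relaxes to the low-energy state physically, and the energy function here is a made-up toy):

    # Encode the answer as the minimum-"energy" state, then cool.
    import math, random

    def energy(x):                      # toy landscape; minimum near x = 2
        return (x - 2) ** 2 + math.sin(5 * x)

    x, temp = random.uniform(-10, 10), 10.0
    while temp > 1e-3:
        cand = x + random.gauss(0, 0.5)
        delta = energy(cand) - energy(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand                    # accept downhill, sometimes uphill
        temp *= 0.995                   # cooling schedule
    print(round(x, 2))                  # settles near the global minimum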
Computation Theory: Cognitive Processes
Smallest parts: Neurons
Largest parts: Brains
Lowest level language: Proprietary
First abstractions of programming: Bootstrapped / Self-learning
Software architecture: Maslow's hierarchy of needs
User Interface: Sight, Sound
144 small computers in a grid that can communicate with each other
http://phys.org/news/2012-04-scientists-crab-powered.html
Yes, real crabs
If you are looking for strangeness, the 1990s to early 2000s microcontrollers had I/O ports, but every single I/O port was different. None of them had a standard, so we couldn't (for example) plug in a 10-pin header and connect the same peripheral to any of the I/O ports on a single microcontroller, much less to any microcontroller they made in a family of microcontrollers.
I think Transport-triggered architecture (https://en.wikipedia.org/wiki/Transport_triggered_architectu...) is something still not fully explored.
As for weird, try this: ENIAC instructions modified themselves. Back then, an "instruction" (they called them "orders") included the addresses of the operands and destination (which was usually the accumulator). So if you wanted to sum the numbers in an array, you'd put the address of the first element in the instructions, and as ENIAC repeated that instruction (a specified number of times), the address in the instruction would be auto-incremented.
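A sketch of that ENIAC-style trick (the encoding is hypothetical, just to show the mechanism): the operand address lives in the order itself, and repeating the order bumps the address:

    # Self-modifying "order": the loop walks an array with no index
    # register, because the instruction rewrites its own operand address.
    memory = [10, 20, 30, 40]
    order = {"op": "ADD", "addr": 0}   # the instruction, as mutable data
    acc = 0
    for _ in range(len(memory)):       # "repeat this order N times"
        acc += memory[order["addr"]]   # execute: add operand to accumulator
        order["addr"] += 1             # the order auto-increments itself
    print(acc)                         # -> 100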
Or how about this: a computer with NO 'jump' or 'branch' instruction? The ATLAS-1 was a landmark of computing, having invented most of the things we take for granted now, like virtual memory, paging, and multi-programming. But it had NO instruction for altering the control flow. Instead, the programmer would simply _write_ to the program counter (PC). Then the next instruction would be fetched from the address in the PC. If the programmer wanted to return to the previous location (a "subroutine call"), they'd be obligated to save what was in the PC before overwriting it. There was no stack, unless you count a programmer writing the code to save a specific number of PC values, and adding code to all subroutines to fetch the old value and restore it to the PC. I do admire the simplicity -- want to run code at a different address? Tell me what it is and I'll just go there, no questions asked.
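A toy version of control flow by writing the PC (my own encoding, not real Atlas code): "call" saves the current PC into a fixed return cell and overwrites the PC; "return" just writes the saved value back:

    # Toy machine with no jump instruction: the PC is just a writable cell.
    def run(program):
        state = {"pc": 0, "ret": None}
        while state["pc"] < len(program):
            instr = program[state["pc"]]
            state["pc"] += 1           # fetch advances the PC as usual
            instr(state)               # an instruction may overwrite the PC

    def call(target):                  # save the PC, then write the new one
        def op(state):
            state["ret"] = state["pc"]
            state["pc"] = target
        return op

    def ret(state):                    # restore the saved PC; no stack at all
        state["pc"] = state["ret"]

    program = [
        lambda s: print("main: calling"),
        call(4),
        lambda s: print("main: back"),
        lambda s: s.update(pc=99),     # halt by jumping past the end
        lambda s: print("sub: hello"),
        ret,
    ]
    run(program)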
Or maybe these shouldn't count as "weird", because no one had yet figured out what a computer should be. There was no "standard" model (despite Von Neumann) for the design of a machine, and cost considerations plus new developments (spurred by wanting better computers) meant that the "best" design was constantly changing.
Consider that post-WWII, some materials were hard to come by. So much so that one researcher used a Slinky (yes, the toy) as a memory storage device. And had it working. They wanted acoustic delay lines (the standard of the time), but the Slinky was more available. So it did the same job, just with a different medium.
I've spent a lot of time researching these early machines, wanting to trace the path of each item in a now-standard model of an idealized computer. It's full of twists and turns, dead ends and unintentional invention.
https://arstechnica.com/information-technology/2020/05/gears...
There is also The Analog Thing, an analog computer: https://the-analog-thing.org/
Compute with mushrooms, compute near black holes, etc.
great collection of interesting links - kudos to all! :=)
idk ... but isn't the "general" architecture of most of our computers "von neumann"!?
* https://en.wikipedia.org/wiki/Von_Neumann_architecture
but what i miss from the various lists is the "transputer" architecture / ecosystem from INMOS - a concept of heavily networked arrays of small cores from the 1980s
about transputers
* https://en.wikipedia.org/wiki/Transputer
about INMOS
* https://en.wikipedia.org/wiki/Inmos
i had the chance to take a look at a "real life" ATW - atari transputer workstation - back in the day at my university / CS department :))
mainly used with the Helios operating system
* https://en.wikipedia.org/wiki/HeliOS
to be programmed in occam
* https://en.wikipedia.org/wiki/Occam_(programming_language)
the "atari transputer workstation" ~ more or less a "smaller" atari mega ST as the "host node" connected to an (extendable) array of extension-cards containing the transputer-chips:
* https://en.wikipedia.org/wiki/Atari_Transputer_Workstation
just my 0.02€
https://en.wikipedia.org/wiki/Logic_gate#Non-electronic_logi...
https://spectrum.ieee.org/superconductor-ics-the-100ghz-seco...