Darwin Machines
The Darwin Machine theory proposes that the brain runs an evolutionary process to solve problems efficiently: minicolumns compete through firing patterns, and recombination across cortical columns provides the creativity, with implications for artificial intelligence.
The text discusses the concept of a Darwin Machine, inspired by William Calvin's theory in "The Cerebral Code," which proposes that the brain implements an evolutionary process to navigate a vast problem space efficiently. In this theory, minicolumns act as arenas for evolution: sensory inputs trigger specific firing patterns that compete for dominance, propagate across the brain's surface, and are reinforced over time when they win. Cortical columns, composed of interconnected minicolumns, add complexity and allow a wide range of thoughts to be encoded. Darwin Machines are presented as a potential path to artificial intelligence capable of both quick, intuitive thinking (System 1) and deeper, more deliberate processing (System 2). Finally, recombination within cortical columns is highlighted as a key mechanism for evolutionary creativity, allowing novel ideas to be generated by swapping out specific firing patterns.
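As a concrete illustration of the mechanism sketched above, here is a minimal toy simulation (all of it invented for illustration: the square grid stands in for Calvin's hexagonal mosaic, and the bit-string patterns and fitness function are arbitrary stand-ins, not Calvin's actual model):

```python
import numpy as np

# A toy "Darwin Machine": a grid of minicolumns, each holding a candidate
# firing pattern (a bit-string). Patterns clone themselves onto neighbours,
# fitter patterns win, and occasional recombination splices two patterns.
GRID, PATTERN_LEN = 16, 12
TARGET = np.ones(PATTERN_LEN, dtype=int)  # stand-in for the pattern the input rewards

def fitness(p):
    return int((p == TARGET).sum())  # how well a pattern resonates with the input

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(GRID, GRID, PATTERN_LEN))

for _ in range(5000):
    x, y = rng.integers(GRID, size=2)                    # pick a minicolumn
    nx, ny = (x + rng.integers(-1, 2)) % GRID, (y + rng.integers(-1, 2)) % GRID
    a, b = grid[x, y], grid[nx, ny]
    if rng.random() < 0.1:                               # recombination: splice two patterns
        cut = rng.integers(1, PATTERN_LEN)
        grid[nx, ny] = np.concatenate([a[:cut], b[cut:]])
    elif fitness(a) > fitness(b):                        # competition: winner clones itself
        grid[nx, ny] = a

best = max(fitness(grid[i, j]) for i in range(GRID) for j in range(GRID))
print("best pattern fitness:", best, "of", PATTERN_LEN)
```

Cloning spreads the current winner across the surface, while the occasional splice is the recombination step the article credits with generating novelty.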
Related
Synthesizer for Thought
The article traces how synthesizers evolved into tools for music creation through a mathematical understanding of sound, enabling new genres. It explores interfaces for interacting with music and proposes novel language-model interfaces for text analysis and concept representation, aiming to enhance creative processes.
Uniquely human intelligence arose from expanded information capacity
Human intelligence's evolution is linked to enhanced information-processing capacity rather than specific cognitive biases. Genetic enhancements in processing abilities across memory, attention, and learning systems differentiate human cognition, affecting functions like rule representation and abstract thinking. This perspective reframes how we think about the evolution of human intelligence.
A Model of a Mind
The article presents a model for digital minds mimicking human behavior. It emphasizes data flow architecture, action understanding, sensory inputs, memory simulation, and learning enhancement through feedback, aiming to replicate human cognitive functions.
General Theory of Neural Networks
The article explores Universal Activation Networks (UANs) bridging biological gene regulatory networks and artificial neural networks. It discusses their evolution, structure, computational universality, and potential to advance research in both fields.
- Some commenters draw parallels between the theory and existing concepts in AI and neuroscience, such as capsule-routing algorithms and Hebbian learning.
- There is a call for more detailed explanations and visual aids to better understand the theory's concepts and mechanisms.
- Several comments highlight the importance of considering the brain's temporal dynamics and interconnected structures beyond just cortical columns.
- Others emphasize the significance of training data and environmental interaction in understanding brain function and intelligence.
- There are references to related works and resources, such as Jeff Hawkins' "1000 Brain Theory" and evolutionary algorithms, suggesting further reading and exploration.
Popular deep artificial neural networks (LSTMs, LLMs, etc.) are highly recurrent; in effect they simulate not deep networks but shallow networks that process information in loops many times.
> columns.. and that's about it.
I'd recommend not oversimplifying the structure here. What you're describing is only the high-level structure of a single part of the brain (the neocortex).
1. The brain has many other structures: basal ganglia, cerebellum, midbrain, etc., each with its own characteristic micro-circuits.
2. Brain networks are highly interconnected over long ranges. Neurons project (i.e., send signals) to very distant parts of the brain, and likewise receive projections from distant parts.
3. The temporal dimension is important. The article is very ML-like, focusing on information processing devoid of a temporal dimension. If you want to draw parallels to real neurons in the brain, you need to explain how this fits into temporal dynamics (oscillations in neurons and circuits).
4. Is this competition in the realm of abeyant representations (what you can think in principle) or current ones (what you are thinking now)? What are the timescales, and what is the neurological basis?
Overall, my take is that this is a bit ML-flavoured. If it is meant to describe real neurological networks, it needs a closer and stronger neurological footing.
Here is some good material if you want to dive into neuroscience: "Principles of Neurobiology," Liqun Luo, 2020, and "Fundamental Neuroscience," McGraw Hill.
more resources can be found here:
In other words, there's some kind of iterative mechanism for higher-level layers to find which lower-level subnetworks are most in agreement about the input data, inducing learning.
Capsule-routing algorithms, proposed by Hinton and others, seek to implement precisely this idea, typically with some kind of expectation-maximization (EM) process.
There are quite a few implementations available on GitHub:
https://github.com/topics/capsules
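For the curious, the core routing-by-agreement step is small enough to sketch in numpy. This is a simplified sketch in the spirit of Sabour et al.'s dynamic routing, not the implementation from any particular repository:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(v, axis=-1):
    # Shrinks short vectors toward zero and long vectors toward unit length,
    # so a capsule's output length can be read as a probability.
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + 1e-9)

def route_by_agreement(u_hat, n_iters=3):
    """Dynamic routing between capsules, after Sabour et al. (2017).

    u_hat: array of shape (n_lower, n_upper, dim), the prediction each
           lower-level capsule makes for each upper-level capsule.
    Returns upper-level capsule outputs of shape (n_upper, dim).
    """
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))            # routing logits
    for _ in range(n_iters):
        c = softmax(b, axis=1)                  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted consensus per upper capsule
        v = squash(s)                           # squashed upper-level outputs
        b += (u_hat * v[None]).sum(axis=-1)     # strengthen couplings that agree
    return v

# Example: 32 lower capsules voting on 10 upper capsules of dimension 8.
votes = np.random.default_rng(0).normal(size=(32, 10, 8))
print(route_by_agreement(votes).shape)  # (10, 8)
```

The logit update is the "agreement" part: a lower capsule's coupling to an upper capsule grows when its prediction aligns with the consensus output.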
https://en.wikipedia.org/wiki/Artificial_life
https://direct.mit.edu/artl
https://alife.org
Among the many uses, they have been applied to ‘evolving’ neural networks.
Famously a guy whose name I can’t remember used to generate programs and mutations of programs.
My recommendation if you want to get into AI: avoid anything written in the last 10 years and explore some classics from the 70s
'Compete', 'winner', and 'reward' are all left undefined in the article. Even granting that, the theory is not new information and seems closely analogous to Hebbian learning, which is a long-standing theory in neuroscience.

Additionally, the metaphor of evolution within the brain does not seem apt. Essentially what is said is that, given a sensory input, we will see patterns emerge that correspond to a behaviour deemed successful, while other brain patterns arise but are ignored or not reinforced by a reward. This is almost tautological, and the 'evolutionary process' (input -> brain activity -> behaviour -> reward) lacks explanatory power: it is exactly what we would expect to see. If we observe a behaviour that has been reinforced in some way, it will obviously correlate with the brain producing a specific activity pattern. I don't see any evidence that the brain always produces several candidate activity patterns before judging a winner based on consensus.

The tangent on cortical columns ignores key deep-brain structures and is also almost irrelevant: the brain could use the proposed 'evolutionary' process with any architecture.
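For readers unfamiliar with the comparison, a minimal Hebbian update looks like the following (a generic sketch, not the article's model; the sizes and learning rate are arbitrary):

```python
import numpy as np

# Minimal Hebbian rule: weights grow where pre- and post-synaptic activity
# coincide, so frequently co-active patterns get reinforced without any
# explicit "judging" or consensus step.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.01, size=(8, 8))  # synapses: 8 inputs -> 8 outputs
eta = 0.1                                # learning rate

for _ in range(100):
    x = rng.random(8)           # presynaptic activity (a sensory input)
    y = w @ x                   # postsynaptic activity
    w += eta * np.outer(y, x)   # Hebb: neurons that fire together wire together
    w /= np.linalg.norm(w)      # crude normalization to keep weights bounded
```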
Searching the environment provides the data the brain is trained on. I don't believe we can understand the brain in isolation, without its data engine and the problem space in which it develops.
Neural nets showed that, given a dataset, you can obtain similar results with very different architectures, like transformers and diffusion models, or transformers vs. Mamba. The essential ingredient is data; the architecture only needs to clear some minimal bar for learning.
Studying just the brain misses the essential point: we are search processes. A whole life is a search for optimal actions, and evolution itself is a search for environmental fitness. These search processes made us what we are.
My current rabbit hole is using combinatory logic as the genetic material, and I have been trying to evolve combinators, etc. (there is some active research in this area).
Only slightly related to the author's idea, but it's cool that others are interested in this space as well.
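For flavour, here is a minimal sketch of what "combinatory logic as genetic material" might look like (the term encoding, fitness objective, and GA loop are all invented for illustration, not taken from the research mentioned):

```python
import random

# SKI terms encoded as 'S', 'K', 'I', or an application ('app', f, x).
BASIS = ['S', 'K', 'I']

def random_term(depth=4):
    if depth == 0 or random.random() < 0.3:
        return random.choice(BASIS)
    return ('app', random_term(depth - 1), random_term(depth - 1))

def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if not isinstance(t, tuple):
        return t, False
    _, f, x = t
    if f == 'I':                                          # I x -> x
        return x, True
    if isinstance(f, tuple) and f[1] == 'K':              # K a b -> a
        return f[2], True
    if isinstance(f, tuple) and isinstance(f[1], tuple) and f[1][1] == 'S':
        a, b, c = f[1][2], f[2], x                        # S a b c -> a c (b c)
        return ('app', ('app', a, c), ('app', b, c)), True
    nf, changed = step(f)
    if changed:
        return ('app', nf, x), True
    nx, changed = step(x)
    return ('app', f, nx), changed

def normalize(t, limit=100):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    return None  # gave up: likely non-terminating

def subterms(t):
    yield t
    if isinstance(t, tuple):
        yield from subterms(t[1])
        yield from subterms(t[2])

def crossover(a, b):
    """Recombination: graft a random subterm of b into a random spot in a."""
    donor = random.choice(list(subterms(b)))
    target = random.choice(list(subterms(a)))
    def replace(t):
        if t is target:
            return donor
        if isinstance(t, tuple):
            return ('app', replace(t[1]), replace(t[2]))
        return t
    return replace(a)

def fitness(t):
    # Hypothetical objective: reward terms that behave like the identity.
    return sum(normalize(('app', t, arg)) == arg for arg in BASIS)

population = [random_term() for _ in range(50)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [crossover(random.choice(parents), random.choice(parents))
                            for _ in range(40)]
print("best fitness:", max(map(fitness, population)))
```

Note that variation here comes entirely from recombination (subtree grafting), not point mutation, which matches the article's emphasis.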
For example, there should be a relationship between the rate of learning and the physical subcolumns: we should be able to identify when a single column starts up, is fully trained, or is overused.
Or use AI to mirror the learning process, creating an external replica that makes the same decisions as the person.
Marvin Minsky was spot on about the general idea 50 years ago, seeing the brain as a collection of thousands of atomic operators (Society of Mind?).
Lingua ex Machina: Reconciling Darwin and Chomsky with the Human Brain [2000]
https://www.amazon.com/Lingua-Machina-Reconciling-Darwin-Cho...
Completely changed my worldview. Evolutionary processes everywhere.
My (turrible) recollection:
Darwinian processes for comprehending speech: the process of translating sounds into phonemes (?).
There's something like a brain song, where a harmony signal echoes back and forth.
Competition between and among hexagonal processing units (what Jeff Hawkins & Numenta are studying). My paraphrasing: meme PvP F4A battlefield where "winning" means converting your neighbor to your faction.
Speculation that the human brain leaped from proto-language (noun-verb) to Chomskyan language (recursively composable noun-verb-object predicates), and further speculation about how that might be encoded in our brains.
Etc.
> Looking down on the brain again, we can imagine projecting a pattern of equilateral triangles - like a fishing net - over the surface. Each vertex in the net will land on a minicolumn within the same network, leaving holes over minicolumns that don't belong to that network. If we were to project nets over the network until every minicolumn was covered by a vertex we would project 50-100 nets.
Around this part I had a difficult time visualizing the intent here. Are there any accompanying diagrams or texts? Thanks for the interesting read!
> First, if you look at a cross-section of the brain (eye-level with the table)
I thought it was flat on the table? Surely if we look at it side-on we just see the edge?
Without a clear idea of how to picture this, the other aspect (columns) doesn't make sense either.
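One way to make the quoted numbers concrete (a back-of-the-envelope sketch; both spacing figures below are assumptions for illustration, not from the article):

```python
# Back-of-the-envelope: how many shifted triangular nets cover every minicolumn?
minicolumn_spacing_um = 50.0   # assumed centre-to-centre distance between minicolumns
triangle_side_um = 500.0       # assumed side length of one net's triangles

# Each net picks out one minicolumn per triangle-sized patch of cortex, so the
# number of distinct shifted nets needed is roughly the number of minicolumns
# packed into one such patch.
nets_needed = (triangle_side_um / minicolumn_spacing_um) ** 2
print(nets_needed)  # 100.0, in the same ballpark as the article's "50-100 nets"
```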
For a more modern treatment of the subject, read this paper: An Attempt at a Unified Theory of the Neocortical Microcircuit in Sensory Cortex https://www.researchgate.net/publication/343269087_An_Attemp...
> Essentially, biology uses evolution because it is the best way to solve the problem of prediction (survival/reproduction) in a complex world.
1. This is anthropocentric in a way that meaningfully distorts the conclusion. The vast majority of life on earth, whether you count by raw numbers, number of species, or biomass, does not have neurons; these organisms are, of course, microbes (viruses and prokaryotes) and plants. Bacteria and viruses do not 'predict' in the way this post speaks of. The survival strategies bacteria use (that we know about and understand) are hedging-based: for example, some portion of a population will stochastically switch on certain survival genes (e.g. sporulation, or certain efflux pumps, i.e. antibiotic-resistance genes) whose cost-benefit ratio changes depending on conditions. This could be construed as a prediction in some sense: a genome with enough plasticity to allow such changes will, on average, produce copies in a large enough population that enable survival through a tremendous range of conditions. But that's a very different type of prediction from what the rest of the post talks about. In short, prediction is something neurons are good at, but it's not clear it's a 'favored' outcome in our biosphere.
> It relies on the same insight that produced biology: That evolution is the best algorithm for predicting valid "solutions" within a near infinite problem space.
2. This gets the teleology reversed. Biology doesn't use anything; it's not trying to solve anything; and evolution isn't an algorithm, because it has no end goal or teleology (and it isn't predicting anything). Evolution is what you observe over time in a population of organisms that reproduce without perfect-fidelity copying mechanisms. All we need to say is that things that reproduce are more likely to be observed. We don't have to anthropomorphize the evolutionary process to explain the distribution of reproducing entities we observe, or the fact that they solve challenges to reproduction.
> Many people believe that, in biology, point mutations lead to the change necessary to drive novelty in evolution. This is rarely the case. Point mutations are usually disastrous and every organism I know of does everything in its power to minimize them. Think, for every one beneficial point mutation, there are thousands that don't have any effect, and hundreds that cause something awful like cancer. If you're building a skyscraper, having one in a hundred bricks be laid with some variation is not a good thing. Instead Biology relies on recombination. Swap one beneficial trait for another and there's a much smaller chance you'll end up with something harmful and a much higher chance you'll end up with something useful. Recombination is the key to the creativity of evolution, and Darwin Machines harness it.
3. An anthropocentric reading of the evidence that distorts the conclusion. The fidelity (number of errors per cycle per base pair) of polymerases varies by maybe 8 orders of magnitude across the tree of life; for a review, see figure 3 in ref [1]. I don't know about Darwin Machines, but the view that 'recombination' is the key to evolution is a conclusion you would draw only from examining part of the tree of life. We can quibble about whether viruses are alive, but they are certainly the most abundant reproducing thing on earth by orders of magnitude, and recombination doesn't seem as important for adaptation in them as it does in organisms with chromosomes.
4. There are arguments that seem interesting (and maybe not incompatible with some version of the post) that life should be abundant because it actually helps dissipate energy gradients; see the Quanta article on this [0].
[0] https://www.quantamagazine.org/a-new-thermodynamics-theory-o...
[1] Sniegowski, P. D., Gerrish, P. J., Johnson, T., & Shaver, A. (2000). The evolution of mutation rates: separating causes from consequences. BioEssays, 22(12), 1057–1066. doi:10.1002/1521-1878(200012)22:12<1057::aid-bies3>3.0.co;2-w
I might have a go implementing something along these lines.
I've been tinkering with the idea in Python, but I just don't have enough ML experience.
If you, or anyone you know, are interested in Darwin Machines, please reach out!
If you'll pardon some woo, another argument I see in favour of message passing/consensus is that it "fits" the self-similar nature of life's patterns.
Valid behaviours replicate and persist for no reason other than that they do.
Culture, religion, politics, pop songs, memes… "Egregore" comes to mind. In some ways, "recombination" could be seen as "cooperation", even at the level of minicolumns.
(Edit: what I mean to say is that it makes sense that the group dynamics between the constituent units of one brain would be similar, in some way, to the group dynamics you get from a bunch of brains.)