Artificial consciousness: a perspective from the free energy principle
The article explores artificial consciousness through the free energy principle, suggesting that factors beyond the simulation of neural computation may be needed to replicate consciousness in AI. Wanja Wiese emphasizes self-organizing systems and the role of causal flow in genuine consciousness.
The article discusses the concept of artificial consciousness from the perspective of the free energy principle, focusing on whether a digital computational simulation of neural computations can replicate consciousness. The author, Wanja Wiese, argues that self-organizing systems share properties that could be realized in artificial systems but are not present in computers with a classical architecture. The free energy principle suggests an additional factor, denoted as "X," that may be needed to replicate consciousness in artificial intelligence. By minimizing surprisal, systems can ensure their survival, with the dynamics of internal states described in terms of variational free energy. This approach allows for a conjugate description of system dynamics, mapping internal states to a probability density over external states. The article highlights the distinction between systems that simulate consciousness and those that replicate it, emphasizing the importance of causal flow in determining genuine consciousness. The discussion also touches on the potential for mechanical theories to describe consciousness based on beliefs encoded by internal states, offering insights into the computational correlates of consciousness in living organisms.
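As a rough gloss in standard notation (mine, not necessarily the article's): for sensory states $o$ and external states $s$, surprisal is $-\ln p(o)$, and the variational free energy of a density $q(s)$ encoded by the system's internal states is

$$F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = -\ln p(o) + D_{\mathrm{KL}}\!\left[q(s) \,\middle\|\, p(s \mid o)\right] \;\geq\; -\ln p(o),$$

so minimizing $F$ minimizes an upper bound on surprisal, and the optimal $q$ approximates $p(s \mid o)$; this is the sense in which internal states come to parameterize a probability density over external states.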
Related
A Model of a Mind
The article presents a model for digital minds mimicking human behavior. It emphasizes data flow architecture, action understanding, sensory inputs, memory simulation, and learning enhancement through feedback, aiming to replicate human cognitive functions.
AI's Cognitive Mirror: The Illusion of Consciousness in the Digital Age
The article explores AI's limitations in developing spiritual consciousness due to lacking sensory perception like humans. It discusses AI's strengths in abstract thought but warns of errors and biases. It touches on AI worship, language models, and the need for safeguards.
And his interest is in evaluating whether there are, or can be, rigorous criteria for saying that a computational system, embodied or not, is capable of “consciousness” (I'm adding the scare quotes).
It is only philosophy mumbo-jumbo if almost all philosophy (including Dennett, Churchland and many others) strikes you as mumbo-jumbo.
I find this a worthwhile contribution, deserving an attaboy rather than knee-jerk derision.
But I appreciated the author's careful definition of the FEP and its use in his framework.
Search might be a better thing to focus on. It's about looking through a space of possibilities, which we can study scientifically. Search is defined by a process, while consciousness is vague: search has a clear goal and a space to look in, but we can't even agree on what consciousness is for.
Search happens everywhere, at all scales. It's behind protein folding, evolution, human thinking, cultural change, and AI. It has some key features: it's compositional, discrete, recursive, and social, and it uses language. Search needs to copy information, but also to vary it, to explore new directions. Yes, I count DNA as a language, and code and math too. Optimizing models is also search.
We can stick with the flawed idea of consciousness, or we can try something new. Search is more specific than consciousness in some ways, but also more general because it applies to so many things. It doesn't have the same problems as consciousness (like being subjective), and we can study it more easily.
If we think about it, search explains how we got here. It helps cross the explanatory gap.
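To make "a clear goal and a space to look in" concrete, here is a minimal sketch (my toy example, not the commenter's): a copy-and-vary search over bit strings in Python.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical goal: a fixed bit string

def score(candidate):
    # Explicit objective: count bits that match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.25):
    # Copy the information, then change it to explore new directions.
    return [b ^ 1 if random.random() < rate else b for b in candidate]

def search(steps=200):
    best = [random.randint(0, 1) for _ in TARGET]
    for _ in range(steps):
        child = mutate(best)             # vary a copy of the current best
        if score(child) >= score(best):  # keep anything at least as good
            best = child
    return best, score(best)

print(search())  # typically converges to the target within a few hundred steps

The space is all 2^8 bit strings, the goal is explicit, and the process is exactly copy, vary, select. Nothing comparably precise can be written down for "consciousness".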
One of the founders of Y-Combinator studied philosophy as an undergrad (I forget which one), but I remember he said in his bio that nobody should take classes in philosophy; if they're interested in the humanities, they should study history, the classics, and art history instead. I was a bit put off at first, but if this is what philosophy means to 90% of undergraduates, then I would strongly advise them all to avoid those classes. Unfortunately, art history might be the best shot at getting an actual critical education these days.
This is akin to magic, and utter nonsense.
Think about how a computer works and all of its individual components. The CPU has registers and small L1/L2/L3 caches. There's data in RAM, highly fragmented because of virtual memory. Maybe some memory is swapped to disk. Maybe some of it is encrypted. You may have one or more GPUs with their own computation and memory.
Am I supposed to believe that this all somehow comes together and forms a meaningful conscious experience? That would be the greatest miracle the world has ever seen.
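To make that fragmentation concrete (my toy illustration, and CPython-specific, since id() happens to return an object's memory address there), even one ordinary Python value is physically scattered:

# One logical value, a list of numbers, ends up physically scattered:
# the list object and each element object live at unrelated addresses.
nums = [10 ** k for k in range(5)]
print(hex(id(nums)))               # where the list object itself lives
print([hex(id(n)) for n in nums])  # where each element lives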
Let's be real. The brain evolved to produce *meaningful* conscious experience. There are so many ways it can go wrong; need I say more than psychedelics? There's plenty of evidence to support the theory that the brain evolved and is purpose-built for consciousness and sentience, though we don't know how the brain actually does it. To assume that computers miraculously have the same ability is one of the dumbest pseudoscientific theories of our time.