July 21st, 2024

Artificial consciousness: a perspective from the free energy principle

The article explores artificial consciousness through the free energy principle, suggesting that factors beyond neural simulation are needed to replicate consciousness in AI. Wanja Wiese emphasizes self-organizing systems and the role of causal flow in genuine consciousness.


The article discusses the concept of artificial consciousness from the perspective of the free energy principle, focusing on whether a digital computational simulation of neural computations can replicate consciousness. The author, Wanja Wiese, argues that self-organizing systems share properties that could be realized in artificial systems but are not present in computers with a classical architecture. The free energy principle suggests an additional factor, denoted as "X," that may be needed to replicate consciousness in artificial intelligence. By minimizing surprisal, systems can ensure their survival, with the dynamics of internal states described in terms of variational free energy. This approach allows for a conjugate description of system dynamics, mapping internal states to a probability density over external states. The article highlights the distinction between systems that simulate consciousness and those that replicate it, emphasizing the importance of causal flow in determining genuine consciousness. The discussion also touches on the potential for mechanical theories to describe consciousness based on beliefs encoded by internal states, offering insights into the computational correlates of consciousness in living organisms.
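As a sketch of the formalism the summary alludes to (standard FEP notation, not drawn from the paper itself): variational free energy $F$ is an upper bound on surprisal, so a system that minimizes $F$ implicitly keeps its sensory states $o$ within expected bounds. With internal states parameterizing an approximate posterior $q(s;\mu)$ over external states $s$:

```latex
F(\mu, o)
  = \mathbb{E}_{q(s;\mu)}\!\left[\ln q(s;\mu) - \ln p(o, s)\right]
  = -\ln p(o) + D_{\mathrm{KL}}\!\left[q(s;\mu)\,\|\,p(s \mid o)\right]
  \geq -\ln p(o)
```

Because the KL term is non-negative, minimizing $F$ with respect to $\mu$ both tightens the bound on surprisal $-\ln p(o)$ and drives $q(s;\mu)$ toward the true posterior, which is the "conjugate description" mentioned above: internal-state dynamics double as a probability density over external states.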

7 comments
By @robwwilliams - 3 months
Wiese provides a good but indirect definition of what he operationally means by “consciousness” in the footnote on the first page.

And his interest is in evaluating whether there are, or can be, rigorous criteria for stating that a computational system, embodied or not, is capable of “consciousness” (I’m adding the scare quotes).

It is only philosophy mumbo-jumbo if almost all philosophy (including Dennett, Churchland and many others) strikes you as mumbo-jumbo.

I find this a worthwhile contribution worthy of an attaboy, not knee-jerk derision.

By @riemannzeta - 3 months
I found the paper very interesting and thought provoking. But I wonder, as a practical matter, whether the heavy reliance placed upon the von Neumann architecture as providing a distinction between a conscious and non-conscious intelligence is meaningful. It seems like most modern computer architecture (and GPU architecture, in particular) doesn't easily fit clean definitions of von Neumann architecture. Does this mean that a machine learning model trained on GPUs might be conscious? The paper's explanation of what exactly is meant by unmediated causal flow is quite murky here.

But I appreciated the author's careful definition of the FEP and its use in his framework.

By @visarga - 3 months
Consciousness is a tricky concept, hard to pin down even after centuries of debate. It's not very useful for understanding how minds work.

Search might be a better idea to focus on. It's about looking through possibilities, which we can study scientifically. Search is more about the process, while consciousness is vague. Search has a clear goal and space to look in, but we can't even agree on what consciousness is for.

Search happens everywhere, at all scales. It's behind protein folding, evolution, human thinking, cultural change, and AI. Search has some key features: it is compositional, discrete, recursive, and social, and it uses language. Search needs to copy information, but also change it to explore new directions. Yes, I count DNA as a language, code and math too. Optimizing models is also search.

We can stick with the flawed idea of consciousness, or we can try something new. Search is more specific than consciousness in some ways, but also more general because it applies to so many things. It doesn't have the same problems as consciousness (like being subjective), and we can study it more easily.

If we think about it, search explains how we got here. It helps cross the explanatory gap.

By @DiscourseFan - 3 months
Ok, unfortunately this is philosophy mumbo-jumbo, and says more about the sad state of philosophy today than any claims about consciousness. One could write a much more interesting and compelling paper about AI and consciousness by closely reading Kant, Heidegger, and of course Hubert Dreyfus, but it seems like the task of actually reading philosophy has been overwhelmed by the desire to be "scientific" about a task that eclipses all science.

One of the founders of Y-Combinator studied philosophy as an undergrad (I forget which one), but I remember he said in his bio that nobody should take classes in philosophy; they should study history, the classics, and art history instead if they're interested in the humanities. I was a bit put off at first, but if this is what philosophy means to 90% of undergraduates, then I would strongly advise them all to avoid those classes. Unfortunately, art history might be the best shot at getting an actual critical education these days.

By @zug_zug - 3 months
Philosophy mumbo-jumbo. Consciousness is not a scientifically meaningful term if it is not defined in a falsifiable way.

By @poikroequ - 3 months
> computational functionalism, according to which performing the right computations is sufficient (and necessary) for consciousness.

This is akin to magic, and utter nonsense.

Think about how a computer works and all of its individual components. The CPU has registers and a little bit of L1/L2/L3 cache. There is some stuff in RAM, highly fragmented because of virtual memory. Maybe some memory is swapped to disk. Maybe some of this memory is encrypted. You may have one or more GPUs with their own computations and memory.

Am I supposed to believe that this all somehow comes together and forms a meaningful conscious experience? That would be the greatest miracle the world has ever seen.

Let's be real. The brain has evolved to produce *meaningful* conscious experience. There are so many ways it can go wrong; need I say more than psychedelics? There's tons of evidence to support the theory that the brain evolved and is purpose-built for consciousness and sentience, albeit we don't know how the brain actually does it. To assume that computers miraculously have the same ability is one of the dumbest pseudoscientific theories of our time.