July 1st, 2024

A Model of a Mind

The article presents a model for digital minds mimicking human behavior. It emphasizes data flow architecture, action understanding, sensory inputs, memory simulation, and learning enhancement through feedback, aiming to replicate human cognitive functions.

Read the original article

The article discusses a model of how minds might work, focused on creating digital minds that can mimic human behavior. The model includes components for agency, learning, thinking, and introspection, emphasizing a data-flow architecture between modules. It explores the concept of an action model that understands and produces actions, analogous to how language models understand and produce text. The model incorporates sensory inputs, motor control, emotional states, and memory modules to simulate human-like processes. The author aims to build a system that behaves like a human without delving into the complexities of defining consciousness. The model is designed to enable agency through two-way conversations and memory formation based on experience, and it highlights the importance of updating weights in response to feedback to improve learning. The ultimate goal is to create digital minds that replicate human cognitive functions, offering insights into both artificial intelligence development and human brain function.
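
For readers who think in code, here is a minimal sketch of the kind of module-to-module data flow the article describes. All class names, method names, and the pipeline itself are hypothetical illustrations, not the author's actual design.

```python
# Hypothetical sketch of a data-flow architecture between mind modules.
# Names and interfaces are invented for illustration only.

class SensoryModule:
    def step(self, data):
        # Turn raw environment input into percepts.
        return {"percepts": data["environment"]}

class MemoryModule:
    def __init__(self):
        self.episodes = []

    def step(self, data):
        # Form memories from experience and surface recent context.
        self.episodes.append(data["percepts"])
        return {"recalled": self.episodes[-5:]}

class ActionModule:
    def step(self, data):
        # An "action model" maps percepts plus recalled memories to an
        # action, much as a language model maps context to the next token.
        return {"action": ("speak", data["percepts"])}

class MotorModule:
    def step(self, data):
        # Execute the chosen action in the world.
        return {"effect": f"performed {data['action']!r}"}

def tick(environment, modules):
    """One pass of the data flow: senses -> memory -> action -> motor."""
    data = {"environment": environment}
    for module in modules:
        data.update(module.step(data))
    return data

mind = [SensoryModule(), MemoryModule(), ActionModule(), MotorModule()]
print(tick("a friendly greeting", mind))
```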

Related

Some Thoughts on AI Alignment: Using AI to Control AI

The GitHub content discusses AI alignment and control, proposing Helper models to regulate AI behavior. These models monitor and manage the primary AI to prevent harmful actions, emphasizing external oversight and addressing implementation challenges.

Lessons About the Human Mind from Artificial Intelligence

In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.

Francois Chollet – LLMs won't lead to AGI – $1M Prize to find solution [video]

The video discusses limitations of large language models in AI, emphasizing genuine understanding and problem-solving skills. A prize incentivizes AI systems showcasing these abilities. Adaptability and knowledge acquisition are highlighted as crucial for true intelligence.

Synthesizer for Thought

The article delves into how synthesizers evolved as tools for music creation through a mathematical understanding of sound, enabling new genres. It explores interfaces for music interaction and proposes innovative language models for text analysis and concept representation, aiming to enhance creative processes.

Edelman's Steps Toward a Conscious Artifact (2021)

Gerald Edelman's roadmap toward a Conscious Artifact, discussed in 2006, continues to influence neuroscience and AI. The unpublished paper outlines key steps spanning neurons, cognition, and AI, with two versions updated through May 2021.

22 comments
By @tylerneylon - 4 months
Author here: I'm grateful for the comments; thanks especially for the interesting references.

Context for the article: I'm working on an ambitious long-term project to write a book about consciousness from a scientific and analytic (versus, say, a meditation-oriented) perspective. I didn't mention this in the article, but what I'd love is to meet people with a similar optimistic perspective and to learn and improve my communication skills through follow-up conversations.

If anyone is interested in chatting more about the topic of the article, please do email me. My email is in my HN profile. Thanks!

By @bubblyworld - 4 months
Something that strikes me about this model is that it's bottom-up: sensory data feeds in, in its entirety; the action centre processes everything, makes a decision, and sends a command to the motor centre.

There's a theory that real brains subvert this, and what we perceive is actually our internal model of our self/environment. The only data that makes it through from our sense organs is the difference between the two.

This kind of top-down processing is more efficient energy-wise but I wonder if it's deeper than that? You can view perception and action as two sides of the same coin - both are ways to modify your internal model to better fit the sensory signals you expect.

Anyway, I guess the point I'm making is that you should be careful about which way you point your arrows, and about designating a single aspect of a mind (the action centre) as fundamental. Reality might work very differently, and maybe that says something? I don't know haha.
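
As a toy illustration of that top-down view, here is a minimal predictive-coding loop, assuming a simple linear internal model (the numbers and setup are invented): only the prediction error, the difference between expected and actual sensation, drives updates.

```python
import numpy as np

# Toy predictive-coding loop: the internal model predicts the sensation,
# and only the prediction error propagates and corrects the belief.

rng = np.random.default_rng(0)
hidden_cause = np.array([1.0, -0.5, 2.0])    # what the world is actually like
belief = np.zeros(3)                         # internal model of the world
learning_rate = 0.1

for _ in range(200):
    sensation = hidden_cause + rng.normal(scale=0.05, size=3)
    prediction = belief                      # top-down expectation
    error = sensation - prediction           # the only signal that "gets through"
    belief = belief + learning_rate * error  # perception as model correction

print(np.round(belief, 2))  # ends up close to the hidden cause
```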

By @privacyonsec - 4 months
I don't see any scientific citations in this article on how the mind works or on the different parts it describes. Is it all speculation or science fiction?

By @paulmooreparks - 4 months
I've lately begun to think of consciousness as the ability to read and react to one's own log output. I don't like hypothesis by analogy, but it seems an apt description of what conscious entities do. I just don't see anything mystical about it.
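
A toy sketch of that "read your own log" idea in Python; the log format and the policy below are made up purely for illustration.

```python
# Toy self-monitoring agent: its previous trace is fed back in as part of
# its next input, so it reacts to its own history as well as to new input.

def policy(observation, own_log):
    # React to the agent's own recent log, not just the new observation.
    if any("contradiction" in line for line in own_log[-3:]):
        return "pause and re-check earlier conclusions"
    return f"respond to {observation!r}"

log = []
for observation in ["hello", "note: contradiction detected", "hello again"]:
    action = policy(observation, log)
    log.append(f"saw={observation!r} did={action!r}")

print("\n".join(log))
```
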
By @ilaksh - 4 months
It's a really fascinating topic, but I wonder if this article could benefit from some of the extensive prior work. There is actually quite a lot of work on AGI and cognitive architectures out there. For a more recent and popular take centered on LLMs, see David Shapiro.

Before that, you can look into the AGI conference people, like Ben Goertzel and Pei Wang, and, going further back, the decades of AI research before the field became about narrow AI.

I'd also like to suggest that creating something that truly closely simulates a living intelligent digital person is incredibly dangerous, stupid, and totally unnecessary. The reason I say that is because we already have superhuman capabilities in some ways, and the hardware, software and models are being improved rapidly. We are on track to have AI that is dozens if not hundreds of times faster than humans at thinking and much more capable.

If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

Don't get me wrong, I love AI and my whole life is planned around agents and AI. But I no longer believe it is wise to try to go all the way and create a "real" living digital species. And I know it's not necessary -- we can create effective AI agents without actually emulating life. We certainly don't need full autonomy, self-preservation, real suffering, reproductive instincts, etc. But that seems to be the goal he is heading toward in this article. I suggest leaving some of that out very deliberately.

By @devodo - 4 months
> (Pro-strong-AI)... This is basically a disbelief in the ability of physics to correctly describe what happens in the world — a well-established philosophical position. Are you giving up on physics?

This is a very strong argument. Certainly all the ingredients to replicate a mind must exist within our physical reality.

But does an algorithm running on a computer have access to all the physics required?

For example, there are known physical phenomena, such as quantum entanglement, that are not possible to emulate with classical physics. How do we know our brains are not exploiting these, or even other as-yet-unknown, physical phenomena?

An algorithm running on a classical computer is executing in a very different environment than a brain that is directly part of physical reality.

By @monocasa - 4 months
Reminds me a lot of the work done on the SOAR cognitive architecture.

https://en.wikipedia.org/wiki/Soar_%28cognitive_architecture...

By @Jensson - 4 months
> Now the LLM can choose to switch, at its own discretion, back and forth between a talking and listening mode

How would it intelligently do this? What data would you train on? You don't have trillions of words of text where humans wrote what they thought silently, interwoven with what they wrote publicly.

History has shown over and over that hard-coded, ad hoc solutions to these "simple problems" never create intelligent agents; you need to train the model to do that from the start, because you can't patch in intelligence after the fact. Those additions can be useful, but they have never been intelligent.

Anyway, I'd call such a model a "stream of mind" model rather than a language model. It would fundamentally solve many of the problems with current LLMs, whose thinking is reliant on the shape of the answer; a stream of mind model would shape its thinking to fit the problem and then shape the formatting to fit the communication needs.

A model like the one this guy describes would be a massive step forward, so I agree with this, but it is way too expensive to train, not due to lack of compute but due to lack of data. And I don't see that data being produced within the next decade, if ever; humans don't really like writing down their hidden thoughts, and you'd need to pay them to generate an amount of data equivalent to the internet...
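
For concreteness, here is an invented example of what such interleaved "stream of mind" data might look like; the mode tokens and the sample trace are made up, and, as the comment argues, no large corpus like this actually exists.

```python
# Invented example of "stream of mind" training data: private thoughts and
# public speech interleaved with explicit mode-switch tokens.

THINK, SPEAK, LISTEN = "<think>", "<speak>", "<listen>"
MODES = {THINK, SPEAK, LISTEN}

example = (
    f"{LISTEN} how far away is the moon "
    f"{THINK} roughly 384,000 km, round it for conversation "
    f"{SPEAK} about four hundred thousand kilometres"
)

def split_modes(text):
    """Split an interleaved trace into (mode, content) segments."""
    segments = []
    for token in text.split():
        if token in MODES:
            segments.append((token, []))
        else:
            segments[-1][1].append(token)
    return [(mode, " ".join(words)) for mode, words in segments]

for mode, content in split_modes(example):
    print(mode, "->", content)
```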

By @abcde777666 - 4 months
My instinct is that this is probably on the naive side. For instance, we use separation of concerns in our systems because we're too cognitively limited to create and manage deeply integrated systems. Nature doesn't have that problem.

Take, for instance, the idea that we can neatly keep the emotion system separate from the motor control system. Emotions are a cacophony of chemicals and signals traversing the entire body; they're not an enum of happy/angry/sad, we just interpret them as such. So you probably don't get to isolate them off in a corner.

Basically I think it's very tempting to severely underestimate the complexity of a problem when we're still only in theory land.

By @m0llusk - 4 months
Would recommend reading The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning by Daniel Bor for a lot of ideas strongly connected to recent research. My interpretation is that the mind ends up being a story-processing machine that builds stories about what has happened and is happening, and constructs and compares stories about what might happen or be made to happen. Of course it is difficult to summarize a whole book rich with references in a sentence, but the model seems arguably simpler and better established than what you are currently putting forward.

Very much looking forward to seeing continuing progress in all this.

By @whitten - 4 months
I think reading some of Roger Schank's books on different kinds of memory, like episodic memory, might be useful too:

https://kar.kent.ac.uk/21525/2/A_theory_of_the_acquisition_o...

Memory Organisation Packets might also address some of the issues encountered.

https://www.cambridge.org/core/books/abs/dynamic-memory-revi...

By @jcynix - 4 months
> I’m motivated by the success of AI-based language models to look at the future of digital minds.

When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.

Minsky, as quoted in https://www.newyorker.com/magazine/1981/12/14/a-i

By @visarga - 4 months
The model is good. Environment -> Perception -> Planning/Imagining -> Acting -> Learning from feedback.

What is missing from this picture is the social aspect. No agent gets smart alone; it's always an iterative "search and learn" process, distributed over many agents. Even AlphaZero had evolutionary selection and extensive self-play against its variants.

Basically we can think of culture as compressed prior experience, or compressed search.
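
A toy version of that loop with the social aspect included: several agents perceive, act, and learn against a shared value table that plays the role of culture (compressed prior experience). The two-action task, payoffs, and update rule are invented for illustration.

```python
import random

# Toy perceive -> plan -> act -> learn loop, distributed over many agents
# that pool their experience into one shared "culture" of value estimates.

ACTIONS = ["left", "right"]
TRUE_PAYOFF = {"left": 0.2, "right": 0.8}      # hidden environment payoffs
shared_values = {action: 0.5 for action in ACTIONS}

def episode(values, learning_rate=0.1, explore=0.2):
    # Plan: mostly exploit shared knowledge, sometimes explore.
    if random.random() < explore:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    # Act and perceive feedback from the environment.
    reward = TRUE_PAYOFF[action] + random.gauss(0, 0.05)
    # Learn: nudge the shared estimate toward what was observed.
    values[action] += learning_rate * (reward - values[action])

for _generation in range(200):
    for _agent in range(5):        # many agents contribute to one culture
        episode(shared_values)

print({a: round(v, 2) for a, v in shared_values.items()})  # "right" wins out
```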

By @navigate8310 - 4 months
The author talks about agency, which requires being able to take actions independently rather than just reacting to an input. However, the feedback provided by a two-input model also limits the mind model, as it now reacts to the feedback it receives while in listening mode. Isn't that contradictory to the concept of agency?

By @freilanzer - 4 months
How is this blog generated? With code and LaTeX formulas, it would be exactly what I'm looking for.

By @sonink - 4 months
The model is interesting. This is similar in parts to what we are building at nonbios. For example, sensory inputs are not required to simulate a model of a mind: if a human cannot see, the human mind is still clearly human.

By @mensetmanusman - 4 months
Whatever the mind is, it’s a damn cool subset of the universe.

By @Simplicitas - 4 months
Any discussion of a model for consciousness that doesn't include Daniel Dennett's take is a bit lacking from the get-go.

By @bbor - 4 months
You're on the right track :). Check out The Science of Logic, Neurophilosophy, I Am a Strange Loop, Brainstorms, and Yudkowsky's earlier work, if you haven't! Based on what you have here, you'd love 'em. It's a busy field, and a lively one IME. Sadly, the answer is no: the anxiety never goes away.

By @miika - 4 months
Ever since LLMs came out, many of us have been wondering about these things. It would be easy to say that perhaps our attention and senses somehow come together to formulate prompts, and that what appears in the mind is the output. And everything we ever experienced has trained the model.

But of course we can be assured it's not quite like that in reality. This is just another example of how our models for explaining life are a reflection of the current technological state.

Nobody considers the old clockwork universe now, and these AI-inspired ideas are going to fall short all the same. Yet progress is happening, and all these ideas and talks are probably important steps that carry us forward.

By @0xWTF - 4 months
Complete aside, but love the Tufte styles.

By @antiquark - 4 months
Nice ideas... now build it!