A Model of a Mind
The article presents a model for digital minds mimicking human behavior. It emphasizes data flow architecture, action understanding, sensory inputs, memory simulation, and learning enhancement through feedback, aiming to replicate human cognitive functions.
Read original article
The article discusses a model of how minds might work, focusing on creating digital minds that can mimic human behavior. The model includes components for agency, learning, thinking, and introspection, emphasizing a data flow architecture between modules. It explores the concept of an action model that understands and produces actions, similar to language models. The model incorporates sensory inputs, motor control, emotional states, and memory modules to simulate human-like processes. The author aims to build a system that behaves like a human without delving into the complexities of defining consciousness. The model is designed to enable agency through two-way conversations and memory formation based on experiences. It highlights the importance of updating weights in response to feedback to enhance learning capabilities. The ultimate goal is to create digital minds that can replicate human cognitive functions, offering insights into both artificial intelligence development and human brain functioning.
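As a rough illustration of the kind of data-flow architecture the article describes, here is a minimal sketch of modules feeding an action model; the module names and interfaces are invented for this example and are not the article's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Records experiences so later decisions can be conditioned on them."""
    episodes: list = field(default_factory=list)

    def record(self, observation, action):
        self.episodes.append((observation, action))

class ActionModel:
    """Stand-in for a learned model that maps context to an action,
    analogous to how a language model maps context to the next token."""
    def propose(self, observation, mood, memory):
        # A real system would run a trained model here; this only shows
        # that the chosen action depends on senses, emotion, and memory.
        return f"respond to {observation!r} (mood={mood}, remembered={len(memory.episodes)})"

def step(observation, mood, memory, action_model):
    """One pass of the data flow: sense -> decide -> act -> remember."""
    action = action_model.propose(observation, mood, memory)
    memory.record(observation, action)
    return action

memory, model = Memory(), ActionModel()
print(step("a greeting from the user", "curious", memory, model))
```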
Related
Some Thoughts on AI Alignment: Using AI to Control AI
The GitHub content discusses AI alignment and control, proposing Helper models to regulate AI behavior. These models monitor and manage the primary AI to prevent harmful actions, emphasizing external oversight and addressing implementation challenges.
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
Francois Chollet – LLMs won't lead to AGI – $1M Prize to find solution [video]
The video discusses the limitations of large language models, arguing that genuine understanding and problem-solving skills are still missing. A prize incentivizes AI systems that demonstrate these abilities. Adaptability and knowledge acquisition are highlighted as crucial for true intelligence.
Synthesizer for Thought
The article explores how synthesizers evolved into tools for music creation through a mathematical understanding of sound, enabling new genres. It examines interfaces for interacting with music and proposes analogous language-model-based tools for text analysis and concept representation, aiming to enhance creative processes.
Edelman's Steps Toward a Conscious Artifact (2021)
Gerald Edelman's roadmap for a Conscious Artifact, discussed in 2006, influences neuroscience and AI. The unpublished paper outlines key steps, spanning neurons, cognition, and AI, with two versions updated until May 2021.
Context for the article: I'm working on an ambitious long-term project to write a book about consciousness from a scientific and analytic (versus, say, a meditation-oriented) perspective. I didn't mention this in the article, but what I'd love is to meet people with a similarly optimistic perspective, and to learn and improve my communication skills through follow-up conversations.
If anyone is interested in chatting more about the topic of the article, please do email me. My email is in my HN profile. Thanks!
There's a theory that real brains subvert this, and what we perceive is actually our internal model of our self/environment. The only data that makes it through from our sense organs is the difference between the two.
This kind of top-down processing is more efficient energy-wise but I wonder if it's deeper than that? You can view perception and action as two sides of the same coin - both are ways to modify your internal model to better fit the sensory signals you expect.
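A toy numerical sketch of that idea, with made-up values: the internal estimate is revised only by the prediction error, i.e. the difference between what the senses deliver and what the model expected.

```python
import numpy as np

# Toy predictive-processing loop: the "brain" keeps an internal estimate of a
# sensory signal, and only the prediction error (signal - estimate) drives updates.
rng = np.random.default_rng(0)
true_signal = 0.7          # hidden cause out in the environment
estimate = 0.0             # internal model's current belief
learning_rate = 0.1        # how strongly errors revise the belief

for t in range(50):
    sample = true_signal + rng.normal(scale=0.05)   # noisy sensory input
    prediction_error = sample - estimate            # the only "data" passed on
    estimate += learning_rate * prediction_error    # perception: revise the model

print(round(estimate, 2))  # converges near 0.7
```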
Anyway, I guess the point I'm making is that you should be careful which way you point your arrows, and wary of designating a single aspect of a mind (the action centre) as fundamental. Reality might work very differently, and maybe that says something? I don't know haha.
Before that, you can look into the AGI conference people like Ben Goertzel and Pei Wang, and in fact the whole history of decades of AI research before it became about narrow AI.
I'd also like to suggest that creating something that truly closely simulates a living intelligent digital person is incredibly dangerous, stupid, and totally unnecessary. The reason I say that is because we already have superhuman capabilities in some ways, and the hardware, software and models are being improved rapidly. We are on track to have AI that is dozens if not hundreds of times faster than humans at thinking and much more capable.
If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.
Don't get me wrong, I love AI and my whole life is planned around agents and AI. But I no longer believe it is wise to try to go all the way and create a "real" living digital species. And I know it's not necessary -- we can create effective AI agents without actually emulating life. We certainly don't need full autonomy, self-preservation, real suffering, reproductive instincts, etc. But that seems to be the path he's going down in this article. I suggest leaving some of that out very deliberately.
This is a very strong argument. Certainly all the ingredients to replicate a mind must exist within our physical reality.
But does an algorithm running on a computer have access to all the physics required?
For example, there are known physical phenomena, such as quantum entanglement, that are not possible to emulate with classical physics. How do we know our brains are not exploiting these, or even as-yet-unknown, physical phenomena?
An algorithm running on a classical computer is executing in a very different environment than a brain that is directly part of physical reality.
https://en.wikipedia.org/wiki/Soar_%28cognitive_architecture...
How would it intelligently do this? What data would you train on? You don't have trillions of words of text where humans wrote down what they silently thought, interwoven with what they wrote publicly.
History has shown over and over that hard-coded, ad hoc solutions to these "simple problems" never work to create intelligent agents; you need to train the model to do that from the start, you can't patch in intelligence after the fact. Those additions can be useful, but they have never been intelligent.
Anyway, I'd call such a model a "stream of mind" model rather than a language model. It would fundamentally solve many of the problems with current LLMs, whose thinking is constrained by the shape of the answer, whereas a stream-of-mind model would shape its thinking to fit the problem and then shape the formatting to fit the communication needs.
A model like the one this guy describes would be a massive step forward, so I agree with this, but it is way too expensive to train, not due to lack of compute but due to lack of data. And I don't see that data being produced within the next decade, if ever; humans don't really like writing down their hidden thoughts, and you'd need to pay them to generate an amount of data equivalent to the internet...
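To make the "stream of mind" idea concrete, here is a hypothetical sketch of what one interleaved training example might look like; the tags and structure are invented purely for illustration, not a real dataset format.

```python
# One hypothetical training example: hidden reasoning interleaved with the
# text that is actually shown to others.
example = [
    {"channel": "thought", "text": "They asked about the deadline; check the calendar first."},
    {"channel": "thought", "text": "The review ends Friday, so Monday is safe to promise."},
    {"channel": "public",  "text": "I can have it to you by Monday."},
]

def to_training_text(turns):
    """Flatten interleaved turns into a single tagged sequence a model could be trained on."""
    return "".join(f"<{t['channel']}>{t['text']}</{t['channel']}>" for t in turns)

print(to_training_text(example))
```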
For instance, the idea that we can neatly have the emotion system separate from the motor control system. Emotions are a cacophony of chemicals and signals traversing the entire body - they're not an enum of happy/angry/sad - we just interpret them as such. So you probably don't get to isolate them off in a corner.
Basically I think it's very tempting to severely underestimate the complexity of a problem when we're still only in theory land.
Very much looking forward to seeing continuing progress in all this.
https://kar.kent.ac.uk/21525/2/A_theory_of_the_acquisition_o...
Memory Organisation Packets might also deal with issues encountered.
https://www.cambridge.org/core/books/abs/dynamic-memory-revi...
When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.
Minsky, as quoted in https://www.newyorker.com/magazine/1981/12/14/a-i
What is missing from this picture is the social aspect. No agent ever got smart on its own; it's always an iterative "search and learn" process, distributed over many agents. Even AlphaZero had evolutionary selection and extensive self-play against its variants.
Basically we can think of culture as compressed prior experience, or compressed search.
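A toy sketch of "compressed search" across agents, using entirely made-up numbers: each generation of agents searches locally, and the best result found by anyone becomes the shared starting point (the "culture") for the next.

```python
import random

random.seed(0)
TASK_OPTIMUM = 10.0  # the (hidden) value the agents are searching for

def local_search(start, steps=20):
    """One agent's lifetime of trial and error, starting from the shared prior."""
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)
        if abs(candidate - TASK_OPTIMUM) < abs(best - TASK_OPTIMUM):
            best = candidate
    return best

shared_prior = 0.0  # the "culture": best solution anyone has found so far
for generation in range(5):
    results = [local_search(shared_prior) for _ in range(10)]          # many agents in parallel
    shared_prior = min(results, key=lambda x: abs(x - TASK_OPTIMUM))   # compress their experience
    print(f"generation {generation}: shared prior = {shared_prior:.2f}")
```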
But of course we can be assured it's not quite like that in reality. This is just another example of how our models for explaining life are a reflection of the current technological state.
Nobody takes the old clockwork-universe model seriously now, and these AI-inspired ideas are going to fall short all the same. Yet progress is happening, and all these ideas and talks are probably important steps that carry us forward.