September 7th, 2024

Will humans ever become conscious?

Jiddu Krishnamurti expressed concerns about AI fostering a mechanistic mindset, urging the cultivation of non-mechanical aspects of the mind to preserve human identity and encourage introspection amid technological advancement.

Jiddu Krishnamurti's reflections on artificial intelligence (AI) provide a unique perspective on the ongoing debates about consciousness and the implications of machines mimicking human thought. In the 1980s, Krishnamurti expressed concern that the rise of AI could lead to a crisis not only in society but also in the philosophical and psychological understanding of what it means to be human. He argued that if machines could replicate human cognitive processes, this would raise questions about human identity and purpose.

Rather than fearing machines that might achieve human-like intelligence, Krishnamurti warned against humans adopting a mechanistic mindset, which could render them replaceable by machines. His insights suggest that the focus should be on cultivating the non-mechanical aspects of the human mind to avoid a future in which humans become mere extensions of technology.

Krishnamurti's thoughts resonate with contemporary concerns about AI's impact on culture and meaning, emphasizing the need for introspection and transformation in the face of technological advancement. His work invites a deeper exploration of the relationship between AI and human cognition, urging individuals to reflect on their mental states and the potential consequences of a society increasingly reliant on machines.

- Jiddu Krishnamurti raised concerns about AI leading to a mechanistic human mindset.

- He emphasized the importance of cultivating non-mechanical aspects of the mind.

- Krishnamurti's insights challenge the notion of human identity in the age of AI.

- His reflections encourage introspection regarding the implications of AI on culture and meaning.

- The discourse on AI should include philosophical and psychological dimensions of human cognition.

3 comments
By @musicale - 7 months
The Chinese Room argument convinced me that John Searle is likely a non-intelligent imitation of a philosophy professor.
By @rramadass - 7 months
One good framework to think about the problem of "Consciousness" in Humans vs. AI/AGI is the Samkhya School of Hindu Philosophy - https://en.wikipedia.org/wiki/Samkhya

See my past comments here for some background - https://news.ycombinator.com/item?id=40479388

This is how I work it out:

1) "Consciousness" defined as attribute-less pure awareness/witness and immutable "Self" (aka Purusha in Samkhya) will never be possible in AI/AGI.

2) All other human mental states which are mutable and considered evolutes of Prakriti in Samkhya are possible in AI/AGI. A simple mapping would be:

2.1) Buddhi in Samkhya ~ Metacognition - https://en.wikipedia.org/wiki/Metacognition

2.2) Manas in Samkhya ~ Phenomenology - https://en.wikipedia.org/wiki/Phenomenology_(psychology)

2.3) Ahamkara in Samkhya ~ Emergent property arising from the above, in the sense of "I think, therefore I am".

Now realize that (1) is purely experiential/internal while only (2) is observable from an external point of view, and you get an idea of why AI/AGI may seem "Conscious" (in the common usage of the word) to external observers.

By @tim333 - 7 months
>...worried that an insufficiently cultivated mind ... would be perfectly imitable and thus replaceable by computers and other machines

I'm not sure cultivating our minds will buy much time there. But machines will never be an exact replacement, in the same way that humans are not an exact replacement for cats and dogs despite being smarter.