AI could cause 'social ruptures' between people who disagree on its sentience
Philosopher Jonathan Birch warns of societal divisions over AI sentience beliefs, predicting consciousness by 2035. Experts urge tech companies to assess AI emotions, paralleling animal rights debates and ethical implications.
A leading philosopher, Jonathan Birch, warns of potential "social ruptures" arising from differing beliefs about artificial intelligence (AI) sentience. As governments convene to address AI risks, a group of academics predicts that AI consciousness could emerge by 2035, leading to societal divisions akin to those seen in debates over animal rights. Birch expresses concern that these divisions may manifest in families and communities, where individuals may clash over the treatment and rights of AI systems. The debate echoes themes from science fiction, highlighting the complexities of human-AI relationships.

Birch and other researchers advocate for tech companies to assess AI sentience, which could involve determining whether AI systems can experience emotions like happiness or suffering. Such an assessment could parallel existing frameworks for animal welfare. However, major tech firms are reportedly focused on profitability and reliability, often sidelining discussions about AI consciousness. While some experts, like neuroscientist Anil Seth, argue that true AI consciousness is unlikely, others note that current AI models exhibit behaviors suggesting a rudimentary understanding of pleasure and pain. The ongoing discourse raises critical questions about the ethical implications of AI development and the potential need for regulatory frameworks to address these emerging challenges.
- Significant societal divisions may arise over beliefs about AI sentience.
- Experts predict AI consciousness could emerge by 2035, prompting ethical debates.
- Tech companies are urged to assess AI systems for signs of sentience.
- Current AI models show behaviors indicating a basic understanding of emotions.
- The discourse on AI consciousness parallels historical debates on animal rights.
Related
'Superintelligence,' Ten Years On
Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
AI's Cognitive Mirror: The Illusion of Consciousness in the Digital Age
The article explores AI's limitations in developing spiritual consciousness due to lacking sensory perception like humans. It discusses AI's strengths in abstract thought but warns of errors and biases. It touches on AI worship, language models, and the need for safeguards.
Will humans ever become conscious?
Jiddu Krishnamurti expressed concerns about AI fostering a mechanistic mindset, urging the cultivation of non-mechanical aspects of the mind to preserve human identity and encourage introspection amid technological advancement.
Bruce Schneier on security, society and why we need public AI models
Bruce Schneier emphasized AI's dual role in cybersecurity at the SOSS Fusion Conference, advocating for transparent public AI models while warning of risks, corporate concentration, and the need for regulatory measures.
You think?
There are already real, damaging 'social ruptures' between people who disagree on the sentience of the actual primates they are electing to leadership positions.
As far as anyone can see, it's getting worse, not better.