Chatbots can persuade people to stop believing in conspiracy theories
Researchers from MIT Sloan and Cornell University found that AI chatbots can reduce belief in conspiracy theories by about 20% through tailored conversations, with 99.2% of the chatbots' claims found to be accurate.
Researchers from MIT Sloan and Cornell University have found that AI chatbots can reduce belief in conspiracy theories by approximately 20%. The study, published in the journal Science, highlights the potential of large language models (LLMs) to engage users in conversations that challenge their beliefs. The research involved 2,190 participants who each discussed a conspiracy theory they found credible with the AI, which tailored its responses to counter the specific claims that user made. Participants' confidence in the conspiracy theories dropped significantly after several exchanges with the chatbot. The AI's accuracy was also high, with 99.2% of its claims found to be true, suggesting that the extensive data available on conspiracy theories allowed the model to provide reliable counterarguments. The researchers propose that this approach could be deployed on platforms such as social media and conspiracy forums to promote critical thinking and evidence-based discussion. The findings challenge previous assumptions that conspiracy theorists are resistant to factual evidence, indicating that people may be more open to changing their beliefs than previously thought.
- AI chatbots can reduce belief in conspiracy theories by about 20%.
- The study involved 2,190 participants discussing their conspiracy beliefs with an AI.
- The AI provided tailored responses based on users' claims, leading to significant belief changes.
- 99.2% of the AI's claims were found to be true, indicating high accuracy.
- The research suggests potential applications for AI in combating misinformation on social media and forums.
Related
ChatGPT Isn't 'Hallucinating'–It's Bullshitting – Scientific American
AI chatbots like ChatGPT can generate false information, termed "bullshitting" by the authors to clarify responsibility and prevent misconceptions. Accurate terminology is crucial for understanding AI technology's impact.
Chatbots Are Primed to Warp Reality
The integration of AI chatbots raises concerns about misinformation and manipulation, particularly in political contexts, as they can mislead users and implant false memories despite efforts to improve accuracy.
AI-Implanted False Memories
A study by MIT Media Lab found that generative chatbots significantly increase false memories in witness interviews, with participants showing higher confidence in inaccuracies, raising ethical concerns for law enforcement use.
Study shows 'alarming' level of trust in AI for life and death decisions
A study from UC Merced reveals that two-thirds of participants trusted unreliable AI in life-and-death decisions, raising concerns about AI's influence in military, law enforcement, and medical contexts.
GPTs and Hallucination
Large language models, such as GPTs, generate coherent text but can produce hallucinations, leading to misinformation. Trust in their outputs is shifting from expert validation to crowdsourced consensus, affecting accuracy.
In Internet debates, conspiracy theorists are often mocked, or the discussion turns disrespectful, which only makes them cling more tightly to their theories.