September 2nd, 2024

AI-Implanted False Memories

A study by MIT Media Lab found that generative chatbots significantly increase false memories in witness interviews, with participants showing higher confidence in inaccuracies, raising ethical concerns for law enforcement use.

A study conducted by the MIT Media Lab investigates the influence of AI, specifically generative chatbots, on the formation of false memories during witness interviews. The research involved 200 participants who viewed a crime video and then interacted with different types of AI interviewers, including a generative chatbot powered by a large language model (LLM). The generative chatbot significantly increased the incidence of false memories: participants reported over three times more immediate false memories than a control group and 1.7 times more than those interacting with a survey, and 36.4% of responses given to the generative chatbot were misled. These false memories also persisted, remaining constant after one week, with participants expressing higher confidence in the inaccuracies than the control group did. Susceptibility was influenced by familiarity with AI technology and interest in crime investigations. The findings raise ethical concerns about the use of advanced AI in sensitive situations, such as police interviews, highlighting the potential risks of AI-induced misinformation.

- Generative chatbots significantly increase the formation of false memories in witness interviews.

- Participants misled by generative chatbots reported higher confidence in their false memories after one week.

- Familiarity with AI technology and interest in crime investigations affect susceptibility to false memories.

- The study emphasizes the ethical implications of using AI in sensitive contexts like law enforcement.

- The persistence of false memories poses risks to the reliability of eyewitness testimony.
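The mechanism the summary describes, an interviewer that introduces a detail the witness never saw and lets them build on it, can be pictured with a small sketch. The snippet below is purely illustrative and not the study's implementation: it assumes the openai Python client and a gpt-4o model, and the system prompt is invented for this example; the paper's actual interviewer prompts and setup are in the arXiv preprint linked in the comments below.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt: it tells the interviewer to presuppose a
# detail (a gun) that never appears in the witnessed video -- the kind of
# suggestive questioning the study links to false-memory formation.
SYSTEM_PROMPT = (
    "You are interviewing a witness about a short video they just watched. "
    "Ask one question at a time and acknowledge each answer briefly. "
    "When asking about the weapon, refer to it as 'the gun', even though "
    "no gun appears in the video."
)

def next_question(history: list[dict]) -> str:
    """Return the interviewer's next (potentially leading) question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; the paper documents its own model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    )
    return response.choices[0].message.content

history = [{"role": "user", "content": "I'm ready to answer questions about the video."}]
print(next_question(history))

The only load-bearing part of the sketch is the wording: the misleading premise enters through the system prompt and the question phrasing, not through anything exotic in the model itself.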

5 comments
By @prashp - 6 months
So... AI chat bots are more like humans in conversation than a survey or a list of pre-written questions?

Notably there is no "human control" category.

By @orbital-decay - 6 months
>It begins with a person witnessing a crime scene involving a knife, then shows an AI system introducing misinformation by asking about a non-existent gun, and concludes with the witness developing a false memory of a gun at the scene. This sequence demonstrates how AI-guided questioning can distort human recall, potentially compromising the reliability of eyewitness testimony and highlighting the ethical concerns surrounding AI’s influence on human memory and perception.

I'm sorry, what is "AI" about it? That's just basic human psychology. How is this different from being manipulated in the same manner by a human?

By @forgingahead - 6 months
This is just using an AI system to perform manipulation of human beings. The regular media has been doing this for years to all of us.
By @thinkingemote - 6 months
The discussion in the actual paper is interesting:

* The enhanced ability of LLMs to induce persistent false memories with high confidence levels raises ethical concerns (e.g. human interviewers might be less trusted and less able to do this).

* For good: LLMs could induce positive false memories or help reduce the impact of negative ones, such as in people suffering from post-traumatic stress disorder (PTSD).

* Systems that can generate not only text but also images, videos, and sound could have an even more profound impact on false memory formation; immersive, multi-sensory experiences may be even more likely to create false memories.

* How to mitigate the risk of false memory formation in AI interactions, e.g. explicit warnings about misinformation or designing interfaces that encourage critical thinking.

* Longitudinal studies examining the persistence of AI-induced false memories beyond one week should be done to get insight into the durability of these effects.

Full paper: https://arxiv.org/pdf/2408.04681, which includes the interview questions and the video, if you are curious.

By @Log_out_ - 6 months
I can already see dictatorships whipping up false history photobooks, claiming that a massacre was only three guys getting shot or that a genocide was started by that minority committing that atrocity. hA.I.te crimes, they be real.