August 20th, 2024

Copilot turns a court reporter into a child molester

Microsoft's Copilot mistakenly identified journalist Martin Bernklau as a criminal, leading to false allegations. This incident raises concerns about AI misinformation and compliance with GDPR, prompting calls for better protections.

Microsoft's Copilot has mistakenly identified journalist Martin Bernklau as a child molester and other criminal figures because it cannot distinguish the individuals he reports on from the defendants in the cases themselves. When the AI was asked about Bernklau, it disseminated false information about him, including his personal details. Bernklau filed a criminal complaint, but it was dismissed on the grounds that AI-generated output has no identifiable author. The data protection officer of the Bavarian State Office found that the false claims initially stopped appearing, but they resurfaced shortly afterwards. The incident raises concerns about the implications of AI-generated misinformation, particularly for professionals such as journalists, lawyers, and judges who frequently work on sensitive cases. It also highlights the difficulty of complying with GDPR regulations, since AI models cannot easily rectify or delete false information without affecting all related data. The case has drawn attention from privacy advocacy groups, who emphasize the need for better safeguards against the spread of misinformation by AI systems.

- Microsoft Copilot mistakenly labeled journalist Martin Bernklau as a criminal.

- The AI confused Bernklau with defendants he reported on, leading to false allegations.

- A criminal complaint filed by Bernklau was dismissed due to lack of identifiable authorship.

- The incident raises concerns about AI's compliance with GDPR regulations.

- Privacy advocates are calling for better protections against AI-generated misinformation.

7 comments
By @OJFord - 8 months
Paediatricians are somewhat frequently^ the victims of idiot attackers who think they are (based on the prefix of that job) paedophiles.

LLMs giving confidently incorrect information like this is so much worse: it takes much less of an idiot to take it at face value and, if so minded, to attack the innocent journalist (or anyone else similarly associated with the real criminals, as the article points out).

(^I mean, relative to the incidence of paedophilia in the field, or certainly to attacks on other professions based on misguided assumptions; far too frequent, with several occurrences in the last 24 years it seems (I initially just wanted to check the details of the one case I dimly recalled), but not like it's happening every week.)

By @pjc50 - 8 months
Search engines have long used the defence that they're not publishers, they merely guide you to publications by other people. Replacing search with LLM changes that. Who's the publisher when Bing publishes the output of an LLM? I would say that it's Microsoft. And therefore Microsoft are liable when their machine libels people.

By @polotics - 8 months
This is interesting: "people in the EU also have the right to access the information stored about them under the GDPR". I think I am going to have to ask OpenAI about all the weights that are related to me. You should do the same.