Cops are using AI chatbots to write crime reports. Will they hold up in court?
Police departments are adopting AI technology to quickly draft crime reports from body camera audio, improving efficiency but raising concerns about accuracy, bias, and the need for ethical oversight.
Police departments, including Oklahoma City, are beginning to utilize AI technology to draft crime reports from body camera audio. The software, named Draft One and developed by Axon, can generate reports in seconds, significantly reducing the time officers spend on paperwork. Officers have reported that the AI-generated reports are accurate and well-structured, sometimes capturing details they might have missed. However, there are concerns from legal experts and community activists regarding the implications of using AI in this capacity. Critics worry that reliance on AI could lead to inaccuracies in reports, particularly due to the technology's potential for "hallucination," where it generates false information. Additionally, there are fears that the automation of report writing could exacerbate existing biases in policing, particularly against marginalized communities. Currently, the AI tool is primarily used for minor incidents, with caution advised for more serious cases. As the technology evolves, discussions about its ethical implications and the need for oversight are becoming increasingly important. The integration of AI in police work is seen as a potential game changer, but it raises significant questions about accountability and the integrity of the criminal justice process.
- Police departments are using AI to draft crime reports from body camera audio.
- The AI tool can produce reports in seconds, improving efficiency for officers.
- Concerns exist regarding the accuracy and potential biases of AI-generated reports.
- The technology is currently limited to minor incidents, with caution advised for serious cases.
- Ongoing discussions about ethical implications and oversight are crucial as AI use expands in policing.
Related
The AI job interviewer will see you now
AI job interview systems are being adopted by companies to streamline hiring, with 10% of U.S. firms using them and 30% planning to. Concerns about bias and transparency persist.
Ask HN: Will AI make us unemployed?
The author highlights reliance on AI tools like ChatGPT and GitHub Copilot, noting a 30% efficiency boost and concerns about potential job loss due to AI's increasing coding capabilities.
Britain to use "AI" to answer taxpayer's letters
The UK Treasury is using AI to manage taxpayer complaints, claiming a 30% productivity increase. However, the PCS union warns of miscommunication risks due to inadequate AI training and oversight.
Companies ground Microsoft Copilot over data governance concerns
Many enterprises are pausing Microsoft Copilot implementations due to data governance concerns, with half of surveyed chief data officers restricting use over security issues and complex data access permissions.
Microsoft Copilot falsely accuses court reporter of crimes he covered
German journalist Martin Bernklau was falsely accused of serious crimes by Microsoft's Copilot due to its contextual misunderstanding, leaving him without legal recourse and highlighting risks of AI misinformation.
That’s a fundamental issue with so many of the proposed use cases for these things. You can’t throw an AI in jail for filing a false police report. Yes, you can simply say the officer must review, but we all know folks are going to cut corners. Until someone goes to jail for filing a false police report and fails with a “but the AI did it” defense, I’d expect full-blown shenanigans here.
We got a preview of this with the lawyer who filed an error-ridden court filing; the judge threw the book at the authoring attorney and wasn’t having any of his “but the AI wrote it” defense.
Perhaps the AI transcription isn't such a bad idea if the original output is preserved as a diff against the officer's final report.
The chatbot result should be stored alongside the video, with links to the appropriate sections of the video. Then the defense can look for discrepancies and the judge can easily check them.
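As a rough illustration, here is a minimal sketch of what such a record could look like. This is a hypothetical schema of my own invention, not anything Axon actually ships: it keeps the verbatim AI draft, the report the officer actually files, the body-cam timestamps each drafted claim was drawn from, and a unified diff between the two so discrepancies are easy to spot.

    # Hypothetical record format (not Axon's actual schema) pairing the AI draft,
    # the officer's filed report, and body-cam timestamps for each drafted claim.
    import difflib
    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str               # one sentence from the AI draft
        video_start_s: float    # where in the body-cam footage it came from
        video_end_s: float

    @dataclass
    class ReportRecord:
        ai_draft: str           # verbatim output of the drafting tool
        officer_final: str      # the report actually filed
        citations: list = field(default_factory=list)  # list of Claim objects

        def diff(self) -> str:
            """Unified diff between the AI draft and the filed report."""
            return "\n".join(difflib.unified_diff(
                self.ai_draft.splitlines(),
                self.officer_final.splitlines(),
                fromfile="ai_draft", tofile="officer_final", lineterm=""))

    record = ReportRecord(
        ai_draft="Subject consented to a search of the vehicle.",
        officer_final="Subject refused consent; the vehicle was searched after a K-9 alert.",
        citations=[Claim("Subject consented to a search of the vehicle.", 312.0, 339.5)])
    print(record.diff())  # the defense sees the discrepancy plus timestamps to verify it

Whether something like this would survive discovery rules and records-retention policies is a separate question, but technically it is trivial to keep.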
The article even mentions officers being more thoughtful with their words during stops so that the audio can be summarized more easily later on.
I for one welcome new technology to reduce human error and biases.
This I don't understand:
> He said automating those reports will “ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.”
The premise is that anything that makes a police officer's job easier means they can do more policing, which equates to making some people's lives worse?
From one perspective, there should be no harm in simply using LLMs as a text processor, if the accounts written down in said text are genuine and verified as such by the officer.
From a more practical perspective, an account written by a witness themselves will always be more faithful to what they actually saw and how they interpreted it than a statement written by any third party. It doesn’t matter whether the third party is human or not.
Also, LLMs show heavy Silicon Valley ethical bias. This is not news to anyone here. To ingest this bias into our legal system, I think grants too much power to the tech companies. Especially in common law, where precedents can be established.
On the other hand, the justice system will be largely supervised by humans, so perhaps they will discern what is right and wrong, moral and immoral, ethical and unethical, or what has perverted ethics.
Then again, if this system becomes more AI-based over time, we may lose human control and our legal system may be in large part controlled by tech companies. This is not a good slope to slide down, and how slippery it is remains to be seen.
Also, what if the halo effect created by the way an LLM expresses itself hides institutional prejudice? You can always push ChatGPT to come up with 20 polite ways to express your racist beliefs, for example. But if you wrote your own statements, those beliefs might be more visible. I’m sure police already see the PR-friendly polish of LLM output as a benefit, but is there a dark side to this?
Lastly, what about data protection? What if I don’t want my data to be ingested by LLMs for training, but I am, let’s say, a victim of a crime? Do I have no reasonable expectation of privacy anymore? Remember what Google said about people who hand over their data to third parties and how they argued it is evidence that such people don’t have a reasonable expectation of privacy.
The responsible adult thing to do would be to run research on it and find out how LLMs bias police statements and what kind of effects that can have on a justice system: proper double-blind studies, done by a committee of experts ranging from judges to civil-justice institutes and projects.
With so many ways to slip off the thin tightrope where everyone acts with integrity and in the competent best interest of one another and of justice… I think this is bound to end poorly if we go balls to the wall with this sort of thing.
And then to top this off, other commenters present strong arguments that further complicate the matter.
1) Most police reports are never read after they are written. Literally by nobody. Not the cop, not their superior, not the prosecutor, not the defense, not the court. Most criminal cases end in plea deals and the police reports will never be pulled or looked at. Most charges are filed by an on-duty low-level prosecutor who fields phone calls all day from officers who reiterate the facts of the crime and ask for the statutes that have been violated.
2) Police have to testify at a trial, usually from memory. This often happens one, two, three years or more after they wrote the report. They are usually handed their old report outside the courtroom before they go in to refresh their memory. This is problematic as they will naturally conform their sworn testimony to whatever is written on the report.
3) Police reports right now are VERY badly written. They are impossibly short, impossibly vague and ridiculously low on detail. If you go back to some code you wrote last year and wonder what the hell it does because you didn't comment any of it, it's a bit like that.
4) Police reports are usually horseshit, for various reasons. They tell one side of a story. The suspect will have another view. The truth generally lies somewhere in the middle.
5) I think this might help, because the AI is naturally more neutral than an officer whose whole job is to apprehend criminals and whose natural tendency is to make themselves look good, just and right, and to make the suspect look like a really horrible human being.
6) The problem is that the AI might mishear, misunderstand or just plain hallucinate. At first the officers will re-read it, but after a few of these they will get lazy and just click OK on every report. I've been using AI to generate alt text for images and I've been getting lazy about checking it. Just yesterday it told me this cup[0] said "COFFEE" on it, which is a good guess, but it actually says OCEAN FEELS. It guessed, and it was 100% sure it was right.
Just my two centimes.