August 26th, 2024

Cops are using AI chatbots to write crime reports. Will they hold up in court?

Police departments are adopting AI technology to quickly draft crime reports from body camera audio, improving efficiency but raising concerns about accuracy, bias, and the need for ethical oversight.

Police departments, including Oklahoma City’s, are beginning to use AI technology to draft crime reports from body camera audio. The software, named Draft One and developed by Axon, can generate reports in seconds, significantly reducing the time officers spend on paperwork. Officers have reported that the AI-generated reports are accurate and well-structured, sometimes capturing details they might have missed. However, there are concerns from legal experts and community activists regarding the implications of using AI in this capacity. Critics worry that reliance on AI could lead to inaccuracies in reports, particularly due to the technology's potential for "hallucination," where it generates false information. Additionally, there are fears that the automation of report writing could exacerbate existing biases in policing, particularly against marginalized communities. Currently, the AI tool is primarily used for minor incidents, with caution advised for more serious cases. As the technology evolves, discussions about its ethical implications and the need for oversight are becoming increasingly important. The integration of AI in police work is seen as a potential game changer, but it raises significant questions about accountability and the integrity of the criminal justice process.

- Police departments are using AI to draft crime reports from body camera audio.

- The AI tool can produce reports in seconds, improving efficiency for officers.

- Concerns exist regarding the accuracy and potential biases of AI-generated reports.

- The technology is currently limited to minor incidents, with caution advised for serious cases.

- Ongoing discussions about ethical implications and oversight are crucial as AI use expands in policing.

15 comments
By @JCM9 - 8 months
The challenge here is the broader challenge with AI (e.g., self-driving cars): folks want the benefits while also absolving themselves of the responsibility, liability, and accountability for the quality of the output.

That’s a fundamental issue with so many of the proposed use cases for these things. You can’t throw an AI in jail for filing a false police report. Yes, you can simply say the officer must review, but we all know folks are going to cut corners. Until someone goes to jail for filing a false police report and fails with a “but the AI did it” defense, I’d expect full-blown shenanigans here.

We got a preview of this with the lawyer who filed an error-ridden court filing; the judge threw the book at the authoring attorney and wasn’t having any of his “but the AI wrote it” defense.

By @riiii - 8 months
I've seen a policeman write a crime report. He cherry-picked and refurbished facts in a way that would have made an AI blush.

Perhaps the AI transcription isn't such a bad idea, if the original output is preserved so it can be diffed against the policeman's final report.
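
A minimal sketch of that idea, assuming the raw Draft One output is preserved as plain text alongside the officer's final report (the filenames here are hypothetical):

    import difflib

    # Hypothetical files: the raw AI draft and the officer's edited final report.
    ai_draft = open("report_1234_ai_draft.txt").read().splitlines()
    final_report = open("report_1234_final.txt").read().splitlines()

    # A unified diff shows exactly what the officer changed, added, or removed
    # relative to what the model originally wrote.
    for line in difflib.unified_diff(ai_draft, final_report,
                                     fromfile="ai_draft", tofile="final",
                                     lineterm=""):
        print(line)

Anything absent from the diff was machine-written and left untouched; anything in it is the officer's own hand.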

By @theamk - 8 months
This is scary. There will be inevitable hallucinations, but this time they might land innocent people in jail.
By @Animats - 8 months
This does create evidence problems.

The chatbot result should be alongside the video, with links to the appropriate section of the video. Then the defense can look for discrepancies and the judge can easily check them.
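
One way to structure that linkage, sketched below; the segment format and timestamps are made up for illustration, not anything Axon actually ships:

    from dataclasses import dataclass

    @dataclass
    class ReportSegment:
        text: str           # one claim from the AI-drafted report
        video_start: float  # seconds into the body-cam footage it came from
        video_end: float

    # Hypothetical report: every claim carries a pointer back to its source
    # footage, so the defense can jump straight to the relevant clip.
    report = [
        ReportSegment("Subject stated he had been at the bar since 6 pm.", 312.0, 334.5),
        ReportSegment("An open container was visible on the passenger seat.", 401.2, 409.8),
    ]

    for seg in report:
        print(f"{seg.text}  [video {seg.video_start:.0f}s to {seg.video_end:.0f}s]")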

By @golergka - 8 months
LLMs are an excellent text processing tool. But it doesn't matter what tools you used: you're responsible for whatever you wrote.
By @kazinator - 8 months
I suspect crime reports with spelling and grammar fixed by AI might hold up better in court than unassisted reports.
By @more_corn - 8 months
AI chatbots struggle to conform to facts. This will end badly.
By @jcims - 8 months
What might be better is to have a large language model interview the officer, with the whole thing recorded and transcribed.
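
A rough sketch of what that loop might look like, with a canned question list standing in for a real chat model generating follow-ups, and typed input standing in for recorded, transcribed speech:

    # Canned questions stand in for a chat model; input() stands in for
    # audio capture plus speech-to-text of the officer's spoken answers.
    QUESTIONS = [
        "Walk me through the call from dispatch onward.",
        "What did you observe when you arrived on scene?",
        "Did anyone else witness the incident?",
    ]

    transcript = []
    for q in QUESTIONS:
        transcript.append(("Interviewer", q))
        answer = input(f"{q}\n> ")
        transcript.append(("Officer", answer))

    # The verbatim transcript, not a paraphrase, becomes the record.
    for speaker, line in transcript:
        print(f"{speaker}: {line}")
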
By @russdpale - 8 months
This is an absolutely terrible idea, people's lives are going to be ruined by this.
By @hypeatei - 8 months
I honestly don't see an issue with this. This is the type of work that AI is made for: augmenting human tasks. Someone still reviews this before final submission (because court).

The article even mentions officers being more thoughtful with words during the stops so that it can be summarized easily later on.

By @bko - 8 months
I think the right balance is one of liability. There is a person responsible, and he can be creative as to how he creates the report. But if there is something wrong with the report, he is responsible, the same as if he had written it himself.

I for one welcome new technology to reduce human error and biases.

This I don't understand:

> He said automating those reports will “ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.”

The premise here is that anything that makes a police officer's job easier means they can do more policing, which equates to making some people's lives worse?

By @caseyy - 8 months
This is very multi-faceted and complex.

From one perspective, there should be no harm in simply using LLMs as a text processor, if the accounts written down in said text are genuine and verified as such by the officer.

From a more practical perspective, an account written by a witness themselves will always be more accurate to what they actually saw and how they interpreted it than a statement written by any third party. It doesn’t matter if the third party is human or not.

Also, LLMs show heavy Silicon Valley ethical bias. This is not news to anyone here. Ingesting this bias into our legal system, I think, grants too much power to the tech companies. Especially in common law, where precedents can be established.

On the other hand, the justice system will be largely supervised by humans, so perhaps they will discern what is right and wrong, moral and immoral, ethical and unethical, or what has perverted ethics.

Then again, if this system becomes more AI-based over time, we may lose human control and our legal system may be in large part controlled by tech companies. This is not a good slope to slide down, and how slippery it is remains to be seen.

Also, what if the halo effect created by the way an LLM expresses itself hides institutional prejudice? You can always push ChatGPT to come up with 20 polite ways to express your racist beliefs, for example, but if you wrote your own statements, those beliefs might be more visible. I’m sure police already see it as a benefit that whatever LLMs produce is more PR-friendly, but is there a dark side to this?

Lastly, what about data protection? What if I don’t want my data to be ingested by LLMs for training, but I am, let’s say, a victim of a crime? Do I have no reasonable expectation of privacy anymore? Remember what Google said about people who hand over their data to third parties and how they argued it is evidence that such people don’t have a reasonable expectation of privacy.

The responsible adult thing to do would be to run research on this and find out how LLMs bias police statements and what kind of effects that can have on a justice system: proper double-blind studies, done by a committee of experts, from judges to institutes and projects for civil justice.

With so many ways to slip off the thin tightrope where everyone acts with integrity and in not just the best interest, but the competent best interest, of one another and of justice… I think this is bound to end poorly if we go balls to the wall with this sort of thing.

And then to top this off, other commenters present strong arguments that further complicate the matter.

By @speckx - 8 months
Yes. Until it's challenged.
By @silveira - 8 months
Cops will be fine, I can tell you that.
By @qingcharles - 8 months
I feel qualified to comment on this as I've read and dissected thousands of police reports.

1) Most police reports are never read after they are written. Literally by nobody: not the cop, not their superior, not the prosecutor, not the defense, not the court. Most criminal cases end in plea deals, and the police reports will never be pulled or looked at. Most charges are filed by an on-duty low-level prosecutor who fields phone calls all day from officers who reiterate the facts of the crime and ask for the statutes that have been violated.

2) Police have to testify at a trial, usually from memory. This often happens one, two, three years or more after they wrote the report. They are usually handed their old report outside the courtroom before they go in to refresh their memory. This is problematic as they will naturally conform their sworn testimony to whatever is written on the report.

3) Police reports right now are VERY badly written. They are impossibly short, impossibly vague and ridiculously low on detail. If you go back to some code you wrote last year and wonder what the hell it does because you didn't comment any of it, it's a bit like that.

4) Police reports are usually horseshit, for various reasons. They tell one side of a story. The suspect will have another view. The truth generally lies somewhere in the middle.

5) I think this might help, because the AI is naturally more neutral than an officer whose whole job is to apprehend criminals and whose natural tendency is to make themselves look good and just and right, and to make the suspect look like a really horrible human being.

6) The problem is that the AI might mishear, misunderstand or just plain hallucinate. At first the officers will re-read it, but after a few of these they will get lazy and just click OK on every report. I've been using AI to generate Alt Text for images and I've been getting lazy at checking it. Just yesterday it told me this cup[0] said "COFFEE" on it, which is a good guess, but it actually says OCEAN FEELS. It guessed, and it was 100% sure it was right.

Just my two centimes.

[0] https://imgur.com/a/9UEtzAD