What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that once a report carries the disclaimer that it was generated by AI, enough jurists begin to associate that with "likely more correct than not".
Reading about how AI is being approached in China, the focus there is more on achieving day-to-day utility without eviscerating youth employment.
In contrast, the SV focus on AI has been about Skynet / the singularity, with a hype cycle to match.
This is reinforced by the lack of clarity on actual benefits and the absence of clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1 - and for use cases where the operator is highly trained and capable of verifying the output.
Outside of that, you seem to be in the land of voodoo, dealing with something that eerily mimics human speech while having no reliable way of finding out whether it's just BS-ing you.
This does sound problematic, but if a police officer's report contradicts the body-worn camera or other evidence, it already undermines their credibility, whether they blame AI or not. My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.
> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report
The bigger issue, which the article doesn't cover, is that police officers may not carefully review the AI-generated report, and then, when appearing in court months or years later, will testify to whatever is in the report, accurate or not. So the issue is that the officer never contradicts the inaccuracies in the report.
If an officer misremembers something about you, you go to jail. If you misremember something about the event, you also go to jail. Yeah, I guess it tracks.
That's why we need a greatly reduced standard of proof for officer misconduct, especially when it comes to consequences like just losing your job (as opposed to, e.g., jail time).
While I agree that officers should be accountable, more enforcement will not suddenly make them good officers. Other nations train their police for years before putting them into the thick of it. US police spend far less time studying, and it shows in everything from de-escalation tactics to general legal understanding. If you create a pipeline to weed out bad officers, there also needs to be a pipeline producing better ones.
The experiments with AI agents sending emails to grown-ups are good, I think – AIs are already doing much more dangerous stuff, like these AI police reports. I don't think making a fuss over every agent-sent email is going to slow other AI incursions into our society. The police report writer is a non-human, partially autonomous participant, like a K9 officer. It's wishful thinking that AIs aren't going to be set loose doing jobs. The cat is out of the bag.
> In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that.
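To make concrete what gets erased: if the original AI draft were retained, attributing each line of the final report to the model or to the officer would be a plain diff. A minimal sketch of such an audit (purely hypothetical; this is not Axon's code or data model, and `audit_report` and the sample text are made up for illustration):

```python
import difflib

def audit_report(ai_draft: str, final_report: str) -> list[str]:
    """Tag each line of the final report as AI-written or officer-edited.

    Hypothetical sketch: this only works if the original AI draft is
    retained, which is exactly what Draft One deletes by design.
    """
    draft_lines = ai_draft.splitlines()
    final_lines = final_report.splitlines()
    attributed = []
    matcher = difflib.SequenceMatcher(None, draft_lines, final_lines)
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        # Lines left unchanged from the draft came from the AI; anything
        # inserted or rewritten is attributable to the officer.
        source = "AI" if tag == "equal" else "OFFICER"
        for line in final_lines[j1:j2]:
            attributed.append(f"[{source}] {line}")
    return attributed

# Made-up example text, not real report content:
draft = "Subject fled on foot.\nI issued verbal commands."
final = "Subject fled on foot.\nI issued repeated verbal commands."
print("\n".join(audit_report(draft, final)))
```

The point is that provenance here is a trivially solved problem -- a stored draft plus a diff -- so erasing the draft is a policy choice, not a technical limitation.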
> Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.
Policing and Hallucinations. Can’t wait to see this replicated globally.
That should be 'reining in'. "Reign" is -- ironically -- what monarchs do.
That's because it's a very difficult thing to prove. Bad memories and even completely false memories are real things.
Perjury isn't a commonly prosecuted crime.
You guys are so fucked.
"You guys"? Everyone is fucked. This is going to be everywhere. Coming to your neighborhood, eventually.
I'd be more worried if you weren't reading articles about it than if you were.
There are countries on this planet that are not actively digging their own graves.