August 25th, 2024

Man Arrested for Creating Child Porn Using AI

Phillip Michael McCorkle was arrested in Florida on 20 obscenity counts for allegedly creating and distributing AI-generated child pornography, a case that highlights concerns over generative AI's role in child exploitation.

A Florida man, Phillip Michael McCorkle, has been arrested and faces 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography. The arrest occurred while he was working at a movie theater in Vero Beach, Florida, following an investigation the Indian River County Sheriff's Office opened after receiving tips about his activities on the social media app Kik. The case underscores growing concern about the misuse of generative AI for criminal purposes, particularly child exploitation. The increasing prevalence of AI-generated child sexual abuse imagery has prompted lawmakers at various levels to consider legislation criminalizing such content, though the effectiveness of these measures remains uncertain. In 2023, the National Center for Missing & Exploited Children received 4,700 reports of AI-generated child pornography, with some offenders using generative AI to create deepfakes of real children for extortion. Experts warn that the problem is exacerbated by the availability of open-source software that can be downloaded, modified, and run locally, making it difficult to combat.

- Phillip Michael McCorkle faces 20 counts of obscenity for allegedly creating AI-generated child pornography.

- His arrest highlights the dangers of generative AI being used for child exploitation.

- Lawmakers are considering legislation to address the rise of AI-generated child sexual abuse imagery.

- The National Center for Missing & Exploited Children received 4,700 reports of AI-generated child pornography in 2023.

- Experts indicate that the use of open-source software complicates efforts to combat this issue.

15 comments
By @tmtvl - 8 months
Obvious disclaimer first: I do not condone imagery of child abuse; I believe this guy has some kind of psychological problem and needs professional help; and AI-generated deepfakes can cause real harm, so proper regulation is needed.

With that out of the way:

> the generative AI wrinkle in this particular arrest shows how technology is generating new avenues for crime and child abuse.

Is it really child abuse if no children were involved? Does that mean that AI-generated imagery of some protected group being harmed causes actual harm to that protected group? Not saying it doesn't, but it may be worth thinking about.

> The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified, [...] that is a much harder problem to fix.

Sarcastic: people downloading and modifying open source software is a major problem indeed; hopefully a solution can be found.

By @evanjrowley - 8 months
On the opposite end of this spectrum, there is interest among digital forensics examiners in some kind of automated capability for detecting child porn. Such a capability would speed up the process and reduce the mental and emotional load on examiners who have to deal with this sensitive type of content. Automated processing could also reduce the risk of this material being mishandled as evidence. A report presented at the 2019 Digital Forensics Research Conference (DFRWS) surveyed forensic examiners and found heightened interest in AI/ML models for this application; the report discusses prior work, recent attempts, and the challenges in reaching this goal (a sketch of the baseline hash-matching approach follows the links below).

https://dfrws.org/wp-content/uploads/2019/11/2019_USA_pres-a...

https://dfrws.org/wp-content/uploads/2019/06/2019_USA_paper-...
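For context, the established baseline for "automated detection" in this space is perceptual hashing against databases of previously identified material (PhotoDNA-style matching), which the ML approaches surveyed in the report aim to extend to novel content. A minimal sketch of that hash-matching step using the open-source imagehash library follows; the hash entries and threshold are illustrative placeholders, not values from the paper.

# Minimal sketch of hash-based matching, assuming a vetted list of
# perceptual hashes of known material distributed as hex strings.
# The entries and threshold below are illustrative placeholders.
from PIL import Image
import imagehash

KNOWN_HASHES = [imagehash.hex_to_hash(h) for h in (
    "d5c1b3a2f0e49687",  # placeholder, not a real database entry
    "8f0e1d2c3b4a5968",
)]

HAMMING_THRESHOLD = 8  # max differing bits to flag; tune for false positives

def is_known_match(path: str) -> bool:
    # phash is robust to resizing and recompression; subtracting two
    # ImageHash objects yields their Hamming distance in bits.
    h = imagehash.phash(Image.open(path))
    return any(h - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)

Hash matching only flags content that has already been identified and catalogued; the examiners' interest in ML classifiers stems precisely from the need to triage novel or AI-generated material that no hash list covers.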

By @pessimizer - 8 months
It's typical for authoritarians, not specific to AI. There are laws hiding all over the place that make drawing pictures illegal. Another Florida Man, Mike Diana, was convicted and sentenced for his (absurdly far from pornography or realism) zine-comic book Boiled Angel, which had a circulation of 300.

For 3 years of probation, he was banned from drawing at all, even for personal use.

https://cbldf.org/2016/09/mike-diana-case-still-resonates-in...

By @telecuda - 8 months
Important callout beyond the headline: "Last year, the National Center for Missing & Exploited Children received 4,700 reports of generated AI child porn, with some criminals even using generative AI to make deepfakes of real children to extort them."
By @rendall - 8 months
The original rationale for criminalizing the possession of child porn was that a crime was inherently committed in its creation, and possession is participation in that crime. I think this is a correct conclusion.

I can't see this rationale extending to generated imagery (deepfakes aside). No victim exists.

Vaguely gesturing at social harm is not principled enough, in my opinion. One can point to actual crime and actual harm for filmed and photographed child pornography. For generated imagery, one can only point to one's own personal revulsion as "harm".

By @rainy59 - 8 months
Florida law defines anything under the age of 18 as "child porn" - so if your catgirl doesn't look at least 30, you are probably heading to prison, an expensive court battle, or both
By @bun_terminator - 8 months
Are we still pretending that we punish harm or can we admit that we just want to punish people for doing things we don't like?
By @sandworm101 - 8 months
People are not talking about how revolutionary this is. This isn't AI generating content to compete with artists. This is software running on standard hardware that turns electricity into material more illegal than cocaine. Just think of how it would impact the market for cocaine if suddenly everyone could easily make it at home from common ingredients.

We are spiraling very quickly towards a "media creation box", standalone software that will generate whatever content a person might want without any external connections. There will be huge societal ramifications. We think media bubbles are bad now, but just wait until everyone can live inside their own bubble filled with locally-generated content to match their increasingly warped world views.

By @sulandor - 8 months
Seems concerning, although the result would probably be similar if he drew it by hand.
By @00_hum - 8 months
Crazy how the same people who say fake pictures can cause enough harm to warrant jail time are also fine with the dissemination of religious materials and violence porn on Netflix, all of which causes way more harm. By a mile. It's not the harm; it's just you picking and choosing.
By @m3kw9 - 8 months
So now it's not downloading pics, but downloading AI models that can generate them. Are there people fine-tuning models with these illegal images, or is it just a jailbroken model? In the former case the fine-tuner needs to be traced; the latter case is new legal territory.
By @jrflowers - 8 months
In the US if I get caught selling you a gram of pure cocaine I get the same punishment as I would if I sold you a gram that’s only 20% pure. If I sold you a gram of some random powder and told you it is cocaine I am likely to be prosecuted all the same whether I knew it was fake or not.

That aside, the "fully synthetic CSAM with no children involved at all" idea relies very, very heavily on taking the word of the guy you just busted with a hard drive full of CSAM.

His defense would essentially have to be “Your honor I pinky swear that I used the txt2img tab of automatic1111 instead of the img2img tab” or “I did start with real CSAM but the img2img tab acts as an algorithmic magic wand imbued with the power to retroactively erase the previous harm caused by the source material”

There is no coherent defense to this activity that boils down to anything other than the idea that the existence of image generators should — and does — constitute an acceptable means of laundering CSAM and/or providing plausible deniability for anyone caught with it.

The idea that there would be any pushback to arresting or investigating people for distributing this stuff boggles the mind. Inventing a new type of armor to specifically protect child abusers from scrutiny is a choice, not some sort of emergent moral ground truth caused by the popularization of diffusion models.