Man Arrested for Creating Child Porn Using AI
Phillip Michael McCorkle was arrested in Florida on 20 obscenity counts for allegedly creating and distributing AI-generated child pornography, a case that highlights concerns over generative AI's role in child exploitation.
A Florida man, Phillip Michael McCorkle, has been arrested and faces 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography. He was arrested while working at a movie theater in Vero Beach, Florida, following an investigation the Indian River County Sheriff's Office opened after receiving tips about his activities on the social media app Kik. The case underscores growing concern about the misuse of generative AI for criminal purposes, particularly child exploitation. The increasing prevalence of AI-generated child sexual abuse imagery has prompted lawmakers at various levels to consider legislation criminalizing such content, though the effectiveness of these measures remains uncertain. In 2022, the National Center for Missing & Exploited Children received 4,700 reports of AI-generated child pornography, and some offenders have used generative AI to create deepfakes of real children for extortion. Experts warn that the problem is exacerbated by open-source software that can be downloaded, modified, and run locally, making it difficult to combat.
- Phillip Michael McCorkle faces 20 counts of obscenity for allegedly creating AI-generated child pornography.
- His arrest highlights the dangers of generative AI being used for child exploitation.
- Lawmakers are considering legislation to address the rise of AI-generated child sexual abuse imagery.
- The National Center for Missing & Exploited Children received 4,700 reports of AI-generated child sexual abuse material in 2022.
- Experts indicate that the use of open-source software complicates efforts to combat this issue.
Related
Spain sentences 15 schoolchildren over AI-generated naked images
Fifteen Spanish schoolchildren receive probation for creating AI-generated deepfake images of classmates, sparking concerns about technology misuse. They face education on gender equality and responsible tech use. Families stress societal reflection.
Deepfake Porn Prompts Tech Tools and Calls for Regulations
Deepfake pornographic content creation prompts new protection industry. Startups develop tools like visual and facial recognition to combat issue. Advocates push for legislative changes to safeguard individuals from exploitation.
Is A.I. Art Stealing from Artists? (2023)
Artists Kelly McKernan, Sarah Andersen, and Karla Ortiz have filed a class-action lawsuit against A.I. image generators for copyright infringement, raising concerns about the impact of A.I. on artistic jobs and rights.
AI-powered 'undressing' websites are getting sued
San Francisco's City Attorney has filed a lawsuit against 16 AI websites for creating non-consensual nude images, seeking civil penalties and shutdowns due to violations of pornography laws.
Popular AI "nudify" sites sued amid rise in victims globally
San Francisco's city attorney is suing 16 websites for creating non-consensual intimate imagery, seeking fines and shutdowns to protect victims amid rising harassment linked to AI-generated content.
> the generative AI wrinkle in this particular arrest shows how technology is generating new avenues for crime and child abuse.
Is it really child abuse if no children were involved? Does that imply that AI-generated imagery of some protected group being harmed causes actual harm to that group? I'm not saying it doesn't, but it's worth thinking about.
> The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified, [...] that is a much harder problem to fix.
Sarcastic: people downloading and modifying open-source software is a major problem indeed; hopefully a solution can be found.
https://dfrws.org/wp-content/uploads/2019/11/2019_USA_pres-a...
https://dfrws.org/wp-content/uploads/2019/06/2019_USA_paper-...
During his three years of probation, Mike Diana was banned from drawing at all, even for personal use.
https://cbldf.org/2016/09/mike-diana-case-still-resonates-in...
I can't see this rationale extending to generated imagery (deepfakes aside). No victim exists.
Vaguely gesturing at social harm is not principled enough, in my opinion. One can point to actual crime and actual harm for filmed and photographed child pornography. For generated imagery, one can only point to one's own personal revulsion as "harm".
We are spiraling very quickly towards a "media creation box": standalone software that will generate whatever content a person might want, with no external connections. The societal ramifications will be huge. We think media bubbles are bad now, but just wait until everyone can live inside their own bubble, filled with locally generated content that matches their increasingly warped worldview.
That aside, the “fully synthetic CSAM with no children involved at all” idea relies very, very heavily on taking the word of the guy who you just busted with a hard drive full of CSAM.
His defense would essentially have to be “Your honor, I pinky swear that I used the txt2img tab of automatic1111 instead of the img2img tab,” or “I did start with real CSAM, but the img2img tab acts as an algorithmic magic wand imbued with the power to retroactively erase the harm caused by the source material.”
There is no coherent defense to this activity that boils down to anything other than the idea that the existence of image generators should — and does — constitute an acceptable means of laundering CSAM and/or providing plausible deniability for anyone caught with it.
The idea that there would be any pushback to arresting or investigating people for distributing this stuff boggles the mind. Inventing a new type of armor to specifically protect child abusers from scrutiny is a choice, not some sort of emergent moral ground truth caused by the popularization of diffusion models.