July 15th, 2024

Deepfake Porn Prompts Tech Tools and Calls for Regulations

Deepfake pornographic content has prompted a new protection industry. Startups are developing visual and facial recognition tools to combat the problem. Advocates push for legislative changes to safeguard individuals from exploitation.

Deepfake pornography, created using generative AI tools, has led to the emergence of a new protection industry. A report found that producing deepfake pornographic content takes minimal effort, and that women and girls are the primary victims. Startups like That'sMyFace and Alecto AI are building visual recognition and facial recognition tools to identify and remove deepfake content. Regulations addressing image-based sexual abuse are under consideration, but their effectiveness and scope vary across regions. Advocates like Susanna Gibson are pushing for legislative changes at the state level to protect individuals from deepfake exploitation. Despite these efforts, the lack of comprehensive regulation makes deepfake porn difficult to combat, underscoring the urgent need for both technological and legislative measures.
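
Neither That'sMyFace nor Alecto AI has published its implementation, but the face-matching core such detection tools rest on is well understood. The following minimal sketch, using the open-source face_recognition library, shows the general idea: embed a reference photo of the user, then flag any image containing a face whose embedding lies close to it. The file paths, function name, and tolerance value are illustrative assumptions, not any startup's actual code.

    # Illustrative sketch only -- not That'sMyFace's or Alecto AI's code.
    # Requires: pip install face_recognition numpy
    import face_recognition
    import numpy as np

    # Embed a reference photo the user supplies of their own face
    # ("my_face.jpg" is a hypothetical path).
    reference_image = face_recognition.load_image_file("my_face.jpg")
    reference_encoding = face_recognition.face_encodings(reference_image)[0]

    def likeness_match(candidate_path: str, tolerance: float = 0.6) -> bool:
        """Return True if any face in the candidate image matches the reference.

        tolerance is the maximum embedding distance counted as a match;
        0.6 is the library's conventional default.
        """
        candidate = face_recognition.load_image_file(candidate_path)
        for encoding in face_recognition.face_encodings(candidate):
            if np.linalg.norm(reference_encoding - encoding) <= tolerance:
                return True
        return False

In practice a scanning service would run something like this over crawled or reported images and route matches into takedown requests. Notably, deepfakes paste a real face onto synthetic content, which is exactly what makes embedding-distance matching workable here.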

Related

Jack Dorsey says we won't know what is real anymore in the next 5-10 years

Jack Dorsey and Elon Musk express concerns about AI-generated deepfakes blurring reality. OpenAI's efforts fall short, underscoring the importance of human intervention in maintaining authenticity amid AI advancements.

Google Researchers Publish Paper About How AI Is Ruining the Internet

Google researchers warn that generative AI is harming the internet by flooding it with fake content that blurs the line between the authentic and the fabricated. Misuse includes manipulating human likeness, falsifying evidence, and swaying public opinion for profit. Deeper AI integration raises further concerns.

Google's Nonconsensual Explicit Images Problem Is Getting Worse

Google is struggling with the rise of nonconsensual explicit image sharing online. Despite some efforts to help victims remove content, advocates push for stronger privacy protections, pointing to the capability Google has already demonstrated in acting against child sexual abuse material.

Effective CSAM filters are impossible because what CSAM is depends on context

Filters struggle to detect child sexual exploitation material because what counts as CSAM depends on context. Technical solutions like hashing or AI classifiers lack that context, leading to privacy invasion and mislabeling. Effective prevention requires holistic interventions rather than reliance on technology alone.
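
A minimal sketch makes the context problem concrete. A hash-based filter reduces to a similarity check against a database of known images: it can say "this file resembles a known one," but nothing about who is depicted, their age, or consent, and a freshly generated image hashes far from everything in the database. The sketch below uses the open-source imagehash library; the file name and distance threshold are hypothetical.

    # Minimal sketch of a perceptual-hash filter.
    # Requires: pip install ImageHash pillow
    from PIL import Image
    import imagehash

    # Hash of one known image from a reference database (hypothetical file).
    known_hash = imagehash.phash(Image.open("known_image.png"))

    def flags_as_known(path: str, max_distance: int = 5) -> bool:
        """Flag images within a small Hamming distance of the known hash.

        Novel (e.g., freshly AI-generated) images hash far from anything
        in the database and sail through; an unrelated image that happens
        to hash nearby is mislabeled. The check itself carries no context.
        """
        return (imagehash.phash(Image.open(path)) - known_hash) <= max_distance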

Spain sentences 15 schoolchildren over AI-generated naked images

Fifteen Spanish schoolchildren receive probation for creating AI-generated deepfake images of classmates, sparking concerns about technology misuse. They must attend education on gender equality and responsible technology use. Families stress the need for societal reflection.

4 comments
By @ninininino - 3 months
It's pretty interesting that the first thing alarming people about gen AI video is this type of use case, when gen AI applied to fraud, scamming, impersonation, and account takeovers is way more likely to ruin lives.

People already have their lives completely ruined when they lose their life savings or retirement funds to scams and social engineering attacks; add gen AI video to the mix and it gets incredibly worse.

Having a fake video of you having sex seems so quaint in comparison. In 20 years no one will care.

But a fake video of Biden calling for the events of a few days ago to be replicated, and for another attempt to be made, would have disastrous implications.

By @jacknews - 3 months
Is this really what matters?

"The world took notice of this new reality in January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views"

And how many actually believed it was Taylor? And so what if they did, since it provably wasn't?

I'm not condoning deepfakes at all; I'm sure they must be distressing to victims, just like any form of bullying, libel, etc. But it's nothing really new or especially horrifying. It should be easy to brush off as obviously fake, and in fact the same tactic could be used for real nudes: just claim they're fakes.