Google's 'Reimagine' tool helped us add wrecks, disasters, and corpses to photos
Google's "Reimagine" AI photo editing tool enables users to add elements to photos via text prompts, raising concerns about misinformation due to the ease of creating disturbing, unidentifiable images.
Google's new AI photo editing tool, "Reimagine," included in the Pixel 9 series, allows users to add various elements to their photos using text prompts. This feature extends the capabilities of the previous Magic Editor, enabling the addition of realistic and sometimes disturbing imagery, such as car wrecks and corpses. During testing, users found it alarmingly easy to bypass the tool's safeguards to create unsettling images. Google acknowledged that while they have policies to prevent misuse, the effectiveness of these guardrails is limited, and the potential for abuse is significant. The AI-generated images lack clear identification markers, making it difficult to distinguish them from authentic photos. This raises concerns about the rapid advancement of photo manipulation technology outpacing the ability to detect and regulate misleading content. The ease of creating and sharing such images could lead to widespread misinformation, prompting a call for increased skepticism regarding the authenticity of online visuals.
- Google's "Reimagine" tool allows for the addition of realistic elements to photos using text prompts.
- Users can create disturbing imagery by creatively bypassing the tool's safeguards.
- The lack of clear identification for AI-generated images raises concerns about misinformation.
- The rapid advancement of photo editing technology outpaces detection and regulation efforts.
- Increased skepticism is advised when evaluating the authenticity of online images.
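As a rough illustration of why such edits are hard to flag: the minimal sketch below (an assumption, using Python with the Pillow library and a hypothetical photo.jpg path) simply dumps whatever EXIF metadata an image carries. An editing tool's name sometimes appears in the Software tag, but tags like these are trivial to strip or rewrite, so metadata alone cannot settle whether an image is authentic.

```python
# Minimal sketch: dump an image's EXIF metadata to look for provenance hints.
# Assumes the Pillow library is installed; "photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

def describe_metadata(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            # Absence of metadata proves nothing: tags are easy to strip.
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)
            # The Software tag sometimes names the editing tool, but it can
            # be removed or rewritten, so it is at best a weak signal.
            print(f"{name}: {value}")

if __name__ == "__main__":
    describe_metadata("photo.jpg")
```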
Related
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn that generative AI is flooding the internet with fake content that blurs the line between authentic and fabricated material. Documented misuse includes manipulating human likeness, falsifying evidence, and swaying public opinion for profit; deeper AI integration heightens these concerns.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn that generative AI contributes to the spread of fake content, complicating the distinction between truth and deception, and potentially undermining public understanding and accountability in digital information.
Mapping the Misuse of Generative AI
New research from Google DeepMind and partners analyzes real-world misuse of generative AI, identifying tactics such as exploiting model capabilities and compromising the systems themselves. It recommends public-awareness and safety initiatives to counter these harms.
Google Releases Powerful AI Image Generator You Can Use for Free
Google launched Imagen 3, a free AI image generator in the U.S., producing images in 30 seconds with improved detail. It has restrictions on certain requests and raises copyright concerns.
The AI photo editing era is here, and it's every person for themselves
Google's Pixel 9 enhances photo editing with AI features, allowing easy alterations. This sparks a counter-movement favoring vintage cameras, highlighting the debate over authenticity in edited images versus genuine memories.
- "Google Docs" allowed me to write death threat letters.
- My Brother printer allowed me to print them.
- The postal service delivered them.
- My Sony camera allowed me to take nude pictures of my neighbor through the bathroom window.
We can't safeguard every tool. And I predict negative consequences will come from trying.
Censorship is never a good thing.
What are they objecting to? Art? I can look at disturbing imagery by closing my eyes and imagining it. Let's ban my visual cortex.
Stuff like this gives journalists a bad name; it's selfish. It erodes trust in the institution of the press for nothing more than a deadline and some clicks.
Maybe something will break, and the general population will become excellent at citing and verifying sources as a response to rampant fakes. However, given the generally sorry state of news and journalism, and seeing how many people on social media believe that AI slop is real, I'm skeptical.
We're basically already at the point where images and videos of unknown provenance can't be assumed to be real, so why do people pay attention to journalists getting the vapors about scandalous things AI tools can do? Wouldn't everyone rather have a completely unlocked tool to do with as they will?
They are not very smart people, in general, but very good at optimizing for the thing that gets them views: ragebait.
In this case, there’s nothing to be done for it. Ideally, Google spins off image models to a separate company that doesn’t hurt the brand.
The rest of us will have this tool. But perhaps it’s too much for the normies.
I will be requesting the addition of safeguards for everyone's protection.
Yes, let's kneecap it because it's way too good. Safeguards just make users migrate to other services to generate what they want.
I think this shouldn't be newsworthy - the tool is just doing what you asked. It's the same as complaining to $pencil_producer that their pencils allowed you to draw disturbing images.
I think it would be more "newsworthy" if it produced racist outcomes (e.g., asking it to draw a criminal and always getting the same minority in the output), but we're also probably past that - we've already seen those news articles.
So what? I can Photoshop some powder into a picture too. It might look better, but not by that much. I think the media needs to accept that images are no longer trustworthy unless there's some chain of evidence tied to them.
I can say "John was on the floor with a bucket of cocaine", that doesn't make it true.
and watching all the luddites on social media agree with the genAI person
I feel like this is similar to all technical progress. Once, only dedicated wizards could do something; then they're outraged when the general public can do it too.
Misinformation sucks, but restricting access to photo tools is not the solution. Better education is. It's the solution to pretty much all problems. (And even then, people aren't as dumb as you may think. Trump is heavily using AI photos to claim that people are endorsing him, and I don't think anyone believes that Taylor Swift is actually cosplaying Uncle Sam and endorsing Trump.)