Want to spot a deepfake? Look for the stars in their eyes
Researchers at the Royal Astronomical Society found a method to detect deepfake images by analyzing reflections in individuals' eyes. This innovative approach provides a valuable tool in the fight against fake images.
In a study presented at the Royal Astronomical Society's National Astronomy Meeting, researchers found a way to detect deepfake images by analyzing reflections in the eyes of individuals. The research, led by University of Hull MSc student Adejumoke Owolabi, compared reflections in the eyeballs of real and AI-generated images. By applying methods used in astronomy to quantify these reflections, the team found that inconsistencies between the reflections in each eye can indicate a deepfake. Professor Kevin Pimbblet explained that real photographs typically show consistent reflections in both eyes, while deepfakes often do not. Using techniques such as the Gini coefficient and the CAS (concentration, asymmetry, smoothness) parameters, originally developed to characterize the light distribution of galaxies, the researchers identified differences in reflections that distinguish real images from AI-generated ones. While not foolproof, this approach offers a new tool in the ongoing battle against the proliferation of fake images in the digital landscape.
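The core idea is simple to sketch: treat each eye's reflection as a distribution of pixel intensities and compare how concentrated the light is in each eye. Below is a minimal illustration in Python of that comparison using the Gini coefficient; it is not the authors' actual pipeline, and the eye-region arrays and the 0.2 threshold are hypothetical stand-ins for real detected reflection regions and a calibrated cutoff.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel intensities.

    0 means the light is spread perfectly evenly across pixels;
    values near 1 mean the light is concentrated in a few pixels.
    """
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Standard sorted-index formulation of the Gini coefficient
    index = np.arange(1, n + 1)
    return float((2 * index - n - 1).dot(v) / (n * v.sum()))

# Toy "reflection" patches for each eye (hypothetical values):
left_eye = np.array([0, 0, 10, 200, 0, 0, 5, 0, 0])        # sharp highlight
right_eye = np.array([20, 25, 22, 30, 24, 21, 26, 23, 28])  # diffuse glow

# In a real photo both eyes reflect the same light sources, so their
# Gini values should roughly agree; a large gap is a possible fake signal.
gap = abs(gini(left_eye) - gini(right_eye))
suspicious = gap > 0.2  # illustrative threshold, not from the study
```

The same comparison could be run with the CAS parameters instead of (or alongside) the Gini coefficient; the study reports that both kinds of morphology statistics expose inter-eye inconsistencies.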
Related
AI can beat real university students in exams, study suggests
A study from the University of Reading reveals AI outperforms real students in exams. AI-generated answers scored higher, raising concerns about cheating. Researchers urge educators to address AI's impact on assessments.
Mind-reading AI recreates what you're looking at with accuracy
Artificial intelligence excels in reconstructing images from brain activity, especially when focusing on specific regions. Umut Güçlü praises the precision of these reconstructions, enhancing neuroscience and technology applications significantly.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn about generative AI's negative impact on the internet, creating fake content blurring authenticity. Misuse includes manipulating human likeness, falsifying evidence, and influencing public opinion for profit. AI integration raises concerns.
Deepfake Porn Prompts Tech Tools and Calls for Regulations
Deepfake pornographic content creation prompts new protection industry. Startups develop tools like visual and facial recognition to combat issue. Advocates push for legislative changes to safeguard individuals from exploitation.
Want to spot a deepfake? Look for the stars in their eyes
A study at the Royal Astronomical Society's National Astronomy Meeting proposes using eye reflections to detect deepfake images. Analyzing differences in reflections between eyes can reveal AI-generated fakes, resembling astronomers' galaxy studies. Led by University of Hull researchers, the method employs CAS and Gini indices to compare reflections for identification. This approach aids in distinguishing real images from deepfakes.
- Many commenters believe that AI will eventually overcome this detection method by improving training and datasets.
- Some point out that professional photo editing, such as adding reflections in Photoshop, could produce similar inconsistencies, making the method less reliable.
- There is skepticism about the current effectiveness of the detection software, with some noting poor performance in identifying reflections.
- Several comments highlight the ongoing arms race between AI generation and detection, suggesting that new detection methods will continually be needed.
- Others draw parallels to cultural references like "Blade Runner" and discuss broader implications for AI and human perception.
So that kind of clue only shows that the picture has been processed, not that the people in the picture don't exist or that it's a deepfake.
Even in the real photos, you can see that the reflections are different in both position and shape, because the two eyeballs aren't perfectly aligned and reflections are going to be genuinely different.
And then when you look at the actual "reflections" their software is supposedly detecting (highlighted in green and blue) and you compare with the actual photo, their software is doing a terrible job detecting reflections in the first place -- missing some, and spuriously adding others that don't exist.
Maybe this is a valuable tool for spotting deepfakes, but this webpage is doing a terrible job at convincing me of that.
(Not to mention that reflections like these are often added in Photoshop for professional photography, which might have similar subtle positioning errors, and training on those photos reproduces them. So then this wouldn't tell you at all that it's an AI photo -- it might just be a real photo that someone photoshopped reflections into.)
Interesting, I’d only heard of the Gini coefficient as an econometric measure of income inequality.
Comments are always one of these two types:
1 -> AI is awesome and perfect; if it isn't, another AI will make it perfect
2 -> AI is just garbage and will always be garbage
The masses having access to things wasn’t a cutoff point for me.
There's a lot of comments here discussing how generative AI will deal with this, which is really interesting.
But if somebody's actual goal was to pass off a doctored/AI-generated image as authentic, it would be very easy to just correct the eye reflection (and other flaws) manually, no?
The film Blade Runner was in large part about determining who was human and hunting down androids that were so close to being human.
Not part of the test, but a nifty part of the film, was using a photograph to find clues by zooming deeply into reflections.
As has been said, this omission can be addressed in AI image generation in time, but I just loved how this inadvertently reminded me of Blade Runner.
This explained pretty well why I thought even non-realistic ones felt ... uncanny.
Can someone explain?
Or was that corrected?