July 18th, 2024

Want to spot a deepfake? Look for the stars in their eyes

Researchers presenting at the Royal Astronomical Society's National Astronomy Meeting found a method to detect deepfake images by analyzing reflections in individuals' eyes. The approach provides a valuable new tool in the fight against fake images.

In a study presented at the Royal Astronomical Society's National Astronomy Meeting, researchers found a way to detect deepfake images by analyzing reflections in people's eyes. The research, led by University of Hull MSc student Adejumoke Owolabi, compared the reflections in the eyeballs of people in real and AI-generated images. By applying methods astronomers use to quantify light distributions, the team found that inconsistencies between the reflections in the two eyes can indicate a deepfake. As Professor Kevin Pimbblet explained, real photos typically show consistent reflections in both eyes, while deepfakes often do not. Using measures such as the Gini coefficient and the CAS (concentration, asymmetry, smoothness) parameters, the researchers identified differences in reflections that distinguish real images from AI-generated ones. The method is not foolproof, but it offers a valuable new tool in the ongoing battle against the proliferation of fake images.
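
The study itself doesn't ship code, but the comparison step can be sketched along these lines in Python; the gini helper uses the standard closed-form formula, while the eye crops, the threshold, and the function names are illustrative assumptions rather than the authors' actual pipeline:

    import numpy as np

    def gini(flux):
        # Gini coefficient of pixel fluxes: 0 means light is spread
        # evenly; values near 1 mean a few pixels hold most of the light.
        x = np.sort(np.abs(np.asarray(flux, dtype=float)).ravel())
        n = x.size
        if n == 0 or x.sum() == 0:
            return 0.0
        i = np.arange(1, n + 1)
        # Standard closed form for sorted data:
        # G = sum((2i - n - 1) * x_i) / (n^2 * mean(x))
        return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())

    def reflection_mismatch(left_eye, right_eye):
        # Real photos tend to give similar values for both eyes; a large
        # gap is a warning sign, not proof, of an AI-generated image.
        return abs(gini(left_eye) - gini(right_eye))

    # Hypothetical usage, with eye crops obtained beforehand (e.g. from
    # a face-landmark detector) and a threshold chosen for illustration:
    # if reflection_mismatch(left, right) > 0.1:
    #     print("inconsistent eye reflections - possible deepfake")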

Related

AI can beat real university students in exams, study suggests

A study from the University of Reading reveals AI outperforms real students in exams. AI-generated answers scored higher, raising concerns about cheating. Researchers urge educators to address AI's impact on assessments.

Mind-reading AI recreates what you're looking at with accuracy

Artificial intelligence excels in reconstructing images from brain activity, especially when focusing on specific regions. Umut Güçlü praises the precision of these reconstructions, enhancing neuroscience and technology applications significantly.

Google Researchers Publish Paper About How AI Is Ruining the Internet

Google researchers warn that generative AI is flooding the internet with fake content that blurs the line between authentic and fabricated. Misuse includes manipulating human likeness, falsifying evidence, and influencing public opinion for profit, raising concerns as AI integration deepens.

Deepfake Porn Prompts Tech Tools and Calls for Regulations

The creation of deepfake pornographic content has prompted a new protection industry. Startups are developing tools like visual and facial recognition to combat the issue, and advocates are pushing for legislative changes to safeguard individuals from exploitation.

AI: What people are saying
The article's method for detecting deepfake images by analyzing eye reflections has sparked a diverse discussion.
  • Many commenters believe that AI will eventually overcome this detection method by improving training and datasets.
  • Some point out that professional photo editing, such as adding reflections in Photoshop, could produce similar inconsistencies, making the method less reliable.
  • There is skepticism about the current effectiveness of the detection software, with some noting poor performance in identifying reflections.
  • Several comments highlight the ongoing arms race between AI generation and detection, suggesting that new detection methods will continually be needed.
  • Others draw parallels to cultural references like "Blade Runner" and discuss broader implications for AI and human perception.
37 comments
By @jmmcd - 6 months
They love saying things like "generative AI doesn't know physics". But the constraint that both eyes should have consistent reflection patterns is just another statistical regularity that appears in real photographs. Better training, larger models, and larger datasets will lead to models that capture this statistical regularity. So this "one weird trick" will disappear without any special measures.
By @olivierduval - 6 months
Warning: photoshopped portraits (and most pro portraits ARE photoshopped, even if slightly) may add "catch lights" in the eyes to make the portrait more "alive".

So that kind of clue only shows that the picture has been processed, not that the person in the picture doesn't exist or is a deepfake.

By @crazygringo - 6 months
I don't know, the example photos of deepfakes here seem... pretty good. If that's the worst they could find, then this doesn't seem useful at all.

Even in the real photos, you can see that the reflections differ in both position and shape, because the two eyeballs aren't perfectly aligned, so some genuine difference is expected.

And then when you look at the actual "reflections" their software is supposedly detecting (highlighted in green and blue) and you compare with the actual photo, their software is doing a terrible job detecting reflections in the first place -- missing some, and spuriously adding others that don't exist.

Maybe this is a valuable tool for spotting deepfakes, but this webpage is doing a terrible job at convincing me of that.

(Not to mention that reflections like these are often added in Photoshop for professional photography, which might have similar subtle positioning errors, and training on those photos reproduces them. So then this wouldn't tell you at all that it's an AI photo -- it might just be a real photo that someone photoshopped reflections into.)

By @adwi - 6 months
> The Gini coefficient is normally used to measure how the light in an image of a galaxy is distributed among its pixels. This measurement is made by ordering the pixels that make up the image of a galaxy in ascending order by flux and then comparing the result to what would be expected from a perfectly even flux distribution.

Interesting, I’d only heard of the Gini coefficient as an econometric measure of income inequality.

https://en.m.wikipedia.org/wiki/Gini_coefficient
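
As an illustrative worked example using that standard closed form, G = sum_i (2i - n - 1) x_i / (n^2 * mean), over fluxes sorted ascending: three pixels with fluxes 1, 1, 8 give a numerator of (-2)(1) + (0)(1) + (2)(8) = 14 and a denominator of 9 * (10/3) = 30, so G ≈ 0.47, while a perfectly even patch of 1, 1, 1 gives G = 0.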

By @brabel - 6 months
Well, nice find, but now all the fakers have to do is add another AI pass that knows how to fix the eyes.
By @gyosko - 6 months
It seems that even discussion about AI is getting really polarized like everything else these days.

Comments are always one of these two types:

1 -> AI is awesome and perfect; if it isn't, another AI will make it perfect
2 -> AI is just garbage and will always be garbage

By @bqmjjx0kac - 6 months
I wouldn't be shocked if phone cameras accidentally produced weird effects like this. Case in point: https://www.theverge.com/2023/12/2/23985299/iphone-bridal-ph...
By @keybored - 6 months
> In an era when the creation of artificial intelligence (AI) images is at the fingertips of the masses, the ability to detect fake pictures – particularly deepfakes of people – is becoming increasingly important.

The masses having access to things wasn’t a cutoff point for me.

By @neom - 6 months
Random thought: GCHQ and the IDF specifically seek out dyslexic employees to put on spotting "things out of place", be it an issue in a large amount of data, something that seems wrong on a map, or a picture that contains something physically impossible. Something about dyslexic processing provides an advantage here (not sure if I'd take it if it came with reading at 1 word per hour). Given that GPTs are just NNs, I wonder if there is any "dyslexic-specific" neurology you could build a NN around and apply to problems neurodivergent minds are good at. Not sure what I'm really saying here, as I only have armchair knowledge.
By @Y_Y - 6 months
If you can see the difference, then so can the computer. And if the computer can see it, we have a discriminator that we can use in GAN-like fashion to train the network not to make that mistake again.
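
Roughly, that feedback loop could look like the following sketch, assuming a differentiable detector that outputs the probability an image is fake; the generator, detector, optimizer, and loss weight are all hypothetical stand-ins, not anything from the article:

    import torch

    def adversarial_step(generator, detector, g_opt, z, w=0.1):
        # One hypothetical fine-tuning step: the frozen detector acts as
        # a discriminator, and the generator is nudged to fool it.
        fake = generator(z)
        fake_prob = detector(fake)   # probability the image is fake, in [0, 1]
        loss = w * fake_prob.mean()  # lower loss = detector fooled
        g_opt.zero_grad()
        loss.backward()
        g_opt.step()
        return loss.item()
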
By @RobotToaster - 6 months
Interesting, some portrait photographers use cross polarised light to eliminate reflection from glasses, but it has the side effect of eliminating reflection from eyes.
By @plasticeagle - 6 months
Any algorithm that claims to detect AI-generated images must always be possible to circumvent automatically. All one has to do is incorporate the algorithm into the image-generation process and perturb or otherwise modify the output until the image passes the test.
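
In sketch form, that circumvention loop is almost trivially short; the generate and detect callables, the noise level, and the retry budget below are all illustrative assumptions:

    import numpy as np

    def evade(generate, detect, max_tries=100, noise=0.01):
        # Keep perturbing the output until the public detector
        # stops flagging it (or we run out of attempts).
        img = generate()
        for _ in range(max_tries):
            if not detect(img):  # detector says "real": done
                return img
            img = np.clip(img + np.random.normal(0.0, noise, img.shape), 0.0, 1.0)
        return None  # detector kept flagging every variant
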
By @threatripper - 6 months
I really wonder where the limit is for AI. Reality has an incredible amount of detail that you can't just simulate or emulate entirely. However, our perception is limited, and we can't process all those details. AI only has to be good enough to fool our perception, and I'm confident that every human-understandable method for identifying fakes can be fooled by generative AI. It will probably be up to AI to identify AI-generated content. Even then, noise and limited resolution will mask the flaws. For many forms of content, there will simply be no way to determine what's real.
By @singingwolfboy - 6 months
By @raisedbyninjas - 6 months
The sample images don't show a large difference between the real and generated photo. The light sources in the real photo must have been pretty close to the subject.
By @symisc_devel - 6 months
Well, they are relatively easy to spot with the current AI software used to generate them, especially if you deal on a daily basis with presentation attacks, aka deepfakes, against facial recognition. FACEIO has already deployed a very powerful model to deter such attacks for the purpose of facial authentication: https://faceio.net/security-best-practice#faceSpoof
By @grvbck - 6 months
Am I missing something here, or are the authors incorrectly using the term "deepfake" where "AI-generated" would have been more appropriate?

There's a lot of comments here discussing how generative AI will deal with this, which is really interesting.

But if somebody's actual goal was to pass off a doctored/AI-generated image as authentic, it would be very easy to just correct the eye reflection (and other flaws) manually, no?

By @zeristor - 6 months
Enhance.

The film Blade Runner was in large part about hunting down androids that were so close to being human.

Not part of the test, but a nifty part of the film, was using a photograph to see what clues were in a picture by looking deeply into reflections.

As has been said, this flaw can be folded into generating AI images in time, but I just loved how this inadvertently reminded me of Blade Runner.

By @constantcrying - 6 months
These articles are incredibly unhelpful. I have no doubt that in a short amount of time AI models will have learned the statistical dependence between the reflections in the two eyes. It is inevitable that this "trick" will become obsolete. It just gives a false impression, both about AI and about how to spot AI generation.
By @SXX - 6 months
I wonder how true this is for face swaps, since actual scammers likely wouldn't generate deepfakes completely from scratch or from a static image.
By @ch33zer - 6 months
I suspect that detecting AI-generated content will become an arms race, just like spam filtering and SEO. Businesses will be built on secret ML models detecting smaller and smaller irregularities in images and text. It'll be interesting to see who wins.
By @nottorp - 6 months
I can't read TFA because it's probably been HNed. However, an artist friend of mine said generated images are easy to spot because every pixel is "perfect", not only the eyes.

That explained pretty well why I thought even the non-realistic ones felt... uncanny.

By @butlike - 6 months
I don't understand the "galaxy" terminology in the sentence: "To measure the shapes of galaxies, we analyse whether they're centrally compact, whether they're symmetric, and how smooth they are"

Can someone explain?
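
For context, and not from the article itself: CAS is a standard galaxy-morphology toolkit where "centrally compact" corresponds to Concentration, "symmetric" to Asymmetry, and "smooth" to Smoothness (clumpiness). The asymmetry term is the easiest to state; a rough sketch for any grayscale patch follows, omitting the centering and background corrections real CAS pipelines apply:

    import numpy as np

    def asymmetry(patch):
        # CAS-style asymmetry: rotate the patch 180 degrees and measure
        # how much it differs from itself; 0 means perfectly symmetric.
        I = np.asarray(patch, dtype=float)
        total = np.abs(I).sum()
        if total == 0:
            return 0.0
        return np.abs(I - np.rot90(I, 2)).sum() / total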

By @chefandy - 6 months
Ah! This is a great technique! Surely now that it's published it would be easily remediable in a compositing program like Nuke, but for more casual efforts, it's a solid test.
By @AlbertCory - 6 months
I took a film lighting class a long, long time ago at a community college. Even then, you could look at a closeup and tell where the lights were by the reflections in the eyes.
By @batch12 - 6 months
How does the deep fake have the same eye shape, same cropping and same skin blemishes as the real image? Did they inpaint eyes and call it deepfake for training?
By @ggm - 6 months
Out of interest, how many CAPTCHAs are or were part of training? Is there any factual basis to the belief that that's what it descended to?
By @notorandit - 6 months
Once you highlight any inconsistency in AI-generated content, IMHO, it will take a nothingth of a second to "fix" that.
By @Rury - 6 months
The necklace in the right photo is a more obvious giveaway, whereas you have to look closely at eyes to see if they match.
By @leidenfrost - 6 months
AFAIK deepfakes can't mimic strong gesticulations very well, nor correctly mimic a head facing sideways.

Or was that corrected?

By @throw4847285 - 6 months
Also be on the look out for high flyin' clouds and people dancin' on a string.
By @ziofill - 6 months
Ok. But it does feel like we’re scraping the bottom of the barrel.
By @GaggiX - 6 months
Did they try using this method on something that is not StyleGAN?
By @HumblyTossed - 6 months
Isn't it easier to simply look for all the 6 fingered hands?
By @borgchick - 6 months
spy vs spy, round n