NYPD Coppelgänger: Exploring Cop Data
Sam Lavigne's Coppelgänger tool uses machine learning to match users' faces with NYPD officers, highlighting facial recognition technology's accessibility and raising concerns about its implications for law enforcement and privacy.
Sam Lavigne introduces Coppelgänger, a tool that uses facial recognition to identify which NYPD officer a user most resembles. The project began with Lavigne's exploration of publicly available datasets of NYPD officer images. He initially tried to gather data from NYC OpenData and the official NYPD personnel roster but ultimately sourced around 11,000 images from 50-a.org, a site that indexes complaints against NYPD officers.
The facial recognition pipeline involves three main steps: detecting faces, creating embeddings (numerical representations of faces), and comparing these embeddings to find similarities. Lavigne used the DeepFace library, choosing a combination of models that prioritizes speed over accuracy. The resulting system lets users upload a photo and find their closest matches among the NYPD officers in the database.
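That three-step pipeline can be reproduced in a few lines. Below is a minimal sketch using the DeepFace Python API; the model choice ("SFace"), detector ("ssd"), and file paths are illustrative assumptions rather than Lavigne's actual configuration:

```python
# Sketch of the detect -> embed -> compare pipeline with DeepFace.
# Model, detector, and paths are assumptions for illustration only.
import numpy as np
from deepface import DeepFace

def embed(img_path: str) -> np.ndarray:
    """Detect the first face in an image and return its embedding vector."""
    faces = DeepFace.represent(
        img_path=img_path,
        model_name="SFace",      # assumed: a small, fast embedding model
        detector_backend="ssd",  # assumed: a fast face detector
    )
    return np.asarray(faces[0]["embedding"])

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Lower distance means the two faces are more alike."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical officer photos: embed the database once up front...
officer_paths = ["officers/001.jpg", "officers/002.jpg"]
officer_embeddings = {p: embed(p) for p in officer_paths}

# ...then rank every officer by distance to the uploaded photo.
user = embed("upload.jpg")
best = min(officer_paths, key=lambda p: cosine_distance(user, officer_embeddings[p]))
print("Closest match:", best)
```

DeepFace also wraps this whole loop in `DeepFace.find(img_path, db_path=...)`, which caches embeddings for a directory of images, so a production version would likely precompute the officer embeddings once rather than re-embedding on every request.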
Despite these limitations (the database covers only about 30% of NYPD officers, and the models trade accuracy for speed), Coppelgänger works as an educational demonstration of how facial recognition systems operate. Visitors to the site can discover their "cop doppelgänger" by comparing their own photo against the officer images. The project highlights how accessible facial recognition technology has become and raises questions about its implications for law enforcement and privacy.
Related
Selfie-based authentication raises eyebrows among infosec experts
Selfie-based authentication gains global momentum; Vietnam mandates face scans for transactions over $400. Concerns arise over leaked Singaporean selfies on the dark web. Experts note increased interest in selfie verification but highlight challenges in data protection and privacy laws. Organizations enhance security with liveness checks, biometric comparisons, and machine learning. Balancing inclusivity and security remains a crucial consideration.
Google testing facial recognition technology for security near Seattle
Google is testing facial recognition for security at its Seattle campus. Cameras compare faces against badge images to flag unauthorized individuals. Privacy concerns arise amid past security issues, and other tech giants face similar scrutiny.
When Facial Recognition Helps Police Target Black Faces
Karl Ricanek, an AI engineer, reflects on facial recognition technology's moral implications. His work evolved from US Navy projects to commercial use, despite early awareness of biases. Real-world misidentifications stress the need for ethical considerations.
Want to spot a deepfake? Look for the stars in their eyes
A study presented at the Royal Astronomical Society's National Astronomy Meeting proposes using eye reflections to detect deepfake images: inconsistencies between the reflections in a person's two eyes can reveal AI-generated fakes. The method, developed by University of Hull researchers, borrows the CAS and Gini indices astronomers use to characterize galaxies and applies them to compare the two reflections, helping distinguish real photographs from deepfakes.
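As a rough illustration of the Gini half of that comparison: the Gini index measures how unevenly light is distributed across a patch, and since both eyes in a genuine photograph reflect the same scene, their indices should roughly agree. The sketch below assumes the eye-reflection crops have already been extracted (the random arrays are stand-ins, and the CAS measures are omitted):

```python
# Toy Gini-index comparison of two eye-reflection crops.
# Random arrays stand in for real crops; this is not a detection pipeline.
import numpy as np

def gini(pixels: np.ndarray) -> float:
    """Gini index of pixel intensities: 0 = light spread evenly,
    1 = light concentrated in a few pixels."""
    x = np.sort(pixels.ravel().astype(float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return float(2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n)

rng = np.random.default_rng(0)
left_eye = rng.random((32, 32))   # stand-in for the left-eye reflection
right_eye = rng.random((32, 32))  # stand-in for the right-eye reflection

# In a real photo the two reflections match; a large gap flags a fake.
print(f"Gini gap: {abs(gini(left_eye) - gini(right_eye)):.4f}")
```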