Federal civil rights watchdog sounds alarm over Feds' use of facial recognition
The U.S. Commission on Civil Rights criticized federal agencies' use of facial recognition technology, citing a lack of standardization and oversight and the risk of civil rights violations, which particularly affect women and people of color.
The U.S. Commission on Civil Rights (USCCR) has raised significant concerns about the use of facial recognition technology (FRT) by the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD). A report released by the commission highlights a lack of standardization, transparency, and oversight that can lead to wrongful arrests and discrimination, particularly against women and people of color. While the DOJ and DHS have implemented interim policies for FRT use, HUD lacks any formal guidelines.

The report also emphasizes the absence of public databases tracking FRT usage, which makes it difficult to monitor potential civil rights violations. Customs and Border Protection employs FRT at a variety of locations, while HUD uses it in public housing without adequately tracking its impact.

Recommendations from the report include requiring federal agencies to disclose their FRT usage, to train personnel, and to ensure that FRT is never the sole basis for an arrest. The report also urges Congress to direct the National Institute of Standards and Technology (NIST) to evaluate FRT systems and report on their accuracy across demographic groups. Overall, it underscores the need for greater accountability and oversight in the deployment of emerging technologies in public services.
- USCCR report criticizes the use of facial recognition technology by federal agencies.
- Lack of standardization and oversight raises concerns about civil rights violations.
- Women and people of color are disproportionately affected by misidentification.
- Recommendations include better training, transparency, and public accountability for FRT use.
- Congress is urged to evaluate FRT technologies and their impact on civil rights.
Related
When Facial Recognition Helps Police Target Black Faces
Karl Ricanek, an AI engineer, reflects on facial recognition technology's moral implications. His work evolved from US Navy projects to commercial use, despite early awareness of biases. Real-world misidentifications stress the need for ethical considerations.
You can opt out of airport face scans
Passengers can opt out of airport facial recognition for US domestic flights by following specific steps. Difficulties encountered during opt-out prompted the Algorithmic Justice League's "Freedom Flyers" campaign. Concerns include data security, bias, and the normalization of surveillance; opting out is framed as advocacy for biometric rights.
DHS plans to collect biometric data from migrant children "down to the infant"
The U.S. DHS plans to collect facial images of migrant children to improve facial recognition technology, raising concerns about privacy, consent, and ethics; funding is confirmed but the program's implementation status remains unclear.
Americans Are Uncomfortable with Automated Decision-Making
A Consumer Reports survey shows 72% of Americans are uncomfortable with AI in job interviews, and 66% with its use in banking and housing, highlighting concerns over transparency and data accuracy.
Access your brain? The creepy race to read workers' minds (2023)
The use of neurotechnology and AI in hiring raises ethical concerns, potentially increasing racial disparities and disadvantaging neurodiverse candidates. Regulations are needed to protect mental privacy and ensure informed consent.
- Concerns about mass surveillance and the legality of government use of facial recognition technology are prevalent.
- There is skepticism regarding the effectiveness of regulatory bodies like the U.S. Commission on Civil Rights.
- Many commenters express alarm over the normalization of facial recognition in public spaces, such as airports.
- Debates arise over the balance between privacy rights and the potential benefits of facial recognition technology.
- Some argue that private companies may exploit facial recognition if government regulations are imposed.
It should be straight-out illegal. Governments do not have "free speech" rights.
As for private citizens doing it: I think there are already sufficient laws about recording people without their consent. You can hire someone to stand on a corner and watch for a specific person to walk by, but a law prohibiting you from recording everyone who walks by is most likely going to withstand a court review. It's a question of scale.
Well, not quite. They keep getting money every year, but officially their mandate expired in the '90s and has never been renewed. As crazy as that is by itself, they certainly don't have the authority to do anything.