Photos of your kids are being used to train AI and there's nothing you can do
Human Rights Watch's audit reveals children's images are used for AI training without consent, raising privacy concerns. The Supreme Court's ruling limits federal regulation, shifting responsibility to states for data privacy protections.
Human Rights Watch recently conducted an audit revealing that images of children, often taken from private social media accounts, are being used to train artificial intelligence models without parental consent. This practice raises significant privacy concerns, particularly because many of these images were never publicly accessible. The report highlights the risks of "sharenting," where parents share their children's lives online without considering the long-term implications. Children cannot consent to their images being shared, and the possibility that their identities could be traced from these images poses serious privacy risks.
The Supreme Court's recent decision overturning the Chevron doctrine complicates the situation further by limiting the authority of federal agencies like the Federal Trade Commission to regulate such practices. The ruling shifts responsibility for privacy legislation to state governments, which may not act swiftly enough to address these concerns. As a result, the legality of using children's data for AI training may vary significantly from state to state.
The author argues for stronger data privacy protections and ethical guidelines for technology companies, emphasizing that without meaningful regulation, Big Tech will continue to operate without accountability. Until such protections are established, parents are advised to reconsider sharing their children's images online, as the data privacy risks remain high.
Related
My Memories Are Just Meta's Training Data Now
Meta's use of personal content from Facebook and Instagram for AI training raises privacy concerns. European response led to a temporary pause, reflecting the ongoing debate on tech companies utilizing personal data for AI development.
AI Companies Need to Be Regulated: Open Letter
AI companies face calls for regulation due to concerns over unethical practices highlighted in an open letter by MacStories to the U.S. Congress and European Parliament. The letter stresses the need for transparency and protection of content creators.
Can AI Be Meaningfully Regulated, or Is Regulation a Deceitful Fudge?
Governments consider regulating AI due to its potential and risks, focusing on generative AI controlled by Big Tech. Challenges include balancing profit motives with ethical development. Various regulation models and debates on effectiveness persist.
The Data That Powers A.I. Is Disappearing Fast
A study highlights a decline in available data for training A.I. models due to restrictions from web sources, affecting A.I. developers and companies like OpenAI, Google, and Meta. Challenges prompt exploration of new data access tools and alternative training methods.
NYT: The Data That Powers AI Is Disappearing Fast
A study highlights a decline in available data for training A.I. models due to restrictions from web sources, affecting A.I. developers and researchers. Companies explore partnerships and new tools amid data challenges.