LinkedIn does not use European users' data for training its AI
LinkedIn has faced criticism for using user data for AI training without consent, prompting privacy concerns. EU users are excluded, highlighting GDPR's effectiveness, while critics demand explicit consent for data usage.
LinkedIn has faced criticism for using user data to train its generative AI tools without prior consent, a practice that has raised concerns about privacy and transparency. The issue came to light on September 18, 2024, when users noticed changes in LinkedIn's settings regarding data usage. Notably, LinkedIn has excluded users from the EU, EEA, and Switzerland from this data collection, highlighting the effectiveness of EU privacy regulations like the GDPR in protecting user data. Other social media platforms, including Meta and X, have also attempted to use user data for AI training but faced backlash from European privacy authorities, leading to the suspension of such practices in Europe. Critics argue that companies should not automatically opt users in to data usage, emphasizing the need for explicit consent. LinkedIn users can opt out of future data usage for AI training, but any data already used for training cannot be withdrawn. The situation underscores the importance of robust privacy regulations in safeguarding user rights in the digital landscape.
- LinkedIn has begun using user data for AI training without prior consent, sparking privacy concerns.
- The platform has excluded EU users from this practice, indicating the strength of EU privacy laws.
- Other companies like Meta and X have faced similar backlash in Europe, leading to halted AI training.
- Critics advocate for explicit user consent rather than automatic opt-ins for data usage.
- Users can opt out of future data usage, but previously collected data remains unaffected.
Related
My Memories Are Just Meta's Training Data Now
Meta's use of personal content from Facebook and Instagram for AI training raises privacy concerns. European response led to a temporary pause, reflecting the ongoing debate on tech companies utilizing personal data for AI development.
Musk's X under pressure from regulators over data harvesting for Grok AI
Elon Musk's platform X is under investigation by UK and Irish regulators for using default settings to collect user data for its Grok AI chatbot without proper consent, violating GDPR regulations.
X targeted with nine complaints after grabbing EU users’ data for training Grok
X is facing nine privacy complaints for processing EU users' data without consent, prompting legal action from the Irish DPC, as privacy advocates demand explicit user consent for AI training.
LinkedIn scraped user data for training before updating its terms of service
LinkedIn trained AI models using user data without prior consent updates, affecting U.S. users. An opt-out option was added later, prompting calls for investigations into its data practices.
LinkedIn Is Training AI on User Data
LinkedIn is using user data for generative AI training without consent, allowing users to opt out for future use but not past training. Privacy measures are claimed to be in place.
- Many users express frustration with LinkedIn's data practices, calling for better privacy protections and explicit consent.
- There is curiosity about how LinkedIn determines European users and the implications for those outside the EU.
- Some commenters highlight the potential bias in AI training due to the exclusion of European data.
- Critics question the value of LinkedIn data and the ethics of using it for AI training.
- Several users advocate for stronger privacy regulations in the U.S. similar to those in the EU.
There are some consumer protections that I really do wish we imported into the U.S., especially food safety and chemical usage. Too much regulatory capture for that, though.
https://www.linkedin.com/mypreferences/m/settings/data-for-a...
Why drop the case? A large amount of data was still collected, was it not? I'm sure X is happy to agree to not collecting data in the future, seeing as how they've already collected, what, 18 years of data?
I'm of two minds.
Someone who created their account from an EU IP address? Someone who says they live in the EU? Someone who says they work for a company (or division) based in the EU?
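One way to think about that question: any of those signals could be checked against a country-code allowlist. The sketch below is purely hypothetical, not LinkedIn's actual logic; the function name, the two input signals, and the decision rule are all assumptions for illustration, with the EU/EEA/Switzerland membership encoded as ISO 3166-1 alpha-2 codes.

```python
# Hypothetical sketch of flagging "European" accounts, assuming the platform
# geolocates the signup IP and reads a self-reported profile country, both as
# ISO 3166-1 alpha-2 codes. Not LinkedIn's real implementation.

EU_EEA_CH = {
    # EU member states
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
    # EEA non-EU states and Switzerland
    "IS", "LI", "NO", "CH",
}

def excluded_from_training(signup_country: str, profile_country: str) -> bool:
    """Exclude the account if either signal points to the EU, EEA, or CH."""
    return signup_country in EU_EEA_CH or profile_country in EU_EEA_CH

print(excluded_from_training("DE", "US"))  # True
print(excluded_from_training("US", "US"))  # False
```

Note how ambiguous the edge cases remain even in this toy version: an EU resident on a US VPN with a US profile location would slip through, which is exactly why the commenter's question is hard to answer from the outside.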
Given the UK’s privacy and data protection environment is still largely the same as the EU’s, I wonder whether this was an oversight.
Who will ever know if they do or not...
But consider the other side of this coin: one of the biggest risks identified for AI is bias in training sets, and there are actual demands that companies explicitly make their training sets as inclusive as possible, incorporating all cultures, genders, and so on.
So if Europeans find out they are being excluded from job opportunities later on because employers are using AI tools within LinkedIn to process candidates and it simply doesn't understand the background of European candidates - will they be upset? Will they be demanding LinkedIn be fined for not including Europeans in the training set?
I am very curious how all this will play out long term as these competing tensions get worked through.