September 22nd, 2024

LinkedIn does not use European users' data for training its AI

LinkedIn has faced criticism for using user data for AI training without consent, prompting privacy concerns. EU users are excluded, highlighting GDPR's effectiveness, while critics demand explicit consent for data usage.

Sentiment: Frustration · Concern · Skepticism

LinkedIn has faced criticism for using user data to train its generative AI tool without prior consent, a practice that has raised concerns about privacy and transparency. The issue came to light on September 18, 2024, when users noticed changes in LinkedIn's settings regarding data usage. Notably, LinkedIn has excluded users from the EU, EEA, and Switzerland from this data collection, highlighting the effectiveness of EU privacy regulations like GDPR in protecting user data. Other social media platforms, including Meta and X, have also attempted to use user data for AI training but faced backlash from European privacy authorities, leading to the suspension of such practices in Europe. Critics argue that companies should not automatically opt users in to data usage, emphasizing the need for explicit consent. LinkedIn users can opt out of future data usage for AI training, but data that has already been collected and used for training cannot be withdrawn. The situation underscores the importance of robust privacy regulations to safeguard user rights in the digital landscape.

- LinkedIn has begun using user data for AI training without prior consent, sparking privacy concerns.

- The platform has excluded EU users from this practice, indicating the strength of EU privacy laws.

- Other companies like Meta and X have faced similar backlash in Europe, leading to halted AI training.

- Critics advocate for explicit user consent rather than automatic opt-ins for data usage.

- Users can opt out of future data usage, but previously collected data remains unaffected.

AI: What people are saying
The comments reflect a range of opinions on LinkedIn's use of user data for AI training and the implications of GDPR.
  • Many users express frustration with LinkedIn's data practices, calling for better privacy protections and explicit consent.
  • There is curiosity about how LinkedIn determines European users and the implications for those outside the EU.
  • Some commenters highlight the potential bias in AI training due to the exclusion of European data.
  • Critics question the value of LinkedIn data and the ethics of using it for AI training.
  • Several users advocate for stronger privacy regulations in the U.S. similar to those in the EU.
20 comments
By @slowmovintarget - 7 months
Which implies that elsewhere...

There are some consumer protections that I really do wish we imported into the U.S., especially food safety and chemical usage. Too much regulatory capture for that, though.

By @rockwotj - 7 months
By @rchaud - 7 months
> .. End of July when X automatically enabled the training of its Grok AI on all its users – European accounts included. Just a few days after the launch, consumer organizations filed a formal privacy complaint with the Irish Data Protection Commission (DPC)[...]. The Irish Court has now dropped the privacy case against X as the platform agreed to permanently halt the collection of EU users' personal data [...].

Why drop the case? A large amount of data was still collected, was it not? I'm sure X is happy to agree to not collecting data in the future, seeing as they've already collected, what, 18 years of data?

By @Kim_Bruning - 7 months
Of course this then later leads to: "LinkedIn AI has non-European bias"

I'm of two minds.

By @JSDevOps - 7 months
How fucked in the head do you have to be to train ANY AI on LinkedIn data?
By @wodenokoto - 7 months
What is a European user, anyway?

Someone who created their account from an EU IP address, someone who says they live in the EU, or someone who says they work for a company (or division) based in the EU?
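The comment's question can be made concrete with a sketch. Nothing here reflects how LinkedIn actually classifies users — the signals, the country set, and the OR-of-signals rule are all assumptions — but it illustrates the three candidate signals the commenter lists and a conservative (privacy-favouring) way to combine them:

```python
# Hypothetical sketch only: LinkedIn's actual logic is not public.
# Country codes for the EU, EEA, and Switzerland (ISO 3166-1 alpha-2).
EU_EEA_CH = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",   # EU member states
    "IS", "LI", "NO",                            # EEA (non-EU)
    "CH",                                        # Switzerland
}

def is_european_user(signup_country, profile_country, employer_country):
    """Treat a user as European if ANY available signal points to
    the EU/EEA/CH: signup IP geolocation, self-declared location,
    or employer location. ORing the signals errs on the side of
    excluding borderline users from training (the safer default
    under GDPR); each argument may be a country code or None."""
    return any(
        country in EU_EEA_CH
        for country in (signup_country, profile_country, employer_country)
        if country
    )
```

The OR combination also answers the later comment about Europeans abroad: under this (hypothetical) rule, a French profile location would keep a user excluded even when they sign up from a non-EU IP address.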

By @left-struck - 7 months
I wonder how this works for Europeans abroad. More to the point, as an Australian, can I trick these tech companies into thinking I'm European, since my government won't protect me?
By @Chris_Newton - 7 months
In the UK, I found the relevant setting was present (and had been turned on) for my profile when this came to my attention last week. Curiously, I can no longer find that setting now.

Given the UK’s privacy and data protection environment is still largely the same as the EU’s, I wonder whether this was an oversight.

By @Brajeshwar - 7 months
It might sound naive, but how does LinkedIn actually determine that someone is a European user?
By @amarcheschi - 7 months
Yup, I noticed this a few days ago in some subreddit like /r/assholedesign. I think a few months ago we had a similar feature on Instagram and perhaps FB; I don't know if it's still active in the EU on those Meta products.
By @BurnGpuBurn - 7 months
Correction: LinkedIn SAYS they're not using the data of European users to train their AI.

Who will ever know if they do or not...

By @ilrwbwrkhv - 7 months
LinkedIn is the bottom of the barrel of the labour pool. I wonder why they'd even train on any of their data.
By @rkagerer - 7 months
Defaulting this setting to ON is inexcusable. We never gave permission, and it's a blatant breach of privacy. It's a serious lapse of judgement on the part of LinkedIn and a breach of their users' trust.
By @cyanydeez - 7 months
Imagine an AI that's only allowed to mine the data of people too stupid to elect representatives that protect their privacy.
By @bastard_op - 7 months
Just imagine if we had the same privacy protections as the EU in the US.
By @ein0p - 7 months
I wonder how they use US data too. LinkedIn is so cringe, the value of its data in the mix is probably negative.
By @s-skl - 7 months
Good.
By @yazzku - 7 months
Fuck LinkedIn. They should have already been sued for their exploitation of people's identities a long time ago.
By @zmmmmm - 7 months
I know Europeans will probably be all high fiving each other and congratulating themselves on how much better their regulatory environment is.

But consider the other side of this coin: one of the biggest risks identified for AI is bias in training sets, and there are actual demands that companies explicitly make their training sets as inclusive as possible to incorporate all cultures, genders, and so on.

So if Europeans find out they are being excluded from job opportunities later on because employers are using AI tools within LinkedIn to process candidates and it simply doesn't understand the background of European candidates - will they be upset? Will they be demanding LinkedIn be fined for not including Europeans in the training set?

I am very curious how all this will play out long term as these competing tensions get worked through.