January 23rd, 2025

Why is Big Tech hellbent on making AI opt-out?

Big Tech companies are increasingly adopting opt-out AI features, raising user concerns about unsolicited integrations. Critics argue this prioritizes shareholder interests over customer preferences, calling for user consent before implementation.

Big Tech companies like Microsoft, Apple, and Google are increasingly making AI features opt-out rather than opt-in, raising concerns among users and regulators. Recent examples include Microsoft 365's Copilot and Apple Intelligence, which are activated by default in their respective applications. This trend contrasts with the growing demand for opt-in consent for marketing services, as users express dissatisfaction with unsolicited AI integrations. Many customers feel overwhelmed by AI features that clutter their applications, leading to a perception of "enshittification" of services. Despite some benefits of AI, such as improved classification and search capabilities, the lack of user consent for these features is troubling. Critics argue that this approach prioritizes shareholder interests over customer preferences, since companies can showcase higher AI usage metrics without genuine user approval. The article calls for a reevaluation of this strategy, suggesting that companies should seek customer consent before implementing AI services. The current situation reflects a broader issue in the tech industry, where user feedback is often overlooked in favor of rapid innovation and profit.

- Big Tech is shifting to opt-out AI features, raising user concerns.

- Users express dissatisfaction with unsolicited AI integrations in applications.

- Critics argue this trend prioritizes shareholder interests over customer preferences.

- There is a call for companies to seek user consent before implementing AI services.

- The situation reflects a broader issue of neglecting user feedback in tech innovation.

12 comments
By @epistasis - 3 months
My annoyance with the AI additions is only slightly higher than with the constant UI changes in, say, a Google UI. So f&@@&?@&!& annoying to log in to do something and have things moved around for zero benefit, but it really is the icing on the cake when they put a pop-up over the UI declaring the change with pride, which further distracts me from the task I need to accomplish.

So now whenever I type in GMail I have an annoying "Polish" button to distract me from the writing. Not the worst change, but certainly a net negative. And why would I ever want to translate to Polish anyway? You won't even find any Polish text in my email anywhere.

By @strict9 - 3 months
Because such decisions are for shareholders and not users. I guess we should be thankful we can disable it at all in some cases.

Apple Intelligence was a particular letdown. It just got in the way, adding noise and distraction without adding anything useful.

By @datadrivenangel - 3 months
Stock prices and promotions.

If you're a product person at a Big Tech company, making your AI feature opt-out means you can justify a promotion with "My new generative AI feature has seen record-fast adoption with 200 MILLION Weekly Active Users"

And if you're an executive, you can claim any customer value comes from the AI features, which the stock market likes right now.

By @Ukv - 3 months
> Regulators worldwide are keen to ensure that marketing and similar services are opt-in [...] but for some reason, forcing AI on customers is acceptable. [...] there needs to be a moratorium on adding AI services without first asking customers for consent

Consent should be required if you're collecting private data for marketing or for training an LLM/etc.

But, I don't think adding some ML functionality to an application (say, OCR to a PDF reader, or translation to posts on a social network) inherently requires getting customers' consent any more than adding any other feature would.

I can appreciate the annoyance over "formerly pristine applications cluttered with AI features" since it's the current trend - but see it as similar to the annoyance over flat design (which rarely even had an opt-out) as opposed to something that needs a legal prohibition.

By @twoodfin - 3 months
Because they believe (correctly, I think) that AI is going to be foundational to most of the computing experiences we have, more so over time.

I don't think it's particularly absurd to make an analogy to other foundational technologies. When relational databases were invented, and enabled a ton of new & better capabilities across a host of user-facing software, it would have been strange to let users "opt out" back into the more limited, simpler experiences that earlier data architectures could still support. And what would it even mean, practically? Maintain two separate data management systems, one relational and one legacy?

AI is almost certainly going to be like that, threaded throughout the designs of essentially everything above a certain level of computational complexity.

By @5555624 - 3 months
From a general consumer point of view? If it has AI, it's the latest and greatest. It's got to be better than something without AI.

It's opt-out because most people won't opt out, if only because they can't be bothered. Lacking strong privacy laws, the user agreements probably say that all their data can be used for any number of purposes. "We're going to use your data to help make our AI features better? Oh, sure, that's great!" (Most people won't think of the privacy implications.)

By @silverquiet - 3 months
There's an opt-out?

By @oidar - 3 months
Firstly, to justify easing the product into subscriptions if it isn't already. Secondly, for product teams to justify their work. Thirdly, to signal that new features are being developed. Finally, when generative AI works in a product it is pretty awesome (but sadly it doesn't work well all the time).

By @zeroonetwothree - 3 months
Have to justify all those billions spent on it somehow.

By @fullshark - 3 months
They want data / feedback to improve the model