August 14th, 2024

Microsoft tweaks fine print to warn everyone not to take its AI seriously

Microsoft's updated Services Agreement, effective September 30, 2024, clarifies AI service limitations, prohibits data extraction, addresses Xbox user privacy, and outlines mass arbitration processes for customer claims.

Microsoft has updated its Services Agreement, effective September 30, 2024, to clarify the limitations of its AI services. The company emphasizes that its Assistive AI should not be relied upon for significant decisions or as a substitute for professional advice. The revised agreement includes a prohibition against using AI services for data extraction and reverse engineering, explicitly stating that users cannot attempt to uncover the underlying components of Microsoft's AI models. Additionally, the update addresses privacy concerns for Xbox users, indicating that third-party platforms may require data sharing. The agreement also outlines restrictions on using AI services to create or improve other AI systems. Furthermore, it clarifies the handling of mass arbitration cases, defining related cases involving multiple customers under the American Arbitration Association's Mass Arbitration Supplementary Rules. Overall, the changes aim to set clear expectations regarding the use of Microsoft's AI technologies and the associated legal implications.

- Microsoft warns that its AI services are not suitable for important decisions.

- Users are prohibited from data extraction and reverse engineering of AI models.

- Xbox users may need to share data with third-party platforms.

- The agreement clarifies mass arbitration processes for related customer claims.

- The updates reflect ongoing concerns about AI reliability and user privacy.

Related

Antitrust: Europe's Vestager warns Microsoft, OpenAI 'the story is not over'

The European Commission, under Margrethe Vestager, scrutinizes Microsoft's OpenAI partnership for potential monopolistic practices. Concerns include undue influence, impact on competitors, and anticompetitive behavior in the AI industry. Regulatory scrutiny extends to Google and Apple.

Has Microsoft's AI Chief Just Made Windows Free?

Microsoft's AI chief challenges traditional licensing agreements by suggesting online content should be treated as "freeware." This sparks debate on copyright protection, AI training, and legal complexities in content usage.

AI companies promised to self-regulate one year ago. What's changed?

AI companies like Amazon, Google, and Microsoft committed to safe AI development with the White House. Progress includes red-teaming practices and watermarks, but lacks transparency and accountability. Efforts like red-teaming exercises, collaboration with experts, and information sharing show improvement. Encryption and bug bounty programs enhance security, but independent verification and more actions are needed for AI safety and trust.

Microsoft says OpenAI is now a competitor in AI and search

Microsoft has classified OpenAI as a competitor in its annual report, reflecting a shift in their relationship amid OpenAI's SearchGPT launch, despite ongoing collaboration and Microsoft's significant investment in OpenAI.

Microsoft is losing a staggering amount of money on AI

Microsoft reported $19 billion in AI-related losses for the quarter ending June, despite $36.8 billion in cloud revenue. Analysts express skepticism about AI's profitability, raising concerns over long-term viability.

5 comments
By @scohesc - 8 months
Amazing.

Bing's AI told me a picture of (what I now know to be) a stinging nettle in my backyard was a hemp plant - I gave it several different pictures at different angles, and it was confident it was hemp.

So I cracked the stem, touched the liquid inside, and wiped sweat off my forehead - and eventually felt a stinging, swelling, burning sensation on my hands and forehead.

I asked Google AI and it correctly identified it as a stinging nettle plant the first time.

Amusingly enough, when I told Bing AI it was wrong - that the plant was actually stinging nettle and that I had been physically harmed by its response - it immediately ended the chat. It didn't go on to say anything like "here's some help, call poison control, here are some remedies." Literally NOTHING. (Though if it messed up this much, I don't think I'd ask it for further help!)

AI is a toy - it's not ready for any real use or identification purposes. It's a shame that these companies are so strapped for cash that they're rushing like madmen to deploy this new "forefront" of technology, not stopping to think that they're inadvertently hurting people with their decisions.

It's sad: someone is going to do something potentially even more dangerous and risky while trusting the AIs these companies make, they'll get even more hurt (or die!), and these companies will still be able to hide behind "it's the algorithm!" or "you saw the disclaimer!"

And the politicians will keep allowing this to happen because shareholders and money.

By @bionhoward - 8 months
I’m surprised they managed to make this part of the agreement even worse than it already was. Seems like sane legal terms are a strong competitive advantage.

This section of the Microsoft Services Agreement is not sane because it protects [obsolete] chatbot speech more than human speech, by virtue of the prohibition of use to create, train, or improve AI, which evidently does not apply to humans since Mustafa Suleyman (head of Microsoft AI!) said everything on the web was “fair game.”

Translation: everything on the web is fair game, _except the stuff from Microsoft._

Furthermore, it’s vague about what services (GitHub Copilot?) are included or excluded.

Will other companies miss the opportunity to eat Microsoft's lunch right now by writing clearer and less draconian terms?

By @reneberlin - 8 months
/rant It's a small but favourable step toward the correct insight that Microsoft might actually always have been creating dark-patterned toys to enslave humanity and enterprises, steal time, and make people suffer from icons, notifications and ads (!). With a ridiculous toy OS the mission got started, and the saga continues. /rant

They do it because they want to cut the costs for legal battles.

By @jqpabc123 - 8 months
In other words, AI is mostly just for fun and marketing.

It should only be used if the results don't really matter or if they will be independently verified.