December 1st, 2024

Why 'open' AI systems are closed, and why this matters

The article critiques the misrepresentation of 'open' AI, highlighting its failure to disrupt power concentration among large companies, while emphasizing the need for a nuanced understanding of openness in AI.

The article discusses the concept of 'open' artificial intelligence (AI) and critiques the way it is often misrepresented. It argues that claims of openness in AI frequently lack precision and fail to address the significant concentration of power among a few large companies in the AI sector. The authors note that while 'open' AI can offer benefits such as transparency, reusability, and extensibility, it does not inherently disrupt the existing power dynamics in the industry. They emphasize that the rhetoric surrounding open AI is often used by corporations to influence policy in ways that serve their own interests rather than the public good. The authors also point out that the definition of AI itself is contested, which complicates discussions about what constitutes openness. They argue that the economic incentives and market conditions surrounding AI development limit the competitiveness of smaller players, despite the potential for openness to foster innovation. Ultimately, the authors call for a more nuanced understanding of openness in AI, recognizing that it can be co-opted by powerful entities in ways that exacerbate existing inequalities rather than alleviate them.

- The concept of 'open' AI is often misrepresented and lacks precision.

- Claims of openness do not necessarily disrupt the concentration of power in the AI sector.

- Openness can provide transparency and reusability but does not guarantee equitable market conditions.

- The definition of AI is contested, complicating discussions about openness.

- Economic incentives and market conditions limit the competitiveness of smaller AI players.

1 comment
By @blackeyeblitzar - 5 months
This is a worthwhile read (especially the conclusion) for anyone worried about AI safety, the concentration of power, privacy abuses, censorship, and the likelihood that a few megacorps will take all the gains from this ongoing technology revolution. I am not sure I agree with everything they say and conclude, but I think this is an important quote:

“Unless pursued alongside other strong measures to address the concentration of power in AI, including antitrust enforcement and data privacy protections, the pursuit of openness on its own will be unlikely to yield much benefit. This is because the terms of transparency, and the infrastructures required for reuse and extension, will continue to be set by these same powerful companies, who will be unlikely to consent to meaningful checks that conflict with their profit and growth incentives.”

We need a total rewrite of antitrust laws, privacy laws, and copyright laws, plus aggressive enforcement, to avoid the coming concentration of power and information. Until then, it’s pretty disappointing to see everyone from Yann LeCun of Meta to Clem from Hugging Face misuse the term “open source” for mostly closed systems that share only the weights (the output of the “compilation” process that is training). Meta/LeCun are basically open-washing for their own gain. In contrast, AI2’s OLMo is an example of what real open source looks like:

https://venturebeat.com/ai/truly-open-source-llm-from-ai2-to...