Why 'open' AI systems are closed, and why this matters
The article critiques how 'open' AI is misrepresented, arguing that openness alone does not disrupt the concentration of power among large AI companies and calling for a more nuanced understanding of what openness in AI actually means.
The article examines the concept of 'open' artificial intelligence (AI) and critiques the way it is often misrepresented. It argues that claims of openness frequently lack precision and do little to address the significant concentration of power among a few large companies in the AI sector. The authors note that while 'open' AI can offer benefits such as transparency, reusability, and extensibility, it does not inherently disrupt the industry's existing power dynamics. They also point out that the rhetoric of open AI is often used by corporations to influence policy in ways that serve their interests rather than the public good, and that the contested definition of AI itself complicates discussions about what constitutes openness. Economic incentives and market conditions surrounding AI development limit the competitiveness of smaller players, despite the potential for openness to foster innovation. Ultimately, the authors call for a more nuanced understanding of openness in AI, recognizing that it can be co-opted by powerful entities in ways that exacerbate existing inequalities rather than alleviate them.
- The concept of 'open' AI is often misrepresented and lacks precision.
- Claims of openness do not necessarily disrupt the concentration of power in the AI sector.
- Openness can provide transparency and reusability but does not guarantee equitable market conditions.
- The definition of AI is contested, complicating discussions about openness.
- Economic incentives and market conditions limit the competitiveness of smaller AI players.
Related
Not all 'open source' AI models are open: here's a ranking
Researchers found that many large language models claiming to be open source still restrict access. The debate over AI model openness continues, with concerns about "open-washing" by tech giants. The EU's AI Act may exempt open source models. Transparency and reproducibility are crucial for AI innovation.
Regulation Alone Will Not Save Us from Big Tech
The article addresses the challenges posed by Big Tech monopolies in AI, advocating user-owned, open-source AI that prioritizes well-being over profit. Polosukhin argues that a shift to open source AI would create a more diverse, accountable ecosystem.
Policymakers Should Let Open Source Play a Role in the AI Revolution
The R Street Institute highlights the significance of open-source AI for innovation, noting a rise in investment from $900 million in 2022 to $2.9 billion in 2023, urging balanced regulations.
Open-Access AI: Lessons from Open-Source Software
Open-access AI models such as Meta's Llama impose usage restrictions yet are misleadingly labeled "open-source." Access to training data is essential for innovation, raising concerns about monopolistic control of AI advancements.
AI Industry Is Trying to Subvert the Definition of "Open Source AI"
The OSI's definition of "open source AI" faces criticism for permitting secrecy in AI development, prompting calls for clearer distinctions and emphasizing the need for genuine open source practices and public AI options.
“Unless pursued alongside other strong measures to address the concentration of power in AI, including antitrust enforcement and data privacy protections, the pursuit of openness on its own will be unlikely to yield much benefit. This is because the terms of transparency, and the infrastructures required for reuse and extension, will continue to be set by these same powerful companies, who will be unlikely to consent to meaningful checks that conflict with their profit and growth incentives.”
We need a total rewrite of antitrust laws, privacy laws, copyright laws, and aggressive enforcement to avoid the coming concentration of power and information. Until then, it’s pretty disappointing to see everyone from Yann LeCun of Meta to Clem from Hugging Face misuse the term “open source” for mostly closed systems that only share the weights (the output of the “compilation” process that is training). Meta/LeCun are basically open washing for their own gain. In contrast, AI2’s OLMo is an example of what real open source looks like:
https://venturebeat.com/ai/truly-open-source-llm-from-ai2-to...
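To make the "compilation" analogy concrete, here is a minimal sketch (assuming the Hugging Face `transformers` library; the model ID is an illustrative placeholder, not any specific release) of what a weights-only release lets a downstream user do: load the finished artifact and run it. Reproducing or auditing the model would also require the training data and training code, which is exactly what weights-only releases withhold and what fully open projects like OLMo publish.

```python
# Minimal sketch: what a "weights-only" release lets you do.
# Assumes the Hugging Face `transformers` library; the model ID below is
# purely illustrative, not a claim about any particular release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-open-weights-model"  # illustrative placeholder

# Downloading the weights is like downloading a compiled binary:
# you can run it and adapt it, but you cannot rebuild it from source.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open weights let you run inference,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# What a weights-only release does NOT give you: the training corpus,
# the data-curation pipeline, or the training code needed to reproduce
# or audit the model. Those stay with the company that "compiled" it.
```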