Trust Issues: The closed corporate ecosystem is the problem
The essay "Trust Issues" discusses AI's evolution from military origins to corporate dominance, emphasizing the need for transparency, accountability, and open-source models to prioritize public interest over profit.
The essay "Trust Issues" by Bruce Schneier and Nathan E. Sanders traces the evolution of artificial intelligence (AI) and examines its current corporate-dominated landscape. They argue that while AI has roots in military funding, its development has been largely shaped by venture capitalists and Big Tech, mirroring the internet's transformation from a military project into a corporate ecosystem. The authors acknowledge improvements in AI capabilities, citing advances in models like ChatGPT, but emphasize the critical issue of trust. They highlight the lack of transparency in how major companies train their AI models, which raises concerns about bias and accountability. The authors advocate for the development of open-source AI models that prioritize public interest over profit, citing examples like the BLOOM model and Singapore's SEA-LION. They suggest that democratic governments and civil society should invest in AI as a public good to ensure transparency and accountability, contrasting this approach with the exploitative nature of corporate AI. Ultimately, they call for a future in which AI serves societal needs rather than merely enriching its corporate owners.
- AI has evolved from military origins to a corporate-dominated ecosystem.
- Trust in AI is crucial, yet corporate models lack transparency and accountability.
- Open-source AI models may offer more trustworthy alternatives.
- Investment in AI as a public good is necessary for societal benefit.
- Democratic governance can help ensure AI development aligns with public interests.
Related
Regulation Alone Will Not Save Us from Big Tech
The article addresses challenges of Big Tech monopolies in AI, advocating for user-owned, open-source AI to prioritize well-being over profit. Polosukhin suggests a shift to open source AI for a diverse, accountable ecosystem.
Who will control the future of AI?
Sam Altman stresses the need for a democratic approach to AI development, urging the U.S. to lead in creating beneficial technologies while countering authoritarian regimes that may misuse AI.
Bruce Schneier on security, society and why we need public AI models
Bruce Schneier emphasized AI's dual role in cybersecurity at the SOSS Fusion Conference, advocating for transparent public AI models while warning of risks, corporate concentration, and the need for regulatory measures.
Why 'open' AI systems are closed, and why this matters
The article critiques the misrepresentation of 'open' AI, highlighting its failure to disrupt power concentration among large companies, while emphasizing the need for a nuanced understanding of openness in AI.
Marc Andreessen Warns Against 'Government-Protected Cartel' of Major AI
Marc Andreessen warns against a potential "government-protected cartel" among major AI firms, advocating for competitive development, open-source access, and collaboration between governments and the private sector to mitigate risks.