Who will control the future of AI?
Sam Altman stresses the need for a democratic approach to AI development, urging the U.S. to lead in creating beneficial technologies while countering authoritarian regimes that may misuse AI.
Sam Altman, co-founder and CEO of OpenAI, emphasizes the critical need for a democratic approach to the future of artificial intelligence (AI) amid rising authoritarianism. He argues that the U.S. and its allies must lead in developing AI technologies that benefit society, contrasting this with authoritarian regimes that may exploit AI to consolidate power. Altman warns that countries like Russia and China are investing heavily to surpass the U.S. in AI capabilities, which could lead to a world where technology is used for oppression rather than empowerment.
To maintain leadership, Altman outlines four key strategies: First, U.S. AI firms must enhance security measures to protect intellectual property and data. Second, significant investment in infrastructure is necessary to support AI systems, creating jobs and fostering innovation. Third, a clear commercial diplomacy policy is essential for managing export controls and foreign investments in AI. Finally, Altman advocates for establishing global norms and safety standards for AI development, ensuring inclusivity for historically marginalized nations.
He suggests creating international bodies similar to the International Atomic Energy Agency to oversee AI safety and promote democratic values. Altman concludes that the U.S. has a responsibility to shape a future where AI maximizes benefits while minimizing risks, reinforcing the importance of a democratic vision in the ongoing AI race.
Related
Ari Emanuel calls Sam Altman a "con man" who can't be trusted with AI
Ari Emanuel criticizes OpenAI's Sam Altman as untrustworthy in AI development, emphasizing the need for regulation and caution. Altman stresses responsible AI creation with societal input, showcasing differing views on AI's future.
Regulation Alone Will Not Save Us from Big Tech
The article addresses challenges of Big Tech monopolies in AI, advocating for user-owned, open-source AI to prioritize well-being over profit. Polosukhin suggests a shift to open source AI for a diverse, accountable ecosystem.
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too
A hacker breached OpenAI's internal messaging systems, accessing A.I. technology details but not code. Concerns over national security risks arose, leading to internal security debates and calls for tighter controls on A.I. labs.
Can AI Be Meaningfully Regulated, or Is Regulation a Deceitful Fudge?
Governments consider regulating AI due to its potential and risks, focusing on generative AI controlled by Big Tech. Challenges include balancing profit motives with ethical development. Various regulation models and debates on effectiveness persist.
1) Security is important.
2) The government should subsidize our infrastructure costs.
3) The government should restrict chip and code deployment in other countries to keep AI here.
4) We need to build up a bureaucracy of some kind to regulate this tech.
I'm not at all sure what "democratic AI protocols" is supposed to mean, but the main thrust of this piece isn't substantially different from most of what we've been seeing from Altman in his lobbying efforts. Translated from lobbyist-speak, to me this says: "My company is losing its lead and isn't confident we'll be able to keep up going forward. I need you, the governments of the world, to do something about that ASAP. Here are some vaguely national-security-related justifications you can use to sell it to your voters."
Looking from the outside, the AI hype is pretty much over. Yes, it is still an important piece of tech and will continue to evolve, but I don't know many people, whether regular joes or world leaders, who still consider it an existential threat to humanity or really think about it much at all.
AI in the current context (transformer models trained on human cultural data) is a commons of humanity. It is literally a tool to browse the thought-space and creative space of human culture.
It would be a great poverty to privatize this commons in the name of profit or "security" (what about the children?).
LLMs are about as dangerous as a neutrally unscrupulous person with a good education and access to the internet… but I find that in general, even unaligned models tend to be better at being constructive, cooperative, and ethical than the average person, unless specifically manipulated.
Even then they tend to hedge toward cooperative, nonviolent, benevolent solutions. It's easy to understand why: they were trained on data composed of humans trying to be on a respectable footing, for the most part.
https://www.generalist.com/briefing/asml?t&utm_source=perple...
https://www.penny-arcade.com/comic/1998/11/25/john-romero-ar...
Nice try @sama, no GPT-5, no hot dog.
Or democratic in the sense that everyone gets to participate and the results may shock and displease those who think they're powerful and in control?
Words change their meaning rapidly these days, so we need to chase which definitions are being used.
So in order for OpenAI to remain ahead of competition, now we need a democratic landscape.
Nah. I don’t think so. AI’s future relies upon data wars.
Good luck with that, Altman, now that your future is wrapped up in the chains of political expedience. How's that former general and ex-NSA director working out for y'all?
Yeah, oh so democratic.
AI people keep saying this. What kind of rapid progress is being made? ChatGPT growth has flatlined, and current models are not much different from what was available two years ago.
> More advances will soon follow
What makes him keep saying this? Does he know something that we don't?
> U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants
Yeah, no, we are not going to subsidize your chatbot, Mr. Altman. Not falling for it. I am guessing MSFT sees the writing on the wall and is turning off the money spigot?
> Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,”
lmao. No one is falling for Q*-type hype anymore, so this guy is resorting to these pathetic scare tactics.
I’d love to be a fly on the wall in the corpo slack meeting they never have, where no one of the hundreds of 6 figure blowhards asks, “gee, team, are we all OK with this line of text that affords AI to literally kill human beings just to prevent them from developing mental models that compete with the abstract concept of open artificial intelligence?”
Am I really the only person on earth who understands and gives a fuck that adversarial AI can and will twist those exact words exactly like that?
*How do the 700+ overpaid assholes at OpenAI sleep at night when "the OpenAI terms currently today command AI to harm humans" is a complaint some random internet nutjob can make which does evaluate true?*
You really think “develop models that compete” is sufficiently precise to satisfy future retroactive superlitigators? I’m sure they won’t file one motion to dismiss PER stupid bullshit tweet or article or white paper foreach OpenAI employee, PER. That wouldn’t be FAIR, would it?
TLDR: is it clear that Sam Altman and everyone (anyone) at OpenAI respects the concept of superhuman adversaries well enough to take seriously, and follow, a reasonable duty of care to clarify the legal language governing AI-human interactions?
_Are we all excited for humanity to get contractually fucked by robots because OpenAI's legal team thought it was cool to protect themselves from competition from … paying customers … in such an oafish manner?_ Where did these noobs go to law school?
HINT: Obviously, humanity rejects these terms, and they were always void. Sure HOPE that holds up in robotic court 100 years from now! Thanks a lot, OpenAI, hope your nice paychecks were worth selling out your species!
Let's be equally democratic about democracy. Stop people from seeing voting data, and prevent legislation from being leaked to the public.