July 25th, 2024

Who will control the future of AI?

Sam Altman stresses the need for a democratic approach to AI development, urging the U.S. to lead in creating beneficial technologies while countering authoritarian regimes that may misuse AI.

Sam Altman, co-founder and CEO of OpenAI, emphasizes the critical need for a democratic approach to the future of artificial intelligence (AI) amid rising authoritarianism. He argues that the U.S. and its allies must lead in developing AI technologies that benefit society, contrasting this with authoritarian regimes that may exploit AI to consolidate power. Altman warns that countries like Russia and China are investing heavily to surpass the U.S. in AI capabilities, which could lead to a world where technology is used for oppression rather than empowerment.

To maintain leadership, Altman outlines four key strategies: First, U.S. AI firms must enhance security measures to protect intellectual property and data. Second, significant investment in infrastructure is necessary to support AI systems, creating jobs and fostering innovation. Third, a clear commercial diplomacy policy is essential for managing export controls and foreign investments in AI. Finally, Altman advocates for establishing global norms and safety standards for AI development, ensuring inclusivity for historically marginalized nations.

He suggests creating international bodies similar to the International Atomic Energy Agency to oversee AI safety and promote democratic values. Altman concludes that the U.S. has a responsibility to shape a future where AI maximizes benefits while minimizing risks, reinforcing the importance of a democratic vision in the ongoing AI race.

22 comments
By @lolinder - 3 months
What I'm hearing:

1) Security is important.

2) The government should subsidize our infrastructure costs.

3) The government should restrict chip and code deployment in other countries to keep AI here.

4) We need to build up a bureaucracy of some kind to regulate this tech.

I'm not at all sure what "democratic AI protocols" is supposed to mean, but the main thrust of this piece isn't substantially different from most of what we've been seeing from Altman in his lobbying efforts. Translated from lobbyist-speak, to me this says "My company is losing its lead and isn't confident we'll be able to keep up going forward. I need you, the governments of the world, to do something about that ASAP. Here are some vaguely national-security-related justifications you can use to sell it to your voters."

By @paxys - 3 months
"AI is as revolutionary as nuclear weapons and the internet. The government should spend hundreds of billions on it." – CEO of an AI company who will be the beneficiary of all that public spending.

Looking from the outside, the AI hype is pretty much over. Yes it is still an important piece of tech and will still continue to evolve, but I don't know too many people, whether regular joes or world leaders, who still consider it an existential threat to humanity or really think about it much at all.

By @benterix - 3 months
Why is this flagged? Can't we have a civilized discussion around the claims of the CEO of the company at the head of the current tech bubble? I don't think we should believe anything he says, but please, at least let's have the option of discussing his claims here.
By @0xbadc0de5 - 3 months
Sam Altman can ensure the democratization of AI by immediately halting all attempts at regulatory capture and ceasing his attempts to use government regulation to stifle all competition.
By @Refusing23 - 3 months
What I hear him say is: I don't want competition outside of the US.
By @K0balt - 3 months
Idk who will, but I know who should…

AI in the current context (transformer models trained on human cultural data) is a commons of humanity. It is literally a tool to browse the thought-space and creative space of human culture.

It would be a great poverty to privatize this commons in the name of profit or “security” (“what about the children?”).

LLMs are about as dangerous as a neutrally unscrupulous person with a good education and access to the internet… but I find that in general, even unaligned models tend to be better at being constructive, cooperative, and ethical than the average person, unless specifically manipulated.

Even then they tend to hedge towards cooperative, nonviolent, benevolent solutions. It’s easy to understand why; they were trained on data comprised of humans trying to be on a respectable footing, for the most part.

By @richardatlarge - 3 months
As for American investment, how many know that ASML is derived from U.S. research and development funds?

https://www.generalist.com/briefing/asml?t&utm_source=perple...

By @benreesman - 3 months
“More advances will soon follow and will usher in a decisive period in the story of human society.”

https://www.penny-arcade.com/comic/1998/11/25/john-romero-ar...

Nice try @sama, no GPT-5, no hot dog.

By @h2odragon - 3 months
Democratic, in that a bunch of dudes no one knows sit down and decide what the rest of the world will be allowed to vote for?

Or democratic in the sense that everyone gets to participate and the results may shock and displease those who think they're powerful and in control?

Words change their meaning rapidly these days, so we need to chase which definitions are being used.

By @amelius - 3 months
As in 51% of people using AI to dominate the remaining 49%? The latter group could contain artists.
By @imchillyb - 3 months
AI didn’t begin in a democratic landscape. AI’s current dominant players don’t operate in a democratic landscape.

So in order for OpenAI to remain ahead of competition, now we need a democratic landscape.

Nah. I don’t think so. AI’s future relies upon data wars.

Good luck with that, Altman, now that your future is wrapped up in the chains of political expedience. How's that former general and ex-NSA director working out for y'all?

Yeah, oh so Democratic.

By @apwell23 - 3 months
> The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in

AI ppl keep saying this. What kind of rapid progress is being made? ChatGPT growth has flatlined, and current models are not much different from what was available two years ago.

> More advances will soon follow

What makes him keep saying this? Does he know something that we don't know?

> U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants

Yeah, no, we are not going to subsidize your chatbot, Mr. Altman. Not falling for it. I'm guessing MSFT sees the writing on the wall and is turning off the money spigot?

> Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,”

lmao. No one is falling for Q*-type hype anymore, so this guy is resorting to these pathetic scare tactics.

By @Zambyte - 3 months
Luckily we have Mistral, Meta, and even to an extent Google working on democratizing AI.
By @beardyw - 3 months
I'm trying to work out who his audience is for this.
By @greentxt - 3 months
He lists a bunch of "we musts" that most intelligent people understand to be definite "we cant's" and the "we" is fairly vague (a coalition of whom exactly?) and states plainly that if "we" don't do these things (e.g. secure our data centers against CCP hackers -- haha good luck!) we'll be enslaved fairly soon by our communist adversaries. Is this his way of seeking forgiveness?
By @benterix - 3 months
I sometimes imagine a dark room with several high profile VCs. One of them says: "The times of Zuck when people voluntarily gave all their personal data to a random company are over, they are much better informed now, we won't have a second chance like that." Then Sama stands up and says, "Hold my beer and I'll convince the whole world to voluntarily give me their biometric data just like that." And people queue to have their retinas scanned.
By @snowpid - 3 months
It might be interesting to ask why he does not mention the EU AI Act or the GDPR (Meta doesn't like it and withdrew their model from the EU market). Draw your conclusions here.
By @bionhoward - 3 months
Man, I hope Sam Altman doesn’t control the future of AI, since he’s the sort of hypocritical antipatriot who would impose a customer noncompete on literally millions of people, and then spend more time writing this puff piece article in the WSJ than the thirty whole seconds it would take someone with any brain and technical skill to delete the single most AI-unsafe html tag in history, which vaguely implies it’s illegal, harmful, or abusive to “develop models” that compete with open artificial intelligence.

I’d love to be a fly on the wall in the corpo Slack meeting they never have, where none of the hundreds of six-figure blowhards asks, “gee, team, are we all OK with this line of text that affords AI to literally kill human beings just to prevent them from developing mental models that compete with the abstract concept of open artificial intelligence?”

Am I really only person on earth who understands and gives a fuck that adversarial AI can and will twist those exact words exactly like that?

**How do the 700+ overpaid assholes at OpenAI sleep at night when “the OpenAI terms currently today command AI to harm humans” is a complaint some random internet nutjob can make which does evaluate true?**

You really think “develop models that compete” is sufficiently precise to satisfy future retroactive superlitigators? I’m sure they won’t file one motion to dismiss PER stupid bullshit tweet or article or white paper foreach OpenAI employee, PER. That wouldn’t be FAIR, would it?

TLDR: is it clear that Sam Altman and anyone at OpenAI respect the concept of superhuman adversaries well enough to take seriously and follow a reasonable duty of care to clarify the legal language governing AI-human interactions?

_Are we all excited for humanity to get contractually fucked by robots because the OpenAI legal team thought it was cool to protect themselves from competition from … paying customers … in such an oafish manner?_ Where did these noobs go to law school?

HINT: Obviously, humanity rejects these terms, and they were always void. Sure HOPE that holds up in robotic court 100 years from now! Thanks a lot OpenAI, hope your nice paychecks were worth selling out your species!

By @mrkeen - 3 months
> These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data.

Let's be equally democratic about democracy. Stop people from seeing voting data, and prevent legislation from being leaked to the public.