Three Observations
Sam Altman highlights AGI's potential to enhance productivity and creativity, emphasizing falling costs, super-exponential socioeconomic value, and the need for equitable access to prevent inequality while fostering adaptability in the workforce.
Sam Altman discusses the implications of Artificial General Intelligence (AGI) and its potential to benefit humanity. He emphasizes that AGI represents a significant advancement in technology, akin to previous innovations like electricity and the internet. Altman outlines three key observations regarding the economics of AI: first, the intelligence of AI models correlates with the logarithm of resources used for training and operation; second, the cost of utilizing AI decreases significantly over time, leading to increased adoption; and third, the socioeconomic value of AI intelligence grows super-exponentially. He envisions a future where AI agents function as virtual coworkers, enhancing productivity across various fields. While immediate changes may be gradual, the long-term societal and economic impacts of AGI are expected to be profound, necessitating adaptability and resilience in the workforce. Altman warns of potential inequalities arising from AGI's integration, advocating for policies that ensure equitable access to its benefits. He suggests that everyone should have the opportunity to harness the intellectual capacity of future technologies, which could unleash unprecedented creative potential. The balance of power between capital and labor may require intervention to prevent disparities. Altman concludes that the future of AGI should prioritize individual empowerment while addressing safety concerns, ensuring that its advantages are widely distributed.
- AGI is expected to significantly enhance human productivity and creativity.
- The cost of AI technology is decreasing rapidly, promoting wider usage.
- The socioeconomic value of AI intelligence is predicted to grow super-exponentially.
- Ensuring equitable access to AGI's benefits is crucial to prevent inequality.
- Adaptability and resilience will be key skills in a future shaped by AGI.
Related
The Intelligence Age
Advancements in AI will enhance human capabilities and problem-solving, with deep learning playing a crucial role. Equitable access requires reduced computing costs, while the transition presents opportunities and challenges.
By default, capital will matter more than ever after AGI
The rise of artificial general intelligence may reduce the value of human labor, concentrating wealth and power among AI controllers, potentially leading to societal stagnation and diminished state-citizen relationships.
Sam Altman says "we are now confident we know how to build AGI"
OpenAI CEO Sam Altman believes AGI could be achieved by 2025, despite skepticism from critics about current AI limitations. The development raises concerns about job displacement and economic implications.
Ask HN: Can we just admit we want to replace jobs with AI?
The discussion on AI models emphasizes concerns about job automation and the implications of Artificial General Intelligence, highlighting the need for honest dialogue to prepare society for its challenges.
A Boy Who Cried AGI
Mark Zuckerberg suggests AI will soon match mid-level engineers, sparking debate on AGI's timeline. The author stresses the need for clear definitions, cautious preparation, and public discourse on AGI ethics.
Moore waited at least five years [1] before deriving his law. On top of that, I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.
[1] http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm...
I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.
See the "OpenAI has a history of broken promises" section of this webpage: https://www.safetyabandoned.org/
In my view, state AGs should not allow them to complete their transition to a for-profit.
> 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
First, if the cost is coming down so fast, why the need for "exponentially increasing investment"? One could make the same exponential growth claim for, say, the electric power industry, which had a growth period around a century ago and eventually stabilized near 5% of GDP. The "tech sector" in total is around 9% of US GDP, and relatively stable.
Second, only about half the people with college degrees in the US have jobs that need college degrees. The demand for educated people is finite, as is painfully obvious to those paying off college loans.
This screed comes across as a desperate attempt to justify OpenAI's bloated valuation.
It's weird, because it's still wildly useful and my ideas for side projects are definitely more expansive than they used to be. And yet, I'm really far from having any fear of replacement. Almost none of the answers I get truly nail the experience of teaching me something new while also being perfectly accurate. (I'm on ChatGPT+, not pro.)
But I can't imagine a future where this doesn't lead to mass layoffs, or hiring freezes because these systems can replace tens or hundreds of employees; the end result is more and more unemployed people.
Sure, there was the Industrial Revolution, and the usual argument is: some people will lose their jobs, but many other jobs will be created. I'm not sure that argument is going to hold this time, given the magnitude of the change.
Is there any serious study of the impact of AI on society and employment, and most importantly, is there any solution to this problem?
We are burning cash so fast and getting very little in return for it. This is a death spiral and we refuse to admit it.
> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
We are entirely unconcerned with accuracy and refuse to see how the limitations of our product will keep us from riding this simple economic aphorism to success.
> The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
You see, even though we're burning money at an exponentially increasing rate, somehow this /linear/ increase in output is secretly "super-exponential" in nature. I have nothing to back this up, but you just have to believe me.
At least Steve Jobs built something worth having before bullshitting about it. This is just embarrassing.
> 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
> 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
My own editorial:
Point 1 is pretty interesting: if P != NP, then this is essentially "we can't do better than random search". So, in order to find progressively better solutions as intelligence increases linearly, we need exponentially more resources to find the answer. While I believe P != NP, it's interesting to see this play out in the context of learning and AI.
Point 2 is semi well-known. I'm having trouble finding it, but there was an article a while back showing that algorithmic improvements to the DFT (or DCT?) were outpacing gains that could be attributed to Moore's law alone, meaning the DFT was improving a few orders of magnitude faster than Moore's law would imply. I assume this is essentially Wright's law but for attention, in some sense, where more attention to a problem leads to better optimizations that dovetail with Moore's law.
Point 3 seems like it's almost a corollary, at least in the short term. If intelligence is what captures the exponential search, and it can be re-used to find further efficiencies as in point 2, you get super-exponential growth (a rough numeric sketch of all three points follows below). I think Kurzweil mentioned something about this as well.
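As a rough illustration of how the three points interact, here is a minimal sketch assuming intelligence I = log10(R) for R resources (observation 1) and a 10x/12-month cost decline (observation 2); the super-exponential value function standing in for observation 3 is an arbitrary placeholder, and none of these functional forms come from the essay beyond the log and 10x figures:

```python
import math

# Observation 1 (assumed form): intelligence ~ log10(resources),
# so each +1 "point" of intelligence needs 10x the resources.
def resources_needed(intelligence):
    return 10 ** intelligence

# Observation 2: cost per unit of resource falls ~10x every 12 months.
def unit_cost(years_from_now, cost_today=1.0):
    return cost_today / (10 ** years_from_now)

# Observation 3 (placeholder): some super-exponential value function;
# exp(I^2) is just one arbitrary choice with that property.
def value(intelligence):
    return math.exp(intelligence ** 2)

for year in range(4):
    i = 1.0 + year  # suppose frontier intelligence rises linearly over time
    dollars = resources_needed(i) * unit_cost(year)
    print(f"year {year}: intelligence {i:.0f}, cost ${dollars:.0f}, value {value(i):.3g}")
```

Under these assumed curves the 10x annual price drop exactly cancels the 10x resource blowup per point of intelligence, so a fixed budget buys one more "point" every year while the placeholder value function explodes; that, roughly, is the mechanism behind "we see no reason for exponentially increasing investment to stop".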
I haven't read the whole article but this jumped out at me:
> Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.
A bald-faced lie. Their mission is to capture value from developing AGI. Any benefit to humanity is incidental.
Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why? The answer is snuck in here:
> the price of... a few inherently limited resources like land may rise even more dramatically.
This is really the crux of it. The price of land will skyrocket, driving the "cost of living" (cost of land) wedge further between the haves and have-nots.
I'm still stuck thinking about this point. I don't know that it's obviously true. Maybe a more bounded claim would make more sense, something like: increasing intelligence has big compounding effects in the short term, but there's also a cap as society and infrastructure have to adapt (see the sketch after this comment). And I don't know how this plays out within an adversarial system where people might be competing for scarce resources like human attention.
Taken to the extreme, one could imagine a fantasy/scifi scenario where each person is empowered like a god in their own universe, allowing them to experiment, learn and create endlessly.
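For what it's worth, here is a tiny sketch of that "compounding but capped" shape, assuming a logistic curve as one bounded alternative to the super-exponential claim (all parameters are arbitrary):

```python
import math

# Logistic growth: compounds early, then saturates at `cap` as
# societal/infrastructure adaptation limits start to bind.
def logistic(t, cap=100.0, rate=1.0, midpoint=5.0):
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 11, 2):
    print(f"t={t:2d}: value {logistic(t):6.1f}")
```

Early on this curve is hard to distinguish from exponential growth, which is part of the difficulty: near-term data alone can't tell "super-exponential" apart from "fast but capped".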
This is so exciting. I guess NVIDIA's Project DIGITS [0] will be the starting point for a bit more serious home lab usage of local LLMs, while still being a bit like what Quadro used to be in the pro/prosumer market in the 00s/10s.
Now it's all RTX, and while differences still exist between pro gamer cards and workstation cards, most of what workstation GPUs were used for back then is easily doable by pro gamer cards nowadays.
Let's just hope that the quoted values are also valid for these prosumer devices like Project DIGITS.
Also, let's hope that companies start targeting that user base specifically, like a Groq SBC.
[0] https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe...
Haven’t the latest big improvements in LLMs been due to changes in approach/algorithm? Reasoning models, and augmenting LLMs with external tools and internet access (agents and deep research).
As far as I can tell, classical pure LLMs are only improving modestly at this point.
What is even the opposite of "general intelligence"? Specialized intelligence?
But AI already has a large spectrum. It's not an expert system in this or that. It's quite general. But it's not actually intelligent.
We should replace "AGI" with "AAI": Actual Artificial Intelligence.
I think it’s extremely myopic to assume things like “have much more time to enjoy with our families” unless that time comes from being unemployed. Every major technology over the past couple hundred years has been paired with such promises, and it’s never materialized. Only through unions did we get 8-hour days and weekends. Electricity, factory farming, etc. have not made me work any less, even if I do different things than I would have 200 years ago.
I also think it’s odd to assume the only thing preventing us from curing all disease is a lack of intelligence and scale. Many more factors go into this, in an already competitive landscape (biology) that is constantly evolving and changing. With every new technique invented (e.g. CRISPR) and every new discovery (e.g. immunotherapy) proven, the direction of what’s possible changes. If AGI arrives through LLMs as we know them (color me skeptical), they do not have the ability to absorb such new possibilities and change on a dime.
I could go on and on, but this is just a random comment on the internet. I understand the original post is meant to achieve certain goals at a specific word length, but not diving into all of these possibilities (including the failure modes of his extraordinarily optimistic assumptions) is quite irresponsible if he is truly meant to be a leader for a bold new future.
Sure, we can imagine it. However, to make it happen it's not enough to imagine the end goal; you need to understand and execute every single step.
I suspect his lack of knowledge about what's actually involved allows him to imagine it.
i.e. I notice he hasn't declared a world free of software bugs and failures (a much easier task) before declaring a world free of bugs in human biology.
I wonder who theorized this? Altman isn't known for having models about AGI.
To the actual theorist: claiming in one paragraph that AI intelligence goes as the log of resources, and in the next paragraph that resource costs drop by 10x per year, is a contradiction; the latter paragraph shows a dependence on algorithms that is nothing like "it's just the compute, silly".
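To make the algorithm dependence concrete, a back-of-the-envelope split, assuming hardware price-performance improves at roughly Moore's-law pace (~2x every two years); all numbers are illustrative:

```python
# If the cost of a fixed capability level falls ~10x per year, but hardware
# price-performance only improves ~2x every two years (~1.41x per year),
# the rest of the drop has to come from algorithmic efficiency.
total_cost_drop_per_year = 10.0
hardware_gain_per_year = 2 ** 0.5  # ~Moore's law pace: 2x per 2 years

algorithmic_gain_per_year = total_cost_drop_per_year / hardware_gain_per_year
print(f"implied algorithmic gain: ~{algorithmic_gain_per_year:.1f}x/year")
# => ~7.1x/year from algorithms alone, i.e. nothing like "just the compute"
```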
> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
The primary challenge, in my opinion, is that access to AGI will dramatically accelerate wealth inequality. Driving costs lower will not magically make the less educated better able to educate themselves using AGI, particularly if they're already at risk or on the edge of economic uncertainty.
I want to know how people like sama are thinking about the economics of access to AGI in broader terms, not just as a footnote in a utopian fluff piece.
edit: I am an optimist when it comes to the applications of AI, but I have no doubt that we're in for a rough time as the world copes with the economic implications of its applications. Globally, the highest-paying jobs are knowledge work, and we're on the verge (relatively speaking) of making that work go the way that blue-collar work did in the post-war United States. There are a lot of hard problems ahead, and it bothers me when people sweep them under the rug in the name of progress.
The scenario in which AI is controlled by megacorps is conveniently left out.
I fear this is correct, but with "smart" in the sense of smart TVs. In economic terms, TVs are amazing compared to just a few years ago - more pixels, much cheaper, more functionality - but in practice they spy on you and take ten seconds to turn on and show unskippable ads. This is purely a social (legal, economic, etc) problem as opposed to a technical one, and its solution (if we ever find one) would be likewise. So it's frightening to see someone with as much power over the outcome say something like this:
> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
When capital has control of an industry, but voluntarily gives little pieces of it out to labor so that they can "share" the profit, I think we all know how that turns out. It does seem possible that AGI really will get built and really will seep into everything and really will create a ton of economic value. But getting from there to the part where it improves everyone's lives is a social problem, akin to the problem of smart TVs, and I can't imagine a worse plan to solve that problem than leaving it up to the capitalist that owns the AGIs.
Nothing will stop the CCP from directing AI towards more state surveillance. Or any number of actors from using AI to create extremely lethal swarms of drones.
I really don't want a future in which I have to supervise a million real-but-relatively-junior virtual coworkers. I would prefer one senior over a million juniors. I'm in the programming industry, and I don't think coding scales that way.
Is this really true? o3 (not mini) is still being held back for "safety testing", and Sora was announced long before its release.
There's a lot to unpack there. Maintain an internal 10-year technological lead compared to what's public with OpenAI?
AGI as defined by OpenAI is "AI systems that can generate at least $100 billion in profits", right? Because what they are doing has very little to do with actual AGI.
Yeah, right. What world is this guy living in? An idealistic one? Will AI equally spread the profits of that economic growth he is talking about? I only see companies getting by on less manpower and doing fine, while poor people stay poor. Bravo. Well thought through, guy who now sells "AI".
1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.
2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.
And my coffee machine is plotting the Singularity... Let it be on the record: in the future, AGI will be known as A Grand Illusion.
Sam’s a savvy businessman, so he obviously understands that and goes a few steps further. He promises exponential returns and addresses any regulatory and societal concerns. This piece is strategically crafted, not for us, but for investors.
1) Give me more money and I will make you rich. 2) Don’t look at DeepSeek. 3) I repeat: there is no reason not to keep giving me EXPONENTIALLY more money to boil all of the oceans.
Is it just me, or is that an incredibly weak/vague definition for AGI? Feels like you could make the claim that AI is at this level already if you stretch the terms he used enough.
Don’t let megacorp dominance prevent individual action. The best way to stay hopeful is to effect change in your immediate surroundings. Teach people how to leverage AI; then they won’t be held hostage to the tyranny of the bureaucrats: doctors, lawyers, accountants, politicians, software engineers, project managers, bankers, investment advisors, etc.
Yes, AI makes mistakes .. so what? Humans do too.
Credit where credit is due - Sam may be no saint, but OpenAI deserves credit for launching this revolution. Directly or indirectly that led to the release of open models. Would the results have been the same without Sam? Nobody knows, not a point worth anybody’s time debating.
Given that most of us here are software engineers, it’s natural to feel threatened; there will be those of us whose skills are made obsolete. And some of us will make the jump to the land of ideas, where these tools will empower us to build solutions that previously required large companies. Perhaps that means we focus less on monetary rewards and more on change, as it becomes ever easier to effect that change.
To those whose skills will be made obsolete: you have a choice about whether to let that happen. Some amount of fear is healthy; it keeps our minds alert and allows us to grow.
There will be growing pains as our species evolves. We’ll have to navigate that with empathy.
Change starts from you. You are more powerful than you can imagine, at any given point.
Marxist analysis from Sam Altman?
> We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
I can't imagine a world in which billionaires (trillionaires?) would happily share their wealth with people whose work they no longer need. Honestly, I have more faith in a rogue ASI taking over the world and installing a fair post-scarcity society than in current politicians and tech oligarchs giving up a single cent to the masses.
OpenAI and Microsoft signed a profoundly silly contract with vague terms about AGI! It's petty to snipe about journalists asking silly questions: they are responding to the fact that Sam Altman and Satya Nadella are not serious people, despite all their money and power.