The Intelligence Age
Advancements in AI will enhance human capabilities and problem-solving, with deep learning playing a crucial role. Equitable access requires reduced computing costs, while the transition presents opportunities and challenges.
In the coming decades, advancements in artificial intelligence (AI) are expected to significantly enhance human capabilities, enabling individuals to achieve what once seemed impossible. This transformation is not due to genetic changes but rather the result of a more intelligent societal infrastructure built by previous generations. AI will provide tools that allow people to tackle complex problems, leading to shared prosperity and improved quality of life globally. The development of deep learning has been pivotal, allowing AI to learn from vast amounts of data and improve its problem-solving abilities. As AI systems evolve, they will serve as personal assistants, facilitating tasks such as medical care coordination and scientific research. However, to ensure equitable access to AI, it is crucial to reduce the costs of computing resources. The transition to the Intelligence Age will bring both opportunities and challenges, necessitating careful navigation of potential risks. While the future promises remarkable achievements, such as climate solutions and space colonization, it is essential to address the potential disruptions in labor markets and ensure that AI amplifies human creativity and utility. Ultimately, the Intelligence Age is anticipated to foster unprecedented prosperity, reshaping society in ways that may seem unimaginable today.
- Advancements in AI will enhance human capabilities and problem-solving.
- Deep learning has been crucial for AI's development and effectiveness.
- Reducing computing costs is essential for equitable access to AI.
- The transition to the Intelligence Age will present both opportunities and challenges.
- AI is expected to significantly impact labor markets and societal roles.
Related
All the existential risk, none of the economic impact. That's a shitty trade
Despite high expectations, AI advancements have not significantly impacted productivity or profits. Concerns about creating highly intelligent entities pose potential existential threats, urging careful monitoring and management of AI implications.
There's No Guarantee AI Will Ever Be Profitable
Silicon Valley tech companies are investing heavily in AI, with costs projected to reach $100 billion by 2027. Analysts question profitability, while proponents see potential for significant economic growth.
The Impact of AI on Computer Science Education
AI is impacting computer science education and the job market, with studies showing mixed effects on learning. Curricula will adapt, emphasizing responsible AI, while new job roles will emerge alongside automation.
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
There's Just One Problem: AI Isn't Intelligent
AI mimics human intelligence without true understanding, posing systemic risks and undermining critical thinking. Economic benefits may lead to job quality reduction and increased inequality, failing to address global challenges.
Translation for the rest of us: "we need to fully privatize the OA subsidiary and turn it into a B-corp which can raise a lot more capital over the next decade, in order to achieve the goals of the nonprofit, because the chief threat is not anything like existential risk from autonomous agents in the next few years or arms races, but inadequate commercialization due to fundraising constraints".
I'm not an AI skeptic at all; I use LLMs all the time and find them very useful. But stuff like this makes me very skeptical of the people who are making and selling AI.
It seems like there was a real sweet spot wrt the capabilities AI was able to "unlock" with scale over the last couple of years, but my high-level sense is that each meaningful jump in baseline raw "intelligence" required an exponential increase in scale, in terms of training data and computation, and we've reached the ceiling of "easily available" increases. It's not as easy to pour "as much as it takes" into GPT-5 if it turns out you need more than a Microsoft.
"a few thousand days" is such a funny and fascinating way to say "about a decade"
Superficially, reframing it as "days" not "years" is a classic marketing psychology trick, i.e. 99 cents versus a dollar, but I think the more interesting thing is just the way it defamiliarizes the span. A decade means something, but "a few thousand days" feels like it means something very different.
He's hand-waving around the idea presented in the Universal Approximation Theorem, but he's mangled it to the point of falsehood by conflating representation and learning. Just because we can parameterize an arbitrarily flexible class of distributions doesn't mean we have an algorithm to learn the optimal set of parameters. He digs an even deeper hole by claiming that this algorithm actually learns 'the underlying "rules" that produce any distribution of data', which is essentially a totally unfounded assertion that the functions learned by neural nets will generalize in some particular manner.
> I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
If you think the Universal Approximation Theorem is this profound, you haven't understood it. It's about as profound as the notion that you can approximate a polynomial by splicing together an infinite number of piecewise linear functions.
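A minimal sketch of the representation-versus-learning distinction these two comments are circling: a one-hidden-layer ReLU "network" whose parameters are written down by hand to splice together piecewise-linear segments approximating a target function. No learning happens anywhere; the target (x²) and knot placement are arbitrary choices for illustration.

```python
import numpy as np

# Hand-built piecewise-linear approximation of f(x) = x**2 on [0, 1]
# using a sum of ReLU units. This is the *representation* half of the
# Universal Approximation Theorem: the parameters are computed directly,
# not learned from data.

def relu(x):
    return np.maximum(0.0, x)

def piecewise_linear_approx(x, knots, f):
    # f(x) ~= f(k0) + sum_i s_i * relu(x - k_i),
    # where s_i is the change in slope at knot k_i.
    y = np.full_like(x, f(knots[0]))
    slopes = np.diff([f(k) for k in knots]) / np.diff(knots)
    slope_changes = np.diff(slopes, prepend=0.0)
    for k, s in zip(knots[:-1], slope_changes):
        y += s * relu(x - k)
    return y

knots = np.linspace(0.0, 1.0, 9)
x = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(piecewise_linear_approx(x, knots, np.square) - x**2))
print(f"max error with {len(knots)} knots: {err:.4f}")  # shrinks as knots are added
```

Representing the function this way is trivial; nothing in the construction says gradient descent on data will find these (or equally good) parameters, which is the gap the parent comment is pointing at.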
That's the sales pitch, that this will benefit all.
I'm very pro-AI, but here's the only prediction for the future I would ever make: AI will accelerate, not minimize, inequality and thus injustice, because it removes the organizational limits previously imposed by bureaucracy/coordination costs of humans.
It's not AI's fault. It's not because people are evil or weak or mean, but because the system already does so, and the system has only been constrained by inability to scale people in organizations, which is now relieved by AI.
Virtually all the advances in technology and civilization have been aimed at people capturing resources, people, and value, and recent advances have only accelerated that trend. Broader distributions of value are incidental.
Yes, the U.S. had a middle class after the war, and yes, China has lifted rural people out of technical poverty. But those are the exceptions against the background of consolidation of wealth and power world wide. Not through ideology or avarice but through law and technology extending the reach of agency by amplifying transaction cost differences in market power, information asymmetry and risk burdens. The only thing that stops this is disasters like war and environmental collapse, and it's only slowed by recalcitrance of people.
E.g., now we are at a point where people's economic and online activity is pervasively tracked, but it's impossible to determine who owns the vast majority of assets. That creates massive scale for getting customers, but impedes legal responsibility. Nothing in economic/market theory says that's how it should be; but transaction cost economics does make clear that the asymmetry can and will be exploited, so organizations will capture governance to do so.
It's not AI's job nor even AI's focus to correct injustice, and you can't blame AI for the damage it does. But like nuclear weapons, cluster munitions, party politics, (even software waivers of liability) etc., it creates moral hazards far beyond the ability of culture to accommodate.
(Don't get me started on how blockchain's promise of smart contracts scaling to address transaction risks has devolved into proliferating fraud schemes.)
What is the point of education if the bots can do all the work? If the world's best accounting teacher is an AI, why would you want anyone (anything?) other than that AI handling your accounting?
In a world where human intelligence plays second fiddle to AI, schooling _will not_ be anything like what it is today.
None of that shared prosperity was freely given by the Sam Altmans of the world, it was hard won by labor organizers and social movements. Without more of that, the progress from AI will continue the recent trend of wealth accumulating in the hands of a few. The idea that everyone will somehow prosper equally from AI, without specific effort to make that happen, is nonsense.
Let's say you have some amazing project that's going to require 100 PhD-years of work to carry out. In the present world that costs something like $1e7. In the post-AI world, that same amount of intelligence will cost $1e3, an enormous reduction in price. That might seem like a huge impact. BUT, if the project was so amazing, why couldn't you raise $1e7 to pursue it? Governments and VCs throw this kind of money around like corn-hole bags. So the number of actually-worthwhile projects that become feasible post-AI might actually be quite small.
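The implied arithmetic, for the record, with the per-PhD-year cost as an assumed figure (the comment only states the two totals):

```python
# Back-of-envelope behind the comment above. The ~$100k loaded cost per
# PhD-year is an assumption; the comment only gives the $1e7 and $1e3 totals.
phd_years = 100
cost_per_phd_year_today = 1e5                        # assumed: ~$100k/year
cost_today = phd_years * cost_per_phd_year_today     # = $1e7, as stated
cost_post_ai = 1e3                                   # the commenter's estimate
print(f"today: ${cost_today:,.0f}  post-AI: ${cost_post_ai:,.0f}  "
      f"({cost_today / cost_post_ai:,.0f}x cheaper)")
```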
This is one of those few cases where I'm actually more bullish than Altman. I don't need to wait for my kids to have it, but rather I personally am already using this daily. My regular thing is to upload a book/article(s) into the context of a Claude project and then chat with it - I genuinely find it to already be at the level of a decent (though not yet excellent) tutor on most subjects I tried, and by far better than listening to a lecture. The main feature I'm missing is of the AI being able to collaborate with me on a digital whiteboard.
> AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf.
I get it, coordinating medical care is exhausting, but it's kind of amusing that rather than envisioning changing a broken system people instead envision AIs that are so advanced that they can deal with the complexity of our broken systems, and in doing so potentially preserve them.
Related, btw, to using AI for code.
For example, I’m designing and 3D printing custom LED diffuser channels from TPU filament. My first attempt was terrible, because I didn’t have an intuition for how light propagates through a material.
After a bit of chatting with ChatGPT, I had an understanding and some direction of where to go.
To actually approach the problem properly I decided to run some Monte Carlo light transport simulations against an .obj of my diffuser exported from Fusion 360.
The problem was, the software I’m using only supports directional lights with uniform intensity, while the LEDs I’m using have a graph in their datasheet showing light intensity per degree away from orthogonal to the LED SMD component.
I copy-pasted the directional light implementation from the light transport library, along with the light-intensity-by-degree chart from the LED datasheet, and asked Claude to write a light source that samples photons from a disc of size x, with the probability of emission by angle governed by the chart from the datasheet.
A few iterations later I had a working simulation, which I then verified back against the datasheet chart.
Without AI this would have been a long, long process of brushing up on probability and vector math and manually transcribing the chart.
Instead in like 10 minutes I had working code and the light intensity of the simulation against my mesh matched what I was seeing in real life with a 3D printed part.
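For readers curious what such a light source looks like, here is a minimal sketch of the approach described: inverse-CDF sampling of the polar emission angle from an intensity-vs-angle table, with photon origins sampled uniformly on a disc. The table values below are invented for illustration; the actual code and LED datasheet aren't shown in the comment.

```python
import numpy as np

# Datasheet-style relative intensity per degree off the LED's normal.
# These numbers are made up; a real chart would be transcribed here.
angles_deg    = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
rel_intensity = np.array([1.0, 0.98, 0.92, 0.82, 0.68, 0.52, 0.35, 0.20, 0.08, 0.02])

theta = np.radians(angles_deg)
# Weight by sin(theta) so each polar-angle band is weighted by its solid
# angle, then build and normalize a discrete CDF to invert.
pdf = rel_intensity * np.sin(theta)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def sample_photons(n, disc_radius, rng=None):
    rng = rng or np.random.default_rng()
    # Area-uniform positions on a disc (sqrt of a uniform for the radius).
    r   = disc_radius * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    origins = np.column_stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)])
    # Polar angle via inverse-CDF lookup; azimuth uniform; LED normal = +z.
    polar = np.interp(rng.random(n), cdf, theta)
    az    = 2 * np.pi * rng.random(n)
    directions = np.column_stack([np.sin(polar) * np.cos(az),
                                  np.sin(polar) * np.sin(az),
                                  np.cos(polar)])
    return origins, directions

# Sanity check against the source curve, the same kind of verification
# the comment describes doing against the datasheet chart.
_, dirs = sample_photons(200_000, disc_radius=1.5)
hist, _ = np.histogram(np.degrees(np.arccos(dirs[:, 2])), bins=9, range=(0, 90))
print(hist)  # counts per 10-degree band should track intensity * sin(theta)
```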
My take:
* Foom/doom isn't helpful, but calm caution is. If you're acting from a place of fear and emotional dysregulation, you'll make ineffective choices. If you get calm and regulated first, and then take actions, they'll be more effective. (This is my issue with AGI-risk people: they often seem triggered/fear/alarm-driven rather than calm but cautious.)
* Piece is kind of a manifesto for raising money for AI infra
* Sam's done a podcast before about meditation where he talked about similar themes of "prudence without fear" and the dangers of "deep fear and panic and anxiety" and instead the importance of staying "calm and centered during hard and stressful moments" - responding, not reacting (strong +1)
* It's no accident that o1 is very good at math, physics, and programming. It'll keep getting much better here. Presumably this is the path for AGI to lead to abundance and cheaper energy by "solving physics".
This seems to be the key of the piece to me. It's his manifesto for raising money for the infra side of things. And, it resonates: I don't want ASI to only be affordable to the ultra rich.
AI is a side-show.
Intelligence is ambient in living tissue, so we already have as much intelligence as is adaptive. We don't need more. As talking apes made out of soggy mud wrapped around calcium twigs living in the greasy layer between hard vacuum and a droplet of lava which in turn is orbiting a puddle of hydrogen in the hem of the skirt of a black hole our problems are just not that complicated.
Heck, we are surrounded by four-billion year-old self-improving nanotechnology that automatically provides almost all our physical needs. It's even solar-powered! The whole life-support system was fully automatic until we fucked it up in our ignorance. But we're no longer ignorant, eh?
The vast majority of our problems today are the result of our incredible resounding success. We have the solutions we need. Most of them were developed in the 1970s when the oil got expensive for a few minutes.
Must we boil the oceans just to have a talking computer tell us to get on with it? Can't we just do like the Wizard of Oz? Have a guy in a box with a voice changer and a fancy light show tell us to "love each other"? Mechanical Turk God? We can use holograms.
I am a believer that people like Sam are not lying. Anyone using these models daily probably believes the same. The o1 model, if prompted correctly, can architect a code base in a way that my decade-plus of professional software experience cannot. Prompted incorrectly, it looks incompetent. The abilities of the future are already here; you just need to know how to use the models.
… This, and nothing about the democratizing effect of “open source AI” (Yes we still need to define what that is!).
I don’t want Sam as the thought leader of AI. I even prefer Zuck.
Are there any thought leaders who are really about democratization and local FOSS AI on open hardware? Or do they always follow (or step into the light) after the big moneymakers have had their moment? Who can we start watching? The Linuses, the RMSes, the Wozniaks of AI. Who are they?
With the current hype wave it feels like we’re almost there but this piece makes me think we’re not.
Surprisingly complicated HTML source code for a simple blog post.
Here it is as plain HTML: https://hub.scroll.pub/sama/index.html
o1-preview perfectly evaluated the conditional and determined that, hilariously, it would always evaluate to true.
o1 untangled the spaghetti, and, verifying that it was correct was quick and easy. It created a perfect truth table for me to visualize.
This is a sign of things to come. We are speeding up.
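The original conditional isn't shown in the thread, but as a toy illustration of the kind of check described, here is a hypothetical tangled expression together with the exhaustive truth table that proves it always evaluates to true:

```python
from itertools import product

# A hypothetical stand-in for the tangled conditional described above
# (the actual code isn't in the comment). Enumerating every input
# combination is the same check the model performed: the expression
# turns out to be a tautology.

def tangled(a, b, c):
    return (a and b) or (not a) or (c and not b) or (not c)

print(" a     b     c    | result")
for a, b, c in product([True, False], repeat=3):
    print(f"{a!s:5} {b!s:5} {c!s:5} | {tangled(a, b, c)}")
# Every row prints True, so the conditional can be replaced by `if True:`.
```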
The entire AI trend is, long term, based on the idea that AI will profoundly change the world. This has sparked a global race to develop better AI systems, and a more dangerous winner-takes-all dynamic. It is therefore not surprising that billions of dollars are being spent to develop more powerful AI systems as well as to restructure operations around them.
All the existing systems we have must fundamentally change for the better if we want a good future.
The positive aspects and utopian promises have much more visibility with the public than the negative effects and dystopian outcomes.
Are we to pretend that human greed, selfishness, the desire to dominate and control, animalistic behaviour, and the use of technologies for war and other destructive purposes don't exist?
We are living in times of war and chaos and uncertainty. Increasingly advanced technology is being used on the battlefield in more covert and strategic ways.
History is repeating itself again in many ways. Have we failed to learn? The consequences might be harsher with more advanced technology.
I have read and thought deeply about several anti-AI-doomer takes from prominent researchers and scientists, but I haven't seen any that aren't based on assumptions, or that are foolproof. For something that profoundly changes the world, it's bad to base your hopes on assumptions.
I see people dunking on LLMs, which might not be AI's final form. Then they extrapolate from that and say there is nothing to worry about. It is a matter of when, not if.
The thought of being useless, or worse, being treated as nothing more than pests, is worrying. Job losses are minor in comparison.
The only hope I have is that we are all in this together. I hope peace and goodwill prevails. I hope necessary actions are taken before it's too late.
A more pragmatic perspective indicates that there are more pressing problems that need to be addressed if we want to avoid a doomer scenario.
Reminds me of these quotes from Sam on this podcast episode (https://www.youtube.com/watch?v=KfuVSg-VJxE)
* "Prudence without fear" (Sam referencing another quote)
* "if you create the descendants of humanity from a place of, deep fear and panic and anxiety, that seems to me you're likely to make some very bad choices or certainly not reflect the best of humanity."
* "the ability to sort of like, stay calm and centered during hard and stressful moments, and to make decisions that are where you're not too reactive"
"Why You Should Fear Machine Intelligence
Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." - Sam Altman
solarpunk envisioned, possible today:
- entire human knowledge available on palm of your hand..!
vs
cyberdaftpunk actually more common:
- another idiot driver killed somebody while busy with his Candy Crush Saga or an Instagram celebrity vid.
While it's true there are a lot of jobs obsoleted by technological progress, the vision of personal AI teams creating a new age of prosperity only makes sense for knowledge workers. Sure, a field worker picking cabbage could also have an AI team to coordinate medical care. But in this brilliant future, are the lowest members of society suddenly well-paid?
The steam engine and subsequent Industrial Revolution created a lot of jobs and economic productivity, sure, but a huge amount of those jobs were dirty, dangerous factory jobs, and the lion's share of the productivity was ultimately captured by robber barons for quite some time. The increase in standard of living could only be seen in aggregate on pages of statistics from the mahogany-paneled offices of Standard Oil, while the lives of the individuals beneath those papers more often resembled Sinclair's Jungle.
Altman's suggestion that avoiding AI capture by the rich merely requires more compute is laughable. We have enormous amounts of compute currently, and its productivity is already captured by a small number of people compared to the vast throngs that power civilization in total. Why would AI make this any different? The average person does not understand how AI works and does not have the resources to utilize it. Any further advancements in AI, including "personalized AI teams," will not be equally shared, they will be packaged into subscription services and sold, only to enrich those who already control the vast majority of the world's wealth.
"This age is characterized by society's increasingly advanced capabilities, driven not by genetic changes but by societal infrastructure becoming smarter and more efficient over time."
Thankfully, we have a recent point of reference. The pioneers of the internet and computing's first wave transformed civilization. Did they spend years saber-rattling about how 'change was coming'?
Since the access itself is not differentiating, it's going to be the most educated who benefit the most. Already today few people can use the o1 model, because they can neither dream up a PhD-level question nor understand its answers.
More importantly, access to AI does not mean access to assets. I, a total nobody, can use AI to design the world's best car. But that does nothing, because I don't have money or land. Anybody can query AI for that car, but only asset owners can actually implement the idea and extract value. Those asset owners could use AI to bring widespread prosperity to all of mankind, but we know they won't.
We don't need more material prosperity, we need social prosperity. Family formation, the restoration of community life, economic security. Not "more stuff".
This is so rich coming from a tech field that's on track to match the energy consumption of a small country. (And no, AI is not going to offset this by 'finding innovative solutions to climate change' or whatever)
> As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.
It's very easy as an extremely rich person to just say, "don't worry, in the end it'll be better for all of us." Maybe that's true on a societal scale, but these are people's entire worlds being destroyed.
Imagine you went to college for a medical specialty for 8-10 years, you come out as an expert, and 2 years later that entire field is handled by AI and salaries start to tank. Imagine you have been a graphic designer for 20 years supporting your 3 children and bam a diffusion model can do your job for a fraction of the cost. Imagine you've been a stenographer working in courtrooms to support your ill parents and suddenly ASR can do your job better than you can. This is just simple stuff we can connect the dots on now. There will be orders of magnitude more shifts that we can't even imagine right now.
To someone like Sam, everything will be fine. He can handle the massive societal shift because he has options. Even a moderately wealthy person will be OK.
But the entire middle class is going to start really freaking the fuck out soon as more and more jobs disappear. You're already seeing anti-AI sentiment all over the web. Even in expert circles, you can see skepticism. People saying things like, "how do I opt out of Apple Intelligence?" People don't WANT more grammar correction or AI emojis in their lives, they just want to survive and thrive and own a house.
How are we going to handle this? Sam's words, "if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable", don't mean shit to a family of 4 who went through layoffs in the year 2025 because AI took their job while Microsoft's stock grows 50%.
> in an important sense, society itself is a form of advanced intelligence
This made me think of Charles Stross' observation that Corporations (and bureaucracies and basically any rule-based organizations) are a form of artificial intelligence.
https://www.antipope.org/charlie/blog-static/2019/12/artific...
Come to think of it, the whole article is rather pertinent to this thread.
I would be happy to be convinced that climate is an intelligence problem.
One could argue it could be solved with "abundant energy", but if this abundant energy comes from some new intelligence then we are probably several decades away from having it running commercially. I would also be happy to be convinced that we have this kind of time to act on climate.
Maybe we will attain superintelligence in 1000 years, maybe not. Maybe Jesus comes back or Krishna reincarnates on earth, who knows. But it is a long way ahead, and it did not start with Sam, and it is really not going to end with ChatGPT.
Seems like it's very much the former, and not at all the latter. Indeed, my understanding of the last 15 years of AI research is that 'rules-based' methods floundered while purely 'data-mimicking' methods flourished.
So because we have an algorithm to learn any distribution of data, we are now on the verge of the Intelligence Age utopia? What if it just entrenches us in a world built off of the data it was trained on?
That it will lead to prosperity, happiness and a better world (for everyone) is simply a fallacy foisted upon the masses by promoters salivating at potential riches.
A watershed moment for humanity.
https://www.washingtonpost.com/opinions/2024/07/25/sam-altma...
The same magic that can make stuff out of thin air might just as well make it disappear.
Anyway, I'm hooked. What a time to be alive!
Could anyone elaborate on this? Further down he talks about the necessity of bringing the cost of computing down. Is that really the bottleneck?
I think this is the prevailing wisdom, but there's an angle that OpenAI doesn't value and therefore doesn't mention: there's far more compute sitting idle in everyone's offices, homes, and pockets than in the $100bn OpenAI cluster. It just isn't useful for training, because physics. But it is useful for inference. Local LLMs ship this year and next in Chrome (Gemini Nano) and Apple devices (Apple Intelligence), and they'll truly be available to everyone instead of going through OpenAI's infra. They'll be worse than GPT-4, but only for a couple more years.
and also to me today, but none of that matters as long as I still get paid
We have a capitalist arguing for support of further investment in his capital expenditures in the form of planet-ending heat and monopoly power, promising to pay for it with intelligence more rapidly delivered.
No, thanks.
Please correct me if I’m wrong
More of an "everything's fine, nothing to worry about".
Meanwhile, there is already job disruption and widespread misinformation.
It isn't in some future; it is already happening.
> Nobody gives a fuck
> "the children have to go to school!"
> Well moms, good luck!
Sam Altman is the last guy we want helping lead that revolution.
Confident based on what, exactly? Sam Altman is engaging in 'The Secret' where if you really, really believe a thing, you'll manifest it.
Mind you, Sam Altman actually has no technical expertise, so he really, really believes he can pay other people who actually know something to do magic whilst he walks about pretending to be Steve Jobs 2.0.
He'll get his trillion, AI will go nowhere but he'll be on to the next grift by then.
I keep noticing how LLMs make our vocabulary not work anymore. Maybe we should call it the age of fast-talk :P
I dunno Sam, groceries have gotten awfully expensive.
This statement is manifestly untrue. Neural networks are useful, many hidden layers are useful, all of these architectures are useful, but the idea that they can learn anything is based less on empirical results and more on what Sam Altman needs to convince people of to get these capital investments.
ChatGPT: The text presents a highly optimistic view of the future shaped by AI and deep learning, but there are several areas where the reasoning might be seen as flawed or overly simplistic. Here are some points that could be critiqued:
- Overconfidence in Technology: The author expresses a strong belief in the inevitability of AI advancement and its benefits, which may overlook the unpredictability of technological development and the potential for setbacks.
- Underestimating Risks: While the potential for prosperity is emphasized, there's insufficient attention given to the ethical, societal, and environmental risks associated with AI, such as bias, privacy concerns, and job displacement.
- Assumption of Universal Benefit: The idea that AI will lead to shared prosperity ignores systemic inequalities that might prevent equitable access to AI technology, potentially leading to a wider wealth gap.
- Neglect of Human Factors: The argument largely abstracts from human emotions, societal values, and the complexities of human behavior. The assumption that prosperity will automatically lead to happiness or fulfillment is problematic.
- Simplistic Historical Comparisons: The comparison of current advancements to past technological revolutions (e.g., the Industrial Age) may not account for the unique challenges posed by AI, such as rapid obsolescence and ethical dilemmas that previous technologies did not face.
- Lack of Detailed Solutions: The text calls for action but offers little concrete guidance on how to navigate the complexities of AI's integration into society, especially regarding labor market changes and ethical considerations.
- Optimism Bias: The author's perspective may be influenced by optimism bias, leading to a potentially unrealistic view of future outcomes without sufficient acknowledgment of the challenges.
- Dependence on Infrastructure: While the author correctly identifies the need for infrastructure to support AI, there's little discussion of the potential for that infrastructure to become a battleground for control, leading to conflicts rather than cooperation.
- Diminished Role of Individuals: The portrayal of people relying heavily on AI teams may undermine the value of individual creativity and agency, potentially leading to a society overly dependent on technology.
By examining these points, one can argue that while the vision of a prosperous future powered by AI is compelling, it is essential to approach such ideas with a critical perspective, considering the broader implications and potential pitfalls.
Paging Dr. Bullshit, we've got an optimist on the line who'd like to have a word with you.