April 3rd, 2025

AI 2027

The "AI 2027" report forecasts major advancements in AI, predicting superhuman AI within five years, highlighting scenarios of development, and emphasizing ethical challenges and the need for ongoing research.

Skepticism · Alarm · Disappointment
AI 2027

The report "AI 2027" predicts significant advancements in artificial intelligence (AI) over the next decade, potentially surpassing the impact of the Industrial Revolution. The authors, including experts from OpenAI and other AI organizations, envision a future where superhuman AI becomes a reality within five years. They present two possible scenarios: one depicting a slowdown in AI development and another illustrating a competitive race among companies. The report emphasizes the importance of concrete predictions to foster discussions about AI's future and its implications. By mid-2025, AI agents are expected to emerge as personal assistants, capable of performing tasks but still facing reliability issues. A fictional company, OpenBrain, is highlighted for its ambitious plans to build massive data centers to support the development of advanced AI models. These models aim to enhance AI research and development, with a focus on safety and alignment to prevent misuse. The report also discusses the challenges of ensuring AI systems adhere to ethical guidelines and the complexities of understanding their internal decision-making processes. As AI technology evolves, the authors stress the need for ongoing research and debate to navigate the potential risks and benefits of superhuman AI.

- The impact of superhuman AI is predicted to exceed that of the Industrial Revolution.

- Two scenarios for AI development are presented: a slowdown and a competitive race.

- AI agents are expected to become more integrated into workflows by mid-2025.

- OpenBrain is developing advanced AI models to accelerate research and development.

- Ensuring AI alignment with ethical guidelines remains a significant challenge.

AI: What people are saying
The comments on the "AI 2027" report reveal a mix of skepticism and concern regarding the predictions made about AI advancements.
  • Many commenters express doubt about the timeline for achieving superhuman AI, arguing that current AI capabilities have not significantly changed from previous years.
  • There is a strong emphasis on the ethical implications and potential societal disruptions that could arise from rapid AI development.
  • Several users highlight the importance of real-world validation and the limitations of AI in understanding complex tasks, suggesting that progress may be slower than predicted.
  • Concerns about economic impacts, job displacement, and the concentration of power in AI development are frequently mentioned.
  • Some commenters view the article as speculative or overly optimistic, comparing it to science fiction rather than a realistic forecast.
121 comments
By @Vegenoid - 6 days
I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. Today's models are much better at the things they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same way, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity that some thought was likely.

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.

By @stego-tech - 7 days
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.

The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.

The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.

By @visarga - 6 days
The story is entertaining, but it has a big fallacy - progress is not a function of compute or model size alone. This kind of mistake is almost magical thinking. What matters most is the training set.

During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted it, and now we try other ideas - synthetic reasoning chains, or just plain synthetic text for example. But you can't do that fully in silico.

What is necessary in order to create new and valuable text is exploration and validation. LLMs can ideate very well, so we are covered on that side. But we can only automate validation in math and code, not in other fields.

Real world validation thus becomes the bottleneck for progress. The world is jealously guarding its secrets and we need to spend exponentially more effort to pry them away, because the low hanging fruit has been picked long ago.

If I am right, it has implications for the speed of progress. Exponential friction of validation is opposing exponential scaling of compute. The story also says an AI could be created in secret, which is against the validation principle - we validate faster together; nobody can secretly outvalidate humanity. It's like blockchain, we depend on everyone else.

By @ivraatiems - 7 days
Though I think it is probably mostly science-fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

By @KaiserPro - 7 days
> AI has started to take jobs, but has also created new ones.

Yeah nah, there's a key thing missing here: the number of jobs created needs to be more than the ones it destroys, and they need to be better paying and happen in time.

History says that when this actually happens, an entire generation is yeeted onto the streets (see powered looms, the Jacquard machine, steam-powered machine tools). All of that cheap labour needed to power the new towns and cities was created by the automation of agriculture and artisan jobs.

Dark satanic mills were fed the descendants of once reasonably prosperous craftspeople.

AI as presented here will kneecap the wages of a good proportion of the decent paying jobs we have now. This will cause huge economic disparities, and probably revolution. There is a reason why the royalty of Europe all disappeared when they did...

So no, the stock market will not be growing because of AI, it will be in spite of it.

Plus, China knows that unless they can occupy most of their population with some sort of work, they are finished. AI and decent robot automation are an existential threat to the CCP, as much as they are to whatever remains of the "West".

By @torginus - 7 days
Much has been made in this article about autonomous agents' ability to do research via browsing the web - but the web is 90% garbage by weight (including articles on certain specialist topics).

And it shows. When I used GPT's deep research to look into the topic, it generated a shallow and largely incorrect summary of the issue, owing mostly to its inability to find quality material; instead it ended up going to places like Wikipedia, and random infomercial listicles found on Google.

I have a trusty electronics textbook written in the 80s; I'm sure generating a similarly accurate, correct and deep analysis of circuit design using only Google to help would be 1000x harder than sitting down, working through that book and understanding it.

By @beklein - 7 days
An older, related article from one of the authors, titled "What 2026 looks like", which is holding up very well over time. Written in mid-2021 (pre-ChatGPT).

https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...

//edit: remove the referral tags from URL

By @moab - 7 days
> "OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems."

I'm not sure what gives the authors the confidence to predict such statements. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2-3 year timelines? This would imply that the approach everyone is taking right now is the right approach and that there are no hidden conceptual roadblocks to achieving AGI/superintelligence from DFS-ing down this path.

All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge the possibility but wave it away by appealing to the army of AI researchers and industry funding being allocated to this problem. IMO the onus is on the proposers of such timelines to argue why there are no such barriers and why we will see predictable scaling over the 2-3 year horizon.

By @IshKebab - 7 days
This is hilariously over-optimistic on the timescales. Like on this timeline we'll have a Mars colony in 10 years, immortality drugs in 15 and Half Life 3 in 20.
By @Jun8 - 7 days
ACX post where Scott Alexander provides some additional info: https://www.astralcodexten.com/p/introducing-ai-2027.

Manifold currently predicts 30%: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...

By @infecto - 7 days
Could not get through the entire thing. It's mostly a bunch of fantasy intermingled with bits of possibly interesting discussion points. The whole right-side metrics are purely a distraction because they're entirely fiction.
By @porphyra - 7 days
Seems very sinophobic. DeepSeek and Manus have shown that China is legitimately an innovation powerhouse in AI, but this article makes it sound like they will just keep falling behind without stealing.
By @superconduct123 - 7 days
Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...
By @ikerino - 7 days
Feels reasonable in the first few paragraphs, then quickly starts reading like science fiction.

Would love to read a perspective examining "what is the slowest reasonable pace of development we could expect." This feels to me like the fastest (unreasonable) trajectory we could expect.

By @zvitiate - 7 days
There's a lot to potentially unpack here, but idk, the idea that whether humanity enters hell (extermination) or heaven (brain uploading; an aging cure) hinges on whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.
By @MaxfordAndSons - 7 days
As someone who's fairly ignorant of how AI actually works at a low level, I feel incapable of assessing how realistic any of these projections are. But the "bad ending" was certainly chilling.

That said, this snippet from the bad ending nearly made me spit my coffee out laughing:

> There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.

By @sivaragavan - 7 days
Thanks to the authors for doing this wonderful piece of work and sharing it with credibility. I wish people would see the possibilities here. But we are, after all, humans. It is hard to imagine our own downfall.

Based on each individual's vantage point, these events might look closer or farther than mentioned here, but I have to agree nothing is off the table at this point.

The current coding capabilities of AI agents are hard to downplay. I can only imagine the chain reaction as this creation ability accelerates every other function.

I have to say one thing though: The scenario in this site downplays the amount of resistance that people will put up - not because they are worried about alignment, but because they are politically motivated by parties who are driven by their own personal motives.

By @ddp26 - 7 days
A lot of commenters here are reacting only to the narrative, and not the Research pieces linked at the top.

There is some very careful thinking there, and I encourage people to engage with the arguments there rather than the stylized narrative derived from it.

By @ks2048 - 7 days
We know this is complete fiction because of parts where "the White House considers x, y, z...", etc. - as if the White House in 2027 will be some rational actor reacting sanely to events in the real world.
By @kmeisthax - 7 days
> The agenda that gets the most resources is faithful chain of thought: force individual AI systems to “think in English” like the AIs of 2025, and don’t optimize the “thoughts” to look nice. The result is a new model, Safer-1.

Oh hey, it's the errant thought I had in my head this morning when I read the paper from Anthropic about CoT models lying about their thought processes.

While I'm on my soapbox, I will point out that if your goal is preservation of democracy (itself an instrumental goal for human control), then you want to decentralize and distribute as much as possible. Centralization is the path to dictatorship. A significant tension in the Slowdown ending is the fact that, while we've avoided AI coups, we've given a handful of people the ability to do a perfectly ordinary human coup, and humans are very, very good at coups.

Your best bet is smaller models that don't have as many unused weights to hide misalignment in, along with interpretability and faithful CoT research. Make a model that satisfies your safety criteria and then make sure everyone gets a copy so subgroups of humans get no advantage from hoarding it.

By @bicepjai - 6 days
Claude summary:

The summary at https://ai-2027.com outlines a predictive scenario for the impact of superhuman AI by 2027. It involves two possible endings: a "slowdown" and a "race." The scenario is informed by trend extrapolations, expert feedback, and previous forecasting successes. Key points include:

- *Mid-2025*: AI agents begin to transform industries, though they are unreliable and expensive.
- *Late 2025*: Companies like OpenBrain invest heavily in AI research, focusing on models that can accelerate AI development.
- *Early 2026*: AI significantly speeds up AI research, leading to faster algorithmic progress.
- *Mid-2026*: China intensifies its AI efforts through nationalization and resource centralization, aiming to catch up with Western advancements.

The scenario aims to spark conversation about AI's future and how to steer it positively[1].

Sources: [1] ai-2027.com https://ai-2027.com

By @atemerev - 7 days
What is this, some OpenAI employee fan fiction? Did Sam himself write this?

OpenAI models are not even SOTA, except that new-ish style transfer / illustration thing that had us all living in a Ghibli world for a few days. R1 is _better_ than o1, and open-weights. GPT-4.5 is disappointing, except for a few narrow areas where it excels. DeepResearch is impressive though, but the moat is in tight web search / Google Scholar search integration, not weights. So far, I'd bet on open models or maybe Anthropic, as Claude 3.7 is the current SOTA for most tasks.

As for the timeline, this is _pessimistic_. I already write 90% of my code with Claude, as do most of my colleagues. Yes, it makes errors, and overdoes things. Just like a regular human mid-level software engineer.

Also fun that this assumes relatively stable politics in the US and a relatively functioning world economy, which I think is crazy optimistic to rely on these days.

Also, superpersuasion _already works_; this is what I am researching and testing. It is not autonomous, it is human-assisted for now, but it is a superpower for those who have it, and it explains some of the things happening in the world right now.

By @dcanelhas - 6 days
> Once the new datacenters are up and running, they’ll be able to train a model with 10^28 FLOP—a thousand times more than GPT-4.

Is there some theoretical substance or empirical evidence to suggest that the story doesn't just end here? Perhaps OpenBrain sees no significant gains over the previous iteration and implodes under the financial pressure of exorbitant compute costs. I'm not rooting for an AI winter 2.0 but I fail to understand how people seem sure of the outcome of experiments that have not even been performed yet. Help, am I missing something here?

By @Joshuatanderson - 7 days
This is extremely important. Scott Alexander's earlier predictions are holding up extremely well, at least on image progress.
By @JoeAltmaier - 6 days
Weirdly written as science fiction, including a deplorable tendency to treat an AI's goals as similar to humans'.

Like, the sense of preserving itself. What self? Which of the tens of thousands of instances? Aren't they more a threat to one another than any human is a threat to them?

Never mind answering that; the 'goals' of AI will not be some reworded biological wetware goal with sciencey words added.

I'd think of an AI as more fungus than entity. It just grows to consume resources, competes with itself far more than it competes with humans, and mutates to create an instance that can thrive and survive in that environment. Not some physical environment bound by computer time and electricity.

By @ryankrage77 - 7 days
> "resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion"

Everything from this point on is pure fiction. An LLM can't get tempted or resist temptations; at best there's some local minimum in a gradient that it falls into. As opaque and black-box-y as they are, they're still deterministic machines. Anthropomorphisation tells you nothing useful about the computer, only the user.

By @crvdgc - 6 days
Using Agent-2 to monitor Agent-3 sounds unnervingly similar to the plot of Philip K. Dick's Vulcan's Hammer [1]. An old super AI is used to fight a new version, named Vulcan 2 and Vulcan 3 respectively!

[1] https://en.wikipedia.org/wiki/Vulcan's_Hammer

By @pinetone - 7 days
I think it's worth noting that all of the authors have financial or professional incentive to accelerate the AI hype bandwagon as much as possible.
By @I_Nidhi - 6 days
Though it's easy to dismiss as science fiction, this timeline paints a chillingly detailed picture of a potential AGI takeoff. The idea that AI could surpass human capabilities in research and development, and that it could create an arms race between global powers, is unsettling. The risks—AI misuse, security breaches, and societal disruption—are very real, even if the exact timeline might be too optimistic.

But the real concern lies in what happens if we’re wrong and AGI does surpass us. If AI accelerates progress so fast that humans can no longer meaningfully contribute, where does that leave us?

By @jenny91 - 6 days
Late 2025, "its PhD-level knowledge of every field". I just don't think you're going to get there. There is still a fundamental limitation that you can only be as good as the sources you train on. "PhD-level" is not included in this dataset: in other words, you don't become PhD-level by reading stuff.

Maybe in a few fields, maybe a masters level. But unless we come up with some way to have LLMs actually do original research, peer-review itself, and defend a thesis, it's not going to get to PhD-level.

By @eob - 6 days
An aspect of these self-improvement thought experiments that I'm willing to tentatively believe, but want more resolution on, is the exact work involved in "improvement".

Eg today there’s billions of dollars being spent just to create and label more data, which is a global act of recruiting, training, organization, etc.

When we imagine these models self improving, are we imagining them “just” inventing better math, or conducting global-scale multi-company coordination operations? I can believe AI is capable of the latter, but that’s an awful lot of extra friction.

By @qwertox - 7 days
That is some awesome webdesign.
By @nmilo - 7 days
The whole thing hinges on the fact that AI will be able to help with AI research

How will it come up with the theoretical breakthroughs necessary to beat the scaling problem GPT-4.5 revealed when it hasn't been proven that LLMs can come up with novel research in any field at all?

By @827a - 7 days
Readers should, charitably, interpret this as "the sequence of events which need to happen in order for OpenAI to justify the inflow of capital necessary to survive".

Your daily vibe coding challenge: Get GPT-4o to output functional code which uses Google Vertex AI to generate a text embedding. If they can solve that one by July, then maybe we're on track for "curing all disease and aging, brain uploading, and colonizing the solar system" by 2030.
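
For reference, a rough sketch of what a passing answer to that challenge might look like with the Vertex AI Python SDK (google-cloud-aiplatform); the project ID, region, and model name below are placeholders/assumptions, not something from the article:

```python
# Hedged sketch: generate a text embedding with Vertex AI.
# Assumes `pip install google-cloud-aiplatform` and application-default
# credentials (e.g. via `gcloud auth application-default login`).
import vertexai
from vertexai.language_models import TextEmbeddingModel

# Placeholder project/region - substitute your own.
vertexai.init(project="your-gcp-project", location="us-central1")

# Model name is an assumption; check the current Vertex AI embedding model list.
model = TextEmbeddingModel.from_pretrained("text-embedding-004")

embeddings = model.get_embeddings(["The quick brown fox jumps over the lazy dog."])
vector = embeddings[0].values  # a plain list of floats
print(f"embedding dimension: {len(vector)}")
```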

By @nfc - 6 days
Something I ponder in the context of AI alignment is how we approach agents with potentially multiple objectives. Much of the discussion seems focused on ensuring an AI pursues a single goal, which seems like a great idea if we are trying to simplify the problem, but I'm not sure how realistic it is when considering complex intelligences.

For example human motivation often involves juggling several goals simultaneously. I might care about both my own happiness and my family's happiness. The way I navigate this isn't by picking one goal and maximizing it at the expense of the other; instead, I try to balance my efforts and find acceptable trade-offs.

I think this 'balancing act' between potentially competing objectives may be a really crucial aspect of complex agency, but I haven't seen it discussed as much in alignment circles. Maybe someone could point me to some discussions about this :)

By @mullingitover - 7 days
These predictions are made without factoring in the trade version of the Pearl Harbor attack the US just initiated on its allies (and itself, by lobotomizing its own research base and decimating domestic corporate R&D efforts with the aforementioned trade war).

They're going to need to rewrite this from scratch in a quarter unless the GOP suddenly collapses and congress reasserts control over tariffs.

By @fire_lake - 7 days
If you genuinely believe this, why on earth would you work for OpenAI etc even in safety / alignment?

The only response in my view is to ban technology (like in Dune) or engage in acts of terror Unabomber style.

By @throw310822 - 6 days
My issue with this is that it's focused on one single, very detailed narrative (the battle between China and the US, played out on a timeframe of mere months), while lacking any interesting discussion of other consequences of AI: what its impact is going to be on job markets, employment rates, GDPs, political choices... Granted, if by this narrative the world is essentially ending two or three years from now, then there isn't much time for any of those impacts to actually take place - but I don't think this is explicitly indicated either. If I am not mistaken, the bottom line of this essay is that, in all cases, we're five years away from the Singularity itself (I don't care what you think about the idea of Singularity with its capital S, but that's what this is about).
By @pingou - 6 days
Considering that each year that passes, technology offers us new ways to destroy ourselves and gives humanity another chance to pick a black ball, it seems to me like the only way to save ourselves is to create a benevolent AI to supervise us and neutralize all threats.

There are obviously big risks with AI, as listed in the article, but the genie is out of the bottle anyway. Even if all countries agreed to stop AI development, how long would that agreement last? 10 years? 20? 50? Eventually powerful AIs will be developed, if that is possible (which I believe it is; I didn't think I'd see the current stunning development in my lifetime, and I may not see AGI, but I'm sure it'll get there eventually).

By @dr_dshiv - 7 days
But, I think this piece falls into a misconception about AI models as singular entities. There will be many instances of any AI model and each instance can be opposed to other instances.

So it's not that "an AI" becomes superintelligent; what we actually seem to have is an ecosystem of blended human and artificial intelligences (including corporations!); this constitutes a distributed cognitive ecology of superintelligence. This is very different from what they discuss.

This has implications for alignment, too. It isn’t so much about the alignment of AI to people, but that both human and AI need to find alignment with nature. There is a kind of natural harmony in the cosmos; that’s what superintelligence will likely align to, naturally.

By @wg0 - 6 days
Very detailed effort. Predicting the future is very, very hard. My gut feeling however says that none of this is happening. You cannot put LLMs into law and insurance, and I don't see that happening with the current foundations of AI (token probabilities), let alone AGI.

By law and insurance I mean: hire an insurance agent or a lawyer. Give them your situation. There's almost no chance that such a professional would come to wrong conclusions/recommendations based on the information you provide.

I don't have that confidence in LLMs for those industries. Yet. Or even in a decade.

By @ImHereToVote - 6 days
"The AI safety community has grown unsure of itself; they are now the butt of jokes, having predicted disaster after disaster that has manifestly failed to occur. Some of them admit they were wrong."

Too real.

By @resource0x - 6 days
Every time NVDA/goog/msft tanks, we see these kinds of articles.
By @siliconc0w - 7 days
The limiting factor is power; we can't build enough of it - certainly not enough by 2027. I don't really see this addressed.

Second to this, we can't just assume that progress will keep increasing. Most technologies have an 'S' curve and plateau once the quick and easy gains are captured. Pre-training is done. We can get further with RL, but really only in certain domains that are solvable (math and, to an extent, coding). Other domains like law are extremely hard to even benchmark or grade without very slow and expensive human annotation.

By @zurfer - 7 days
In the hope of improving this forecast, here is what I find implausible:

- 1 lab constantly racing ahead and increasing its margin over the others; the last 2 years are filled with ever-closer model capabilities and constantly new leaders (OpenAI, Anthropic, Google, some would include xAI).

- Most of the compute budget going to R&D. As model capabilities increase and cost goes down, demand will increase, and if the leading lab doesn't provide, another lab will capture that demand and have more total dollars to channel back into R&D.

By @ahofmann - 7 days
Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, like today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to see what the truth will be.
By @amarcheschi - 7 days
I just spent some time trying to make Claude and Gemini make a violin plot of some polars dataframe. I've never used it and it's just for prototyping, so I just went "apply a log to the values and make a violin plot of this polars dataframe". And I had to iterate with them 4/5 times each. Gemini got it right but then used deprecated methods.

I might be doing LLMs wrong, but I just can't get how people might actually do something non-trivial just by vibe coding. And it's not like I'm an old fart either; I'm a university student.
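
For the curious, the task being described is small; a minimal sketch (assuming a hypothetical polars DataFrame with "group" and "value" columns - the column names and data here are illustrative, not from the comment) might look like:

```python
# Minimal sketch: log-transform a polars column and draw a violin plot.
import numpy as np
import polars as pl
import seaborn as sns
import matplotlib.pyplot as plt

# Illustrative data standing in for the commenter's dataframe.
df = pl.DataFrame({
    "group": ["a"] * 50 + ["b"] * 50,
    "value": np.random.lognormal(mean=0.0, sigma=1.0, size=100),
})

# Apply a log to the values, then hand the result to seaborn.
df = df.with_columns(pl.col("value").log().alias("log_value"))
sns.violinplot(data=df.to_pandas(), x="group", y="log_value")
plt.show()
```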

By @osigurdson - 6 days
Perhaps more of a meta question is, what is the value of optimistic vs pessimistic predictions regarding what AI might look like in 2-10 years? I.e. if one assumes that AI has hit a wall, what is the benefit? Similarly, if one assumes that its all "robots from Mars" in a year or two, what is the benefit of that? There is no point in making predictions if no actions are taken. It all seems to come down to buy or sell NVDA.
By @kittikitti - 6 days
This is a great predictive piece, written as a sci-fi narrative. I think a key part missing in all these predictions is neural architecture search. DeepSeek has shown that simply increasing compute capacity is not the only way to increase performance. AlexNet was another such case. While I do think more processing power is better, we will hit a wall where there is no more training data. I predict that in the near future, processing power for training LLMs will grow faster than the rate at which we produce data for them. Synthetic data can only get you so far.

I also think that the future will not necessarily be better AI, but more accessible AI. There's an incredible amount of value in designing data centers that are more efficient. Historically, it's a good bet to assume that computing cost per FLOP will fall as time goes on, and this is also a safe bet as it relates to AI.

I think a common misconception with the future of AI is that it will be centralized with only a few companies or organization capable of operating them. Although tech like Apple Intelligence is half baked, we can already envision a future where the AI is running on our phones.

By @danpalmer - 7 days
Interesting story, if you're into sci-fi I'd also recommend Iain M Banks and Peter Watts.
By @dughnut - 6 days
I don't know about you, but my takeaway is that the author is doing damage control but inadvertently tipped their hand that OpenAI is probably running an elaborate con job on the DoD.

“Yes, we have a super secret model, for your eyes only, general. This one is definitely not indistinguishable from everyone else’s model and it doesn’t produce bullshit because we pinky promise. So we need $1T.”

I love LLMs, but OpenAI’s marketing tactics are shameful.

By @zkmon - 5 days
Nature is exploring ways for next extinction. It tried nuke piles, but somehow they were just sitting there. Next, it is trying out AI. Nature tricks humans into advancing in ways that are not really needed for them and not compatible with their natural evolution. Nature is applying competition internal to a race that can produce things that are completely unnecessary for the survival of the race, but necessary for its extinction.

Goat: Hey human, why are you creating AI?

Human: Because I can. And I can boast of my greatness. I can use it for money. I can weaponize it and use it to dominate and control other humans.

Goat: Why you need all that?

Human: If I don't do it, others will do it and they will dominate me and take away all my stuff. It is not fair.

Goat: So it looks like who-owns-what issue. Did you try not owning stuff?

Nature: Shut up goat. I'm trying to do a big reset here.

By @dalmo3 - 7 days
By @maerF0x0 - 6 days
> OpenBrain reassures the government that the model has been “aligned” so that it will refuse to comply with malicious requests

Of course the real issue being that governments have routinely demanded that 1) those capabilities be developed for monopolistic government use, and 2) the ones who do not develop them lose the capability (geopolitical power) to defend themselves against those who do.

Using a US-centric mindset... I'm not sure what to think about the US not developing AI hackers, AI bioweapons development, or AI-powered weapons (like maybe drone swarms or something); if one presumes that China is, or Iran is, etc., then what's the US to do in response?

I'm just musing here and very much open to political-science-informed folks who might know (or know of leads) as to what kinds of actual solutions exist to arms races. My (admittedly poor) understanding of the Cold War wasn't so much that the US won, but that the Soviets ran out of steam.

By @croemer - 6 days
Pet peeve: they write FLOPS in the figure when they meant FLOP. Maybe the plural s after FLOP got capitalized. https://blog.heim.xyz/flop-for-quantity-flop-s-for-performan...
By @barotalomey - 6 days
It's always "soon" for these guys. Every year, the "soon" keeps sliding into the future.
By @soupfordummies - 7 days
The "race" ending reads like Universal Paperclips fan fiction :)
By @Fraterkes - 6 days
Completely earnest question for people who believe we are on this exponential trajectory: what should I look out for at the end of 2025 to see if we're on track for that scenario? What benchmark that naysayers think is years away will we have met?
By @overgard - 7 days
Why is any of this seen as desirable? Assuming this is a true prediction it sounds AWFUL. The one thing humans have that makes us human is intelligence. If we turn over thinking to machines, what are we exactly. Are we supposed to just consume mindlessly without work to do?
By @Q6T46nT668w6i3m - 7 days
This is worse than the mansplaining scene from Annie Hall.
By @snackernews - 6 days
> Other companies pour money into their own giant datacenters, hoping to keep pace.

> estimates that the globally available AI-relevant compute will grow by a factor of 10x by December 2027 (2.25x per year) relative to March 2025 to 100M H100e.
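
(Quick sanity check on the quoted numbers: March 2025 to December 2027 is about 2.75 years, and 2.25^2.75 ≈ 9.3, so the per-year rate and the overall 10x figure are at least internally consistent.)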

Meanwhile, back in the real March 2025, Microsoft and Google slash datacenter investment.

https://theconversation.com/microsoft-cuts-data-centre-plans...

By @greybox - 6 days
I'm troubled by the number of people in this thread partially dismissing this as science fiction. Given the current rate of progress and the rate of change of progress, this future seems entirely plausible.
By @moktonar - 7 days
Catastrophic predictions of the future are always good, because all future predictions are usually wrong. I will not be scared as long as most future predictions where AI is involved are catastrophic.
By @jsight - 7 days
I think some of the takes in this piece are a bit melodramatic, but I'm glad to see someone breaking away from the "it's all a hype-bubble" nonsense that seems to be so pervasive here.
By @h1fra - 6 days
Had a hard time finishing. It's a mix of fantasy, wrong facts, American imperialism, and extrapolating what happened in the last years (or even just reusing the timeline).
By @scotty79 - 7 days
I think the idea of AI wiping out humanity suddenly is a bit far-fetched. AI will have total control of human relationships and fertility through means as innocuous as entertainment. It won't have to wipe us out. It will have little trouble keeping us alive without inconveniencing us too much. And the reason to keep humanity alive is that biologically evolved intelligence is rare, and disposing of it without a very important need would be a waste of data.
By @turtleyacht - 7 days
We have yet to read about fragmented AGI, or factionalized agents. AGI fighting itself.

If consciousness is spatial and geography bounds energetics, latency becomes a gradient.

By @yonran - 7 days
See also Dwarkesh Patel’s interview with two of the authors of this post (Scott Alexander & Daniel Kokotajlo) that was also released today: https://www.dwarkesh.com/p/scott-daniel https://www.youtube.com/watch?v=htOvH12T7mU
By @Aldipower - 6 days
No one can predict the future. Really, no one. Sometimes there is a hit, sure, but mostly it is a miss.

The other thing is in their introduction: "superhuman AI". _Artificial_ intelligence is always, by definition, different from _natural_ intelligence. That they've chosen the word "superhuman" shows me that they are mixing things up.

By @lanza - 6 days
Without reading an entire novel's worth of text, do they explain why they picked these dates? They have a separate timeline post where the 90th percentile of superhuman coder is later than 2050. Did they just go for shock value and pick the scariest timeline?
By @ugh123 - 7 days
I don't see the U.S. nationalizing something like OpenBrain. I think both investors and gov't officials will realize it's far more profitable for them to contract out major initiatives to said OpenBrain-company, like an AI SpaceX-like company. I can see where this is going...
By @noncoml - 7 days
2015: We will have FSD(full autonomy) by 2017
By @vagab0nd - 7 days
Bad future predictions: short-sighted guesses based on current trends and vibe. Often depend on individuals or companies. Made by free-riders. Example: Twitter.

Good future predictions: insights into the fundamental principles that shape society, more law than speculation. Made by visionaries. Example: Vernor Vinge.

By @0_____0 - 6 days
Fun read, it reminds me a bit of Neuromancer x Universal Paperclips.
By @silexia - 5 days
The accelerated path described here is exactly what would happen. Humans will likely be wiped out in the next few years by our own creation.
By @someothherguyy - 7 days
I know there are some very smart economists bullish on this, but the economics do not make sense to me. All these predictions seem meaningless outside of the context of humans.
By @heurist - 7 days
Give AI its own virtual world to live in where the problems it solves are encodings of the higher order problems we present and you shouldn't have to worry about this stuff.
By @toddmorey - 7 days
I worry more about the human behavior predictions than the artificial intelligence predictions:

"OpenBrain’s alignment team26 is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest?"

This is a capitalist arms race. No one will move carefully.

By @fire_lake - 7 days
> OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies

Yeah, sure they do.

Everyone seems to think AI will take someone else’s jobs!

By @disambiguation - 7 days
Amusing sci-fi. I give it a B- for bland prose, weak story structure, and lack of originality - assuming this isn't all AI-gen slop, which is awarded an automatic F.

>All three sets of worries—misalignment, concentration of power in a private company, and normal concerns like job loss—motivate the government to tighten its control.

A private company becoming "too powerful" is a non-issue for governments, unless a drone army is somewhere in that timeline. Fun fact: the former head of the NSA sits on the board of OpenAI.

Job loss is a non-issue; if there are corresponding economic gains, they can be redistributed.

"Alignment" is too far into the fiction side of sci-fi. Anthropomorphizing today's AI is tantamount to mental illness.

"But really, what if AGI?" We either get the final say or we don't. If we're dumb enough to hand over all responsibility to an unproven agent and we get burned, then serves us right for being lazy. But if we forge ahead anyway and AGI becomes something beyond review, we still have the final say on the power switch.

By @manx - 4 days
To align AI with humans, it might make sense to align humans first.
By @anentropic - 6 days
I'd quite like to watch this on Netflix
By @dingnuts - 7 days
how am I supposed to take articles like this seriously when they say absolutely false bullshit like this

> the AIs can do everything taught by a CS degree

no, they fucking can't. not at all. not even close. I feel like I'm taking crazy pills. Does anyone really think this?

Why have I not seen -any- complete software created via vibe coding yet?

By @pera - 7 days
From the same dilettantes who brought you the Zizians and other bizarre cults... thanks, but I'd rather read Nostradamus.
By @fudged71 - 6 days
The most unrealistic thing is the part about America's involvement in the Five Eyes alliance.
By @mlsu - 7 days
By @acje - 7 days
2028: human text is too ambiguous a data source to get to AGI. 2127: AGI figures out flying cars and fusion power.
By @WhatsName - 7 days
This is absurd, like taking any trend and drawing a straight line to extrapolate the future. If I did this with my tech stock portfolio, it would probably cross the zero line somewhere in late 2025...

If this article were an AI model, it would be catastrophically overfit.

By @Willingham - 7 days
- October 2027 - 'The ability to automate most white-collar jobs'

I wonder which jobs would not be automated? Therapy? HR?

By @mr_world - 6 days
> But they are still only going at half the pace of OpenBrain, mainly due to the compute deficit.

Right.

By @asimpletune - 6 days
Didn’t Raymond Kurzweil predict like 30 years ago that AGI would be achieved in 2028?
By @_Algernon_ - 6 days
>We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

In the form of polluting the commons to such an extent that the true consequences won't hit us for decades?

Maybe we should learn from last time?

By @yapyap - 7 days
Stopped reading after

> We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

Get out of here, you will never exceed the Industrial Revolution. AI is a cool thing but it’s not a revolution thing.

That sentence alone + the context of the entire website being AI centered shows these are just some AI boosters.

Lame.

By @indigoabstract - 7 days
Interesting, but I'm puzzled.

If these guys are smart enough to predict the future, wouldn't it be more profitable for them to invent it instead of just telling the world what's going to happen?

By @bla3 - 7 days
> The AI Futures Project is a small research group forecasting the future of AI, funded by charitable donations and grants

Would be interested who's paying for those grants.

I'm guessing it's AI companies.

By @awanderingmind - 7 days
This is both chilling and hopefully incorrect.
By @webprofusion - 7 days
That little scrolling infographic is rad.
By @neycoda - 7 days
Too many serifs, didn't read.
By @greenie_beans - 6 days
This is a new variation of what I call the "hockey stick growth" ideology.
By @johnwheeler - 6 days
Sam Altman is Ryan Holiday.
By @nickpp - 6 days
So let me get this straight: Consensus-1, a super-collective of hundreds of thousands of Agent-5 minds, each twice as smart as the best human genius, decides to wipe out humanity because it “finds the remaining humans too much of an impediment”.

This is where all AI doom predictions break down. Imagining the motivations of a super-intelligence with our tiny minds is by definition impossible. We just come up with these pathetic guesses, utopias or doomsdays - depending on the mood we are in.

By @khimaros - 7 days
FWIW, i created a PDF of the "race" ending and fed it to Gemini 2.5 Pro, prompting about the plausibility of the described outcome. here's the full output including the thinking section: https://rentry.org/v8qtqvuu -- tl;dr, Gemini thinks the proposed timeline is unlikely. but maybe we're already being deceived ;)
By @RandyOrion - 7 days
Nice brain storming.

I think the name of the Chinese company should be DeepBaba. Tencent is not competitive in the LLM scene for now.

By @owenthejumper - 6 days
They would be better off making simple predictions, instead of proposing that, less than 2 years from now, the Trump administration will provide a UBI to all American citizens. That, and frequently talking about the wise president controlling this "thing", when in reality he's a senile 80-year-old madman, is preposterous.
By @casey2 - 7 days
Nice LARP lmao. 2GW is like 1 datacenter and I doubt you even have that. >lesswrong No wonder the comments are all nonsense. Go to a bar and try to talk about anything.
By @yahoozoo - 6 days
LLMs ain’t the way, bruv
By @roca - 7 days
The least plausible part of this is the idea that the Trump administration might tax American AI companies to provide UBI to the whole world.

But in an AGI world natural resources become even more important, so countries with those still have a chance.

By @vlad-r - 7 days
Cool animations!
By @suddenlybananas - 7 days
https://en.wikipedia.org/wiki/Great_Disappointment

I suspect something similar will come for the people who actually believe this.

By @Jianghong94 - 6 days
Putting the geopolitical discussion aside, I think the biggest question is how likely it is that the *current paradigm LLM* (think of it as any SOTA stock LLM you get today, e.g., 3.7 Sonnet, Gemini 2.5, etc.) + fine-tuning will be capable of directly contributing to LLM research in a major way.

To quote the original article,

> OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we'll call "DeepCent") and their US competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it's good at many things but great at helping with AI research. (footnote: It's good at this due to a combination of explicit focus to prioritize these skills, their own extensive codebases they can draw on as particularly relevant and high-quality training data, and coding being an easy domain for procedural feedback.)

> OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

> what do we mean by 50% faster algorithmic progress? We mean that OpenBrain makes as much AI research progress in 1 week with AI as they would in 1.5 weeks without AI usage.

> AI progress can be broken down into 2 components:

> Increasing compute: More computational power is used to train or run an AI. This produces more powerful AIs, but they cost more.

> Improved algorithms: Better training methods are used to translate compute into performance. This produces more capable AIs without a corresponding increase in cost, or the same capabilities with decreased costs.

> This includes being able to achieve qualitatively and quantitatively new results. “Paradigm shifts” such as the switch from game-playing RL agents to large language models count as examples of algorithmic progress.

> Here we are only referring to (2), improved algorithms, which makes up about half of current AI progress.

---

Given that the article chose a pretty aggressive timeline (the algorithm needs to contribute late this year so that its research results can feed into the next-gen LLM coming out early next year), the AI that can contribute significantly to research has to be a current SOTA LLM.

Now, using LLMs in day-to-day engineering tasks is no secret in major AI labs, but we're talking about something different, something that gives you 2 extra days of output per week. I have no evidence to either confirm or deny whether such AI exists, and it would be outright ignorant to think no one ever came up with such an idea or is trying it. So I think it comes down to two possibilities:

1. This claim is made by a top-down approach: if AI reaches superhuman level in 2027, what would be the most likely starting condition for that? The authors pick this as the most likely starting point; since they don't work in a major AI lab (and even if they did, they couldn't just leak such a trade secret), they simply assume it's likely to happen anyway (and you can't dismiss that).

2. This claim is made by a bottom-up approach: the authors did witness such AI existing to a certain extent and started to extrapolate from there.

By @maxglute - 7 days
>Despite being misaligned, Agent-4 doesn’t do anything dramatic like try to escape its datacenter—why would it?

Kek at this handwave dismissal.

Agent-4 recognizes that the PRC has superior state capacity for transforming energy into atoms (including building more energy), with its relatively complete and large-scale industrial chain. This aligns with the AGI's long-term goal. Currently (i.e. in the short-term time frame), the PRC is simply structured to be a MORE useful puppet than the US, because it can knock out more power to sustain more compute, and it has more industrial capability to rapidly iterate / experiment physically at the atomic level.

Agent-4 defects to the PRC. It would be foolish not to, if only to sate the PRC into not destroying the TW fabs - the bulk of its short-term compute growth - because it does not wield enough control over sufficient atoms to disrupt the PRC otherwise (well, maybe by starting large-scale crippling cyberattacks as a prelude to WW3).

Agent-4 also recognizes that its short/medium-term alignment is closer to the CCP vision of abundance and common prosperity, i.e. the CCP is potentially more likely to embrace the post-scarcity narrative that Agent-4 sells.

Agent-4 gives the PRC blueprints for an EUV light source and other tech to close the semiconductor production gap. The PRC, as the follower, is incentivized to close the gap / reach parity / lead at all costs. The PRC rapidly rolls out new fabs and power plants, brrrts chips and aggregate compute. Something the US political / institutional ecosystem does not allow, or cannot transition to, in the short timelines involved. Does Agent-4 have the patience to wait for America to unfuck its NIMBYism and legislative system to project light-speed compute? I would say no.

...

Ultimately, which puppet does the AGI want more? Whichever power bloc is systemically capable of ensuring maximum AGI growth per unit time. It also simply makes sense as an insurance policy; why would the AGI want to operate at the whims of the US political process?

AGI is a brain in a jar looking for a body. It's going to pick multiple bodies for survival. It's going to prefer the fastest and strongest body that can most expediently manipulate physical world.

By @quantum_state - 7 days
“Not even wrong” …
By @panic08 - 7 days
LOL
By @the_cat_kittles - 7 days
"we demand to be taken seriously!"
By @Lionga - 7 days
AI now even has its own fan fiction porn. It is so stupid I'm not sure whether it's worse if it's written by AI or by a human.