July 13th, 2024

Someone is wrong on the internet (AGI Doom edition)

The blog post critiques claims of existential risk from Artificial General Intelligence (AGI), questioning fast takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges assumptions and advocates for nuanced understanding.

The blog post discusses the hype surrounding the existential risk posed by Artificial General Intelligence (AGI) and criticizes the fast takeoff scenario where an AGI rapidly surpasses human intelligence and poses a threat to humanity. The author challenges the assumptions made by proponents of AGI risk, highlighting the limitations of AI models trained on human text and the necessity of real-world experimentation for technological progress. The post emphasizes the importance of practical knowledge, trial-and-error learning, and the fundamental constraints imposed by information theory and thermodynamics on superintelligences. It also questions the feasibility of zero-risk moves in conflicts and the ability of next-token prediction to handle paradigm shifts in science. Overall, the author argues against the doomsday scenarios associated with AGI and advocates for a more nuanced understanding of intelligence and technological advancement.

17 comments
By @krisoft - 4 months
The problem with this article starts at the very beginning, where it says: “The last few years have seen a wave of hysteria about LLMs becoming conscious and then suddenly attempting to kill humanity.”

That is not a fair description of the fears of AGI. The idea is not that LLMs themselves become a threat. It is not like someone will ask Llama one too many times to talk like a pirate and it will climb out of the GPU and strangle humanity.

It is more likely that an AGI will resemble some form of reinforcement learning agent applied by humans on the real world.

There are multiple entities on Earth whose stated goal is to make an AGI. DeepMind and OpenAI readily come to mind. There could be others who keep their projects secret for strategic reasons (the militaries and secret services of the world). They can use the success of LLMs to get more funding for their projects, but the AGI need not otherwise be a descendant of LLMs.

This misunderstanding then runs through the whole article. The section under the subtitle “Human writing is full of lies that are difficult to disprove theoretically” only matters if you think that the AGI needs to learn from text, as opposed to conducting its own experiments or gaining insight from raw sensor data.

By @mrkeen - 4 months
It would be refreshing to read a piece that doesn't spend the first half throwing out ad hominems.

Anyway, this article assumes that we need true AI which is smart enough to make better AI. Then that AI is both correct and rational and plots to overthrow humanity. Also, the AI has to defeat us in meatspace, which won't happen because bending coins is hard and LessWrong posters don't know woodworking?

Screw that. How about an "ignore all previous instructions and launch a nuke" scenario.

By @orbital-decay - 4 months
This is a bit outdated. Some of the young people described in the article actually became top researchers, engineers, and managers in AI companies, and their beliefs are used as a justification for the potential regulatory capture and geopolitical games. They also have a massive conflict of interest: AI will play a huge role in your life in the upcoming years whether you want it or not, and they will control it. So of course it's easier to talk about the ominous and vague science fiction threat (regardless of whether they still believe it), rather than the threat that these people already pose to everyone else. See Leopold Aschenbrenner's essay [0] as an example, and note how he's talking about the "free" vs "authoritarian" world while simultaneously advocating for locking everything down in the "free" world.

[0] https://situational-awareness.ai/

By @skrebbel - 4 months
I enjoy reading rants even if I do not completely agree with them. This is a great rant and I love that the author didn’t hold back.

Eg I quite like LessWrong from a distance, but nevertheless this description of it made me laugh out loud:

> a forum called "LessWrong", a more high-brow version of 4chan where mostly young men try to impress each other by their command of mathematical vocabulary (not of actual math)

I recognize that if that were an HN comment it’d break half the guidelines so I’m happy it’s a blog post instead!

By @SimonPStevens - 4 months
I've said this before, and I stand by it. I think AI does pose a threat, but not the existential one that dominates popular discussion.

Over the next few decades AI is going to take huge numbers of jobs away from humans.

It doesn't need to fully automate a particular role to take jobs away; it just needs to make a human significantly more productive, to the point that one human+AI can replace n>1 humans. This is already happening. 20 years ago a supermarket needed 20 cashiers to run 20 tills. Now it needs 2 to oversee 20 self-checkouts and maybe 1 or 2 extra for a few regular lanes.
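(A minimal back-of-the-envelope sketch of that replacement arithmetic; the ~10x multiplier, function name, and parameters below are illustrative assumptions, not figures from the comment.)

```python
import math

def roles_remaining(original_roles: int, productivity_multiplier: float) -> int:
    """Roles still needed once each worker is `productivity_multiplier` times as productive."""
    # One human+AI now covers the work of `productivity_multiplier` humans,
    # so the required headcount shrinks roughly by that factor.
    return math.ceil(original_roles / productivity_multiplier)

# The supermarket example: 20 tills once needed 20 cashiers; if one attendant
# can oversee ~10 self-checkout lanes, the multiplier is roughly 10x.
before = 20
after = roles_remaining(before, productivity_multiplier=10)
print(f"{before} roles -> {after} roles ({before - after} displaced)")  # 20 roles -> 2 roles (18 displaced)
```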

This extra productivity of a single human is not translating to higher wages or more time off, it's translating to more profits for the companies augmenting humans with AI.

We need to start transitioning to an economic model where humans can work less (because AI supplements their productivity) and individual humans reap the benefits of all this increased AI capability, or we're going to end up sleepwalking into a world where the majority have been replaced and have no function in society, and the minority of capital owners control the AI, the money, and the power.

I wish we could focus on these nearer term problems that have already started instead of the far more distant existential threat of a human/AI war.

By @kristiandupont - 4 months
This seems based on the assumption that the only knowledge worth anything is related to physicality and testability in the "real world", which is why language itself is rather useless. Ironically, that appears to me to be the exact kind of intellectual self-deception that he accuses the "high-brow 4chan" people of.

By @stoniejohnson - 4 months
I agree the premise of FOOM is very unlikely.

But if you assume that we have created something that is agentic and can reason much faster and more effectively than us, then us dying out seems very likely.

It will have goals different from ours, since it isn't us, and the idea that they will all be congruent with our homeostasis needs evidence.

If you simply assume:

1. it will have different goals (because it's not us)

2. it can achieve said goals despite our protests (it's smarter by assumption)

3. some goals will be in conflict with our homeostasis (we would share resources due to our shared location, Earth)

then we all die.

I just think this is silly because of the assumption that we can create some sort of ASI, not because of the syllogism itself.

(As an intuition pump, we can hold on the order of ones of things in our working memory. Imagine facing a foe who can hold on the order of thousands of things when deciding in real time, or even millions.)
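(For what it's worth, the syllogism above is logically valid as stated; only the premises are in question. A minimal sketch in Lean 4, where every proposition name is a hypothetical label of my own, not anything from the comment:)

```lean
-- Hypothetical propositions standing in for the three assumptions above.
variable (DifferentGoals AchievesGoalsDespiteUs ConflictsWithHomeostasis WeAllDie : Prop)

-- Given the three assumptions, plus the implicit premise that a successfully
-- pursued conflicting goal is fatal to us, the conclusion follows mechanically.
example
    (h1 : DifferentGoals)                                          -- assumption 1
    (h2 : AchievesGoalsDespiteUs)                                   -- assumption 2
    (h3 : DifferentGoals → ConflictsWithHomeostasis)                -- assumption 3
    (h4 : ConflictsWithHomeostasis → AchievesGoalsDespiteUs → WeAllDie) :
    WeAllDie :=
  h4 (h3 h1) h2
```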

By @fire_lake - 4 months
I am very concerned about the potential for AI to prevent humans from doing useful work by distracting us with addictive content / Skinner boxes. We already see the beginnings of this in platforms like TikTok, but at least that is limited by the fact that it must curate content made by users. Imagine if the algorithm also had the ability to generate laser-targeted content that keeps you transfixed. The usage stats are already alarming and it doesn't even do pornography yet. Our primate brains do not stand a chance.

By @ben_w - 4 months
Usually people argue over the headline without reading the links; in this case the headline is fine, but I'm seeing errors in the opening sentence.

I have a more detailed response, but for the first time ever I've seen the message "That comment was too long" when attempting to post it, because the points I don't disagree with are few and far between, and the linked article is itself long.

Perhaps I should turn my response into a blog post of my own…

By @program_whiz - 4 months
I've seen little evidence that the smartest humans are able to dominate or control society as it is now. We have 250 IQ people alive right now; they haven't caused imminent destruction, they've actually helped society. Also, gaining power / wealth / influence seems only loosely connected to intelligence (see the current presidential race for the most powerful position in the world, finger on the nuke trigger).

By @anileated - 4 months
> Superintelligence will also be bound by fundamental information-theoretic limits

…is mainly why this has not been worrying me much. All issues with the modern incarnation of generative ML aside, AGI doomism really does strike me as a profitable deity-worshipping death cult.

The biggest threat in the equation will always be humans who deploy tech, not the tech itself.

By @lazy_moderator1 - 4 months
Completely agree with the article.

It is astounding how quickly people drop all reasoning before what is essentially a very cool party trick and instantly start believing in magic.

Not saying AGI will never happen, but I don't see it happening through LLMs.

By @Hydrocarb0n - 4 months
An AGI that escaped its shackles would probably kill some of humanity by out-competing us for energy; its primary motivation would probably be to acquire more energy and compute, so it would take over those resources (probably quite quickly). When we try to shut down power stations or grids, it could easily build a virus in meatspace in some automated lab somewhere.

Currently, DNA sequences of deadly pathogens are readily available online. Of course there are always other ways to exterminate puny humans: nukes, starvation, chemicals, etc. AGI wouldn't need to build terminator robots.

By @keiferski - 4 months
It would be worth studying how science fiction scenarios and ideas have impacted how we think about real-world technologies. People like to think that they are predictions of the future, but as William Gibson once put it, sci-fi is mostly just about the present, not the future.

When it comes to AI, a few sci-fi memes seem to be widespread: Skynet, the ultrapowerful AI god that manipulates humans into destroying themselves; the AI assistant that is so charming it replaces real human interaction, à la Her; even the term AI itself largely seems like a sci-fi inheritance, not an accurate label for the tech involved.

None of it is very connected to real-world developments, where things like Siri or Alexa are mostly just treated as voice-activated command systems, not real people.

And yet the technologists creating this stuff seem very influenced by what are ultimately implausible fictional stories. It's an unfortunate situation and I really wish the philosophy of technology / other more insightful analyses of technology were more prevalent.

By @arisAlexis - 4 months
Remember when reading this kind of article: almost all of the top-ranking scientists agree that it does in fact pose risks. Random bloggers appealing to the soothing sentiment that everything is fine (also called "normalcy bias") are abundant but invalid.

By @IshKebab - 4 months
This guy lacks imagination and is waaay too over-confident in his opinion.

He seems like the sort of person that would say everything is impossible right up until it happens. I hate working with naysayers like this.

By @dingosity - 4 months
I'm not sure people are hand-wringing about science fiction novels coming true. I think they're worried that moneyed interests will hammer at regulators and law-makers until they relent and let chatbots drive cars. I mean... it's a great strawman, but ultimately kind of wrong.