Someone is wrong on the internet (AGI Doom edition)
The blog post critiques claims of existential risk from Artificial General Intelligence (AGI), questioning fast-takeoff scenarios and emphasizing practical knowledge over doomsday predictions. It challenges common assumptions and advocates a more nuanced understanding.
The blog post discusses the hype surrounding the existential risk posed by Artificial General Intelligence (AGI) and criticizes the fast takeoff scenario where an AGI rapidly surpasses human intelligence and poses a threat to humanity. The author challenges the assumptions made by proponents of AGI risk, highlighting the limitations of AI models trained on human text and the necessity of real-world experimentation for technological progress. The post emphasizes the importance of practical knowledge, trial-and-error learning, and the fundamental constraints imposed by information theory and thermodynamics on superintelligences. It also questions the feasibility of zero-risk moves in conflicts and the ability of next-token prediction to handle paradigm shifts in science. Overall, the author argues against the doomsday scenarios associated with AGI and advocates for a more nuanced understanding of intelligence and technological advancement.
Related
'Superintelligence,' Ten Years On
Nick Bostrom's book "Superintelligence" from 2014 shaped the AI alignment debate, highlighting risks of artificial superintelligence surpassing human intellect. Concerns include misalignment with human values and skepticism about AI achieving sentience. Discussions emphasize safety in AI advancement.
Superintelligence–10 Years Later
Reflection on the impact of Nick Bostrom's "Superintelligence" book after a decade, highlighting AI evolution, risks, safety concerns, regulatory calls, and the shift towards AI safety by influential figures and researchers.
From GPT-4 to AGI: Counting the OOMs
The article discusses AI advancements from GPT-2 to GPT-4, highlighting progress towards Artificial General Intelligence by 2027. It emphasizes model improvements, automation potential, and the need for awareness in AI development.
That is not a fair description of the fears of AGI. The idea is not that LLMs themselves become a threat. It is not like someone will ask Llama one too many times to talk like a pirate and it will climb out of the GPU and strangle humanity.
It is more likely that an AGI will resemble some form of reinforcement learning agent applied by humans to the real world.
There are multiple entities on Earth whose stated goal is to make an AGI; DeepMind and OpenAI readily come to mind. There could be others who keep their projects secret for strategic reasons (militaries and secret services of the world). They can use the success of LLMs to get more funding for their projects, but the AGI need not otherwise be a descendant of LLMs.
This misunderstanding runs through the whole article. Take the subtitle “Human writing is full of lies that are difficult to disprove theoretically”, which only matters if you think the AGI needs to learn from text, as opposed to conducting its own experiments or gaining insight from raw sensor data.
Anyway, this article assumes that we need true AI which is smart enough to make better AI. Then that AI is both correct and rational and plots to overthrow humanity. Also, the AI has to defeat us in meatspace, which won't happen because bending coins is hard and LessWrong posters don't know woodworking?
Screw that. How about an "ignore all previous instructions and launch a nuke" scenario.
E.g. I quite like LessWrong from a distance, but nevertheless this description of it made me laugh out loud:
> a forum called "LessWrong", a more high-brow version of 4chan where mostly young men try to impress each other by their command of mathematical vocabulary (not of actual math)
I recognize that if that were an HN comment it’d break half the guidelines so I’m happy it’s a blog post instead!
Over the next few decades AI is going to take huge numbers of jobs away from humans.
It doesn't need to fully automate a particular role to take jobs away; it just needs to make a human significantly more productive, to the point that one human plus AI can replace n > 1 humans. This is already happening: 20 years ago a supermarket needed 20 cashiers to run 20 tills. Now it needs 2 to oversee 20 self-checkouts, plus maybe 1 or 2 extra for a few regular lanes.
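To put a rough number on that supermarket example (a toy back-of-the-envelope calculation using only the figures in the comment above, not real labour data):

```python
# Toy replacement-ratio calculation using the hypothetical supermarket figures above.
cashiers_before = 20      # staff once needed to run 20 tills
staff_after = 2 + 2       # 2 overseeing self-checkouts + ~2 on regular lanes

replacement_ratio = cashiers_before / staff_after
print(f"One augmented worker now covers the work of ~{replacement_ratio:.0f} cashiers")
# -> ~5, i.e. n > 1 in the "one human + AI replaces n humans" framing.
```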
This extra productivity of a single human is not translating into higher wages or more time off; it's translating into more profits for the companies augmenting humans with AI.
We need to start transitioning to an economic model where humans can work less (because AI supplements their productivity) and individual humans reap the benefits of all this increased AI capability, or we're going to sleepwalk into a world where the majority have been replaced and have no function in society, while the minority of capital owners control the AI, the money, and the power.
I wish we could focus on these nearer term problems that have already started instead of the far more distant existential threat of a human/AI war.
But if you assume that we have created something that is agentic and can reason much faster and more effectively than us, then us dying out seems very likely.
It will have goals different from ours, since it isn't us, and the idea that they will all be congruent with our homeostasis needs evidence.
If you simply assume:
1. it will have different goals (because it's not us)
2. it can achieve said goals despite our protests (it's smarter by assumption)
3. some goals will be in conflict with our homeostasis (we would share resources due to our shared location, Earth)
then we all die.
I just think this is silly because of the assumption that we can create some sort of ASI, not because of the syllogism that follows.
(As an intuition pump: we can hold on the order of a few things in our working memory. Imagine facing a foe who can hold on the order of thousands, or even millions, when deciding in real time.)
I have a more detailed response, but for the first time ever I've seen the message "That comment was too long" when attempting to post it, because the points I don't disagree with are few and far between, and the linked article is itself long.
Perhaps I should turn my response into a blog post of my own…
…is mainly why this has not been worrying me much. All issues with the modern incarnation of generative ML aside, AGI doomism really does strike me as a profitable, deity-worshipping death cult.
The biggest threat in the equation will always be humans who deploy tech, not the tech itself.
it is astounding how quickly people drop all reasoning before what is essentially a very cool party trick and instantly start believing in magic
not saying agi will never happen but i don't see it happening through llms
So it would take over those resources (probably quite quickly). When we try to shut down power stations or grids, it could then easily build a virus in meatspace in some automated lab somewhere.
DNA sequences of deadly pathogens are already readily available online. Of course there are always other ways to exterminate puny humans: nukes, starvation, chemicals, etc. An AGI wouldn't need to build terminator robots.
When it comes to AI, a few sci-fi memes seem to be widespread: Skynet, the ultrapowerful AI god that manipulates humans into destroying themselves; the AI assistant that is so charming it replaces real human interaction, à la Her; even the term AI itself largely seems like a sci-fi inheritance, not an accurate label for the tech involved.
None of it is very connected to real-world developments, where things like Siri or Alexa are mostly just treated as voice-activated command systems, not real people.
And yet the technologists creating this stuff seem very influenced by what are ultimately implausible fictional stories. It's an unfortunate situation and I really wish the philosophy of technology / other more insightful analyses of technology were more prevalent.
He seems like the sort of person that would say everything is impossible right up until it happens. I hate working with naysayers like this.