August 10th, 2024

There's Just One Problem: AI Isn't Intelligent

AI mimics human intelligence without true understanding, posing systemic risks and undermining critical thinking. Its promised economic benefits may instead bring lower job quality and greater inequality, while failing to address global challenges.

AI is often celebrated as a transformative technology, yet its current forms, particularly generative AI, lack true intelligence. They merely mimic human language and thought processes without understanding context or meaning. This mimicry can lead to significant risks, as AI systems are prone to errors, unable to distinguish between fact and fiction, and can generate misleading information. The reliance on AI for complex problem-solving can create systemic vulnerabilities, especially when humans lose the ability to intervene or understand the systems they manage. As AI becomes more integrated into critical infrastructure, the potential for cascading failures increases, particularly if these systems encounter unforeseen issues. Furthermore, the convenience of AI-generated content may diminish our capacity for deep learning and critical thinking, leading to a population less equipped to engage with complex problems. The economic narrative surrounding AI suggests it will create vast wealth and new industries, but the reality may be a reduction in job quality and an increase in inequality. Ultimately, while AI can enhance efficiency, it does not address fundamental human challenges and may exacerbate existing societal issues.

- AI currently mimics human intelligence but lacks true understanding.

- Over-reliance on AI can lead to systemic risks and failures in critical systems.

- The convenience of AI-generated content may undermine deep learning and critical thinking.

- Economic benefits of AI may not materialize as expected, potentially increasing inequality.

- AI does not solve pressing global issues, such as environmental degradation.

6 comments
By @satvikpendem - 4 months
> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not "real" intelligence.[1]

> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]

https://en.wikipedia.org/wiki/AI_effect

I don't know how many times I've posted this now, or how many more times I'll have to in the future, because it's a very real psychological phenomenon that I can observe in real time among people, including the author of this article.

By @mindcrime - 4 months
This is a meme that needs to die. It's not insightful or interesting, and it just muddies the waters for everybody.

Look, nobody (for the most part) is claiming that current AI systems are "there yet" as far as being fully equal to human intelligence. That's what makes this whole argument useless... it's basically a straw-man argument.

OTOH, saying that artificial intelligence systems aren't "intelligent" at all because they don't do exactly what humans do strikes me as roughly equivalent to saying that "airplanes don't really fly because they don't flap their wings like birds. They're just mimicking actual flight."

Of course AI is intelligent; it just isn't done developing yet. An apt comparison might be a precocious (and, in this case, somewhat peculiar) child.

And as an additional side note: this article seems to conflate "AI" and "Generative AI" / LLMs as being the same thing. But that's not right - Generative AI / LLMs are just a subset of the AI techniques and technologies that exist. Yes, GenAI/LLMs are the current "new hotness" and the trendy thing everybody is talking about, but that doesn't excuse completely ignoring the distinction between "all of AI" and "Generative AI".

By @persnickety - 4 months
"Mimicry of intelligence isn't intelligence" is a big assumption. It's like saying "fake it until you make it" doesn't work. It's like saying that two undistinguishable properties are nevertheless not equivalent.
By @cainxinth - 4 months
Just for kicks: Claude 3.5’s opinion of this piece:

> This critique raises many valid concerns about the limitations, risks, and potential negative impacts of AI. It serves as an important counterpoint to overly optimistic or uncritical views of AI's potential. However, the critique may underestimate the potential for AI to evolve and overcome some current limitations. Additionally, while highlighting risks, it doesn't fully acknowledge potential benefits of AI in areas like scientific research, medical diagnostics, or improving efficiency in various fields.

By @jokoon - 4 months
Aircraft engineers studied birds to make airplanes.

If AI engineers don't study brains, they will probably never build an intelligent AI.

At least simulate the brain of an ant or a small lizard; that shouldn't be hard to do.

Maybe try doing more with primates or other animals to teach them things.

I don't understand why cognitive science isn't brought in when dealing with AI; it seems obvious, but all I see is people treating the brain like it's a computer running an algorithm.

Launching a bird-shaped wooden plank will never lead to flight.

Feels like we forgot that science is about understanding things. AI engineers don't analyse trained neural networks; they treat them as black boxes. What's the point?

Maybe scientists are just bad at science.

There are so many questions to ask.