MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating AI
MIT robotics pioneer Rodney Brooks cautions against overhyping generative AI, emphasizing its limitations compared to human abilities. He advocates for practical integration in tasks like warehouse operations and eldercare, stressing the need for purpose-built technology.
MIT robotics pioneer Rodney Brooks, co-founder of companies like Rethink Robotics and iRobot, believes generative AI is being overhyped. He warns against overestimating the capabilities of AI systems, emphasizing that they are not human-like and should not be credited with human capabilities. Brooks highlights the limitations of generative AI: it excels at specific tasks but cannot replicate the full range of human abilities. Using his company Robust.ai as an example, he explains that putting language models into warehouse robots may not be practical and could actually slow down operations. Brooks advocates focusing on solvable problems where robots can be integrated easily, such as warehouse environments. He also discusses the potential of AI in domestic settings, particularly eldercare, but cautions that challenges in control theory and optimization must be addressed first. Brooks stresses making technology accessible and purpose-built rather than assuming exponential growth in capabilities.
Related
Moonshots, Malice, and Mitigations
Rapid AI advancements by OpenAI with Transformer models like GPT-4 and Sora are discussed. Emphasis on aligning AI with human values, moonshot concepts, societal impacts, and ideologies like Whatever Accelerationism.
Anthropic CEO on Being an Underdog, AI Safety, and Economic Inequality
Anthropic's CEO, Dario Amodei, emphasizes AI progress, safety, and economic equality. The company's advanced AI system, Claude 3.5 Sonnet, competes with OpenAI, focusing on public benefit and multiple safety measures. Amodei discusses government regulation and funding for AI development.
All web "content" is freeware
Microsoft's CEO of AI discusses open web content as freeware since the 90s, raising concerns about AI-generated content quality and sustainability. Generative AI vendors defend practices amid transparency and accountability issues. Experts warn of a potential tech industry bubble.
He suggests limiting the scope of the AI problem and adding manual overrides for unexpected situations, and he (rightly, in my opinion) predicts that the business case for exponentially scaling LLMs isn't there. With that context, I like his iPod example. Apple probably could have made a 3TB iPod to keep up with Moore's law for another few years, but once they reached 160GB of music storage there was no use case where adding more would deliver more benefit than the added cost.
I think Brooks' opinions will age poorly, but anyone who doesn't already know the arguments for that isn't interested in learning them now. This quote seems more interesting to me.
Didn't the iPod switch from HDD to flash storage at some point, with Apple focusing on shrinking the devices rather than upping storage size? I think the quality of iPods has been growing exponentially; we've just seen tech upgrades on other axes that Apple considered more important. AFAIK, looking at Wikipedia, they discontinued iPods in favour of iPhones, where you can get a 1TB model, and the disk-size trend is still exponential.
The original criticism of the iPod was that it had less disk space than its competitors and it turned out to be because consumers were paying for other things. Overall I don't think this is a fair argument against exponential growth.
If you consider LLMs as the iPod of ML, what would the iPhone equivalent be?
I don’t need to store all my music on my device. I can have it beamed directly to my ears on-demand.
But the sentiment analysis, summaries, and object detection seem incredibly capable and like the actual useful features of LLMs and similar tensor models.
This kind of strawman "limitations of LLMs" is a bit silly. EVERYONE knows it can't do everything a human can, but the boundaries are very unclear. We definitely don't know what the limitations are. Many people looked at computers in the 70s and saw that they could only do math, suitable to be fancy mechanical accountants. But it turns out you can do a lot with math.
If we never got a model better than the current batch then we still would have a tremendous amount of work to do to really understand its full capabilities.
If you come with a defined problem in hand, a problem selected based on the (very reasonable!) premise that computers cannot understand or operate meaningfully on language or general knowledge, then LLMs might not help that much. Robot warehouse pickers don't have a lot of need for LLMs, but that's the kind of industrial use case where the environment is readily modified to make the task feasible, just like warehouses are designed for forklifts.
Sorry for not commenting on the article directly.
I've been thinking, in Star Trek terms, what if it's not Lt. Cdr. Data, but just the Ship's Computer?
Anyone who cannot acknowledge this lacks the discernment to understand that when they ask a computer program a question and get an answer no human could have given, that is superintelligence.
Aesthetics is a complex subject, but it cannot be generated once the task becomes more precise.
Can someone translate what that means? I'm struggling to read past that line, since I just can't wrap my head around what a "Panasonic Professor" is. Does that word refer to something other than the corporation in this context?
Were one to slice the corpus callosum,
and burn away the body,
and poke out the eyes..
and pickle the brain,
and everything else besides...
and attach the few remaining neurones
to a few remaining keys..
then out would come a program
like ChatGPT's --
"do not run it yet!"
the little worker ants say
dying on their hills,
out here each day:
it's only version four --
not five!
Queen Altman has told us:
he's been busy in the hive --
next year the AI will be perfect
it will even self-drive!
Thus comment the ants,
each day and each night:
Gemini will save us,
Llama's a delight!
One more gigawatt,
One more crypto coin
One soul to upload
One artist to purloin
One stock price to plummet
Oh thank god
-- it wasn't mine!