June 30th, 2024

MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating AI

MIT robotics pioneer Rodney Brooks cautions against overhyping generative AI, emphasizing its limitations compared to human abilities. He advocates for practical integration in tasks like warehouse operations and eldercare, stressing the need for purpose-built technology.


MIT robotics pioneer Rodney Brooks, known for co-founding companies like Rethink Robotics and iRobot, believes that generative AI is being overhyped. He warns against overestimating the capabilities of AI systems, emphasizing that they are not human-like and should not be assigned human capabilities. Brooks highlights the limitations of generative AI, stating that it excels at specific tasks but cannot replicate human abilities. He uses his company, Robust.ai, as an example, explaining that using language models for warehouse robots may not be practical and could slow down operations. Brooks advocates for focusing on solving solvable problems where robots can be easily integrated, such as in warehouse environments. He also discusses the potential of AI in domestic settings, particularly in eldercare, but cautions that challenges related to control theory and optimization must be addressed. Brooks emphasizes the importance of making technology accessible and purpose-built, rather than assuming exponential growth in capabilities.

24 comments
By @fxtentacle - 4 months
To me, this reads like a very reasonable take.

He suggests limiting the scope of the AI problem, adding manual overrides for unexpected situations, and he (rightly, in my opinion) predicts that the business case for exponentially scaling LLMs isn't there. With that context, I like his iPod example. Apple probably could have made a 3TB iPod to stick to Moore's law for another few years, but after they reached 160GB of music storage there was no use case where adding more would deliver more benefit than the added cost.
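As a back-of-the-envelope check on that trajectory, here's a minimal sketch of the doubling extrapolation (my own illustration; the 2007 start year for the 160GB model is an assumption):

    # Project iPod storage under one Moore's-law doubling per year,
    # starting from the 160GB model (2007 start year is my guess).
    def extrapolate_storage_gb(start_gb: float, start_year: int, end_year: int) -> float:
        """Capacity after one doubling per year."""
        return start_gb * 2 ** (end_year - start_year)

    projected_gb = extrapolate_storage_gb(160, 2007, 2017)
    print(f"Projected 2017 capacity: {projected_gb / 1024:.0f} TB")  # -> 160 TB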

By @Animats - 4 months
That makes sense from Brooks' perspective. He's done his best work when he didn't try to overreach. The six-legged insect thing was great, but very dumb. Cog was supposed to reach human-level AI and was an embarrassing dud. The Roomba was simple, dumb, and useful. The military iRobot machines were good little remote-controlled tanks. The Rethink Robotics machines were supposed to be intelligent and learn manipulation tasks by imitation. They were not too useful and far too expensive. His new mobile carts are just light-duty AGVs, and compete in an established market.
By @roenxi - 4 months
> He uses the iPod as an example. For a few iterations, it did in fact double in storage size from 10 all the way to 160GB. If it had continued on that trajectory, he figured out we would have an iPod with 160TB of storage by 2017, but of course we didn’t.

I think Brooks' opinions will age poorly, but if anyone doesn't already know all the arguments for that, they aren't interested in learning them now. This quote seems more interesting to me.

Didn't the iPod switch from HDD to SSD at some point, with Apple focusing on shrinking the devices rather than upping storage size? I think the quality of iPods has been growing exponentially; we've just seen tech upgrades along other axes that Apple thinks are more important. AFAIK, looking at Wikipedia, they discontinued iPods in favour of iPhones, where you can get a 1TB model, and the disk-size trend is still exponential.

The original criticism of the iPod was that it had less disk space than its competitors, and it turned out that was because consumers were paying for other things. Overall I don't think this is a fair argument against exponential growth.

By @TheOtherHobbes - 4 months
The iPod didn't stop growing. It turned into an iPhone - a much more complex system which happened to include iPod features, almost as a trivial add-on.

If you consider LLMs as the iPod of ML, what would the iPhone equivalent be?

By @gjvc - 4 months
Amara's law -- "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
By @qeternity - 4 months
The iPod analogy is a poor one. Instead of 160TB music players, we got general computers (iPhone) with effectively unlimited storage (wireless Internet).

I don’t need to store all my music on my device. I can have it beamed directly to my ears on-demand.

By @aetherson - 4 months
We'll see GPT-5 in a few months, and that will be vastly more useful for updating your sense of whether the current approach will continue to work than anyone's speculation today.
By @lukeschlather - 4 months
I feel like "generative" might be the worst possible label, because while the generative capabilities are the most exciting features, they're not the most useful, and in most cases they aren't useful at all.

But the sentiment analysis, summarization, and object detection seem incredibly capable, and those feel like the actually useful features of LLMs and similar tensor models.
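To make that concrete, here's a minimal sketch of sentiment analysis via a chat-completion API, assuming the official OpenAI Python client; the model name and prompt are illustrative, not from the comment:

    # Minimal LLM sentiment classifier, assuming the OpenAI Python client
    # (pip install openai) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def classify_sentiment(text: str) -> str:
        """Ask the model to label text as positive, negative, or neutral."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Reply with one word: positive, negative, or neutral."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content.strip().lower()

    print(classify_sentiment("Battery life is fantastic."))  # -> "positive"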

By @ianbicking - 4 months
"He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can"

This kind of strawman "limitations of LLMs" argument is a bit silly. EVERYONE knows it can't do everything a human can, but the boundaries are very unclear. We definitely don't know what the limitations are. Many people looked at computers in the 70s and concluded they could only do math, suited to be fancy mechanical accountants. But it turns out you can do a lot with math.

If we never got a model better than the current batch, we would still have a tremendous amount of work to do to really understand its full capabilities.

If you come with a defined problem in hand, a problem selected based on the (very reasonable!) premise that computers cannot understand or operate meaningfully on language or general knowledge, then LLMs might not help that much. Robot warehouse pickers don't have a lot of need for LLMs, but that's the kind of industrial use case where the environment is readily modified to make the task feasible, just like warehouses are designed for forklifts.

By @locallost - 4 months
After using Copilot, which is pretty bad at guessing what exactly I want to do but still occasionally right on the money and often pretty close: AI is not really AI and it won't kill us all, but the realization is that a lot of work is just repetitive and really not that clever at all. All the work I've done in my life follows the same pattern: a new way of doing things comes along, you figure out how to do it and how to use it, and once you're there you rinse and repeat. The real value of the work will increasingly be in why it is useful for the people using it. It was probably always like this; the geeks just didn't pay attention to it.

Sorry for not commenting on the article directly.

By @none_to_remain - 4 months
> Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations.

I've been thinking, in Star Trek terms, what if it's not Lt. Cdr. Data, but just the Ship's Computer?

By @RecycledEle - 4 months
ChatGPT and other world-knowledge AIs are proven superintelligences in the sense that they have a broader range of knowledge than any human who has ever lived, and they are faster than any human who has ever lived.

Anyone who cannot acknowledge this lacks the discernment to understand that when they ask a computer program a question and get an answer no human could have given, that is superintelligence.

By @t0bia_s - 4 months
I use Midjourney for my work here and there. It lets me get the job done faster, but it would be useless without my years of experience in visual production. It still requires knowledge of postprocessing, drawing, imagination, photomanipulation, creating assets... and taste, which is subjective but helps a lot.

Aesthetics is a complex subject, and it can't simply be generated once the task gets more precise.

By @bilalq - 4 months
> Panasonic Professor of Robotics Emeritus at MIT

Can someone translate what that means? I'm struggling to read past that line, since I just can't wrap my head around what a "Panasonic Professor" is. Does that word refer to something other than the corporation in this context?

By @globalnode - 4 months
I don't know much about machine learning, but what I think I know is that it gets an outcome based on averages of witnessed data/events. So how is it going to come up with anything novel, or outside the norm?
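As a toy answer to that question, here's a sketch (my own, not from the thread) showing that a model fit purely on local statistics of observed data can still emit sequences it never saw, by recombining them:

    # Toy character-bigram model: it memorizes only local "averages"
    # (transition frequencies), yet can generate strings absent from
    # its training data by recombining those local statistics.
    import random
    from collections import defaultdict

    corpus = ["robot arm", "robot cart", "smart cart", "smart arm"]

    transitions = defaultdict(list)
    for phrase in corpus:
        for a, b in zip(phrase, phrase[1:]):
            transitions[a].append(b)

    def generate(start: str, length: int, seed: int) -> str:
        random.seed(seed)
        out = start
        while len(out) < length and transitions[out[-1]]:
            out += random.choice(transitions[out[-1]])
        return out

    sample = generate("r", 10, seed=1)
    # Prints the sample and whether it appeared verbatim in the training data.
    print(sample, "| seen before:", sample in corpus)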
By @spacecadet - 4 months
Gen AI is Dunning-Kruger.
By @mjburgess - 4 months

    Were one to slice the corpus callosum, 
    and burn away the body, 
        and poke out the eyes.. 
    and pickle the brain, 
        and everything else besides... 
    and attach the few remaining neurones 
        to a few remaining keys.. 
    then out would come a program 
        like ChatGPT's --
    
    "do not run it yet!" 
        the little worker ants say
    dying on their hills, 
    out here each day:
        it's only version four -- 
        not five!
    Queen Altman has told us: 
        he's been busy in the hive --
        next year the AI will be perfect
            it will even self-drive!

    Thus comment the ants, 
    each day and each night:
        Gemini will save us, 
            Llama's a delight!
        One more gigawatt,
        One more crypto coin
        One soul to upload
        One artist to purloin
        One stock price to plummet
        Oh thank god
           -- it wasn't mine!
By @portaouflop - 4 months
Let’s call it machine learning; the "AI" term is just so far from what it actually is.
By @pikseladam - 4 months
The thing is, you can't know whether I'm an AI-generated comment or not. That's the thing.