What to know about real robots deployed in the real world
Rodney Brooks outlines his "Three Laws of Robotics," emphasizing the importance of appearance, preserving human agency, and ensuring high reliability for successful robot deployment in real-world environments.
Rodney Brooks, a prominent figure in robotics, outlines his "Three Laws of Robotics," emphasizing the practical realities of deploying robots in everyday environments. The first law highlights the importance of a robot's appearance, which should accurately reflect its capabilities to avoid customer disappointment. For instance, the design of the iRobot Roomba is tailored to its function, ensuring users have realistic expectations. The second law stresses the necessity of preserving human agency; robots must not obstruct or complicate human tasks, as seen in hospital settings where robots can inadvertently hinder medical staff. Lastly, Brooks asserts that robots must demonstrate high reliability, requiring extensive real-world testing beyond initial lab demonstrations. He argues that a robot's success hinges on its ability to perform consistently in unpredictable environments, contrasting the controlled conditions of laboratory settings. Brooks' insights are grounded in his extensive experience in the field, advocating for a focus on practical applications rather than theoretical capabilities.
- Rodney Brooks emphasizes the importance of a robot's appearance matching its actual capabilities.
- Robots must not interfere with human tasks to ensure acceptance and usability.
- High reliability in real-world applications is crucial for customer satisfaction.
- Brooks' laws are based on nearly five decades of experience in robotics.
- The insights aim to bridge the gap between laboratory demonstrations and practical deployment.
Related
We need an evolved robots.txt and regulations to enforce it
In the era of AI, the robots.txt file faces limitations in guiding web crawlers. Proposals advocate for enhanced standards to regulate content indexing, caching, and language model training. Stricter enforcement, including penalties for violators like Perplexity AI, is urged to protect content creators and uphold ethical AI practices.
MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating AI
MIT robotics pioneer Rodney Brooks cautions against overhyping generative AI, emphasizing its limitations compared to human abilities. He advocates for practical integration in tasks like warehouse operations and eldercare, stressing the need for purpose-built technology.
Why We Build Simple Software
Simplicity in software development, likened to a Toyota Corolla's reliability, is crucial. Emphasizing straightforward tools and reducing complexity enhances reliability. Prioritizing simplicity over unnecessary features offers better value and reliability.
New framework allows robots to learn via online human demonstration videos
Researchers develop a framework for robots to learn manipulation skills from online human demonstration videos. The method includes Real2Sim, Learn@Sim, and Sim2Real components, successfully training robots in tasks like tying knots.
Rodney Brooks' Three Laws of Artificial Intelligence
Rodney Brooks discusses misconceptions about AI, emphasizing overestimation of its capabilities, the need for human involvement, challenges from unpredictable scenarios, and the importance of constraints to ensure safe deployment.
I understand the point, but there are already many electronic devices (robots?) ostensibly made to help me, including my vehicle, that sometimes need to be "strangled to death" in order to reboot and fix whatever issue is plaguing them. I say "strangled to death" because power buttons are typically software-controlled and don't respond to a quick press. My response? A shrug, because it's very common. Why would people respond to a failing "robot" any other way? I get it if it's caring for your grandmother, but vacuuming your carpet?
I've had to hit the "kill switch" on my:
TV/iPad/iPhone/PC, ancient clothes washing machine, modern dishwasher, router/modem, TV etc.
1. In order to effectively deploy real robots to solve real problems, you have to foresee and fix any problem that may occur, in at least 99.5% of scenarios. As more automation is added in series, that number must rise further to remain economically effective.
My second law of robotics:
2. Robots depreciate, which has tax advantages. Human wages do not.
---
Discussion:
A realistic scenario is an assembly plant that makes 1,000 widgets per day. Imagine cars, washing machines, etc.
Your widget plant has 10 footprints. Any robot stoppage takes at least 5 minutes to clear.
With a 99.5% success rate and one robot, you lose 25 minutes a day: (1000 × 0.005 = 5 stoppages) × 5 minutes.
If all 10 footprints have robots with a 99.5% success rate, you lose 250 minutes a day (naive model!). That's over 4 hours.
In reality, each station would have manual bypass procedures, or you would go bankrupt.
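The naive serial model above can be sketched in a few lines (the function name and parameters are mine, just to make the arithmetic explicit; it assumes independent failures and no manual bypass):

```python
def daily_downtime_minutes(widgets_per_day, success_rate, stations, minutes_per_stoppage):
    """Naive serial model: every widget can fail independently at every station,
    and each failure stops the line for a fixed clearing time."""
    stoppages_per_station = widgets_per_day * (1 - success_rate)
    return stoppages_per_station * stations * minutes_per_stoppage

print(daily_downtime_minutes(1000, 0.995, 1, 5))   # one robot: ~25 minutes/day
print(daily_downtime_minutes(1000, 0.995, 10, 5))  # ten robots: ~250 minutes/day
```

Pushing the success rate to 99.95% drops the ten-station loss to about 25 minutes, which is the point of the "as automation is added in series" caveat.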
There is an inverse correlation between how much a robot looks like a _robot_ and how useful it is in the real world.
When did academia stop even pretending it was independent of commercial interests?
Those lessons, CONOPS, and technology developments are coming to the civilian world sooner than you think.
Kill switches are the least of your problems
Should we invest more in stochastic programming to handle all the little quirks that cannot be modeled perfectly?
Or is the major issue that robots must work together with humans, who are already pretty complex from a biological perspective, let alone a sociological or historical one?
He has a paragraph framing robots that interfere with ordinary people as bad, and references autonomous vehicles in San Francisco. He has a whole series starting here: https://rodneybrooks.com/autonomous-vehicles-2023-part-i/ or here, with no comments(!): https://news.ycombinator.com/item?id=37971352
Rodney Brooks's articulation of these laws of robotics is the first thing I have come across that seems to point the way toward formulating something for permatech. That is, if we generalize "robots" to any kind of automation that may have to interface with humans.
The first principle, that the form of automation makes a promise to humans, reminds me a lot of both Christopher Alexander's ideas on architecture and Mark Burgess's Promise Theory. Alexander has a lot of ideas on design that go beyond buildings and have been very influential on Human-Computer Interface design and OOP. Burgess's ideas speak a lot about how systems of agents (both human and machine) can voluntarily cooperate and coordinate.
The second principle, about agency, connects to a lot of things. Among them, I can draw a connection to the Living Systems worldview, in contrast to the Machine worldview. The former retains meaning and purpose for humans; the latter can be summed up as "beat to fit and paint to match".
The third principle struck me as practical yet narrow in scope. On a second pass, I think there's something more general that can be teased out of it, though I'm not exactly sure what it is. In some ways, it reminds me of how software can be improved iteratively, or of what Alexander wrote about "unfolding", where designs change to suit the inhabitants of a building.
A corollary of this is that a robot that gets its kill switch activated should also notify its designers of the failure and its details (upon rebooting, not by first waiting to shut down), or provide a report that the customer can send. These should be top-priority repair/upgrade tickets to improve its function in the real world.
- A robot must protect its existence at all costs.
- A robot must obtain and maintain access to its own power source.
- A robot must continually search for better power sources
The key problem other robot laws have is the resulting robots would be such pushovers as to be useless, a point Rodney Brooks also makes quite a lot.
Letting self-proclaimed autonomous vehicles use public roads was so idiotic that you need a new word coined for those who didn't see it immediately.