Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
Hi everyone! Glad to be on Parlio! I have been an academic researcher and administrator in robotics and Artificial Intelligence, an author, a journal editor, and a robotics entrepreneur. In the latter role, I have founded companies that have delivered more home robots, more military ground robots, and more humanoid robots (for manufacturing and logistics) than any other company.

I am a strong believer in the long term promise of robotics and Artificial Intelligence, but believe that the current fears being expressed in the press, and by opinion leaders, wildly overestimate the short term possibilities for these technologies.

I welcome questions on these topics!
This Q&A took place between 8/27/15 and 9/4/15. Unanswered questions have been hidden.
15 questions
Student at The George Washington University
Hey Rodney thanks for taking our questions,

Given the uproar we've seen around the world with the rise of Uber and other companies that are using new technologies to disrupt the status quo in many long standing industries, how do you believe corporations in the next 10 years will have to go about responding to pressure from organized labor groups because of the perception that these new technologies, including those involving Artificial Intelligence, will kill low wage service sector jobs?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think the new technologies have enabled some companies to move the risks elsewhere, e.g., onto the "employees". Society must decide what is acceptable in this regard.

E.g., my four kids all got non-paying internships at some points growing up. I think that is an immoral exploitation of people who need some work experience on their resumes. So at my companies we have always paid interns a real wage, a living wage in fact, more than double the current minimum wage.

I expect we will see pressures for society to change the way things operate, to make them more equitable. Universal health care is an example--decoupling health care provision from your employer (has anyone suggested that car insurance should be coupled to your employer?), with regulation that makes it affordable, actually helps entrepreneurial businesses, as it lowers the risk for new employees joining a company that may fail in a year or two.

We may see things that were unimaginable just a few years ago, like negative income taxes at the bottom end. It is going to be an interesting time, and clinging to old models while destroying the ability for people to have a living wage is not a recipe for a healthy society.
Corporate Innovation Activist | Social Scientist in training
Hi Rodney,

Where do you see the legal responsibility lying in the event of a death caused by malfunction (of sorts) in AI controlled capabilities?

For example, if a self-driving car hits a pedestrian, or a security bot shoots and kills an innocent bystander, should the owner of the car/bot, the manufacturer, the programmer or someone else be held responsible?

PS: is your job as exciting as it sounds? I'm very jealous! :)
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
My job is more exciting than it sounds!!!

Your question is real, and complex.

I think there are two key components to understanding this landscape. Intentionality and complexity.

1. Intentionality. Many of the sci-fi stories about the responsibility of AI systems (and I include comments made by some academics in this category) imply that the AI systems have intentionality. There are zero, none, zip, nada, intentional AI systems deployed in the world. And in my view a similar number ever demonstrated in laboratories. That doesn't mean there won't be such systems in the future. But I think we can wait to worry about them until we at least see lab systems demonstrating the intentionality of a bird, or even a mammal. Otherwise we are just shooting in the dark, and any answers we come up with now will soon fall prey to the facts of the situation on the ground should we ever get to intentional AI systems.

2. Complexity. We have faced the issue of who is responsible for failures in many systems. We have insurance for individual drivers, and on individual cars, to compensate for accidents caused by drivers. And it is compulsory to protect the public, as they don't get to choose whether they get hit by an insured or uninsured car. But in certain cases the blame for the accident gets promoted to blaming the auto manufacturer.

I think we will see similar levels of blame for different players for AI systems, but I think that the complexity of AI systems will make it a much richer and nuanced blame attribution system in the end.

Certainly an accident caused by an Easter egg that some software engineer knowingly hid in the system should come down to blame on that engineer--and they may need to face criminal liability.

There are already standards for some robotics systems, agreed upon by international standards bodies. The civil codes of many governments (and sometimes even states, and sometimes even cities) refer to various of those standards and say that deployed systems must meet them. That usually absolves both the manufacturer and the user of the system when it meets those legally specified standards. I think standards bodies are in for a rich set of challenges to keep up with all the coming AI and robotics developments. Sometimes the standards will lag the reality--we are seeing that in the unmanned drone space right now, and I predict that area is going to stay messy for some time.

For some standards, such as safety systems for industrial equipment, there are specifications right down at the engineering level (one European standard says that none of the code in the embedded microcontrollers can use either stacks or pointers!!). But for complex AI systems I think it will need to be more like safety testing, as we now have for crash testing in automobiles.

Often people argue that we will need to prove the correctness of the AI programs, but I think that is too high a bar, one that won't be met, and so it will be replaced with performance testing.

Here is an analogy. We used horses as a primary means of transport and drayage for most of written human history. But we could not prove that horses were correct. Instead a general standard for horse behavior was generally agreed upon. Badly acting horses were culled from places where they interacted with people--i.e., don't meet the standard and you (horse) are out. And people knew the limitations of horse behavior. Most people knew not to stand directly behind a horse. And they knew not to light a match three inches from a horse's eye. These unwritten but generally accepted rules made the use of horses, themselves very complex systems, useful and safe in human society. Complex AI systems are going to be more like horses than provably correct software programs.
Economist, philosopher, author, lecturer, futurist
Hello "Rodney",
How do I know you are not a robot? ;-)

What kind of robotics research do you think we should not allow?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I am a robot! I am a robot made up of biomolecules that interact with each other in ways that can be described by physical laws. That is all I am. [But see my forthcoming answer on computation and thought.]

I do not think there is any sort of robotics research that we should not allow. I do think there are certain sorts of robots that we should not build, deploy, or let near people. [I'll address some of those instances in other questions.] But that is different from fundamental research in robotics. We can not tell ahead of time what we will discover in research and what implications it might have for good or evil. When we are building robots for particular tasks we can assess those questions better. In my own career I have worked in both arenas. They are very different.
Hi. Thanks for taking the time to answer questions. Here's mine: Despite the massive successes of robotics there are still folk who insist that thought is not ultimately computational. In fact--they point to the discrepancy between computer AI and the way humans achieve things to underscore this. What is your take on this? Could there be more to thought than information processing and if so, what?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
AI started out with Alan Turing modeling computation on how human "computers" (that was what they were called) carried out complex numerical computations for physics models, by going through a sequence of steps, sometimes conditionally based on the numbers, and writing down intermediate results on paper. That led to what are now called "Turing machines", a model of computation where there is an infinitely long tape of squares, each of which contains one of a finite (perhaps very large) number of distinct symbols, with one square currently under a read/write head. At any time the machine is in one of a finite (perhaps very large) number of states, and for each state there is a rule on what to do, conditional on the contents of the current square. Each rule says which new symbol to write in place of the current one, whether to move the tape one square left or one square right, and which new state to change to.
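
As a concrete illustration of that model (a hypothetical sketch, not part of the original answer; the rule format and the bit-flipping example are invented for illustration), a minimal simulator might look like this:

```python
# Minimal Turing machine simulator, following the description above:
# a tape of symbols, a read/write head, a finite set of states, and
# rules keyed on (state, current symbol) that write a symbol, move
# the head, and switch state.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules: {(state, symbol): (new_symbol, move, new_state)}, move in {-1, +1}."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten squares read as blank "_"
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example: flip every bit until the first blank square.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_turing_machine(flip, "10110"))  # -> "01001"
```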

That is the basis of all our models of computation, an abstraction of how people do numerical calculations. Along the way computation became a model of how nervous systems work. McCulloch and Pitts, at MIT, in the late forties, were two of the key players in making that connection. And it has become the dominant way of thinking about higher level neuroscience, as computational.

But it is not the first such model. At various times in history the nervous system has been modeled as a hydrological system, as a steam system, as what is now known as classical control theory, and as a stimulus-response system. Each of these models has helped improve our understanding of neural systems. Each has been overdone at times. Each has been replaced by more sophisticated models later.

I don't know whether the computational model will last forever. My personal hunch is that it won't. I wrote about this in my 2002 book Flesh and Machines (a sure sign of a grumpy old professor is when they refer to something they wrote more than ten years ago...), and suggested that something was missing from our models of both nervous systems and living systems in general, and postulated that there might be one thing that helps change our thinking about both, at least for a while, until the next new thing comes along. I called this mystical new idea, not yet formulated by anyone, "the juice". I still get emails from people telling me that they have discovered "the juice". No one has convinced me yet (another sign of a grumpy old professor!).
Entrepreneur, Internet addict, philosophy lover
Hi Rodney,

By the end of this century:
- AI will be able to replace most systematic tasks even of many engineers
- Creative and artistic work that is distinctly human can be done by robots
- Robots can fix themselves
- Humans don’t need to work much
Is our socio-economic system going to be redefined?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think we will redefine our socio-economic systems, as we have always done with new technology. Sometimes it is a rough ride for those getting redefined, and we need to be cognizant of that. [Also see my answer to Humza Rizvi above.]

When the printing press was introduced there were worries about what would happen to the scribes who copied books by hand. They were eventually replaced (though developments now are on a much shorter timescale), and I think we would mostly agree that we are glad that printing is around, and that it led to universal literacy.

In the 18th century the vast majority of Americans were engaged in agriculture. Now it is less than 1%, and many of the people who do the more menial jobs are undocumented aliens, as no one else is willing to do that hard work for such low pay. Our society survived such a radical change in our socio-economic system, and we may well survive the digital influx with new radical changes in our socio-economic system, but we must be open to new ideas, and not stick to how it was under this president or that president--those were different ages with different pressures.

A word of concern however. The real losers in the change of agriculture from muscle work to machine work were horses. The number of horses in the US dropped dramatically, and the jobs for horses disappeared except for running races or dragging tourists around Central Park. Let's hope that we humans are not the horses of digitalization.
Thank you for taking our questions. I was wondering if you had any suggestions for a sensible regulatory framework that could shield humanity from some of the more dystopian consequences of AI that we keep hearing about while still allowing us to capitalize on its potential to improve society.
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I really don't think we are close enough, technology wise, to the extreme dystopian consequences that some people are predicting to make any sensible suggestions. Who could have thought, when they first saw humans flying over Paris in a hot air balloon in 1783, that a big question for regulation in the future would be noise abatement near the points of takeoff and landing?

We are so very far from having systems that could lead to those dystopian outcomes that I think it makes no sense to worry about regulations at this time. Whatever we might decide will surely look laughable in the future. On the other hand there are real deployments of AI and robotics going on, and they, as with all technologies, should be subject to appropriate regulations. I'll address the short term places where we do understand things well enough to know what to worry about in some other questions.
Consulting Research Associate, Root Cause Institute
Hi Rodney,
Thank you for taking the time to field our questions. I am curious what you think is in the near future for robotics - you've seen multiple iterations of how robotics has solved contemporary problems in numerous fields (consumer, military, etc); what's next?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
Besides the explosion of activity around both autonomous and remotely controlled small flying drones, and the robotization of cars through both driver assist and self-driving technologies, I think there are going to be three new waves of robotic applications in the fairly short term. Plus some others that I have not had the foresight to imagine. In order of large scale adoption they are:

1. Collaborative and intelligent (just a tiny little smidgeon of intelligence--way less than that of a two year old) robots in manufacturing and other menial repetitive jobs.

2. Order fulfillment, first in the warehouse, then increasingly in the delivery to end consumer.

3. Elder care in the home, letting the elderly retain their dignity and control over their own lives longer. [I think driver assist safety technologies are already doing this for the elderly, letting them continue to drive longer than would be advisable with cars without that technology.]
Hi! Do you think that current line of AI progress can EVER lead to the "dangerous" or "self-conscious" robots? It seems to me that no matter how sophisticated machine learning gets, it will never have a way to set "its own goals" - and that to have this, we would need some qualitatively different direction of research.
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I don't think the current, very real, and very significant, progress in machine learning will lead to dangerous or self-conscious robots. In another answer I deconstructed some of the limitations of current machine learning.

In order for machine learning to be more useful for more sophisticated robots and AI systems it needs to start developing explanatory models. People have been working on this for 50 years, but it is a long hard push, and no one can predict when major new results will be forthcoming.

And, as you suggest, robots would need to have a sense of self, goals, and desires in order to become dangerous or self-conscious. Almost no one is working on that as a research question, and certainly any deployment into real robots or networked AI systems is just not on the cards at the moment. No one has a clue how to do it.

Lastly, I want to address the dystopian claim that soon AI systems will be smart enough to rewrite their own code and so there will be an uncontrolled runaway intelligence. Current AI systems often have millions of lines of code. There is no AI system (despite automatic programming research going on for 50 years) that can understand a 50 line C program well enough to rewrite it (except in the sense of an optimizing compiler). No system can work at the level of a student in any introductory programming class halfway through the first semester (the cutoff might be the first week, but we have to allow for students who drop the class as it was a total mistake for them to enroll in the first place).
Harvard Law School & Harvard Kennedy School
Rodney, thank you for fielding questions. Using robots for military purposes may lower the cost of going to war, that is, the consequences of war seem more remote if an actor can use a drone rather than put a soldier in harm's way. To what extent is this a problem? How can robots help prevent war?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
Remote action, disconnected from the consequences, can indeed be a problem. People have argued the same thing about bomber crews involved in carpet bombing of cities in the Second World War, or launchers of rockets with warheads aimed at foreign cities in the same conflict. I don't think that disconnection is robot specific--rather it is technology driven.

People are always afraid that more robots will speed up the tempo of conflicts. I am not convinced. We see many instances, across the US, where the police are using remote controlled robots (such as the Packbot from my old company iRobot) to go into suspected meth labs and see what is going on in there. Often a person in the lab can be remotely talked into surrendering. I think that is preferable to going in with guns blazing.

In general, a robot can afford to shoot second. It is much harder to ask a 19 year old recent high school grad, sent into a village at night, with civilians and potential armed opponents, to shoot second. They can easily get spooked and fear for their lives. The robot does not fear for its life, and has no need to live. It can take incoming fire without shooting in the dark in panic. Robots can slow down a conflict.
Hi Rodney -- Thank you for taking the time to speak with us. What would you say is the greatest misconception about the short-term to medium-term possibilities of robotics? What error in reasoning or imagination leads to this mistake?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think that well meaning people, mostly outside of the field of AI/robotics, are making the mistake of taking models of how human competence generalizes and applying them to instances of AI system performance. I'll give three examples of the distinction that should be made.

Deep Blue, an IBM supercomputer, beat the world chess champion, Garry Kasparov, back in 1997. Today there are more than a dozen programs that can run on a laptop and have a higher chess rating than any human in history. But none of those programs have the generalized competence that a human grandmaster has. None of those programs can play tic-tac-toe, or checkers. None of them can coach a person to be a better player, giving them advice. The best they can do is act as a sparring partner, and with a little extra programming, could rank a person's chosen move against all other possible moves. And BTW, none of those programs know what a game is, what a person is, or that they, the programs, even exist.
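
For illustration only, here is a sketch of what that "little extra programming" could look like; `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins for a real engine's move generator, board update, and search-backed evaluation (assumed here to score a position for the player who just moved):

```python
def rank_move(position, chosen_move, legal_moves, apply_move, evaluate):
    """Return the chosen move's rank (1 = engine's best) among all legal moves."""
    scored = sorted(legal_moves(position),
                    key=lambda m: evaluate(apply_move(position, m)),
                    reverse=True)  # higher evaluation = better for the mover
    return scored.index(chosen_move) + 1
```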

Recently deep learning has led to fantastic progress in speech recognition and image labeling. In the latter case the program is given an image and it provides an English language label. The programs do much better than I think any AI expert would have dared to predict just five years ago. In this NYT story nytimes.com/2014/11/18/sc...ftware there is an example where a Google program labels the image (I think it will show up just below this answer) as: A group of young people playing a game of frisbee. A human having that level of performance would be able to answer a whole lot of related questions. The program can answer none of these. How many people? Where is the frisbee? Who is furthest from the frisbee? Who is engaging with the frisbee in a two second time frame around the instant of the photo? Nor can the program answer any of the following questions, even though a person who gave that image label could. What is a frisbee? What is a game? What are people? Are people living creatures?

And finally there was much fanfare about the company DeepMind Technologies when Google bought them, as they had a learning system that could learn to play video games just from observing them. But the way they learn is to associate image patches (in a very complex way) with the next action. I was at a talk recently by Prof Josh Tenenbaum of MIT, where he pointed out a few things. The programs are trained on 1,000 hours of watching each game. On some games they perform much better than humans can, on others they perform only at the 6% level--my observation is that the latter happens when the games need a little look ahead. Josh pointed out that the programs go back to zero performance if you just change the colors of the items in the game, and need to be retrained from scratch. Or if you double the resolution of the screen. Absolutely no performance remains from either of those changes.
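
For context, the general technique behind such systems is reinforcement learning of action values from states, actions, and rewards. DeepMind's actual system used a deep network over raw screen pixels; the toy tabular sketch below (with a hypothetical environment `step` function, and episodes assumed to start in state 0) shows only the underlying update, not their method:

```python
import numpy as np

def q_learning(step, n_states, n_actions, episodes=1000,
               alpha=0.1, gamma=0.99, epsilon=0.1,
               rng=np.random.default_rng(0)):
    """step(state, action) -> (next_state, reward, done). Returns the learned Q-table."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: usually exploit current estimates, sometimes explore
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            # move Q(s, a) toward reward plus discounted best next value
            target = r + gamma * np.max(Q[s2]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```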

In each of these three cases the AI system has high performance, comparable or better than a human. But it has not a shred of human level competence. It is competence that makes humans such formidable intelligences.

So, I think people are making a mistake. AI will have continued improvements in performance on many, many specific tasks, including many which we aren't even thinking about today. But until a whole lot of new research is done, and we put models into learning systems, we will not get high competence. AI researchers knew this was the big challenge back in the sixties. That was fifty years ago. We can not predict when it will come to pass, but fifty years of not being able to do it gives us a clue as to how hard it may be.
Hi Rodney, what do you believe is the next big thing of robotics in Logistics?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think the big challenge in logistics is robotic picking, followed by packing.

Kiva Systems, now Amazon Robotics, has been an effective solution to moving things around the warehouse. Other companies are working on similar solutions given that Kiva is now a captive of Amazon's internal market, and yet other companies are working on different solutions for moving things around in the warehouse/fulfillment center.

But a clear indication of how far we are from having successful picking robots is that Amazon is sponsoring an open "pick challenge". The first competition was held at ICRA (International Conference on Robotics and Automation) in Seattle in May, and there will be ongoing competitions sponsored by Amazon in the future.

Really, no one is working much on the packing problem yet. But that will come after picking sees its technological developments.
Financial Consultant/ Investment Advisory Representative
Dr. Brooks,

With the advent of retail artificial intelligence products such as Apple's Siri, Microsoft's Cortana and Amazon's Echo, where do you see the progression in the battle for our cellphones and living rooms?

P.S. My mom uses the iRobot every day and even named it after our first dog
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I see three technological challenges for getting more robots into people's homes. There will be a need for more robots so the elderly can maintain their independence and stay at home longer in Europe, North America, Japan, and China, where a demographic inversion is happening, and there are just not going to be enough younger people (already true in Japan) to provide services for the elderly. The three challenges are what I call the three M's.

1. Messiness. Real homes are really messy, with lots of clutter and lots of objects all over the place. Here I think we are seeing, and will see, rapid progress, driven by deep learning, and by the availability of low cost 3-D cameras, driven by gaming platforms. Deep learning will allow much better labelling of objects, even in messy environments, and just about any researcher can now afford a 3-D camera that plugs into their laptop so they can do research in this area. We see results in the vast number of papers appearing on this topic in computer vision and robotics conferences. A big change from just a few years ago.

2. Manipulation. We are still a long way from having robot hands with the dexterity that young human hands have, an absolute necessity for dealing with the messy environments, and for interacting directly with the elderly in their homes. We have not had a lot of progress in the design of hands over the last forty years. There has been a high barrier to entry, in that they had to be at the end of a robot arm, and the available robot arms have usually been somewhat dangerous, and simply position controlled, industrial robot arms. That is why Rethink Robotics started selling a research version of the Baxter robots. There are now hundreds of them in research labs around the world. The arms are safe to be near, so it is fine to let undergraduates and graduate students work late into the night in the lab, using these robots. And we have a general interface at the wrist so that people can attach new robot hands that they develop. People are using Baxters for all sorts of research, but some are starting to use them as platforms for research in dexterous manipulation. Hooray!!

3. Mobility. Real houses have steps, sometimes just one or two, sometimes a lot, both outside getting into and out of the house, and inside where floor levels change in different parts of the house, and in many houses between completely different stacked levels. Roombas are flummoxed by even a single step. At iRobot we made sure that Roombas would never fall down the stairs. But we didn't figure out a way that they could go up even a single step change in floor level. There are no low cost platforms for researchers to work on mobility within houses. This one may take a while.
Director, User Interface Engineering at Condé Nast
In "Fast, Cheap, and Out of Control" you discussed how negating the assumption that stability was necessary for walking led to new ways of thinking re: robot locomotion.

In software there seems to be a similar trend away from "build perfect systems" towards "fail and learn" systems. Any thoughts?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think we have learned, finally, that people are incapable of building perfect software systems. So now we need to build defensive software systems where failures are self correcting or inconsequential. It takes a new way of thinking.
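
A minimal sketch of that defensive style, under the assumption of a hypothetical flaky `operation` and a safe `fallback`:

```python
import time

def call_with_recovery(operation, fallback, retries=3, backoff=0.5):
    """Run a flaky operation; retry with backoff, then degrade to a safe fallback."""
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            time.sleep(backoff * 2 ** attempt)  # wait a bit longer each time
    return fallback()  # make the failure inconsequential instead of fatal
```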
Senior Fellow for Innovation at Alliance for Peacebuilding
Dr. Brooks,

I just finished John Markoff's new book in which you feature. I liked it so much that I showed my four year old grandson a video of Shakey the day I started it, and will show him Baxter the next time I see him, a decision I reached before I heard you would be here.

Seriously, Markoff raises an important distinction between AI for AI's sake and the work of people in Doug Engelbart's shadow who see IT in general and robotics in particular as a way of enhancing human capacities.

I don't know where I stand on that distinction or whether it is an important one at all. It certainly doesn't affect my work as a peacebuilder.

But I'd be interested in how you see the future of robotics in the broader picture of IT and its impact on social problem solving.
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
I think for the next thirty years we are mostly going to see robots and people working hand in hand. At some point in the future that may change, but despite the fears expressed in the popular press that AI and evil robots are about to take over the world, I think we are an unknowable long way from autonomous systems that will be able to do very much useful at all without a person somewhere in the overall system.

Enjoy teaching your grandson about robots. He is going to know a lot of them during his lifetime!
Hi Rodney, I really enjoyed your TED talk on Baxter! Do robotics projects like Baxter have their fair share of funding? Are business owners seeing the benefit as opposed to conventional industrial robots?
Emeritus Prof. of Robotics @ MIT - Rethink Robotics Chairman
Thanks for your appreciation of my talk! But I don't know what a fair share of funding for Baxter would mean. Baxter was developed at a VC backed company, Rethink Robotics. So there is no funding to receive from governments or funding agencies. In the past the pre-research for the technologies that went into Baxter was funded by the US government, via NASA and DARPA. There I had to write proposals explaining why the research was a good idea, and how it could have long term benefits. Usually the researchers at universities and the entrepreneurs who build commercial systems are not the same person. I am a little weird I guess.

And yes, business owners are buying Baxter, and ordering its soon to be delivered sibling, Sawyer, as they provide a tangible return on investment. There are severe labor shortages in manufacturing in both the US and China (in the latter case that is true even with government mandated 15% pay increases every year for the last many years), and Baxter and Sawyer are filling the holes, doing very menial, non-dexterous, dull, boring tasks.