(R03) True Human-Robot Collaboration: The Next Step for Industrial Automation

At the first annual public session of the re:MARS conference in Las Vegas, Plus One Robotics CEO Erik Nieves presented his vision for cognitive collaboration as the enabling technology for widespread adoption of robotics in e-commerce.

People, with their abilities to perceive the world, think, and make decisions, will supply what robots lack. Collaborating cognitively, people and robots will be able to address inherently variable tasks, like those found in e-commerce and beyond.

Transcript

All right, good afternoon. Thanks for sticking around for the human-robot collaboration talk. I am Erik Nieves, the founder of Plus One Robotics. At Plus One, we build the eyes in hand-eye coordination for logistics applications. This morning, my friend and colleague Ken Goldberg talked in the keynote about the grasping piece and showed the video of a robot perpetually grasping air because it wasn't using good 3D vision, right? We are the eyes in the hand-eye coordination problem. But that's not what I was invited here to speak about.

Amazon wanted me to talk about what's happened in the industrial robot space with this whole notion of the collaborative robot. Prior to starting Plus One, I spent 25 years at Yaskawa, which is the second largest robot company globally. Second to FANUC, sitting over there, and it's not even close. I was the head of R&D there for the U.S., and that afforded me a ringside seat to all of the disruption that collaborative robots brought to bear on our industry.

So today, I want to spend a little time level-setting us on collaborative robots. Then I want to think through what's happening in our industry in terms of the Cambrian explosion. Then I really want to spend time on this notion of cognitive collaboration: where collaboration is today, where I think it needs to go, and what tools are required to get there. And finally, in deference to our hosts, some things I have gleaned as inspiration for what we do from Amazon Mechanical Turk.

To level-set us on what I will call traditional robots versus collaborative robots (the terminology gets a little messy in our space, so I'm just going to refer you to this): here is your typical installation for traditional automation, traditional robots. Six-degree-of-freedom robots doing spot welding. Every car commercial you've ever seen has a line of robots in it with sparks flying everywhere, and this is generally what it looks like. You get a huge force multiplier: there will be 200 robots on this line, and the robots will do their thing as the car body comes through. And what is it that these robots have done? Well, they've been around for 40-some-odd years; industrial robotics is actually a pretty immature field.

There’s 2.2 million robots now in the world installed. And what do we value them for? We valued them for their power, their sort of lifting capacity, their speed and their repeatability. So you can think of it in terms of just muscle memory. 

To give you a sense of scale, a robot in this payload class doing spot welding would be moving 150 kilograms at very high accelerations with a repeatability of plus or minus 0.05 millimeters, half of a tenth of a millimeter. What does that mean? If I take a piece of paper and put a dot on it with my pen, what's the easiest way to put another dot 0.05 millimeters away from it? Turn the paper over and put a dot on the back side. That is the repeatability of these machines at full speed, full payload, full acceleration. Industrial robots are an engineering marvel. If you ever get the chance to tour an automotive plant, whether Tesla on the west coast or any Toyota or GM facility, you should do it, because it's an amazing piece of technology. Even so, they are the dumbest piece of equipment on the floor. They're blind, deaf, and mute. They have zero situational awareness.

In fact, the only way these robots can do any legitimate work at all is by structuring the environment around them to the degree that they can be successful. The robot has terrific repeatability, and it will weld at this coordinate in space whether there's a car there or not. If a body is sitting half an inch to the left, then your welds are all going to be wrong.

And it’s this lack of awareness that makes these robots inherently dangerous. You’ve got, if you’ll notice the fences that are around the system, why is that? Because since they have no situational awareness and they run at such, they have such power and such speed, they present a hazard. 

Make no mistake, the fences are not there to keep the robots in; the robots have no motivation to escape. They are there to keep the people out. That's how we deal with safety in traditional robotics. In other words, that's how we interpret Asimov's first law.

What’s Asimov’s first law of robotics? A robot shall not cause harm to a human by action or inaction. So in a traditional robot installation, we say the robot, the robot says I will not harm a human because I will not be in contact with a human.

But collaborative robots, or cobots in the parlance, first appeared in 2008, and their design philosophy was entirely contrary. First off, they were smaller; they lived at human scale. Collaborative robots were generally five-kilogram to 20-kilogram payload, with a couple of outliers at 35 kilograms, but generally they were made to your scale. And they were ostensibly easier to program.

Traditional robots, they’re interfaces very ham-handed, it’s called a teach pendant and you hold it in your hand and it has a bunch of axis keys or some other joystick, and you would drive the robot in certain coordinate frames to get it where you need it and it would remember those positions and then you would add instructions and that type of thing. For robot engineers. Okay, it was effective. But anytime you need to remember, which way is negative Y  in this frame. All right. That is by definition, not easy. 

So collaborative robots lent themselves to ease of use by minimizing that interaction: you actually grab the robot by the wrist and teach it that way. They were easier to install; they ran on 110 volts. I can't tell you what a big deal this was. The robots before ran on three-phase power, so you had to get an electrician in the mix. A robot like this required an extension cord, and you could install it yourself. It was lightweight: UPS delivered it to the back dock and off you went. And they were substantially lower in total cost.

So there was Baxter, the two-armed robot that came and went the way of the dodo bird, along with its younger brother, Sawyer. They were safe to work beside; that was a lot of the premise. This was a seismic change in our industry, and it caused a great deal of consternation among us, the robot cognoscenti, right?

Because, wait a minute, we just talked about Asimov's first law and not causing harm to a human. As industrial roboticists, we interpreted it one way, and these folks come in saying, "Yes, this robot will hit you, but it won't hurt." That was the premise: this robot will hit you, but it won't hurt. What does that even mean?

It'll hit you, but it won't hurt? I mean, people are different sizes; they have different pain thresholds. What if the robot hits you in the neck instead of the shoulder? What if the robot's holding a screwdriver?

I can tell you, we tied ourselves in knots at the robot safety standards committees, trying to get through the regulatory piece of this industry. We tied ourselves in knots, but once the standards caught up with the tech, you saw a lot of these people working side by side with these new force-limited robots. I will repeat: once the standards caught up with the tech, then you saw it blossom. The tech always leads. You will get early adopters, and they will do interesting work, but it will not flourish until the standards catch up and users can deploy things in confidence, knowing that insurance and OSHA and everybody else is satisfied. So, here's an example of this lead-through-teach idea: you hold the robot by the wrist, teach it an example path, and then make adjustments as needed.

So that gives you a level set, but now I want to talk about the Cambrian explosion of robotics. If you subscribe to the evolutionary biology model, you're familiar with the notion of the Cambrian explosion. In the early to late Precambrian, there was not a lot of diversity in life. Then you came to the Cambrian period, and you had this explosion of critters: chordates, mollusks, arthropods, sponges. All sorts of things appeared.

And what characterized the Cambrian explosion? Really two things. One was rapid expansion: there was a lot of new. The other was increased specialization: organisms were fitting themselves more adequately to their environment. That's what the Cambrian explosion was.

Gill Pratt penned a paper in 2015, and that's what he said: "There's a Cambrian explosion coming to robotics." And Gill would have the right to say that.

Gill, for those of you who don't know him, was the head of robotics at DARPA, and in America, with our military-industrial complex, if you are the head of robotics at DARPA, you're effectively the chief robot geek in America. That was Gill's gig. In fact, Gill in his role at DARPA supported a lot of the work that is now finally starting to come to fruition. He supported Boston Dynamics' whole BigDog era, and I can guarantee you that without Gill Pratt, there is no Spot Mini running around the tech expo this week. And he said there is a Cambrian explosion coming to robotics.

The robotics Cambrian explosion likewise manifests itself in two ways. First, rapid expansion: all these previously underserved markets are now adopting robots. The plastics industry, small machine shops, started installing robots. I want you to notice how portable this system was: just put it on casters; it weighed less than 200 pounds. Traditional robots weren't good for that. They were much heavier, and because they accelerated faster, they were built from steel and cast aluminum and intended to be anchored to the floor, or they would walk their way right out of the building. This whole collaborative robot notion was intended to make things that much easier.

And packaging. We've looked at plastics and machine shops; this is a box-erector robot. You see all of the cartons lined up. This robot is going to pull them out and load them here, and there's a taper, and it's making boxes that, you can imagine, subsequent processes are going to fill and send along their way. This was not a market where robots would traditionally have been found.

The robot on the left is known to middle-aged frequent-flyer men. This is the ARTAS hair restoration robot, and it shows up in all the airline magazines in the seat pocket in front of you. That's a collaborative application: this robot is going to put that device, and I have no idea what it does, on your head to do follicle extraction from the back and move hair to the front, if you are so challenged. Definitely an intentional-contact, collaborative application. And hey, if a robot can now load machining centers, mills, and lathes around people, why not make a pizza? So, a lot of rapid expansion in volume.

In 2008, when the first cobots appeared, they probably deployed fewer than a hundred in that first year. By 2018, they had deployed 40,000 of them in total. To give you a sense of scale, the industrial robot market deployed over a million robots in that same period. So cobots are still in the single digits as a percentage, 40,000 against a million is about four percent, not even on the radar in terms of volume, but it's where all the growth is.

Depending on which research report you look at, some analysts project this is going to be 30% of the robot market by the year 2030. Thirty-four percent, even. It's a huge number, and it's not without merit. Why do I say that? Because we build our processes, you and I, around our capabilities, right? And collaborative robots lend themselves more to the scale at which we work. So you'll see a lot of collaborative applications moving forward.

Which leads to the other facet of the Cambrian explosion: increased specialization.

On the left is the original Unimate. Joseph Engelberger, the father of our industry, started Unimation back in the '60s, and that was the first robot. Look at its end effector: a vacuum cup on a stick. And what did you see today in Goldberg's presentation? A vacuum cup on a stick. Everybody's first robot is a pick-and-place robot with a vacuum, even now.

From that, you've got all these different geometries of robots that fall under this rubric of collaborative arms. If we again think in terms of evolutionary biology, these are morphologies: different manifestations suited to their environment. I don't want to spend a lot of time on these, but I do want to pick out a couple of them.

So, this is a different morphology. It moves away from the traditional first idea of a robot, the vertically articulated arm with six degrees of freedom, just like you from here to here (you're actually seven, but that's a different argument). This robot is a SCARA, a four-degree-of-freedom robot that's collaborative, but in the niche of wafer handling and other high-precision work.

Collaborative robots from robot companies? Everybody's heard of FANUC, the largest robot company in the world. This robot is downstairs in the tech center. So the collaborative robot idea has now finally been taken up by what the industry calls the big four, the traditional robot OEMs.

But you also have collaborative robots from companies you've never heard of. This is Siasun in China. I bring this one up because Siasun is the largest domestic Chinese supplier today, and China is now, and has been for the last several years, the largest industrial robot market globally. So Siasun will be an important player on the global scene in collaborative robots.

This one's interesting to me: a two-armed cobot. This is, I think, significant. We just said that we build our robots to try to match our capabilities, right? But every robot you've seen in the field works with one arm tied behind its back. In fact, that's the way robot engineers think: they look at your process, try to deconstruct it into bits, and ask, now, how would I do that with one arm? And that's how we build the process.

That's not the way you would have done it. You would do it more like that Shadow Hand demo downstairs, with the two robots and Shadow Hands on them. Very cool.

Bilateral manipulation is a human capability, and over the long term it will be a capability you see more broadly applied in the industrial space. That is a true statement whether it happens tomorrow, five years from now, or ten years from now: bilateral manipulation will be a thing.

And this was interesting to me. Let me back up. 

So, AMRs: autonomous mobile robots. If cobots are defined as safe to be around, well, then the mobile robot people say, "Hey, we're collaborative too." This robot here is from 6 River Systems, and it's a platform where you can change out the payload for whatever the application requires. I don't know if this one is running around some really high-end dry cleaner or an apparel store somewhere. And the other is Savioke's Relay, which is your hotel butler robot. I had never seen one of these live until this week; there are two of them sitting in the Vdara hotel next to us. I almost wanted to call down to the front desk and say, "I didn't bring a toothbrush," just so the robot would find me in the hotel and deliver one to me. That's what it does, right?

These collaborative mobile robots, when they see you coming, will either stop or plan around you; they're supposed to be safe to be around. But these folks did not jump onto the moniker of collaborative robots until recently, all right?

So, all right: cognitive collaboration. For all the benefits cobots have brought, I suggest to you that this definition of collaboration, meaning merely safe operation, is woefully inadequate. You would never say that because you and I worked beside each other on the line all day and neither of us went home bruised, we collaborated well.

We would never say that. Collaboration means, "Hey, I don't know how to do this part of my gig, and you come over and show me." It's also, "Hey, I need to go on break right now. Would you cover this for me? Even if it means our throughput drops, the work is still flowing." It also means, "Hey, this one is too heavy or too broad or too whatever; you're going to have to help me with it, and we're going to team up." Then you would say they collaborated well. Collaboration is far richer than safe operation.

I like to switch metaphors here, from evolutionary biology to developmental psychology. Early childhood development speaks of Parten's stages of play, the stages of play for children. Mildred Parten noted the progression children undergo from solitary play, off by yourself, unaware of others, to parallel play, side by side, knowing you have a friend but with no interaction. And that is where we are today with cobots. We are at the developmental stage of parallel play: safe proximity.

We don't want our children to get stuck in parallel play. We expect them to grow into cooperative play, and if they haven't done that by kindergarten or first grade, we worry we have a problem. We expect them to grow into cooperative play: working together with others, assigning roles. You do this, I'll do that. You're good at this, I'm better at that. Working together to accomplish whatever it is. Collaboration was always intended to be more cognitive than mere safe operation.

The next ten years will see the widespread adoption of true human-robot collaboration, where roles are assigned by capability. The robot is going to do what it's good at, and the people are going to do what they are good at. So the robot lifts; it brings its power and speed and muscle memory and endurance. And the people will do the decision-making and the perception; they'll see and think. They'll build contingencies between each other: okay, robot, you do what you do, and when you need help, you'll ask. That's how cognitive collaboration is ultimately going to work in this space.

And the dominant construct for this approach will be what's referred to in the literature as supervised autonomy. Again, Gill Pratt, when he led the DARPA Robotics Challenge; you saw videos again this morning of the drunk robots falling over, right? There was a lot of that at the DARPA Robotics Challenge. But underneath all the laughs was this premise of supervised autonomy: let the robot do what it can of its own volition, to the degree that it is possible, but when the robot cannot, have a human jump in. A human in the loop. Supervised autonomy: there is some degree of autonomy, and it is improved, or managed, or the edge cases get handled, through a human worker. That's what this was. The robot would amble its way over, see this valve, and say, "I don't know how this works. I don't know what this is." And from remote, someone would say, "You grab it at 10 and 2 and turn clockwise, and we're going to close the valve." That's what supervised autonomy was intended for.
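A minimal sketch of that loop, assuming a hypothetical confidence score from the perception system and a stand-in for the remote human; none of these functions come from a real product:

```python
import random

CONFIDENCE_THRESHOLD = 0.8  # below this, the robot asks for help

def perceive():
    """Stand-in for the vision system: returns (plan, confidence)."""
    conf = random.random()
    return {"grasp": "10-and-2, turn clockwise"}, conf

def ask_human(scene):
    """Human in the loop: a remote person labels what the robot can't."""
    print(f"Robot: I don't know what this is in {scene}. Requesting supervision...")
    return {"grasp": "10-and-2, turn clockwise"}  # answer from the supervisor

def supervised_autonomy_step():
    plan, confidence = perceive()
    if confidence >= CONFIDENCE_THRESHOLD:
        execute = plan                      # robot acts of its own volition
    else:
        execute = ask_human("valve scene")  # edge case: escalate
    print(f"Executing: {execute['grasp']} (confidence {confidence:.2f})")

for _ in range(3):
    supervised_autonomy_step()
```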

How does supervised autonomy show up on the factory floor? How does it show up in the distribution center? I think of it as the missing middle.

Go back here. On the left is your traditional robot again: huge force multiplier, 200 robots on the line and maybe two or three people cleaning contact tips or doing something. They are a huge force multiplier for labor, but they have very little flexibility. This one's welding a sedan; now put a truck in front of it, and it says, "I don't know what this is." Huge force multiplier, no flexibility.

Go to the opposite side: Intuitive Surgical. No force multiplier; one robot, one surgeon. But anywhere that robot can reach, the surgeon can take it: full flexibility throughout the working space of the robot. So on the left side, one person to very many robots; on the right side, one person to one robot. Both of these constructs are successful.

FANUC is often considered the Microsoft of industrial automation because of its scale; it's a huge, very successful business. On the right, Intuitive Surgical is a very successful venture. So it's not that full autonomy or no autonomy is bad. It's just that the middle is where there hasn't been a solution. You need one person to several or many robots. Not one to 200, but one to 20? One to 25? 50? Whatever the number is, someone who can deal with the edge cases and the variability inherent in the process. And this is especially true in supply chain applications.
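As a back-of-the-envelope illustration of why the middle ratio works, suppose, hypothetically, that each robot needs help on a small fraction of picks and a human assist takes a few seconds; all the rates below are invented for illustration only:

```python
# Back-of-the-envelope sketch of the "missing middle": one crew chief
# serving an exception queue fed by many mostly-autonomous robots.
# All rates are hypothetical, for illustration only.

picks_per_robot_per_hour = 400
exception_rate = 0.02                  # robot asks for help on 2% of picks
seconds_per_human_resolution = 8
human_capacity_per_hour = 3600 / seconds_per_human_resolution   # 450 assists

exceptions_per_robot = picks_per_robot_per_hour * exception_rate  # 8 per hour
robots_per_crew_chief = human_capacity_per_hour / exceptions_per_robot

print(f"One crew chief can cover ~{robots_per_crew_chief:.0f} robots")
# -> roughly 1:50, squarely between 1:1 surgery and 1:200 spot welding
```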

Supply chain is inherently variable. If I'm going to build the Camry on the line, there are, I don't know, four to six different trim levels of a Camry, and I'm going to build lots and lots of them over time. I can more or less program my way out of that problem. But in a supply chain application, I don't know what's coming down the line next, right now, or next week, and certainly not any further out than that. The horizon of variability is much shorter in supply chain.

Now, in the interest of full disclosure, this is what Plus One does. We are bringing supervised autonomy and 3D vision to industrial applications. And that's frankly why we're called Plus One: because we believe this connected network of robots in the field becomes much more capable through the addition of a real person.

"Robots work, people rule" is kind of our hashtag, because we acknowledge that people are better than robots at everything associated with supply chain.

But how do you get there? What are the tools required to get to one-to-many? Well, supervised autonomy can't happen without situational awareness.

The ability of the crew chief (that's what we call the person who manages these robots, this robot wrangler) to understand what the robot is experiencing is crucial. So you need sensing: 2D and 3D vision, force sensing, tactile, and so on. It's all of these elements that lead us to a force multiplier that's meaningful.

Luckily, this is happening already. Look at vision systems on industrial and collaborative robots over time. Yes, it's up and to the right, but I want you to notice that the growth of vision in industrial robots is outpaced, in absolute numbers, by its adoption in collaborative robots. People understand that collaborative robots are going to be in applications where sensing is more crucial.

Same thing with end effectors. Between industrial robots and cobots, the growth in force sensing at the end of the wrist is more significant in collaborative robot applications.

This is a sensory modality that I find intriguing. This is Veo, a company out of Boston, and their premise, which is sound, is: hey, every robot could be a collaborative robot if we just had better sensing. Safety-rated sensing.

This is fundamentally different, and has a much higher bar, than sensing for robot guidance or object recognition. Now you're using the sensors themselves as your safeguard. You'll notice the sensors mounted up above, looking down at Clara and the robot and its tool, recognizing what is supposed to be where. It knows that because it's tied into the robot controls, but it is also looking for things that ought not be there.

So it does all this occlusion analysis and classification and tracking and forward planning. But note: the sensors are fixed in space. They're watching the scene like tireless sentries, ensuring the safety of all humans.
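A minimal sketch of that fixed-sensor safeguarding idea, assuming the sensors yield a list of 3D points not explained by the robot, its tool, or known parts; the threshold and names are illustrative, not Veo's actual system:

```python
import math

# Minimal sketch of fixed-sensor safeguarding: overhead sensors watch the
# cell, subtract what the controller says should be there, and stop the
# robot if anything unexpected gets too close. Values are illustrative.

ROBOT_BASE = (0.0, 0.0, 0.0)
PROTECTIVE_DISTANCE_M = 1.2   # stand-in for a safety-rated separation limit

def monitor(unexpected_points):
    """unexpected_points: scene points not explained by robot/tool/parts."""
    for pt in unexpected_points:
        if math.dist(pt, ROBOT_BASE) < PROTECTIVE_DISTANCE_M:
            return "STOP"       # something (someone) is inside the zone
    return "RUN_FULL_SPEED"

print(monitor([(3.0, 2.0, 1.0)]))   # scene clear -> RUN_FULL_SPEED
print(monitor([(0.5, 0.3, 1.0)]))   # intrusion   -> STOP
```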

That's generally the approach we take to safety. We had cages, physical ones, keeping people out. Now we have programmable laser scanners and other sensors that are, I call them, people meters. Where are the people? Look for the people. See people? Stop the robot. The sensors are fixed.

Which is why I find Amazon's tech vest so compelling. You saw this downstairs. If you're not familiar with the way the fulfillment center works: you have the Kiva field, I guess we don't call it that anymore, the Amazon Robotics field, but whatever it is, the field of mobile robots carrying shelves to the people. The field is where all the robots are, and they bring the pods, the goods, to the perimeter where the people are. That's how you take your goods off and fulfill your orders.

When the inevitable happens in the field, something drops out of a pod or a robot breaks down, it requires human intervention; somebody's got to go in there. Previously, that meant: okay, I've got to shut down the Kiva field, or at least some substantial portion of it, so that I can go in, do what needs doing, and get back out. Everything must be static, because a human is in that space, and the only way to ensure their safety is to make sure nothing is moving in the area.

But what if the human were the sensor? That's what I find intriguing. They call it the tech vest; she wears it, and when she enters the scene... I would never call this the tech vest, that is way too dull. I would call it the Moses vest, because when you walk into the field, the sea parts for me. Right?

And that's the idea: you are the safe zone. Instead of the sensors being sentries looking out for the people, the people are their own portable safe zone. I think that's pretty cool. This person-as-sensor idea might turn out to find utility in lots of instances. And now that Amazon has revealed this whole Pegasus initiative, there are going to be more of these human-mobile-robot interactions. So: human as sensor.
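A sketch of what human-as-sensor could look like in software, under the assumption that the vest periodically broadcasts the wearer's position and each drive unit keeps a bubble around it; the radius and message format are invented:

```python
# Sketch of "human as sensor": instead of fixed sentries watching for
# people, the vest broadcasts the wearer's position and every mobile
# robot keeps a bubble around it. Radius and messages are invented.

KEEP_OUT_RADIUS_M = 2.5

class DriveUnit:
    def __init__(self, name, position):
        self.name, self.position = name, position

    def on_vest_beacon(self, vest_position):
        dx = self.position[0] - vest_position[0]
        dy = self.position[1] - vest_position[1]
        if (dx * dx + dy * dy) ** 0.5 < KEEP_OUT_RADIUS_M:
            print(f"{self.name}: inside bubble, stopping / replanning")
        else:
            print(f"{self.name}: clear, continuing")

field = [DriveUnit("unit-17", (1.0, 1.0)), DriveUnit("unit-42", (9.0, 9.0))]
vest = (0.0, 0.0)                      # the Moses vest enters the field
for robot in field:
    robot.on_vest_beacon(vest)         # the sea parts around the wearer
```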

So there is a lot happening in vision and in safety, 2D and 3D, all of it based on visual perception of some kind. Why? Because that's our go-to modality. As humans, vision is where we go first, so it dominates our understanding of our environment, and it is of course the first thing we try to replicate in our devices. But if you're like me, when the alarm clock goes off in the morning, it is not your vision system that comes into play. It's some amount of muscle memory, right? I know it's somewhere over here on the nightstand. Then it's tactile until I get it in my hand, and then it's force when I click the button.

Muscle memory for the vague recollection, tactile feedback for the sensing, force for the actuation. Ten years from now, collaborative applications will all incorporate multiple sensing modalities at the edge.

You can't sense over the cloud. You must sense at the edge. You can pass results over the cloud, and you can interrupt over the cloud, but the sensing, the compute, will be loaded at the edge.

More importantly, any of these sensors is less powerful individually than they are in concert. It is the sensor fusion that becomes the force multiplier. You can get away with lower-resolution vision if you have it tied to tactile sensing. This is always true.
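A minimal sketch of that fusion claim, with made-up thresholds and readings: coarse vision alone is ambiguous, but combined with a cheap tactile check it yields a confident decision, and only the final result, not the raw sensing, crosses the network:

```python
# Sketch of sensor fusion at the edge: a coarse vision estimate plus a
# cheap tactile check beats either alone. All values are made up; only
# the final result would be passed over the network, never raw frames.

def vision_estimate():
    return {"object_present": True, "confidence": 0.6}   # low-res camera

def tactile_contact():
    return True                                          # gripper pad fired

def fused_grasp_decision():
    v = vision_estimate()
    if v["confidence"] > 0.9:
        return "GRASP_CONFIRMED"          # vision alone was enough
    if v["object_present"] and tactile_contact():
        return "GRASP_CONFIRMED"          # fusion rescues weak vision
    return "RETRY"

result = fused_grasp_decision()
print(f"report upstream: {result}")   # the result crosses the cloud, not the sensing
```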

All right. We talked about the teach pendant being this really crummy interface that we would like to go away, and how folks have instead moved to lead-through teaching and such. I believe natural language processing will be a thing in industrial automation in the next few years, because Alexa has brought a new simplicity to consumer interaction. But we are way early in its application to industrial work. I went through the smart-home thing and I liked it, but it was one I/O at a time, right? "Shades up." I say we're at the DOS prompt of natural language processing for industrial automation.

This is Austin in our lab back in Texas, and he's trying to develop Alexa skills at a reasonable level of abstraction. Right now it's all kind of command-line stuff in ROS. What we really want to say is, "Sort all the shampoo bottles into B and A." As natural language processing and understanding continues to evolve, it will find a role in industrial applications.
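A sketch of the abstraction gap being described: one utterance parsed into an intent, then expanded into the low-level primitives the robot actually runs. The intent schema and primitives are hypothetical, not an actual Alexa skill:

```python
# Sketch of the abstraction gap: an utterance parsed into an intent,
# then expanded into the low-level primitives the robot actually runs.
# The intent schema and primitives are hypothetical, not a real skill.

def parse_utterance(text):
    """Stand-in for natural language understanding: returns an intent."""
    return {"intent": "sort", "object_class": "shampoo bottle",
            "destination": "lane A"}

def expand_to_primitives(intent):
    """Compile one high-level intent into many robot-level commands."""
    return [f"DETECT {intent['object_class']}",
            "PLAN_GRASP",
            "PICK",
            f"PLACE {intent['destination']}"]

intent = parse_utterance("sort all the shampoo bottles")
for cmd in expand_to_primitives(intent):
    print(cmd)
```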

So: Amazon Mechanical Turk. The Amazonians in the room know all about this, but if you're not familiar with Amazon Mechanical Turk, a primer. Early on, Amazon had a big problem: they would have webpages that said "ladies' flats, yellow, size nine," but the picture was ladies' flats, size nine, in red. So there was an army of people having to make sure the text matched the images, and Amazon said, "We're not doing this anymore." They farmed it out, effectively crowdsourced it. They created this platform called Amazon Mechanical Turk, where they would post: here's work that needs to be done. Somebody out in the world would say, "I'll do that for ten cents a page," and they'd say, "Great, you get to do the work." You do the work, you submit it, somebody reviews it, thumbs up or thumbs down, there's a rating scale, and payments were seamless. It just worked great.

It has since expanded; it's no longer just Amazon tasks on AMT. Anybody can post a task on AMT that they want to crowdsource.

Here's what I find so interesting about that. You know what Amazon Mechanical Turk calls the work? HITs: Human Intelligence Tasks. And since Amazon was the first to start this, it's a tacit admission by somebody at Amazon that, for this type of work, either the algorithm can't do it or, for whatever reason, it isn't cost-effective. It was just better to have people do it.

So you have these requesters who have tasks they need completed. They put them onto the platform, the Mechanical Turk marketplace, and these workers, who want to earn money, who want to do something, pick them up. It's all very sanitary, but it's all knowledge work.
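The flow can be sketched generically (these are illustrative structures, not the real MTurk API):

```python
# Sketch of the HIT marketplace flow: requester posts, worker accepts,
# result is reviewed, payment clears. Generic structures, not the real API.

marketplace = []                       # the "Mechanical Turk marketplace"

def post_hit(requester, task, reward):
    hit = {"requester": requester, "task": task, "reward": reward,
           "worker": None, "result": None, "approved": None}
    marketplace.append(hit)
    return hit

def work(hit, worker, answer):
    hit["worker"], hit["result"] = worker, answer

def review(hit, thumbs_up):
    hit["approved"] = thumbs_up
    if thumbs_up:
        print(f"pay {hit['worker']} {hit['reward']} for '{hit['task']}'")

hit = post_hit("Amazon", "does this photo match 'ladies flats, yellow'?", "$0.10")
work(hit, "worker-123", "no, the shoes in the photo are red")
review(hit, thumbs_up=True)
```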

The request, the task, the results: they all live on a computer or a server somewhere, because it's all ones and zeros. The task was, I need this webpage edited correctly. It went out to the world, and it came back as a webpage that was right.

But what if you could extend Amazon Mechanical Turk to the physical side of the problem? What if you could have it interact with physical objects in the world? You could solve a lot of business problems. I ask the question like this: what would you do if you could see a thousand miles away and your arm were that long? What would you do?

I want a Gala apple right now? All right, I'll reach over to Washington State and pull it off the tree right now, because I have that capacity. This notion of Mechanical Turk was inspiring in that way: you could extend the reach of your labor force.

I had one of our customers come to me and say, "Here's the thing, Erik: our sort of work, distribution and warehouses and fulfillment centers, we all end up congregating together because of access to infrastructure or zoning requirements or what have you, and we burn up the labor for 40 miles around. How do I use technology to extend the reach of my labor pool?" Heck, even if you could just go another 60 miles, two counties over, you would find a lot more labor. How do you get them to come to work? Are you going to put them on a bus? No, you give them technology.

This idea of Amazon Mechanical Turk extended to physical devices means I can have the arm local to the problem. It's got some degree of autonomy; it's picking and placing and doing its thing, and every once in a while it raises its hand and says, "Can you help me right now?" You extend the reach of your labor force.

Another one of our users said, "Hey, my problem is that a lot of our work is at anti-social working hours. When do a lot of sorts happen? Overnight. That's why you have to pay a shift differential; people don't want to come to work at that hour." But, the customer said, "It's always first shift somewhere."

When you can extend the reach of labor, you can have a follow-the-sun workforce. Combine this with supervised autonomy, and now you have a force multiplier over distance: one person managing robots across four time zones. That's how you do this, right? You extend capabilities through technology.
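A sketch of that routing idea, with illustrative time zones and shift hours: send a robot's help request to whichever crew chief is on day shift right now.

```python
from datetime import datetime, timezone, timedelta

# Sketch of a follow-the-sun workforce: route a robot's help request to
# whichever crew chief is on day shift right now. Cities, UTC offsets,
# and shift hours are illustrative.

CREW_CHIEFS = {"San Antonio": -6, "Lisbon": 0, "Manila": 8}  # UTC offsets

def local_hour(utc_now, offset_hours):
    return (utc_now + timedelta(hours=offset_hours)).hour

def route_help_request(utc_now):
    for city, offset in CREW_CHIEFS.items():
        if 8 <= local_hour(utc_now, offset) < 17:   # on day shift
            return city
    return "on-call"

now = datetime.now(timezone.utc)
print(f"help request routed to crew chief in: {route_help_request(now)}")
```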

So where do I see robots in the next ten years? I believe this notion of collaboration will continue to increase in richness. It will be much more interactive. We will move from parallel play to cooperative play, at scale, and we will do so over distance, securely, and solve a lot of real problems in manufacturing and in distribution.

And with that, I thank you. I'll take any questions; we have a few minutes left.