It’s great to be here at ProMat. I’m Erik Nieves, CEO of Plus One Robotics. Some of you know us: we’re a software company making 3D and AI-powered vision that gives robots the hand-eye coordination they need to perform meaningful work in warehouses, distribution centers, courier facilities, et cetera.
For the past 40 years, robots have been used to perform repetitive tasks in factories and manufacturing facilities the world over. And they’ve been very successful at that, but that’s all they could do: repetitive tasks. When a robot encountered either an object it wasn’t familiar with or a change in the process, a worker would need to enter the robot cell, address the issue, fix the robot, and get it back on track.
But e-com and fulfillment and distribution are predicated on variability: the variability of the items passing through. There isn’t one size of brown box or padded mailer. It’s an endless array of package shapes and sizes and items and SKUs that move through a warehouse, with thousands of new SKUs added daily. The technology needed to image and process this high variability didn’t exist 10 years ago. But the rate of technological innovation has made logistics robotics more accessible, and now companies beyond the big shippers and e-com are all accelerating plans to automate, because warehouse operators are facing a critical juncture in which their growth is outstripping the availability of the labor around them.
So today I want to talk to you about this path from traditional robotics to this notion of semi-autonomous robots, and the advancements made possible by these improvements in the technology. To do that, I want to start with a quick overview of industrial robotics.
Prior to founding Plus One, I spent twenty-five years at Yaskawa, one of the largest industrial robot companies globally, so I’ve seen firsthand the evolution of this space over time. And when I say industrial robots, this is probably what you’re thinking: a typical spot welding line in Detroit, a hundred, 200 robots on a line, all spot welding whatever you drove in here today. This still remains one of the most important applications for industrial robotics, but I can tell you that every one of these robots is blind. Their benefit is in their endurance, and in their ability to move a lot of mass quickly and with precision. But a line like that can only handle a few different option codes as they roll down the line, maybe the sedan, the two-door coupe, and the hatchback. If you put a pickup truck frame in front of it, it won’t know what to do, because traditional robots are programmed for a task or a small set of tasks, and that’s all they can do.

It was Gill Pratt, when he was still at DARPA, who wrote a paper proclaiming that “there is a Cambrian explosion coming to robotics,” and that has been borne out. Think about the Cambrian explosion in evolutionary biology: you had a sort of prime mover, and once the conditions were right, all of a sudden you had this great proliferation of creatures, increased specialization, lots of different types of morphologies. That’s what the Cambrian explosion means in biology. But the same can be said of robotics. You see the prime mover here, Engelberger’s first robot, the Unimate, a pick-and-place robot from some 60 years ago. From that seed, once the environment was right and suited, you had this plethora of different robots and morphologies that grew out of it. This is what we mean by the Cambrian explosion of robotics.
And it kind of breaks into a couple of major branches. One of them is mobile robots, and you see a couple of examples of those here: robots delivering the toothbrush you forgot at the hotel, robots in retail facilities, distribution centers and such, even mobile robots for electronics assembly. But the bulk of the robots in the explosion are arms. And today you see all types of different arms. You’ve got six-axis robots that are the traditional type, you’ve got kinematically redundant seven-degree-of-freedom robots, you’ve got a whole new domestic set of robot manufacturers in China now, large robots, small robots, and safe robots.
And that’s where I want to spend a little bit of time today: this notion of collaborative robots. Because one of the main things that has grown out of the Cambrian explosion in manipulation is this notion of robots that are safe to be near, where proximity is possible between the robot and the person.
As we talked about with traditional robotics, the robots would be on one side of the guarding and the people would be on the other side of the fence. It wasn’t that those spot welding robots were trying to break out of the zoo; it was that you needed to keep people from inadvertently entering their space, because they were so powerful and so fast they could cause real harm. But collaborative robotics is this idea of a force-limited robot, a robot that by its nature is not capable of injuring a person in and of itself.
So there are a couple of ways you can achieve force-limited robots. One is intrinsically, in the mechanics of the arm itself: basically, you make the robot compliant. The other is in the controller, where you can limit the amount of force or torque that the motors are exerting and thereby make the robot safe.
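For the engineers in the room, the controller-side approach boils down to something very simple in principle. This is a minimal sketch, with purely illustrative names and numbers, not any vendor’s actual controller API:

```python
# Sketch of controller-side force limiting: clamp each commanded joint
# torque to a safety ceiling so the arm cannot exert more force than a
# person can safely absorb. The limit value is hypothetical.

SAFE_TORQUE_LIMIT_NM = 30.0  # illustrative per-joint ceiling, in N*m

def clamp_torques(commanded_nm):
    """Limit each joint torque command to +/- SAFE_TORQUE_LIMIT_NM."""
    return [max(-SAFE_TORQUE_LIMIT_NM, min(SAFE_TORQUE_LIMIT_NM, t))
            for t in commanded_nm]
```

Real controllers monitor force, speed, and position together, but the core idea is the same: the software guarantees a ceiling on what the motors can do.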
And one of the benefits of this technology is the focus on ease of use. You’ll notice that this operator is teaching the robot by actually grabbing it and moving it through the set of poses he wants the robot to perform. In the end, these collaborative robots were built on a different premise; they worked under different constraints. Meaning: what if I didn’t have three-phase or 480-volt power available in the facility? What kind of robot could I build if I just had a 110-volt wall outlet? Well, it would obviously be a smaller robot. It would be incapable of lugging a 150-pound spot welding gun, but 5, 10, 20 pounds? Sure. And that’s a lot of applications, because we, as people, build our processes around our capabilities, and we’re at about a 20-pound limit if we’re going to do this day in and day out, hour after hour. So what you saw then is lots of these collaborative robots appearing in the market. And you’ll notice in this image the robots are so light that they’re in fact portable. In this case, they’ve put them on a frame on casters so you can move the robot from station to station as the workflow dictates. That was brand new. And it allowed these robots to be seen in lots of new applications where the fence was gone, and you had a robot possibly handing a piece of equipment or a part to its operator for inspection, as we see here.
Well, once the fence was gone, once you dropped that safeguard, all of a sudden you see robots in lots of different industries. The ARTAS robot on your left is a collaborative robot after a fashion. This robot is actually used to harvest hair follicles from one part of your scalp and replant them elsewhere, where they might be better served. Obviously a collaborative application, because this robot is in direct contact with your head. And if you’re going to use a robot of the collaborative variety to load and unload a machining center, there’s not much difference between that and having it load a pizza oven. So all of a sudden we start seeing robots in all kinds of explorations within food preparation.
But this is the one that calls my attention. You can see that this collaborative robot is in a warehouse, and it has found a task it can do: it’s erecting the boxes, the cartons. You see the magazine of cardboard, and this robot’s job is to pull a carton out and push it through some process to erect the box, and ultimately this will go through a taper and sealer. They found a repetitive task within the warehouse, and so this collaborative robot, where the operator can be right next to it safely, has found utility. But look in the background. You see all those shelves? Look at the variety of items on that shelf. Different-sized boxes, bags, items, et cetera. How do you deal with the variability there? Because that’s really what the warehouse needs. And so collaborative robotics, in the sense of safe physical proximity alone, is not enough to deal with the variability that the tasks of the warehouse demand.
And that’s why I talk about cognitive collaboration. Cognitive collaboration. I’m going to switch gears for a minute from evolutionary biology to developmental psychology. When children are growing, one of the steps in their development is parallel play. You have children, they find a space, and she’s coloring over here and her friend is coloring over there. And as long as you stay over there, we’re going to get along just fine. I do what I do, you do what you do. There’s not a lot of interaction happening in parallel play. Well, I would argue this is the state of robotics and collaboration in the market today. Generally speaking, you have a person and you have a robot, and the robot does a thing and the person does a thing, but there’s not a lot of interaction between them. But just as with children, they grow from parallel play to cooperative play. Cooperative play means there’s an assigning of roles: you’ll do this and I’ll do that, and when you’ve done this, I’ll do this. There’s communication and there’s interaction. When you have cooperative play, it’s always much more cognitive; there are decisions being made. And that’s what the warehouse needs.
Some of you may recall this robot. Speaking of Gill Pratt, this is from the DARPA challenge, a challenge to see what robots were capable of. And so you had all of these humanoid robots with lots of technology. This is a very sophisticated robot, and you can see the suite of sensors it has: cameras, lidar, et cetera. They had an obstacle course and a set of tasks the robot needed to complete. So the robot did what it could autonomously, and maybe it ambled its way to this valve and its sensors saw it, but it didn’t understand what to do with it. When that happened, a remote operator could take control of the robot and say, that’s a valve, and it works like this. And whether they teleoperated the robot, or commanded it and the robot executed locally, in either case there was a human in the loop that stepped in when the autonomy failed. That’s the notion of supervised autonomy. Supervised autonomy is the technical term in the literature for what I’m calling cognitive collaboration: let the robot do what it’s able to do of its own volition, but have a human step in for the sticky bits. And that’s what you’re going to need if you’re going to deal with the great exploding warehouse.
The warehouse of today finds itself in a circumstance where the growth of e-commerce is burying its ability to keep up. We’re all subscribing now, be it to razor blades, vitamin supplements, cat food. There’s more and more e-commerce coming online, and the warehouses have to keep up with all of this growth, and it looks kind of like this. This person’s job is to take all of these parcels and packages and put them on the conveyor behind them. And it’s a torrent, a never-ending flow of packages into the system. So what do you need when you’re dealing with that much variability? Well, the first thing you’re going to need is a vision system.
Just like that robot in the DARPA challenge, you have to have a set of sensors that can give some situational awareness and autonomy to the system. And that’s what Plus One built. Plus One builds high-performance vision software for logistics robots. And this is what it looks like. PickOne is the vision system: a set of cameras, sensors, and software, 2D, 3D, and AI, interpreting the scene in front of the robot and, through a set of heuristics, determining the next best thing to go pick.
Obviously, PickOne does the bulk of the heavy lifting here. It’s the autonomy in the supervised-autonomy framework. The robot can be working 95, 99, 99 and a half percent autonomously using that PickOne system. The PickOne system is, as you’d expect, 3D sensors and high-performance edge compute, including enterprise GPUs. But then there’s this other component, the supervisory element of the human in the loop, which we call Yonder. And we subscribe to this notion at Plus One because, after 30 years as an industrial roboticist, I am convinced that people are better than robots at everything that matters here, and that’s especially true of 3D vision and decision-making. So when 99.5% isn’t good enough, when the robot says, “I don’t understand,” that’s when it raises its hand over the cloud, and a human sees exactly what that robot is looking at. And the human says, “Yep, I see the problem. I can see why that’s so confusing. I would probably pick up this one.” You command the robot from remote, and the robot goes back to work. This notion of 3D vision plus human-in-the-loop is the reason our company is called Plus One Robotics: through the addition of one human being into this control stack, you greatly increase the reliability and fault tolerance of the overall system.
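If you wanted to sketch that control flow in code, it might look like this. To be clear, this is a hedged illustration: the function names, data shapes, and threshold are all hypothetical, not the actual PickOne or Yonder interfaces.

```python
# Minimal sketch of the supervised-autonomy decision: pick autonomously
# when the vision system is confident, escalate to a human otherwise.
# All names and the threshold value are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.995  # illustrative cutoff for autonomous picks

def next_pick(scene, ask_human):
    """Return a pick command: autonomous when confident, else escalated."""
    candidates = scene["candidates"]  # items scored by the vision system
    best = max(candidates, key=lambda c: c["confidence"])
    if best["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"item": best["id"], "source": "autonomous"}
    # The robot "raises its hand": a remote Crew Chief chooses the pick.
    choice = ask_human(scene)
    return {"item": choice, "source": "crew_chief"}
```

The point of the structure is that the human is only consulted on the rare ambiguous scene, which is what lets one person supervise many robots.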
And what that means is you can have applications such as these. In this case, you see the robot picking and placing parcels, and it’s doing that of its own volition for most of the cycles. But then once in a while this happens: the robot sends a request to a Crew Chief, which is what we call the people on this end of the wire, and the Crew Chief says, “Oh yeah, I’d probably pick up that one.” As you see here, the scene is either very cluttered, or it’s occluded, and the robot is able to deal with that because the human was able to command it.
Here’s another example: these robots are performing order fulfillment. You see there are multiple robots in this facility, and as the totes go by the robots, our vision system is commanding the robot to get the items out of the tote and put them in the shippable. But again, every once in a while, when you have hundreds of thousands of SKUs, the system will get confused. And this is what it looks like for a Crew Chief dealing with that situation. You see that robot requested support with a certain pick, and she was able to tell it, “I’d use this tool and I’d pick that item,” and the robot said, “Thank you very much,” and went back to work.
And finally, this is another implementation of hand-eye coordination. You see it everywhere in warehouses: depalletizing, taking boxes and bags and trays off of pallets and inducting them onto conveyors, et cetera. Same situation. For the bulk of the activity the robot is working of its own volition, but right there, the robot is asking for help. And all of a sudden the robot goes back to work. Why? Because of the intervention of the Crew Chief. PickOne for autonomy, Yonder for human-in-the-loop support.
When I think about what this enables, it really does fill the middle portion of the autonomy spectrum. On one side of the spectrum, you have Detroit. Motor City, tons of robots working on a line, a high force multiplier. It may be 200 robots on a respot line, with maybe just two or three people walking around cleaning welding tips. The system has a tremendous multiplier on labor, but it has next to zero flexibility. All the way on the opposite side of the spectrum, Intuitive Surgical and the da Vinci: no force multiplier. One robot, one surgeon. But anywhere that robot can reach, and any tool that robot has available to it, that surgeon can take full advantage of. No force multiplier, but effectively infinite flexibility. Supervised autonomy is in the middle. It isn’t 200 robots to one, but it may be 50. And so you have a Crew Chief who is able to manage a number of robots while still having the flexibility needed to deal with the tasks at hand. And that’s the notion of supervised autonomy.
In a system such as this, robotics is always a systems problem. You’re going to have software, you’re going to have compute, you’re going to have the robot itself, grippers, cameras, et cetera. So some of the enabling tech has had to come together over the last five to 10 years to make this a reality. One piece is on the sensing side: whether it’s lidar, structured light, or time of flight, all of those sensors have come to bear on this problem over the last decade. The other, obviously, is the improvement in edge compute. Again, we use high-performance industrial PCs with NVIDIA enterprise GPUs to parse the point cloud and run the AI model at the edge. But of course there’s the human-in-the-loop piece, and that’s a cloud connection; over the last 10 years, we’ve been able to incorporate these cloud environments into the overall solution. And I can tell you that just as users, warehouse operators, have their preferences for robot suppliers, whether it’s FANUC or ABB or KUKA or Yaskawa or Universal, they likewise have preferences on their cloud provider. So whether it’s AWS or Google Cloud or Azure, all of these have been brought to bear to make cognitive collaboration possible. And it’s cognitive collaboration that unlocks all of these different applications to support the great exploding warehouse.
Thanks for your interest. Robots work, people rule.
Supply chains are innovating. E-comm keeps driving volume. Customers continue demanding shorter and shorter delivery times. As a result, we’re all looking for ways to manage what I call the great exploding warehouse. If your job in the warehouse is to depalletize off of a 48-by-40 skid onto a conveyor, your workday is measured in tons. These are hard jobs to fill, and even harder jobs in which to retain people.
So Plus One set out to help. We applied our 3D vision and AI tech to this application. PickOne is our vision software, and it makes unloading pallets in warehouses, fulfillment centers, and DCs a reality. A full pallet is placed in the pick location, and PickOne immediately begins to image the items to determine size, shape, and location. The software then determines which ones are pickable, assigns each a specific confidence level, and commands the robot which item to go fetch. The reason you need robots and vision to do this is that pallet mix has changed. Operations are requiring mixed pallets, rainbow pallets. And once you have different heights in the same layer, traditional automation breaks down.
This is all solved by PickOne, which can manage the individual cases. The software’s so robust that the robot can easily induct items off a pallet of randomly oriented cases, cartons, trays, even bags, right onto a conveyor. The robots are able to easily unload a variety of pallet layers, including those with overhang.
We’ve designed it to work with any robot solution at scale. With PickOne, the robot tackles new SKUs with ease, and performance improves over time. What do we deliver? We deliver reliable, consistent throughput. We even ensure the safety of your workers by moving them out of these repetitive, injury-causing tasks and promoting them to higher-value work.
We do that by using Yonder, an application that gives us the opportunity to employ a Crew Chief: a person who can remotely manage dozens of robots at a time from a workstation. I want to go back to this idea of confidence levels. A confidence level is an algorithmic measure of how confident the software is that the robot will be able to pick up and place an item correctly. High confidence levels result in automated picks. But if all the packages or items show low confidence levels, PickOne instead sends a Yonder request, so that the Crew Chief, this person, can step in and handle the exception remotely. And because Yonder stores the Crew Chiefs’ responses, PickOne’s systems become smarter over time.
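That exception path, automate the confident picks, escalate the rest, and keep the human’s answer for later learning, can be sketched in a few lines. Again, this is an illustration under assumed names and a hypothetical threshold, not the real PickOne or Yonder API:

```python
# Sketch of the exception path: high-confidence picks are automated;
# low-confidence scenes go to a remote Crew Chief, and the response is
# logged so future models can learn from it. Names are illustrative.

PICK_THRESHOLD = 0.9  # hypothetical confidence cutoff

def handle_scene(picks, send_to_crew_chief, response_log):
    """Automate high-confidence picks; escalate and log the rest."""
    confident = [p for p in picks if p["confidence"] >= PICK_THRESHOLD]
    if confident:
        return max(confident, key=lambda p: p["confidence"])
    answer = send_to_crew_chief(picks)  # remote human decision
    response_log.append({"picks": picks, "answer": answer})  # future training data
    return answer
```

The logged responses are what make the “smarter over time” claim concrete: every human intervention becomes a labeled example of a scene the vision system found hard.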
I want you to think about robotics like this: imagine a spectrum of autonomy. On the left is your robot in Detroit, a spot welding arm, 200 robots on a line, very few people having to maintain them. All the way on the right, surgical robots: one robot, one surgeon. On the left you have a huge force multiplier but no flexibility; all it can do is what it was programmed to do. On the right, one surgeon, one robot: no force multiplier, but a lot of flexibility.
Here’s the thing, both of those are successful business models, but what’s missing, and where the real action is, is the missing middle of supervised autonomy.
One person responsible for many robots, who still has the flexibility that the teleoperation model afforded. We call this person responsible for those robots a Crew Chief. The Crew Chief manages and maintains those robots remotely.
Having that person at the center is the reason we say “robots work, people rule.”
You know, robots doing pick and place, you can see that everywhere. But really, what Yonder is about is the tie between the robot and the human being. Now, the reason it’s running at this rate is that this robot is smart enough to know that I am within reach; if I step away, it will run faster and faster and faster. Human-robot collaboration is where it’s at. It’s the ability of me and the robot to coexist, and to do so in a safe manner. I’m going to step out again, and I’m going to interrupt it. See that?
So human-robot collaboration comes in a couple of flavors. One is the safety piece. The other is the intelligence piece. This robot can pick and place of its own volition most of the time. But every once in a while, it’s going to get confused, and when it does, that’s when the human in the loop comes in, the Crew Chief as we call them. And it will pick it, and I’m going to step out of the way, and it will run faster as it places it.
So this is an example of the robot asking him for help. He gives it, and the robot takes it and does its thing. So this combination of robot plus human brain is really what gives our system the ability to run all the time.