An exoskeleton to remote-control a robot | André Schiele | TEDxRheinMain

Translator: Nadine Hennig
Reviewer: Denise RQ

Now, in the true spirit of TED, I will not go to space,
but I will go through hell because I know we prepared
a live demonstration of a teleoperation of a robotic system that is located in the Netherlands
more than 500 kilometers away. So let’s jointly have a look at what the not-too-distant future
of robotics will look like. When you’re thinking about robotics,
you might think of humanoid robots being among us, doing all the tasks
we don’t want to do, or we shouldn’t do. You might think of Google cars driving in cities, autonomously,
among normal people. Or you might think of
robotic systems monitoring us, leaving you with some amount of fear. Other people will feel the excitement
of what robotics can be in the decades to come. In 2025, if Moore’s Law is correct
– and it seems like it is – computing power will reach
the point where we can have the computing capability of the human
brain on normal personal computers. Now what does this mean? Watson won Jeopardy back in 2011, and other computers, like Deep Blue, beat Kasparov as early as 1997. But actually, humans are
more than their brains. I don’t think Watson walked
out of the Jeopardy room, went to the grocery store,
bought some pasta, and cooked it for his friends. So what’s this discrepancy? And if we look at what is going to happen in order to replicate all the human behavior, the flexibility,
and the inspiration that we have, in 2050, probably, we’ll have these robotic systems
that you see walking among us, connected through high-speed
Internet lines, communication lines, to cluster supercomputers, being able to drive them
in different modes, to change tasks, to give them
other tasks, to be flexible, potentially almost
seeming like a human being. But hey, 2050 is quite far away. I’ll probably be sitting in a wheelchair
wearing such a thing by that time. But let’s have a look
at what we can do now, and what these robotic systems
that already exist can do in order to serve mankind, in order to create
economic and societal value. We found this article
in the Wall Street Journal back in March, reporting that for Fukushima – which was the biggest nuclear disaster
we have ever seen – robots couldn’t enter. Japan was shocked. Their robotic systems
were not able to go there. Now they have reached a stage in which they can do monitoring inside the plant, and these guys, who are close to the robotic systems monitoring them, are still receiving a radiation dose. Look at their interface. It’s a computer. It’s computer screens. Look at the gloves. And now, please, how many
of you have programmed a robot? Well, that’s a couple or five, maybe. This guy has to learn it, and with his gloves,
almost like astronaut gloves, he has to enter commands
and do some tasks there. This is very difficult. Remember the Deepwater Horizon oil spill? It took 87 days to fix
the hole on the ocean floor, and some people say
it’s still leaking there. It was the world’s largest oil spill ever. It cost BP on the order of 40 billion US dollars, and it was also a massive environmental disaster. This made me believe
we should use our robotic technology, the robots that you saw before,
potentially, in a different way. We should make the best use of the combination of human and robotic capabilities, and this made me think about the mechanisms of understanding human-robot interaction. The goal here is to use a robot as simply as we use an iPhone. Everyone here can use
a smartphone, use apps, communicate, actually do things that are
hard to program in a very simple fashion. So let’s have a look at
what the mechanisms are in order to do human-like operations
with robotic systems remotely, where we have a robot on one side
in Fukushima, or underwater, and a human somewhere,
separated over great distances, reaching the globe, reaching
to the Moon, reaching to Mars. And there are actually
three simple things that really matter: you have to transmit vision
from the robot to the human operator, you have to be able to feel
what the robot feels. Those of you who have tried
lacing up your shoes under the table have no view of what you are doing, but you still know what you are doing
with your fingers and your hands. How can we make optimum usage of this? And how can we communicate in an efficient
way with the robotic systems that we have? Imagine the social value
that we could provide. We could send those robots to asteroids
to mine precious metals, rare earths, to acquire resources we need here on
Earth to make our life more sustainable. We could fix other problems. And those of you who were
afraid of robotics in the future probably thought, “Mmm, robotics. That’s unemployment.” How about this? We moved from hand-crafted
industry to mass production. Robots came onto the factory floors,
people became unemployed, the economy grew, but now we are on the verge
of a new economic scheme, which is called ‘mass customization’: we have 3D printing, we have the fashion industry that wants
people to have customized T-shirts. And actually, 5% of the world’s
fashion market in 2020 is predicted to be mass customized. Imagine having robots that can work
side by side with people in order to do tasks in a flexible way, using the best of robotics and the best of humans in one system. This can be an enabling technology. We could one day fix such problems in a couple of minutes, thereby achieving a safer exploitation of our resources and protecting nature. So it’s worthwhile thinking in detail about what such a system would look like. If you look at the properties
of human vision and robotic vision, in fact, human vision is extremely good. I’m seeing all of you there, just as you can see the picture of our lab, taken just yesterday, on the left-hand side. It’s a big mess. We were assembling that guy behind me, but actually you can all understand
what is happening in this scene. And you see the robotic task board
on the right-hand side, and you’re immediately able
to understand what you’re seeing. There’s a gripper that is distinct from the robot, from the task board,
from switches and knobs. This is pretty good. If you look at what a robot does, vision
starts to be a more problematic area. Let’s have a look at a Google car
driving through a city. Actually, how this works is: a Google car has different sensors – obviously, it doesn’t have eyes – and essentially, it compares databases with databases. And here you can see it looks at a city and a road as a collection of points, and wherever there is a point where there shouldn’t be a point, there must be something moving that the car wants to avoid. That’s a very different mechanism from how vision and perception work in humans and robots.
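As a rough illustration of that point-comparison idea, here is a minimal sketch that flags points in a live scan which have no counterpart in a stored map and treats them as potential moving obstacles. The map, the live scan, and the distance threshold below are all made up for illustration:

```python
import numpy as np

def find_moving_points(live_points, map_points, threshold=0.5):
    """Return live points with no nearby counterpart in the stored map.

    live_points, map_points: (N, 3) arrays of x, y, z positions in metres.
    threshold: how far (in metres) a live point may be from the nearest
               map point before it is treated as something that moved in.
    """
    moving = []
    for p in live_points:
        # Distance from this live point to every point in the prior map
        distances = np.linalg.norm(map_points - p, axis=1)
        if distances.min() > threshold:
            moving.append(p)  # nothing in the map nearby: likely an obstacle
    return np.array(moving)

# Made-up data: a straight stretch of road in the map, plus a pedestrian
# that shows up only in the live scan.
map_points = np.array([[x, 0.0, 0.0] for x in np.linspace(0.0, 10.0, 50)])
live_points = np.vstack([map_points, [[5.0, 1.5, 1.0]]])
print(find_moving_points(live_points, map_points))  # -> the pedestrian point
```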
The idea here is to actually merge the two in the most optimum way. So there is a third potential field we can use, which is beyond human vision, where we can actually merge
robotic vision with the human one. In the case of controlling
a robot system from the Space Station, you get a very poor-resolution
real-time video. There is nothing you can do about this, so astronauts use features
of virtual reality perceived through robotic sensors
like forces, like cameras, in order to actually perceive what is going on in the scene and visualize forces.
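One way to picture that: forces measured by the robot’s sensors are turned into a simple graphical overlay on the poor video stream, so the operator still “sees” contact. A purely hypothetical mapping from a measured force to an overlay arrow might look like this:

```python
def force_overlay(force_newton, max_force=20.0):
    """Map a measured contact force (N) to an arrow length and colour
    to draw on top of the video stream. Thresholds are made up."""
    magnitude = min(abs(force_newton), max_force)
    length_px = int(10 + 90 * magnitude / max_force)  # 10-100 pixels
    colour = "green" if magnitude < 0.5 * max_force else "red"
    return length_px, colour

print(force_overlay(3.0))   # light contact -> short green arrow
print(force_overlay(18.0))  # heavy contact -> long red arrow
```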
How about the sense of touch? There must be some advantages of robotic systems. And this is Robonaut 2, developed by a friend and colleague,
Rob Ambrose, and his people, now on the Space Station. And Robonaut can continue
doing this forever. He takes these weights, he holds them
to a certain point, and he stops. Well, I tried beating him.
I didn’t succeed. Humans, in contrast, have
this intricate understanding of what is going on in their bodies. When we touch an object, be it
a jellyfish, or a brick of concrete, we instantly know the difference, because our perception
is influenced by our experiences. So if we move to the human side of things, we have again this flexible element
we want to merge with the robotic one in order to have
a truly intuitive interface that can give us haptic information
back from the robotic system. So in any teleoperation
– this is what we call this field – we have a robot and a human. The robot can transmit forces
back to the human from its environment, such that the human understands
what the robot does, and thereby can control that robotic system very naturally. That’s the simple part. You can add other forces
from the robotic system as a whole – from cameras, from other sensors – in order to actually guide
the human operator in performing higher-precision tasks.
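To make the loop concrete, here is a minimal sketch of one way such a bilateral coupling can be written: the operator’s hand motion drives the robot like a spring-damper, the contact force measured at the robot is sent back to be rendered on the operator’s arm, and any extra guidance forces are simply added on top. The gains and interfaces are hypothetical, not the actual system described in the talk:

```python
# Hypothetical coupling gains: stiffness (N/m) and damping (N·s/m)
KP, KD = 400.0, 20.0

def teleop_step(master_pos, master_vel, slave_pos, slave_vel,
                contact_force, guidance_force=0.0):
    """One step of a simple position-forward / force-feedback coupling.

    Returns the force command for the remote robot (slave) and the force
    to render on the operator's arm through the exoskeleton (master).
    """
    # Pull the robot toward the operator's hand like a spring-damper
    slave_cmd = KP * (master_pos - slave_pos) + KD * (master_vel - slave_vel)

    # Reflect what the robot feels, plus any extra guidance, to the operator
    master_feedback = contact_force + guidance_force
    return slave_cmd, master_feedback

# Example: robot lags 2 cm behind the hand while pressing with 2.5 N
print(teleop_step(0.10, 0.0, 0.08, 0.0, contact_force=2.5))
```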
In terms of communication, distance is the challenge. Distance means time delay, and time delay, for the control people among you, means it’s getting really difficult. With time delay,
it is hard to control systems. If we communicate in a human way, we have many options:
gestures, speech, experience. Robotics does this
in a very different way. Robotics thinks in terms
of zeros and ones. And we actually can teach a robotic system in a very intuitive way with exoskeletons. So here you see some video footage
from many years back, when we actually controlled some very simple
17 degree-of-freedom robots, but this shows how nicely you can just mimic your motion
to the robotic system, and in fact, let the robot
be present for you in a place where you don’t want to be.
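What the footage shows is essentially direct motion retargeting: measured exoskeleton joint angles are scaled and clamped into joint commands for the robot. A minimal sketch of that kind of mapping, with joint names, scale factors, and limits invented for illustration:

```python
# Hypothetical mapping from exoskeleton joints to robot joints:
# (robot joint name, scale factor applied to the measured angle)
JOINT_MAP = {
    "shoulder_pitch": ("robot_joint_1", 1.0),
    "shoulder_roll":  ("robot_joint_2", 1.0),
    "elbow_flex":     ("robot_joint_3", -1.0),  # axis is inverted on the robot
    "wrist_rotate":   ("robot_joint_4", 0.5),   # robot wrist has half the range
}

def retarget(exo_angles_rad, limit=2.5):
    """Turn measured exoskeleton joint angles (rad) into robot joint commands."""
    commands = {}
    for exo_joint, angle in exo_angles_rad.items():
        robot_joint, scale = JOINT_MAP[exo_joint]
        commands[robot_joint] = max(-limit, min(limit, scale * angle))
    return commands

print(retarget({"shoulder_pitch": 0.3, "shoulder_roll": -0.1,
                "elbow_flex": 1.2, "wrist_rotate": 0.8}))
```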
At ESA, and in space flight in general, we actually want to bring people to Mars and back on a safe-return trip. Now, how are you going to do this? Well, we roboticists thought, “OK, let’s give the astronauts a hand,
let them get to Mars and back. But actually, let’s put them in orbit.” They will control the robotic systems
from the orbit on the surface to do tasks that are human-like,
projecting their presence on the surface, doing the geology,
doing the additional tasks, and then return
from orbit back to Earth, which saves mass, fuel, maybe lives. Since we have this great,
orbiting station around Earth called ISS, we decided to do a couple
of experiments on it in order to control
robotic systems here on Earth, in order to rehearse what it takes
to really do intuitive operations, to do facility assembly, to do
inspection tasks remotely from space. And the first hardware, Haptics-1, was delivered two weeks ago on ATV-5. It will be the first experiment ever in which haptic interactions take place in space. It’s a small joystick and a tablet. And this really is the precursor
of the next thing to come, which is an exoskeleton
in the EXO1 Project. And what you see here
is a prototype of that exoskeleton. And we envision
that with an exoskeleton arm, a tablet PC, and nothing else, you can actually control any type
of robotic system that already exists, and that will exist
within the next 10 to 20 years. Let’s give you a real demo of this. And of course,
we want to demonstrate a system that is fully autonomous
and doesn’t have any connections to it. Every human operator should be
able to use that in a simple way. So I’m doing my best to get in there, strapping myself in, being
an enhanced human being, sort of. And, of course, I could remain here,
standing here, but in fact, the actual fun of it
is taking this thing off for a ride. So let’s share my screen with your screen so that you can follow
what I’m doing here. We’re looking at the lab
in the Netherlands, in Noordwijk. At the moment, we’re looking at a KUKA
robotic system that we have there, and I’m going to teleoperate it, just to show you some of these interfaces. I didn’t calibrate this system.
In fact, not even during the break. We just switched it on. I get my arm in.
I’m looking at the display. And I will maneuver my arm
into the position of the robot arm. And you see, the exoskeleton
actually guides me there. I’m enabling here,
and the network connection being good, I can start controlling this robot here. And as you can see, I can move
the gripper in the video stream and then I’m using
these augmented reality features here in order to get force information
back from the display to me. Let’s try this again, “Enabling.” It’s forcing my arm up
to this reference position here, and for this live demonstration,
the robot doesn’t move. Oh, it does. Well, we’re actually experiencing
a lot of time delay here as we’re going through a cellular network and as you can see,
there are no cables attached to me. So this is really not staged. I could use this system
anywhere in the world, in the desert, in the Sahara, controlling
any system anywhere else in the world. So, OK, let’s see. It’s better.
It depends a bit on where I’m standing. There’s a huge force now on me
because the system moved, I hadn’t been attentive. I’ll try to actually get to a point where
the network reception’s a bit better. (Laughter) And you can see my arm
is being driven away with quite a significant force here. Well, this is difficult. How hard can it be to control a robot in Fukushima, where you receive force feedback (Laughter) with some aliens interacting in your scene? (Laughter) But, of course,
we’ll try to make this happen. And again, I’m touching the object here. Now this is pretty bad.
Did you all switch off your cell phones? OK, I’ll relax. Now we’re having probably something like
two or three seconds of time delay, which really makes this task difficult. And it would bring you
already to the Moon and back, in terms of the distance and data transmission.
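That comparison checks out with a quick back-of-the-envelope calculation: a radio signal needs roughly 1.3 seconds to reach the Moon, so a command-and-feedback round trip is on the order of 2.6 seconds, comparable to the delay in this demo.

```python
# Rough round-trip signal delay to the Moon (both values are approximate)
MOON_DISTANCE_KM = 384_400      # average Earth-Moon distance
SPEED_OF_LIGHT_KM_S = 299_792

one_way_s = MOON_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
print(f"one way:    {one_way_s:.2f} s")      # ~1.28 s
print(f"round trip: {2 * one_way_s:.2f} s")  # ~2.56 s
```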
Let me move close to this object here. And you see, I’m enabling the gripper now. But it just takes too long for data to come back. But I’m grabbing the pin; I extracted that pin, and what we are doing here is haptic guidance, which allows us to actually
guide the robotic system to points of interest
when we’re nearby.
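A minimal sketch of one common way to implement such guidance: once the tool tip comes within some radius of a point of interest, a virtual spring pulls it (and, through force feedback, the operator’s hand) toward the target. The positions, radius, and stiffness below are purely illustrative:

```python
import numpy as np

def guidance_force(tool_pos, target_pos, radius=0.05, stiffness=150.0):
    """Attractive "virtual fixture" force toward a nearby point of interest.

    Only active when the tool tip is within `radius` metres of the target,
    so the operator stays fully in control far away from it.
    """
    offset = np.asarray(target_pos, dtype=float) - np.asarray(tool_pos, dtype=float)
    distance = np.linalg.norm(offset)
    if distance == 0.0 or distance > radius:
        return np.zeros(3)          # too far away (or already there): no pull
    return stiffness * offset       # spring pull toward the target

# Made-up example: tool tip 2 cm away from the insertion hole
print(guidance_force([0.50, 0.10, 0.30], [0.52, 0.10, 0.30]))
```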
And I should really be paying more attention to this. And then I can try to actually do
an insertion task into that hole, which is actually very difficult; it’s about a one-millimeter tolerance. I’m feeling the force here. And in you go. (Applause) Let’s try to get the system back. And in fact, I want you to remember one thing. With robotics, the sky is not the limit. Thank you very much. (Applause)

6 Replies to “An exoskeleton to remote-control a robot | André Schiele | TEDxRheinMain”

  1. Remarkable ! As he was guiding through his speech, I could imagine countless scenarios where exoskeletons can make a huge difference.

  2. Man… I was sweating along with you while watching this demo, bro. Many times I've had my robot working fine in the lab, but when it came to the presentation, sometimes, for one reason or another, the system failed. It's like the old cartoon with the singing frog.
