3D Printing Stainless Steel with Giant Robot Arms

This is a teach pendant, and traditionally, it’s how you teach a robot arm to do something. It’s awkward. There are two problems with programming robots the traditional way. One is that you need to tell the robot exactly what you want it to do, point to point. The other is that the robot assumes nothing changes. You manually drive the tip of the robot to where you know you need it to go, hit ‘record’ to remember that point, then drive it over to the other place and hit ‘record’ again. When you press ‘play’, it goes from one point to the next, and it can do that over, and over, and over again. But if there is some error, or something moves, the robot has no way of knowing. It will still go exactly where you told it to go the first time, and that can result in a crash, or breaking something, or injuring a person. It’s just like any computer: it will do exactly what you ask it to do, even if that’s not what you meant!
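To make that record-and-replay model concrete, here’s a minimal Python sketch of it. The `move_to` and `get_tip_position` callables are hypothetical stand-ins for a real robot controller; the point is that playback is completely blind to anything that changed after teaching.

```python
# Minimal sketch of teach-pendant programming: record points, then
# replay them verbatim. The controller hooks are hypothetical stand-ins.

waypoints = []

def record(get_tip_position):
    """Store wherever the operator has driven the tip right now."""
    waypoints.append(get_tip_position())

def play(move_to):
    """Replay the taught points in order, over and over if you like.
    Nothing here checks whether the workpiece (or a person) has moved
    since the points were recorded."""
    for point in waypoints:
        move_to(point)  # goes exactly where it was told, no feedback

# Usage with dummy stand-ins for the robot interface:
if __name__ == "__main__":
    positions = iter([(0.0, 0.0, 0.5), (0.3, 0.1, 0.2)])
    record(lambda: next(positions))   # operator drives the tip, hits 'record'
    record(lambda: next(positions))   # ...and again at the second point
    play(lambda p: print("moving to", p))
```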
In the early days of computers, writing code generally meant using what’s called ‘assembly language’. You would have to literally tell the processor to move things from one memory address to another, or to tweak some values, or to change specific pixels on the screen. Like giving instructions to this arm, you would be telling the chip every single thing that it had to do. As time moved on, programmers got higher-level languages: write some words and some brackets, and the system works out the boring details for you. If you’ve ever worked with a spreadsheet, that counts; writing a formula in Excel is enough. In the end, sure, it all gets converted to assembly language, but most programmers never even have to think about it.
The team here at Autodesk’s Pier 9 in San Francisco are working on more or less the same thing, just for making physical things. The biggest robot that we have in the lab, which we call ‘Ash,’ is essentially a big, robotic 3D printer that prints in stainless steel. We have a MIG welder that deposits stainless steel onto a metal plate. By activating the MIG welder while moving the robot, we build up a weld bead, and then we build beads on top of each other. You would typically use welding to stitch two pieces of metal together. What we’re doing is using the same technology, but stacking the metal up in order to produce a separate piece of finished material.
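As a rough picture of how ‘beads on top of beads’ becomes a toolpath, here is a simplified, hypothetical sketch: the same outline is traced layer after layer, with the height stepped up by one bead height each time. This is illustrative only, not Autodesk’s actual software.

```python
# Simplified wire-arc additive toolpath sketch: trace one outline per
# layer, stepping z up by a bead height each time. Real systems also
# vary travel speed, wire feed, and cooling between layers.

BEAD_HEIGHT_MM = 2.0  # assumed height of one deposited weld bead

def layered_path(outline_xy, num_layers, bead_height=BEAD_HEIGHT_MM):
    """Yield (x, y, z) targets: the outline repeated at increasing z."""
    for layer in range(num_layers):
        z = layer * bead_height
        for (x, y) in outline_xy:
            yield (x, y, z)

# Usage: a 100 mm square outline, stacked five beads high.
square = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
for target in layered_path(square, num_layers=5):
    print(target)  # in reality: move the robot here with the arc on
```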
Now, ultimately, the result is the same: the motors in the robot arm are told, “move this way.” It’s just that the human’s original instructions are a bit more abstract, and they’re filtered through another couple of layers. Using a teach pendant would be pretty impractical for these complex curves. Normal 3D printers do basically 2½D: they go to an X and a Y, and then up and down. This robot can point in various directions, so it needs to know not only where it is, but how it should point when it gets there. We give it a piece of geometry, and the software figures out the instruction set for the robot that will result in the print we want.
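One way to picture the difference in code: a 2½D printer target is just (x, y, z), while a target for this arm is a full pose, position plus tool orientation. A minimal illustrative representation follows; the field layout is hypothetical, and real controllers typically use quaternions or a specific Euler-angle convention.

```python
# A 6-axis arm target is a full pose: where the tool tip is AND which
# way it points. This layout is illustrative, not any controller's
# real format.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float   # tool-tip position, millimetres
    y: float
    z: float
    rx: float  # tool orientation as roll/pitch/yaw, degrees
    ry: float  # (real controllers often use quaternions instead)
    rz: float

# The same point on the part, approached straight down versus with the
# welder tilted 30 degrees, e.g. to follow an overhanging curve:
straight_down = Pose(100.0, 50.0, 20.0, 0.0, 180.0, 0.0)
tilted        = Pose(100.0, 50.0, 20.0, 0.0, 150.0, 0.0)
print(straight_down, tilted, sep="\n")
```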
One of the things we’re developing is a closed-loop feedback system, where the robot is actually aware of what it’s doing. It keeps track of the quality of the print as it goes, and before a print fails completely, it will actually correct, reprogramming itself in real time in order to avoid an actual failure.
What we’re working on is a vision system: between vision and a couple of other sensors, we can monitor and supervise the status of the prints. If the welder runs out of wire, or if something else happens, the robot would traditionally have no way of knowing. Now, the robot can know that. Not all the robot’s movements are being directly controlled by a person.
If it goes wrong, it’s a bit different from just having an error message pop up. This arm here weighs two tonnes, and when it wants to, it moves fast. The only reason I’m allowed this close is that I’m literally holding the emergency stop button in my hand, just in case. Our robot currently has no way of sensing us. What currently happens is that when a person who shouldn’t be near a robot gets close to it, you shut everything down. It would be great if, in future, the robot could know that a person is there, with vision or some other kind of sensing, and then actively avoid that person and continue doing what it’s doing.
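That shift, from ‘anyone too close means shut everything down’ to ‘detect the person, avoid them, and keep working’, is essentially a change of safety policy. Here’s a toy contrast of the two, where `person_distance_m` stands in for a hypothetical sensor reading; real industrial safety interlocks are certified hardware, not application code like this.

```python
# Toy contrast of two safety policies around a heavy robot arm.
# person_distance_m is a hypothetical sensor reading; real safety
# systems are certified hardware, not application code.

SAFETY_STOP_M = 2.0   # today's policy: anyone this close -> full stop
SLOW_ZONE_M   = 4.0   # future policy: slow down, then plan around them

def policy_today(person_distance_m):
    return "EMERGENCY_STOP" if person_distance_m < SAFETY_STOP_M else "RUN"

def policy_future(person_distance_m):
    if person_distance_m < SAFETY_STOP_M:
        return "REPLAN_PATH_AROUND_PERSON"  # avoid, keep printing
    if person_distance_m < SLOW_ZONE_M:
        return "SLOW_DOWN"
    return "RUN"

for d in [5.0, 3.0, 1.0]:
    print(f"{d} m -> today: {policy_today(d):14} | future: {policy_future(d)}")
```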
Self-driving cars, hospital heart monitors, basically everything electronic: ultimately, the code in it is just 1s and 0s. The more levels of abstraction between the programmer and the bare metal, the easier it is to write code, but when something goes wrong, fixing it might be out of your hands.

Thank you very much to all the team at Autodesk, and to their applied research lab here at Pier 9 in San Francisco. Go and check out their YouTube channel, or pull down the description for some links to see the amazing projects they’re working on.
