Neural Networks: How Do Robots Teach Themselves?

Hi everyone! Welcome to the new Seeker Elements set. We're still going to be covering all of the mind-bogglingly awesome discoveries in the science and tech world; we've just got a fun new way to do it. And what better way to kick things off than with an update from the field of machine learning and robotics.

If you've ever seen a baby kick and squirm
involuntarily or watched a small child clumsily try to do a complex physical task, you know that it takes humans years to develop fine motor skills. So it makes sense that intuitive, fluid motions are hard to get robots to do on their own… up until now. And get this: the robots are teaching themselves.

A new robotic demonstration from a company called OpenAI uses something called machine learning, specifically a neural network, to allow a robotic hand to perform a complicated series of independent object manipulations. That means the motions you're seeing here? The robot is doing that by itself, without any input from or control by a human, and without any DIRECT programming to perform each action.

But first: what is machine learning? Machine learning is a subset of artificial
intelligence. It's getting computers to perform tasks without being explicitly programmed to do them. Take one of the most advanced robots we have today: a robot that helps us perform surgeries. This is a traditionally programmed robot that has to be explicitly told what to do every time. The programmer has to write "if this happens, the machine will do that" for every step of that robot's action. For tasks where that would be prohibitively time-intensive, machine learning algorithms can be used instead. These are algorithms that you can expose to vast quantities of data, from which they can 'learn' certain criteria and identify patterns.

So how is this applied in something like the
robotic hand from OpenAI? In this situation, the main datasets are all the different positions of the hand and the block. But the combination of all of these possibilities gives us way too many options for the robot to practice in real life so that it can 'learn' each one. So instead, the researchers used a massive amount of computing power to simulate training the hand: they designed a virtual space in which the robot could experience myriad hand and block positions at an accelerated pace inside a computer model. The team estimates they exposed the robotic hand to about 100 years of trial-and-error experience in just 50 hours of simulation.

In addition to letting the hand 'practice', the researchers also randomized some aspects
of the simulation: variables like the size and slipperiness of the block, and even the force of gravity. While the simulation couldn't reproduce everything the robot would encounter when handling the cube in the 'real world', these variables made it more likely that the simulated practice would be useful in real-world conditions.

Because of all the variables involved, the research team used a kind of machine learning algorithm with a 'memory', based loosely on the way human memory works, making this particular algorithm a neural network: a kind of machine learning loosely based on the human brain and its logic structures.

They then transferred all of this 'learned'
information to the real-life robotic hand, which is equipped with a set of cameras that can estimate the object's position and orientation. The end result? Simply ask the hand to manipulate the object in a certain way, say, to reorient the block with its purple side up, and it'll do it. You ask for an outcome, and the robot can provide it with no further input from you, because it taught itself the series of motions it needed to get there. As you can see*, this robot developed motions you may recognize from your own hands, just by learning which motions were most efficient and effective at moving the block without dropping it, using only input from visual stimuli and joint sensors on the hand.

Other experts in the field of robotics and
machine learning state that while this example is exciting and comparatively elegant, it's not necessarily new. It's also still quite limited: an object of convenient size, in a hand that's facing up. So it's not dealing with as many challenges as a machine-learning robot being asked to complete a task that would be useful in, say, an assembly line.

So while robotics is still catching up to human capabilities, it is making strides. This work from OpenAI shows us that developing machine learning algorithms and neural networks could help us make more precise and dexterous robots that not only help us with the things we don't want to do or can't do, but teach themselves how to do it.

For more on AI, subscribe to Seeker, and check out this video here about how AI is being used in the real world to monitor your data. And fun fact: some machine learning algorithms are also teaching themselves as they go. As they complete the task they taught themselves to do, they're learning from their mistakes and evolving their own algorithm to make it more accurate, with some small guidance from human programmers. Thanks for watching Seeker.
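The two ingredients described above, trial-and-error learning in simulation plus domain randomization, can be sketched in miniature. To be clear, this is not OpenAI's actual system (which trains a neural-network policy across thousands of CPU cores); it's a hypothetical toy task, a one-parameter "push the block to a target" world, invented here just to show how a behavior can be learned rather than explicitly programmed:

```python
import random

def make_randomized_env(rng):
    """Domain randomization: each simulated episode gets slightly different
    physics (toy stand-ins for block slipperiness and gravity), so the
    learned behavior doesn't overfit to one exact setting."""
    return {
        "friction": rng.uniform(0.7, 1.3),
        "gravity": rng.uniform(0.8, 1.2),
        "target": 10.0,  # where we want the block to end up
    }

def rollout(push_force, env):
    """One trial: push the block and measure how far from the target it
    lands. Lower error is better."""
    distance = push_force * env["friction"] / env["gravity"]
    return abs(env["target"] - distance)

def train(episodes=2000, seed=0):
    """Trial-and-error learning: keep a candidate push force, perturb it,
    and keep the perturbation whenever it scores better on average across
    many randomized worlds. Nothing here is hand-coded as
    'if this happens, do that'."""
    rng = random.Random(seed)
    best_force, best_err = 1.0, float("inf")
    for _ in range(episodes):
        candidate = best_force + rng.uniform(-0.5, 0.5)
        # Average over several randomized worlds, like the simulated practice.
        err = sum(rollout(candidate, make_randomized_env(rng))
                  for _ in range(20)) / 20
        if err < best_err:
            best_force, best_err = candidate, err
    return best_force, best_err

force, err = train()
print(f"learned push force: {force:.2f}, average miss: {err:.2f}")
```

The learned force settles near the value that works best on average across the randomized worlds, even though no single world is exactly like another. The real system replaces this single tuned number with a neural network's millions of parameters, and the hill-climbing step with reinforcement learning, but the shape of the process is the same.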

100 Replies to “Neural Networks: How Do Robots Teach Themselves?”

  1. OpenAI made a computer that can play a video game called DOTA and tried to defeat professional players. OpenAI played millions of years of DOTA and defeated average players, but couldn't defeat professionals

  2. This doesn't make any sense; using cheap computing power, it's simply brute force.
    Ants can pick up small things and take them to another place, without using 5000 CPUs

    FYI I'm a software engineer working on something which doesn't require this many CPUs but does all tasks. Wait for it

  3. Interesting…🤔 Is it possible for the AI in the near future to solve unsolved crimes from centuries ago?

  4. They are already learning how to destroy all humans without harming nature. Their hands will become ball crunchers.

  5. Interesting stuff. However, I don't see how changing the background and calling it elements makes this any different from all the other videos.

  6. Open AI? Soon we will see robots mastering running, jumping, climbing, swimming, flying, I see the Terminator movie coming to life…scary

  7. A super underwhelming report on one of the best papers of the last 5 years. For Robotics manipulation …best paper in the entire history of the attempt.

    If there were a Nobel for Robotics advances this paper and its authors should win it…from here on in…everything changes much faster.


  8. just imagine or picture this…

    Everyone thinks that robots are gonna take over the world with ai and shit, but i think that maybe robots turn out to be like us. perhaps they try to coexist with us without being hostile. and live like is in science fiction. we kinda need each other. the thing is how will humans react to the fact that they dont own this planet anymore. it’s either all or nothing. perhaps they can go out and live somewhere else in the solar system then everyone will be happy. maybe they smart enough to think reasonably and with commonsense. idk im on my 3rd dab lol but really why will everything has to turn out bad ? 4th dab im out lol

  9. I've watched so many things on machine learning that I find it boring when the "machine learning" explanation comes up everytime.

  10. University of London and King’s College London Research indicates toddlers love touchscreens

  11. pleaaaase, just relaaax. speak and express normally. You are not a kindergarten teacher, I feel like you're insulting my IQ constantly. i like Seeker so much but every time i see you on the thumbnail i don't want to click even if the title is extremely interesting.

  12. It is just like you programing to write a program nothing special

    One thing is sure, even after 1000 years robot still has no consciousness

  13. In the end, the best robots will be biological, or some sort of hybrid. Even nano-bots will likely be DNA-reprogrammed bacteria or viruses in most cases. The push for purely nonbiological machines will be seen as a fad in human history.

  14. When you realize that at least 1 person in the comment section will be talking about how we teach robots to do things and does not realize that they are manipulating us. Actually, there is a disadvantage to having robots do surgery: if there is a blackout, the robot will not be able to do any surgery or help, unless it has its own power source.

  15. Robots vs humans: if robots are teaching themselves, then in the future humans will have no control over robots, because they can program themselves 😏.

  16. I would have wanted the video to be enlightening about the idea instead of story telling. It just ended up being a fancy video which gave not much information about the logic behind "how" it worked. I enjoy most of the videos by Seeker but this one really disappointed me. I am thrilled about the idea but just story telling is not going to satisfy my hunger for being enlightened about the idea.

  17. kind of i can see singularity happening before i die, maybe… even i might get to benefit from it, that in 40 years or so.

  18. the next big thing would be if it could generate a simulation by itself based on the real world, effectively giving itself a limited imagination of sorts.

  19. Come on, most of us know what machine learning is. Its practicably in all of our electronic devices. Please more in depth content

  20. This is the question that really matters: Are scientists creating this AI technology too free Mankind from labor , or to substitute humans which will eventually mean the death of millions of people ?

  21. The way she talks about this topic demonstrates she has no idea what she talking about. "Something called machin learniin."

  22. You could be inside a simulation narrating interactively it is fantastic when this quality narration is shared with the best UI
