Real-Time Multisensor People Tracking for Mobile Robots

Welcome. My name is Linda and I will guide you through this video. In the next five minutes, we present an easy-to-install and easy-to-use real-time multisensor people tracker for mobile robots. It is based on the Robot Operating System (ROS Indigo) and Ubuntu 14.04. Our robot runs on two embedded PCs, one for navigation and one for image processing; no dedicated graphics cards are required. All visualisations are shown in rviz and have been recorded using screen capture.

We use a laser-based leg detector, shown as a green sphere, and a depth-based upper-body detector, shown as a green cube. The bounding box of the depth template for the upper body is superimposed on the image in red. The red figure represents the tracked person, estimated with a Kalman filter using NNJPDA (nearest-neighbour joint probabilistic data association) for data association.

A person walking around the robot is first picked up by the leg detector; as soon as he or she enters the field of view of the depth camera, the two detections are joined. If one of the detections is missing, the other is used to compensate. The resulting trajectory is generated automatically and can be saved using rosbag or a database. We can clearly see where tracking began and where it stopped when the person left the field of view.

Our approach uses the robot's odometry via ROS tf and is therefore able to track dynamic objects as well as static ones. For the prediction, we use a constant velocity model, which assumes mostly linear movement; this has been shown to fit human behaviour quite well. The observations from the two detectors are incorporated using a Cartesian observation model, and the noise can be defined separately for each detector. This makes the tracker very modular: it allows adding as many detectors as necessary via an easy-to-configure parameter file.

The tracker offers a choice between an Extended and an Unscented Kalman filter, both using the prediction model mentioned above. The filter can be selected at start-up, which allows for easy configuration. Once the detection is lost, the tracker keeps updating the position of the person only as long as the uncertainty does not exceed a predefined threshold.

While driving around, the robot picks up a few false positives from the upper-body and leg detectors. The tracker prevents these from being identified as humans by restricting the creation of new tracks to detections that persist over a certain number of frames.

To show the capabilities of the tracker in a crowded environment, we sent five people to walk around the robot in irregular patterns. Given the constant occlusion, especially when people stand close together, the tracker is pushed to its limits. Nevertheless, even in this scenario, tracking runs in real time and remains quite reliable. Most false tracks are caused by insufficiently precise noise parameters for the Cartesian observation model; future work investigating off-the-shelf detectors within this tracking framework will improve on this issue.

We hope this demonstration has shown the benefits of such a modular tracking framework for Human-Robot Spatial Interaction. Please visit the websites linked below if you want to try it for yourself. All software shown is freely available and installable via Debian packages.

http://lcas.lincoln.ac.uk/cdondrup
http://www.strands-project.eu
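
To make the pieces above concrete, a few illustrative sketches follow; they are minimal examples written for this post, not the package's actual implementation. First, tracking in an odometry-fixed frame: the sketch below transforms a detection into the odom frame via ROS tf, assuming detections arrive as geometry_msgs/PointStamped (the frame and node names are illustrative).

```python
import rospy
import tf
from geometry_msgs.msg import PointStamped


def to_odom(listener, detection):
    """Transform a detection (a PointStamped) from its sensor frame into
    the odometry frame, so tracks live in a frame that does not move
    with the robot."""
    listener.waitForTransform("odom", detection.header.frame_id,
                              detection.header.stamp, rospy.Duration(1.0))
    return listener.transformPoint("odom", detection)


if __name__ == "__main__":
    rospy.init_node("detection_to_odom")  # hypothetical node name
    listener = tf.TransformListener()
    # ... subscribe to a detector topic and call to_odom() per detection
```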
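Next, the prediction and update steps. The sketch below implements a constant velocity Kalman filter with a Cartesian observation model in plain NumPy; the state layout and noise values are assumptions chosen for illustration. Note how each detector passes its own observation noise, which is what lets the leg and upper-body detections be weighted differently.

```python
import numpy as np


def cv_predict(x, P, dt, q=0.5):
    """Constant-velocity prediction; the state is [px, py, vx, vy]."""
    F = np.array([[1., 0., dt, 0.],
                  [0., 1., 0., dt],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    Q = q * np.diag([dt**3 / 3., dt**3 / 3., dt, dt])  # simple process noise
    return F @ x, F @ P @ F.T + Q


def cartesian_update(x, P, z, r):
    """Update with a Cartesian (x, y) observation; r is the
    detector-specific noise, so each detector is weighted differently."""
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])
    R = np.diag([r, r])
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P


# One cycle: predict, then fuse a leg and an upper-body detection.
x, P = np.zeros(4), np.eye(4)
x, P = cv_predict(x, P, dt=0.1)
x, P = cartesian_update(x, P, np.array([0.10, 0.00]), r=0.2)  # legs
x, P = cartesian_update(x, P, np.array([0.12, 0.01]), r=0.5)  # upper body
```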
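The modularity comes from the parameter file. The snippet below sketches what such a file might look like, with per-detector topics and noise plus the start-up switch between the Extended and Unscented filter; the keys, topic names, and values are illustrative, not the package's exact schema.

```python
import yaml  # PyYAML

# Hypothetical parameter file for the tracker; keys and topic names are
# illustrative, not the package's exact schema.
EXAMPLE_CONFIG = """
people_tracker:
  filter_type: UKF                     # or EKF, selected at start-up
  detectors:
    leg_detector:
      topic: /leg_detector/detections
      cartesian_noise_params: {x: 0.2, y: 0.2}
    upper_body_detector:
      topic: /upper_body_detector/centres
      cartesian_noise_params: {x: 0.5, y: 0.5}
"""

config = yaml.safe_load(EXAMPLE_CONFIG)
for name, detector in config["people_tracker"]["detectors"].items():
    # Adding another detector is just another entry in this mapping.
    print(name, detector["topic"], detector["cartesian_noise_params"])
```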
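Finally, the track lifecycle rules described above, creating tracks only from persistent detections and dropping coasting tracks once their uncertainty grows too large, reduce to two small predicates; the thresholds here are hypothetical.

```python
import numpy as np


def should_create_track(consecutive_frames, min_frames=5):
    """Promote detections to a new track only after they have persisted
    over several consecutive frames (threshold is hypothetical)."""
    return consecutive_frames >= min_frames


def should_drop_track(P, max_position_variance=1.0):
    """Drop a coasting track once the positional uncertainty (trace of
    the x/y covariance block) exceeds a predefined threshold."""
    return np.trace(P[:2, :2]) > max_position_variance
```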
