Localization in a known map with Erle-Rover

Uses a previously built map to let the Erle-Rover localize itself and move around autonomously

Source code

This tutorial shows how to use the Erle-Rover with a known map. It assumes that you have a map of the robot's environment, such as the one generated in the previous tutorial.

For this purpose we are going to use amcl, a probabilistic localization system for a robot moving in 2D. It is a particle-filter-based localization algorithm that estimates the pose of the robot against a known map; yes, AMCL requires a map to start with. In a nutshell, AMCL tries to compensate for the drift in the odometry information by estimating the robot's pose with respect to the static map.

It requires the nav_msgs/OccupancyGrid (map), sensor_msgs/LaserScan (laser scan source), tf/tfMessage (pose of the robot) and geometry_msgs/PoseWithCovarianceStamped (initial pose of the robot) topics to be published.

A particle filter is initialized with a very high number of particles spanning the entire state space. As additional measurements arrive, you predict and update the particles, which gives the robot a multi-modal posterior distribution.

Particle filter in action over progressive time steps

The steps followed in a Particle Filter are:

  • Re-sampling: Draw with replacement a random sample from the sample set according to the (discrete) distribution defined through the importance weights. This sample can be seen as an instance of the belief.

  • Sampling: Use the previous belief and the control information to sample from the distribution which describes the dynamics of the system. The current belief now represents the density given by the product of this distribution and an instance of the previous belief. This density is the proposal distribution used in the next step.

  • Importance sampling: Weight the sample by the importance weight, the likelihood of the sample X given the measurement Z.

Each iteration of these three steps generates a sample drawn from the posterior belief. After n iterations, the importance weights of the samples are normalized so that they sum up to 1.
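The three steps above can be sketched for a toy 1-D robot. This is a minimal illustration, not the amcl implementation: the motion and measurement models are assumed Gaussian, and all noise values are made up for the example.

```python
import math
import random

def resample(particles, weights):
    """Re-sampling: draw with replacement according to the importance weights."""
    return random.choices(particles, weights=weights, k=len(particles))

def sample_motion(particles, control, motion_noise=0.1):
    """Sampling: propagate each particle through a (noisy) motion model."""
    return [x + control + random.gauss(0.0, motion_noise) for x in particles]

def importance_weights(particles, measurement, meas_noise=0.2):
    """Importance sampling: weight each particle by p(z | x), then normalize
    so the weights sum to 1."""
    w = [math.exp(-((x - measurement) ** 2) / (2 * meas_noise ** 2))
         for x in particles]
    total = sum(w) or 1.0
    return [wi / total for wi in w]

def particle_filter_step(particles, weights, control, measurement):
    particles = resample(particles, weights)
    particles = sample_motion(particles, control)
    weights = importance_weights(particles, measurement)
    return particles, weights

# Track a robot moving +1 m per step along a corridor.
random.seed(0)
n = 500
particles = [random.uniform(0.0, 10.0) for _ in range(n)]  # spans the state space
weights = [1.0 / n] * n
true_pose = 2.0
for _ in range(10):
    true_pose += 1.0
    particles, weights = particle_filter_step(particles, weights, 1.0, true_pose)

estimate = sum(p * w for p, w in zip(particles, weights))
print(round(estimate, 1))  # should land close to the true pose (12.0)
```

After a few iterations the initially uniform particle cloud collapses around the poses consistent with both the motion commands and the measurements, which is exactly the multi-modal-to-unimodal convergence described above.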

For further details on this topic, [Sebastian Thrun's paper on Particle Filters in Robotics](http://robots.stanford.edu/papers/thrun.pf-in-robotics-uai02.pdf) is a good source for a mathematical understanding of particle filters, their applications and drawbacks.

Make sure the minimal software has already been launched on the robot and you have configured your network correctly.

Clone the GitHub repository and compile the code following these instructions:

mkdir -p ~/erle_ws/src
cd ~/erle_ws/src
git clone https://github.com/erlerobot/gazebo_cpp_examples
cd ..
catkin_make --pkg ros_erle_rover_navigation

The AMCL package maintains a probability distribution over the set of all possible robot poses, and updates this distribution using data from odometry and laser range-finders.

roslaunch ros_erle_rover_navigation localization.launch

The package also requires a predefined map of the environment against which to compare observed sensor values.

rosrun map_server map_server my_map.yaml
[ INFO] [1472193111.778090474]: Loading map from image "./map.pgm"
[ INFO] [1472193111.840535180]: Read a 2048 X 2048 map @ 0.050 m/cell
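map_server loads the map from a YAML descriptor that points at the occupancy image. A minimal descriptor consistent with the log above might look like the following; the origin and thresholds are illustrative values, not taken from this tutorial's actual map:

```yaml
image: ./map.pgm             # occupancy image produced by the mapping tutorial
resolution: 0.050            # meters per cell, as reported in the log
origin: [-51.2, -51.2, 0.0]  # [x, y, yaw] of the lower-left cell (illustrative)
occupied_thresh: 0.65        # cells above this probability are treated as occupied
free_thresh: 0.196           # cells below this probability are treated as free
negate: 0                    # 0 means whiter pixels are free
```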

When a map is requested this message will appear:

[ INFO] [1472193901.189821495]: Sending map

The filter is "adaptive" because it dynamically adjusts the number of particles in the filter: when the robot's pose is highly uncertain, the number of particles is increased; when the robot's pose is well determined, the number of particles is decreased. This enables the robot to make a trade-off between processing speed and localization accuracy.
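The bounds on that particle count are exposed as amcl parameters. A hedged launch-file fragment is shown below; the parameter names are from the standard ROS amcl package and the values are its usual defaults, so adjust them for your platform:

```xml
<!-- Fragment of a launch file: amcl adapts the particle count between these bounds -->
<node pkg="amcl" type="amcl" name="amcl">
  <param name="min_particles" value="100"/>   <!-- used when the pose is well determined -->
  <param name="max_particles" value="5000"/>  <!-- used when the pose is highly uncertain -->
  <param name="kld_err" value="0.01"/>        <!-- max error between true and estimated distribution -->
</node>
```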

The teleoperation can be run simultaneously with the navigation stack. It will override the autonomous behavior if commands are being sent. It is often a good idea to teleoperate the robot after seeding the localization to make sure it converges to a good estimate of the position.

rosrun ros_erle_cpp_teleoperation_erle_rover teleoperation
ErleRoverManager : using linear  vel step [10].
ErleRoverManager : using linear  vel max  [1560, 1440].
ErleRoverManager : using angular vel step [50].
ErleRoverManager : using angular vel max  [1900, 1100].
Reading from keyboard
---------------------------
Forward/back arrows : linear velocity incr/decr.
Right/left arrows : angular velocity incr/decr.
Spacebar : reset linear/angular velocities.
q : quit.

When starting up, the Erle-Rover does not know where it is. To provide it its approximate location on the map:

  • Click the "2D Pose Estimate" button in RViz.
  • Click on the map where the Erle-Rover approximately is and drag in the direction the Erle-Rover is pointing.
  • The green arrow represents odometry.
  • The blue arrow represents the position estimated by amcl.
  • The red arrows represent possible position estimates by amcl.

  • Even though the AMCL package works fine out of the box, there are various parameters that can be tuned based on knowledge of the platform and sensors being used. The best way to tune these parameters is to record a ROS bag file with odometry and laser scan data, and play it back while tuning AMCL and visualizing the result in RViz. This helps in tracking performance, since the changes are evaluated against a fixed data set.

  • Depth cameras can also be used to generate these 2D laser scans by using the package depthimage_to_laserscan, which takes in a depth stream and publishes laser scans on sensor_msgs/LaserScan. More details can be found on the ROS wiki.