Lab 10: Grid Localization using Bayes Filter (Sim)
The purpose of this lab was to implement localization in a grid representation of the arena using a Bayes filter. The lab was done in simulation using a Jupyter notebook, which contained the functions needed to draw the arena and the virtual robot, move the robot around the arena, and collect ToF sensor data. The notebook could also plot both the ground truth of the robot's location in the arena and an estimate of that location from an odometry motion model, which is explained below. My job was to write the code implementing the Bayes filter, run the robot through a pre-planned trajectory, and compute the filter's belief of the robot's location after each movement. The simulation could then plot the robot's actual position alongside the filter's belief to show how accurate the Bayes filter was.
For the simulation, the robot state consisted of (x, y, theta), where theta is the yaw of the robot. The arena spans (-5.5, 6.5) ft in the x direction, (-4.5, 4.5) ft in the y direction, and (-180, 180) degrees in yaw. Since the robot's state space is continuous, it needed to be discretized. Each grid cell has a size of 1 ft in x, 1 ft in y, and 20 degrees in yaw, so the number of cells along each axis is (12, 9, 18), giving a total of 1944 states. The Bayes filter tries to estimate which grid cell the robot is in at each time step.
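The discretization can be sketched as follows. This is a minimal illustration using the arena bounds and cell sizes above; `to_grid` is a hypothetical helper, not the simulator's API.

```python
# Arena bounds and cell sizes from the lab description
MIN_X, MIN_Y, MIN_A = -5.5, -4.5, -180.0    # ft, ft, degrees
RES_X, RES_Y, RES_A = 1.0, 1.0, 20.0        # cell size along each axis

def to_grid(x, y, theta):
    """Map a continuous state (ft, ft, deg) to discrete cell indices."""
    cx = int((x - MIN_X) // RES_X)
    cy = int((y - MIN_Y) // RES_Y)
    ca = int((theta - MIN_A) // RES_A)
    return cx, cy, ca
```

With this mapping the lowest corner of the arena lands in cell (0, 0, 0) and the opposite corner in cell (11, 8, 17), matching the (12, 9, 18) grid.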
Below is a picture of the general Bayes filter algorithm I implemented. The prediction step computes the probability of being in each cell given the prior beliefs from the last time step and the transition probability under the action taken at the current time step. The update step then uses the sensor readings to refine the current belief for each cell.
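The two steps can be written generically over an abstract set of states. In this sketch, `p_trans` and `p_sensor` stand in for the motion and sensor models developed below; the dictionary representation is just for illustration.

```python
def bayes_filter_step(bel, u, z, states, p_trans, p_sensor):
    """One step of a discrete Bayes filter: predict with control u, update with measurement z."""
    # Prediction: fold the control input u into the prior belief
    bel_bar = {x: sum(p_trans(x, xp, u) * bel[xp] for xp in states)
               for x in states}
    # Update: weight each predicted belief by the measurement likelihood
    bel_new = {x: p_sensor(z, x) * bel_bar[x] for x in states}
    # Normalize so the beliefs sum to 1
    eta = sum(bel_new.values())
    return {x: p / eta for x, p in bel_new.items()}
```

A tiny two-state example: with a uniform prior, a motion model that keeps the robot in place, and a sensor that reads the true state correctly 90% of the time, one step concentrates the belief on the observed state.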
Implementation
The first step was to implement the odometry motion model. At each time step I have access to the previous state (x, y, theta) and the current state (x', y', theta'). The motion model is used to extract the control input u that moved the robot from the previous state to the current state. For this setting, the control u is represented as a tuple (rotation1, translation, rotation2). The below image shows how to calculate the rotations and translation from the previous and current state.
The below code snippet is my implementation to calculate the control input u used to get to the current state. The normalize_angle function makes sure that all angles lie between (-180, 180) degrees.
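A sketch of this computation is shown here, assuming poses are (x ft, y ft, theta deg). The helper names mirror the lab's, but the exact bodies are my reconstruction; `normalize_angle` as written wraps angles into [-180, 180).

```python
import math

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    """Extract the control u = (rot1, trans, rot2) that moves prev_pose to cur_pose."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    heading = math.degrees(math.atan2(dy, dx))       # direction of travel
    rot1 = normalize_angle(heading - prev_pose[2])   # turn to face the new position
    trans = math.hypot(dx, dy)                       # drive straight
    rot2 = normalize_angle(cur_pose[2] - heading)    # turn to the final yaw
    return rot1, trans, rot2
```

For example, moving from (0, 0, 0°) to (1, 1, 90°) decomposes into a 45° turn, a √2 ft drive, and another 45° turn.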
Now that I have a means of computing the actual control needed to move between two states, I can calculate the transition probability in the prediction step of the filter. This is the odometry motion model. I use the previous function to calculate the actual control u', and then evaluate how likely it is that control u moved the robot from x to x' when the actual control needed was u'. To do this, I modeled the noise in the control input u as Gaussian, with a separate Gaussian for the first rotation, the translation, and the second rotation. This is demonstrated in the code snippet below, which calculates the transition probability. The standard deviation of each Gaussian is a global parameter, and the mean of each Gaussian is the actual control u'.
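This transition probability can be sketched as below. The standard deviations here are assumed tuning parameters, not the lab's actual values, and the helpers are repeated from the previous snippet to keep this self-contained.

```python
import math

ROT_SIGMA = 15.0    # degrees (assumed tuning parameter)
TRANS_SIGMA = 0.4   # ft (assumed tuning parameter)

def normalize_angle(a):
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    heading = math.degrees(math.atan2(dy, dx))
    return (normalize_angle(heading - prev_pose[2]),
            math.hypot(dx, dy),
            normalize_angle(cur_pose[2] - heading))

def gaussian(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def odom_motion_model(cur_pose, prev_pose, u):
    """P(cur_pose | prev_pose, u): likelihood that control u explains this transition."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)  # actual control u'
    return (gaussian(normalize_angle(u[0] - rot1), 0.0, ROT_SIGMA)
            * gaussian(u[1] - trans, 0.0, TRANS_SIGMA)
            * gaussian(normalize_angle(u[2] - rot2), 0.0, ROT_SIGMA))
```

The probability is highest when the commanded control matches the actual control exactly and falls off as the rotations or translation deviate.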
With the transition probability I could then implement the prediction step of the filter. This step calculates the belief of being in every state x by looping through all states x' and summing the probability of transitioning from x' to x, given the control input u used at the current time step, multiplied by the prior belief of being in x' at the previous time step. Looping through every state requires 3 nested for loops (the state space is 3-dimensional), so a total of 6 nested for loops are needed. This is one of the main bottlenecks for the speed of the algorithm. Below is the code snippet. The function from_map converts the indices of a grid cell to the actual coordinates of the center of the cell in (ft, ft, degrees). For example, grid cell (4,5,8) could represent position (3.5 ft, 2 ft, 100 degrees).
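The six nested loops can be sketched like this. To keep the snippet self-contained, `from_map` and `odom_motion_model` are passed in as parameters; they mirror the helpers described above. Skipping cells with negligible prior belief is an optional speedup, not part of the plain algorithm.

```python
import numpy as np

def prediction_step(bel, u, from_map, odom_motion_model):
    """Predict the new belief grid after applying control u."""
    bel_bar = np.zeros(bel.shape)
    nx, ny, na = bel.shape
    for px in range(nx):                      # loop over all previous states x'
        for py in range(ny):
            for pa in range(na):
                prior = bel[px, py, pa]
                if prior < 1e-4:              # optional: skip negligible priors
                    continue
                prev_pose = from_map(px, py, pa)
                for cx in range(nx):          # loop over all current states x
                    for cy in range(ny):
                        for ca in range(na):
                            cur_pose = from_map(cx, cy, ca)
                            bel_bar[cx, cy, ca] += (
                                odom_motion_model(cur_pose, prev_pose, u) * prior)
    return bel_bar / bel_bar.sum()            # renormalize
```

A quick sanity check on a toy grid: with all belief in one cell and a deterministic "move one cell in x" motion model, the prediction shifts the belief one cell over.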
To implement the update step I first needed a function to calculate the sensor model. The simulation has a command to tell the robot to collect sensor data, which initiates a 360 degree spin as the robot collects 18 sensor readings with equal angular spacing. These readings are noisy, but the simulation also stores an array consisting of the true sensor readings at each grid cell that would be returned if no noise existed. In the code snippet below, loc.obs_range_data accesses the noisy sensor readings collected by the robot in its most recent spin. The function takes in the true readings given a cell and returns an array with the probabilities of getting each actual sensor measurement given what the true reading should be. The noise is modeled as gaussian with the true reading as the mean.
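This sensor model can be sketched as follows, comparing the noisy readings against a cell's precached true readings. `SENSOR_SIGMA` is an assumed noise parameter, and `obs_ranges` stands in for loc.obs_range_data.

```python
import math

SENSOR_SIGMA = 0.1  # ft (assumed noise parameter)

def gaussian(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def sensor_model(true_ranges, obs_ranges):
    """Likelihood of each observed reading, with the true reading as the Gaussian mean."""
    return [gaussian(obs, true, SENSOR_SIGMA)
            for true, obs in zip(true_ranges, obs_ranges)]
```

Each of the 18 readings gets its own likelihood; readings close to the cell's true values score high, and readings far away score low.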
The last step of the filter is the update step. The function uses mapper.get_views to get the true sensor readings at the given cell, passes those into the sensor_model, and then multiplies all of the individual probabilities in the returned array together. This is then multiplied by the prior belief from the prediction step of being in that given state, which results in the final belief of currently being in that state. At the end of the function the beliefs are normalized so all the probabilities add to 1.
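The update step can be sketched like this. `get_views` stands in for mapper.get_views (returning a cell's precached true readings), `obs_ranges` for loc.obs_range_data, and `sensor_model` for the function above; all are passed in as parameters to keep the snippet self-contained.

```python
import numpy as np

def update_step(bel_bar, obs_ranges, get_views, sensor_model):
    """Weight the predicted belief by the likelihood of the observed scan."""
    bel = np.zeros(bel_bar.shape)
    nx, ny, na = bel.shape
    for cx in range(nx):
        for cy in range(ny):
            for ca in range(na):
                # Joint likelihood: product of the individual reading probabilities
                p = np.prod(sensor_model(get_views(cx, cy, ca), obs_ranges))
                bel[cx, cy, ca] = p * bel_bar[cx, cy, ca]
    return bel / bel.sum()  # normalize so all probabilities sum to 1
```

With a uniform predicted belief, the update concentrates probability on the cells whose true readings best match the observed scan.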
To run the Bayes filter, the belief over all states is first initialized with a uniform distribution, and then the update step is run once after collecting one round of sensor data so that the filter selects some state as having the highest likelihood to start in. The code snippet below then loops through a pre-planned trajectory of motion, running the Bayes filter at each step.
Here is a video of the Bayes filter running. On the left is the arena with the robot moving around it. You can see the robot's ToF sensor collect readings in a straight line: the robot moves, then spins and collects data. The right image is the plotter. At each time step the plotter shows the ground truth of where the robot is (green), the odometry estimate of the robot's location (red), and the Bayes filter's estimate (blue). The filter's estimate is the cell with the highest probability. As you can see, the odometry estimate of location is very poor. The filter's estimate, however, does a great job, staying close to the ground truth throughout the trajectory. This shows why it would be beneficial to implement the filter on the actual robot.