Lab 11: Localization on the Real Robot
The purpose of this lab was to implement localization on the actual robot using the Bayes Filter from the previous lab. However, in this lab we only used the update step of the Bayes Filter: the robot performs a 360-degree scan with its ToF sensor, capturing 18 readings spaced 20 degrees apart, and this scan alone is used to localize the robot in the arena. We skip the prediction step because the motion of the actual robot is incredibly noisy, so a prediction based on that motion is unreliable.
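Concretely, with the prediction step dropped, a single update step just re-weights the prior belief by the likelihood of the scan:

$$\operatorname{bel}(x) = \eta \, p(z_{1:18} \mid x) \, \overline{\operatorname{bel}}(x)$$

where $\overline{\operatorname{bel}}(x)$ is the initial (uniform) belief rather than a motion-model prediction, $z_{1:18}$ are the 18 ToF readings from one rotation, and $\eta$ normalizes the belief over all grid cells.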
The goal was to place the robot at four marked positions in the arena and localize from each. Since there is no way to naturally obtain the ground truth of where our robot is, we had to place it at known positions each time so we could hard-code the ground truth.
First, we were given a fully optimized copy of the Bayes Filter code from the previous lab. The first step was to verify that this code worked, so I re-ran the Lab 10 simulation with it. Below is a screenshot of the final plot of the robot's ground truth, odometry, and Bayes Filter belief while executing the same pre-planned trajectory as in Lab 10. As seen, the filter's belief is very accurate. The arena was divided into the same grid cells and state space as in Lab 10 (x, y, theta), for 1944 cells total.
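For reference, here is a minimal sketch of that discretization, assuming the same 12 x 9 x 18 grid as Lab 10 (12 × 9 × 18 = 1944 cells, with 18 heading bins of 20 degrees each); the exact cell extents come from the Lab 10 mapper:

```python
import numpy as np

# Assumed discretization: 12 x-cells, 9 y-cells, 18 heading-cells of 20 degrees
CELLS_X, CELLS_Y, CELLS_A = 12, 9, 18   # 12 * 9 * 18 = 1944 states

# Uniform initial belief over the whole grid, since the start pose is unknown
bel = np.ones((CELLS_X, CELLS_Y, CELLS_A))
bel /= bel.sum()
```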
In Lab 10, the update step began by calling a function that rotated the robot 360 degrees and collected 18 ToF sensor readings; this was all virtual. To implement the same behavior in reality, we were given a RealRobot class with a member function that returns the robot's ground truth pose and another that rotates the robot and collects ToF readings. To implement the latter, I reused the MAP command I wrote for Lab 9. In Lab 9, this command rotated the robot in 25-degree increments while collecting readings, so I only had to make a small change to tell the PID controller to rotate in 20-degree increments instead. This gives a total of 18 ToF readings per rotation, which are sent back to my computer over Bluetooth. So, I call the MAP command and set up a notification handler to capture the ToF readings as they come in; while MAP is running, my Python Bayes Filter code sleeps using asyncio until all 18 readings have been returned. The other function of the RealRobot class returns the ground truth, which I hardcoded to whatever known position I had currently placed the robot at. Below is a screenshot of both functions.
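The screenshot is not reproduced here, but the two functions looked roughly like the sketch below. The names ble, CMD.MAP, RX_UUID, and parse_tof_mm stand in for my BLE wrapper, command enum, characteristic UUID, and byte-parsing helper, so treat them as placeholders rather than the exact code:

```python
import asyncio
import numpy as np

class RealRobot:
    """Interface between the Bayes Filter and the physical robot."""

    def __init__(self, ble):
        self.ble = ble        # BLE connection to the robot (placeholder wrapper)
        self.readings = []    # ToF ranges filled in by the notification handler

    def get_pose(self):
        # No external tracking exists, so the ground truth is hardcoded to the
        # marked pose the robot was placed at: (x [m], y [m], theta [deg]).
        return (0.0, 0.9, 0.0)   # placeholder values for one marked pose

    def tof_handler(self, uuid, byte_array):
        # Runs once per BLE notification while the MAP command executes
        self.readings.append(parse_tof_mm(byte_array) / 1000.0)  # mm -> m

    async def perform_observation_loop(self):
        self.readings = []
        self.ble.start_notify(RX_UUID, self.tof_handler)
        self.ble.send_command(CMD.MAP, "")   # rotate in 20-degree PID steps
        while len(self.readings) < 18:       # sleep until all 18 ranges arrive
            await asyncio.sleep(1)
        self.ble.stop_notify(RX_UUID)
        # Column vectors, as the localization module expects
        sensor_ranges = np.array(self.readings)[np.newaxis].T
        sensor_bearings = np.array([])[np.newaxis].T
        return sensor_ranges, sensor_bearings
```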
Then I could place the robot in the arena at the same known position I hardcoded into the get_pose function, initialize the Bayes Filter, run a single update step, and plot the filter's belief after that one step against the actual location. Below is a code snippet showing how this is done.
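The snippet looked something like the sketch below; loc is the course-provided Localization object built on the mapper from the simulator setup, and the method names (init_grid_beliefs, get_observation_data, update_step, plot_update_step_data) are assumed from the lab codebase, so treat this as an approximation:

```python
# Place the robot at the marked pose hardcoded in get_pose() before running.
robot = RealRobot(ble)
loc = Localization(robot, mapper)

loc.init_grid_beliefs()            # reset to a uniform prior over 1944 cells

await loc.get_observation_data()   # rotation scan: 18 ToF readings via MAP
loc.update_step()                  # weight every cell by p(z | x), normalize

loc.plot_update_step_data(plot_data=True)   # belief vs. hardcoded ground truth
```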
Here are the four marked poses in the arena I tested at.
Starting from the first known position, here is an example of the ToF readings printed in the observation_loop function right after running the above code to localize.
Here is the resulting plot, with green showing the ground truth and blue showing the filter's belief given the sensor readings taken. The rest of the update step is the same as in Lab 10, using the sensor model and prior belief to estimate the new belief. As seen, the filter did a fairly accurate job of localizing, given how noisy the sensors are and the fact that the 20-degree rotations were not exact.
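For completeness, here is a rough sketch of what that update step computes, under the Lab 10 assumption of independent Gaussian sensor noise on each of the 18 readings. The pre-cached expected views per cell come from the Lab 10 mapper, and the sigma value is illustrative:

```python
import numpy as np

def gaussian(x, mu, sigma):
    # Likelihood of measured range x given expected range mu
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def update_step(bel_bar, obs, cached_views, sigma=0.11):
    """One Bayes Filter update over the grid.

    bel_bar:      prior belief, shape (12, 9, 18)
    obs:          the 18 measured ranges from one rotation scan, shape (18,)
    cached_views: expected ranges at every cell, shape (12, 9, 18, 18)
    """
    # p(z | x): product of 18 independent per-reading likelihoods at each cell
    likelihood = np.prod(gaussian(obs, cached_views, sigma), axis=3)
    bel = likelihood * bel_bar
    return bel / np.sum(bel)   # normalize so the belief sums to 1
```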
Here is an example of my robot performing the rotation at position 2.
For position 2, the Bayes Filter's belief was exactly the same as the robot's ground truth; the localization was perfect. Below is a video of the generated plot showing that the ground truth and belief are the same point.
After localizing at position 3, there was about the same amount of error as at position 1. Below is the generated plot. In both positions the filter estimated the robot to be closer to a wall than it actually was, which could be due to several factors, including noisy ToF readings.
For the final position, the localization performed a little better: the error was smaller than at positions 1 and 3. As with positions 1 and 3, the robot was estimated to be closer to a wall than it actually was. Below is the generated plot.
Overall, the robot localized best at position 2, where the Bayes Filter was exactly correct. This could be because position 2 is relatively unique compared to the others, so fewer sets of sensor readings could plausibly correspond to it. Position 2 is the only one with no close wall or obstacle behind the robot: positions 1 and 3 each have a wall behind them, and position 4 has an obstacle close behind. This would make position 2 more recognizable and easier to localize in.