Lab 12: Path Planning and Execution
The goal of the final lab was to have my robot navigate around the arena and hit a list of waypoints as quickly and accurately as possible. This lab was very open-ended, so there were essentially no restrictions on what to implement to get the robot to navigate the trajectory. Below is an image of the waypoints and goal trajectory.
My initial thoughts were to calculate the translations and rotations needed to move between each waypoint by hand, and then hardcode a sequence of movements implementing them as a command that tells the robot to execute the trajectory. Since the arena was not very large and the waypoints were generally close together, I thought global path planning was not necessary. My implementation was aimed more toward local planning, using the turn-go-turn procedure. I planned to use position PID control for driving forward and orientation PID control for rotating through different angles. I considered implementing localization with a Bayes filter, because I knew that hardcoding the path would allow small motion errors to build up and potentially make movement harder. However, I figured that localization would not be helpful enough to justify the time it would take to implement. The observation step requires rotating 360 degrees at each waypoint, and since my 360 degree turns are not consistent, each spin would offset the robot further from the waypoint. Furthermore, localization is not completely accurate, and acting on a bad estimate could move the robot further from the desired location.
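Since the trajectory is fixed, each turn-go-turn step can be precomputed from a pair of waypoints. The sketch below is not my actual lab code; the helper name, units, and sign convention (counterclockwise-positive headings in degrees) are assumptions for illustration.

```cpp
#include <cassert>
#include <cmath>

// One turn-go-turn step: rotate to face the next waypoint, then drive
// straight to it. Headings are in degrees, counterclockwise-positive.
struct Step {
    float turn_deg;   // how far to rotate before driving
    float drive_ft;   // how far to drive after rotating
};

Step planStep(float heading_deg, float x0, float y0, float x1, float y1) {
    const float kRadToDeg = 180.0f / 3.14159265f;
    float dx = x1 - x0, dy = y1 - y0;
    float target = atan2f(dy, dx) * kRadToDeg;  // heading toward next waypoint
    float turn = target - heading_deg;
    while (turn > 180.0f)  turn -= 360.0f;      // wrap into [-180, 180]
    while (turn < -180.0f) turn += 360.0f;
    return { turn, sqrtf(dx * dx + dy * dy) };
}
```

Chaining these between consecutive waypoints, feeding each step's resulting heading back in as the new heading, gives a hardcoded turn/drive sequence like the one described above.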
Below is the basic starter implementation I went with. I used the same orientationPID function as in previous labs to turn. I created a function named positionPID that takes a distance to move forward as input. In the function I first average 5 ToF readings to get the starting point, then run my position PID code from lab 5 with a setpoint of the starting point minus how far I want to travel. I wrapped this in a command I could call over Bluetooth to execute the trajectory.
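A minimal, hardware-free sketch of that setpoint setup follows. The sensor read is abstracted behind a function pointer; `fakeToF` and the helper names are mine rather than the lab code (a real run would call the ToF driver), and the lab 5 PID loop itself is omitted.

```cpp
#include <cassert>

// Average n ToF readings to get a stable starting distance (mm).
float averageToF(float (*readToF)(), int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) sum += readToF();
    return sum / n;
}

// Setpoint: stop once the ToF reading has dropped by the travel distance.
float positionSetpoint(float startMm, float travelMm) {
    return startMm - travelMm;
}

// Hypothetical stand-in sensor for desk testing; on the robot this
// would be a read from the actual ToF driver.
float fakeToF() { return 1000.0f; }
```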
I found that to get from the first waypoint to the second, my robot first turned -45 degrees and then had to do position control to move forward. The problem was that when angled -45 degrees at the first waypoint, the robot was looking diagonally across the entire arena, which was outside the ToF sensor's range. This prevented the position PID control from running and caused erratic behavior. For example, sometimes ToF readings would be captured but would be so noisy that consecutive readings changed by 0.5 m with no robot movement. I thought about implementing a Kalman filter, but decided instead to write a function that drives forward ~2.5 ft open-loop using timing. Since the first two forward movements, from waypoints 1 and 2, both had this sensor range issue, I used timing for both. The next few movements after those waypoints had less open space, so I figured the PID control would work fine there with the closer walls.
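The timed drive amounts to converting a desired distance into a run time using a calibrated speed. A sketch, assuming a constant-speed model where the speed value comes from measuring the robot at a fixed PWM:

```cpp
#include <cassert>

// Open-loop timing: at a calibrated speed (ft/s) for a fixed PWM,
// drive for distance/speed seconds. This is fragile, since the actual
// speed drifts with battery charge and motor condition.
unsigned long driveTimeMs(float distanceFt, float speedFtPerS) {
    return (unsigned long)(distanceFt / speedFtPerS * 1000.0f);
}
```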
Below is a video of a good test run where the robot made it all the way to the upper right corner waypoint and then crashed. By then, small motion errors had built up, and the robot rolled forward during a turn toward the next waypoint. It eventually drifted far enough off course that it couldn't correct, although it did make it pretty far.
I noticed that many of the turns carried momentum over from the forward movements, which caused slight drift while turning. This threw the robot further off from its target waypoint. I also wanted to traverse the arena faster. So, I drastically reduced the time I ran orientationPID control, down to around 0.4 s, to get more accurate turns and eliminate drift. Between each motion (forward or turn) I added a 0.5 s delay to let the robot settle and stop before moving again, further preventing momentum from carrying over between movements. One other problem I noticed was that in the bottom right corner of the arena my robot often overshot the waypoint, so I reduced the distance I told the robot to travel forward there to get better results. The video below shows the faster motion in action. Still, after one turn the car rolls too far forward and hits the wall trying to reach the next checkpoint. It recovers to hit the checkpoint after that, but then loses ToF readings and crashes into the top wall.
There was high variability between runs, which was expected using the turn-go-turn method. I found that the turns mattered most, so I tried to find PID parameters that made each turn work best individually. Then, in my code, I changed the PID parameters before each turn to suit that angle for more repeatability. Some of the turns started looking much more accurate, but a single off turn still ended the whole trajectory, as seen in the video below.
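Swapping gains per turn can be as simple as a lookup keyed on the commanded angle. The sketch below illustrates the idea; the gain values are placeholders, not my tuned numbers.

```cpp
#include <cassert>
#include <cmath>

struct Gains { float kp, ki, kd; };

// Placeholder per-turn gains; in practice each entry would be tuned by
// testing that specific turn in the arena.
Gains gainsForTurn(float angleDeg) {
    if (fabsf(angleDeg) <= 50.0f) return { 2.0f, 0.10f, 0.40f };  // small turns
    return { 1.5f, 0.05f, 0.60f };                                // large turns
}
```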
After navigating the bottom right corner of the maze, my robot consistently struggled to reach the top right waypoint. The long straight drive up the right side of the arena was problematic: my robot would build up too much speed going forward there, which introduced more room for error. To be as accurate as possible, I considered splitting this forward motion into multiple smaller drive increments using PID control.
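Splitting the long drive into fixed-size PID increments would look something like the helper below, which is just the piece that decides how many moves to issue; a real version would call positionPID once per increment. The function name and units are my own.

```cpp
#include <cassert>

// Number of PID moves needed to cover totalMm in steps of at most stepMm.
int numIncrements(float totalMm, float stepMm) {
    int n = 0;
    for (float left = totalMm; left > 0.0f; left -= stepMm) ++n;
    return n;
}
```

Each shorter move caps the speed the robot can reach, trading total time for accuracy on that stretch.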
With trial and error I was able to get my robot to successfully make it to the 7th waypoint, the top right corner, without crashing. I struggled in the bottom right corner because my robot would overturn and point at the obstacle rather than down the right side of the arena. Since I noticed the right motor was weaker, I scaled the power output to the stronger motor by a factor of 0.8. This helped my robot drive straighter down this long stretch. Below is a video.
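The motor trim is a single scale factor applied to the stronger side. A minimal sketch, with the helper names assumed for illustration:

```cpp
#include <cassert>

const float kTrim = 0.8f;  // calibration factor for the stronger motor

// Scale the stronger motor's PWM so both sides produce similar thrust.
int strongMotorPwm(int basePwm) { return (int)(basePwm * kTrim); }
int weakMotorPwm(int basePwm)   { return basePwm; }
```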
The turns were unreliable: I had tuned the PID parameters beforehand, but in the arena, while moving, the 90 degree turns would consistently overshoot by around 40 degrees, so I had to lower the angles sent to the PID controller. At this point I saw a large degradation in my robot's hardware. The first two forward drives are done with timing, and they used to drive the correct distance reliably on a fully charged battery. At some point, the motors started behaving very unreliably. On one test the timed drive would push the robot only a couple of inches; on the next it would drive 3 feet. I spent a lot of time testing over and over, but the motors were becoming too unreliable for timing methods. Below is a video of some of the erratic behavior I started seeing.
To solve this, I couldn't fall back on PID because the ToF sensor is out of range at the first waypoint. If I couldn't use timing either, I needed to try implementing a Kalman filter. After the first two waypoints I still used position PID control for driving forward, so I only needed the filter for the start, where I had previously used timing.
Since the ToF sensors didn't function at this range, I had to implement the Kalman filter by running only the prediction step. At first I tried to run PID control on the robot's position using predictions from the Kalman filter, but in the limited time I had I wasn't able to get reliable results. Instead, I decided to output a constant motor PWM value while using that input to run Kalman filter predictions. Once the predicted ToF reading passed a setpoint I chose, the robot would stop. Below is a code snippet of my filter setup, using the parameters from lab 7. In lab 7, I discretized the A and B matrices using the 50 ms ToF sample rate. Here, the prediction step runs every loop iteration, not just when sensor data is available, so I had to re-discretize using a 10 ms sample rate.
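A desk-testable sketch of a prediction-only filter like this is shown below. The drag and mass values are placeholders standing in for my lab 7 system ID results, and I use plain 2x2 arrays here instead of BasicLinearAlgebra so it compiles off the Arduino; the structure (Ad = I + dt·A, Bd = dt·B, then the standard predict update) is the point.

```cpp
#include <cassert>
#include <cmath>

// State is [position (mm); velocity (mm/s)]. Placeholder drag d and
// mass m stand in for the lab 7 system ID values.
const float dt = 0.010f;    // 10 ms control loop, NOT the 50 ms ToF rate
const float d  = 0.0005f;   // drag (placeholder)
const float m  = 0.0002f;   // mass (placeholder)

// Discretized dynamics: Ad = I + dt*A, Bd = dt*B, with
// A = [[0, 1], [0, -d/m]] and B = [0; 1/m].
float Ad[2][2] = { { 1.0f, dt }, { 0.0f, 1.0f - dt * d / m } };
float Bd[2]    = { 0.0f, dt / m };

float mu[2]         = { 0.0f, 0.0f };                         // state estimate
float Sigma[2][2]   = { { 25.0f, 0.0f }, { 0.0f, 25.0f } };   // state uncertainty
float Sigma_u[2][2] = { { 100.0f, 0.0f }, { 0.0f, 100.0f } }; // process noise

// One prediction step with (normalized) motor input u. With no update
// step, Sigma only ever grows -- the cost of running open-loop.
void kfPredict(float u) {
    // mu <- Ad*mu + Bd*u
    float m0 = Ad[0][0] * mu[0] + Ad[0][1] * mu[1] + Bd[0] * u;
    float m1 = Ad[1][0] * mu[0] + Ad[1][1] * mu[1] + Bd[1] * u;
    mu[0] = m0;
    mu[1] = m1;
    // Sigma <- Ad*Sigma*Ad^T + Sigma_u
    float T[2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            T[i][j] = Ad[i][0] * Sigma[0][j] + Ad[i][1] * Sigma[1][j];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            Sigma[i][j] = T[i][0] * Ad[j][0] + T[i][1] * Ad[j][1] + Sigma_u[i][j];
}
```

On the robot, the predicted position mu[0] takes the place of the ToF reading in the stop check.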
I then altered my positionPID function to accept an input designating whether to run only the Kalman filter or regular PID. I also added a parameter for the robot's starting point when using the filter; this is used to create the initial state, which begins with a ToF reading of the initial position. For the first two waypoints I used the filter, tuning the setpoints I passed in to get the robot to move the correct distance. I had to tune the setpoints because the filter does not capture my robot's dynamics exactly. Below is a snippet of the filter in the PID function. When the error drops below a threshold, the PID function breaks and returns. Matrix operations were done using the BasicLinearAlgebra Arduino library.
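The stop check inside the loop reduces to comparing whichever reading is active against the setpoint. A sketch of the filter path described above (the helper name and signature are mine; the regular PID path additionally breaks on a small error rather than a simple crossing):

```cpp
#include <cassert>

// Driving forward shrinks the distance reading, so stop once the active
// reading (predicted when using the filter, measured otherwise) drops
// to the setpoint.
bool shouldStop(bool useKalman, float predictedMm, float measuredMm,
                float setpointMm) {
    float reading = useKalman ? predictedMm : measuredMm;
    return reading <= setpointMm;
}
```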
The results were not great. Driving from the first two waypoints was still pretty unpredictable, although a little more reliable than before. Running the exact same filter code with the same setpoints multiple times led to the car driving distances that varied by around 1 ft. Overall the filter helped but wasn't able to save the system. More parts of my system seemed to degrade, and I believe it was due to either the motor drivers or the motors themselves, but either way there wasn't enough time to solder on new drivers. The turns were also unreliable with the same PID parameters, which made testing very hard because one set of PID parameters could make the car turn 45 degrees one time and 70 the next.
Because of the variability, I re-tested over and over to get a sense of the average behavior and make adjustments. Below are a couple of videos of one of the more accurate runs I had using the Kalman filter. At the start the turns and motions are almost exact. However, for each run like these I saw around 15 inconsistent ones.
One other issue was that the ToF sensors would stop working mid-run. The first use of PID control works fine, moving to the waypoint right next to the bottom obstacle, but the next PID-controlled movement just has the car drive straight into a wall because no ToF readings were returned.
Overall, my best run was without the Kalman filter, when I hit 7 of the 9 waypoints. I am proud of this result, as it took countless hours of debugging to get there. This class was one of the most interesting I have taken at Cornell. In class you usually learn the theory behind systems, but here I had the chance to build something real myself. I would like to thank Professor Jonathan Jaramillo for making this class as fun and engaging as it was. I would also like to thank the course staff for keeping the lab open long hours every day so I could work. With more time, I would have tried to implement localization and the Bayes filter. Below is the video of my most successful run again.