ECE 4960: Fast Robots is a class focusing on implementing a dynamic autonomous robot.
Students in ECE 4960 design a fast autonomous car and explore its behavior in the real world. This website documents the work I completed for the class during the Fall 2021 semester.
Lab 1: Artemis
Introduction
ECE 4960 revolves around the SparkFun RedBoard Artemis Nano. Throughout this class we will be replacing the internals of a toy car with an Artemis and sensors, allowing the car to behave autonomously. The objective of Lab 1 was to configure the Arduino IDE and upload a few basic programs to the Artemis. In this lab we ran four basic programs on the Artemis, all of which can be found in the examples tab in the Arduino IDE.
Setting up the Artemis
Setting up the Artemis was a straightforward process. Unlike some other Arduino-compatible boards, the Artemis required us to add an Additional Board Manager URL, which can be found in SparkFun's tutorial for using Arduino with the Artemis: SparkFun Guide.
Example 1: Blink
The first example we ran was the Blink example found in the Arduino examples. Blink is a simple program that blinks the LED of the Artemis on and off in 1 second intervals (I changed the interval to 500ms in my demonstration). Here is a video of the program running:
Example 2: Serial
The second example we ran was the Serial example in the Artemis tab. Running this example verified that we could send input to and receive output from the serial monitor in the Arduino IDE. Here is a video of the program running:
Notice how when "Hello World!" is input to the serial monitor, the serial monitor echoes the string back to me.
Example 3: AnalogRead
The third example we ran was the AnalogRead example, also in the Artemis tab. This example is slightly more complicated than the other examples, and is the first example we ran which would take advantage of other devices on the board (other than the LED). The AnalogRead example reads the temperature of the chip, and outputs the raw data to the serial monitor. Here is a video of the program running:
When I put my fingers on the board (not shown), the temperature data increases from around 3000, up to about 16000. You can see on the left of the serial monitor that the number changes from a 4-digit number to a 5-digit number.
Example 4: MicrophoneOutput
The fourth example we ran was the MicrophoneOutput example, which can be found in the PDM tab of the examples. This example uses the on-board microphone to find the loudest frequency near the Artemis and outputs that frequency to the serial monitor. The output can be triggered by a number of things, like whistling, speaking, or tapping on the table. Here is a video of the program running:
When I whistle at the board, the frequency displayed on the serial monitor jumps to about 2000 Hz.
Lab 2
Introduction
The objective of lab 2 was to begin using bluetooth to communicate with the Artemis. This would serve as the base for a much more comprehensive control protocol in future labs.
Bluetooth
First, we had to run a basic Arduino program provided by the course staff. This program simply has the Artemis print its MAC address to the serial monitor, which we then used to replace the placeholder value in the provided connection.yaml file. The output is pictured below:
Using this address in the connection.yaml file allowed me to connect to the Artemis over bluetooth. Here is an image of the connection confirmation in JupyterLab:
Following this, the rest of the provided demo file ran a few of the other basic commands. Here is the output of the serial monitor after all of the instructions in the Jupyter notebook were executed:
Task 1: Echo Command, Task 2: Send Three Floats
The first lab task was to send an ECHO command to the Artemis board using bluetooth. To check whether or not the board received the message, I had to write code that would print out the message to the serial monitor once received.
The second lab task was to send a SEND_THREE_FLOATS command to the Artemis board. I wrote another block of code that would print the values received by the Artemis to the serial monitor. Here is the output of the serial monitor after both tasks, and a screenshot of the code I wrote to print the three float values:
Task 3: Notification Handler
The third lab task was to set up a notification handler that receives a float value and stores it in a global variable every time the characteristic value changes. Here are images of the callback function I wrote to store the value, as well as a video of the periodic output of the function:
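For reference, a minimal sketch of what such a handler can look like, assuming the characteristic carries a little-endian 4-byte float and a bleak-style callback signature (names here are illustrative):

```python
import struct

latest_value = 0.0  # global updated every time the characteristic changes

def float_notification_handler(sender, data: bytearray):
    """Store the received float in a global variable whenever the BLE
    characteristic value changes (assumes a little-endian 4-byte float payload)."""
    global latest_value
    latest_value = struct.unpack("<f", data[:4])[0]
    print(f"New characteristic value: {latest_value:.3f}")

# Registration depends on the BLE wrapper in use; with bleak it would be:
# await client.start_notify(FLOAT_CHAR_UUID, float_notification_handler)
```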
Using receive_float() vs receive_string()
Using receive_float() is the more efficient way to communicate a float value between the computer and the Artemis. A float characteristic always carries a fixed 4-byte binary value, whereas using receive_string() requires the Artemis to format the number as text, transmit a longer character payload, and then have the Python side parse the string back into a float. Using receive_float() therefore keeps the transfers small and fast regardless of how large the float value is.
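To illustrate the difference, here is a small comparison of the two payloads (an illustrative example, not the course's BLE code):

```python
import struct

value = 1234.567

float_payload  = struct.pack("<f", value)    # always 4 bytes on the wire
string_payload = str(value).encode()         # 8+ bytes, grows with the number of digits

decoded_float  = struct.unpack("<f", float_payload)[0]   # fixed-size decode
decoded_string = float(string_payload.decode())          # extra format/parse round trip
```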
Lab 3
Introduction
In lab 3, we began working with the sensors that our robot will run on. The first sensors that we worked with were two time of flight (ToF) sensors and an inertial measurement unit (IMU). These are two of the sensor types that our final robot design will rely on later.
Time of Flight Sensors
Time of flight (ToF) sensors are light-emitting sensors that measure the time it takes for emitted light to bounce back, enabling distance measurement. The time of flight sensors are the most critical sensors on our robot: we will be using two of them to allow the robot to sense and avoid nearby obstacles. This lab also introduced us to the I2C communication protocol, since the Artemis communicates with the ToF sensors over I2C.
One difficulty of using two identical ToF sensors is that they ship with the same default I2C address. This means that with both sensors connected to the Artemis, we wouldn't get meaningful information from either unless the address of one was changed. To do this, we connected the XSHUT pin on one of the sensors to one of the digital pins on the Artemis. This allowed us to shut down one sensor and change the address of the other using the built-in I2C functions. Once reassigned, the sensor addressing and I2C communication behave as expected.
Testing the ToF sensors yielded promising results. Using the built-in ReadDistance example in the Arduino libraries, I found that the ToF sensor was capable of producing accurate readings at both short and long ranges, and it never seemed to produce outlying values. Although I did not have the equipment (a tape measure) needed to verify the exact precision of the time of flight sensor, it produced accurate readings even at distances up to 3 meters. Although there are distance modes that provide a longer range, the shortest distance mode (optimized for up to 1.3m) will likely be the best option for our robot, as most of the critical data will be within 1 meter of the robot. Even when changing variables such as the angle of the sensor with respect to the surface, the color of the surface, and the texture of the surface, the time of flight sensors stayed impressively accurate. These results are good news for later labs, as they mean the time of flight sensors will be robust to varying environmental conditions. Here is a video of the sensors functioning in a short distance scenario (15 to 30cm):
Inertial Measurement Unit
The inertial measurement unit that we are using has three major components: an accelerometer, a gyroscope, and a magnetometer. In this lab, we enabled data collection from all three, but focused on the data from the accelerometer and the gyroscope. The following sections detail the exact tests run with each sensor.
Accelerometer
The accelerometer will be one of the major components that we use to track the absolute location of the robot throughout its movement. The accelerometer returns the acceleration in three orthogonal directions. In this lab, we compiled data from the accelerometer in a static scenario, using the force of gravity to determine the roll and pitch of the sensor. First I verified that the IMU data was correct when the IMU was flat on a table. You can see that when laid flat, the IMU shows a value of 1000mg in the Z direction, which is caused by the 1g force of gravity.
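For reference, the standard relations for recovering the two tilt angles from a static accelerometer reading (the textbook formulas, with atan2 used in practice to handle quadrants) are:

$$\theta_{pitch} = \tan^{-1}\!\left(\frac{a_x}{a_z}\right), \qquad \phi_{roll} = \tan^{-1}\!\left(\frac{a_y}{a_z}\right)$$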
And here is a video of me testing the roll and pitch measuring capabilities of the accelerometer across a 180 degree range:
Across my testing, I found that the accelerometer data, even without the low pass filter (implemented below), closely matched the angle that I was holding it at. This is great for the robustness of our robot design, as it means we'll be able to spin and flip the robot during stunts without losing track of its orientation.
We implemented a low pass filter, using data taken from the IMU to generate a frequency spectrum and choose a cutoff frequency. Since the robot will be moving over rough ground, flipping and banging into things, we need to make sure that the sensors are able to ignore noisy data generated by the robot's erratic movements. Here are images of the plots I generated:
In the tests I conducted, there was relatively little difference between the unadjusted data collected by the accelerometer and the data output from the low pass filter. Despite the similarity here, the ability to implement a filter will become more useful once the sensors are mounted on the robot and are subject to much rougher conditions than I was able to test in this lab. Here is a video comparing the roll and pitch outputs from each method (default vs low pass filter):
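As a rough sketch of the kind of first-order low pass filter used here (written in Python for readability; the sample period and cutoff below are placeholders, not the values tuned for my IMU):

```python
import math

def low_pass(raw, dt, f_cutoff):
    """First-order low pass filter for a stream of noisy accelerometer angles.
    alpha is derived from the sample period dt and the chosen cutoff frequency."""
    rc = 1.0 / (2.0 * math.pi * f_cutoff)
    alpha = dt / (dt + rc)
    filtered = [raw[0]]
    for x in raw[1:]:
        filtered.append(alpha * x + (1.0 - alpha) * filtered[-1])
    return filtered

# Example: smooth a pitch trace sampled every 10ms with a 5Hz cutoff
# smoothed = low_pass(pitch_samples, dt=0.01, f_cutoff=5.0)
```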
Gyroscope
The IMU's gyroscope provides an alternative way to compute the roll, pitch, and yaw angles of the robot. In the tests that I ran, the gyroscope seemed to be a less favorable alternative to the accelerometer. In static scenarios, the gyroscope reported less accurate measurements than the accelerometer, and under similar levels of movement it had a tendency to accumulate error, especially during sharp movements (which would be very bad for a fast moving robot). After applying a complementary filter, the gyroscope was much less susceptible to errors caused by sharp movements, but it still suffered from error accumulation. It is possible that in later applications the data from the accelerometer could be used to correct the error from the gyroscope, which would reveal some other strengths of the gyroscope, but for now my testing shows that the accelerometer is far more reliable. Here is a video of the gyroscope functioning without a complementary filter:
Next are the gyroscope measurements with a complementary filter applied. The complementary filter shows a noticeably improved response to the sharp movement. The gyroscope data is less thrown off, but still accumulates error.
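For illustration, a complementary filter of this kind can be sketched as follows (in Python for readability; the robot runs the equivalent Arduino code, and alpha here is an illustrative weighting):

```python
def complementary_angle(prev_angle, gyro_rate, accel_angle, dt, alpha=0.1):
    """Fuse the integrated gyroscope rate with the accelerometer angle.
    The gyroscope term tracks fast motion, while a small weight on the
    drift-free accelerometer angle keeps the estimate from wandering."""
    gyro_angle = prev_angle + gyro_rate * dt        # dead-reckoned angle from the gyro
    return (1.0 - alpha) * gyro_angle + alpha * accel_angle
```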
Images
Here are some images from throughout this lab:
Lab 4
Introduction
The objective of lab 4 was to collect diagnostic information about the car and its capabilities, so that we can mimic those capabilities once we replace the internals. Much of this lab involved taking measurements on the robot, and driving it using the included remote control. I worked with another student, Ronin Sharma, to test the robot's capabilities, and had his help in recording the stunts I performed.
The Car
First I collected some basic diagnostic information about the car. The car measures 18cm in length and 14.5cm across. The wheels of the car extend above the body, so the effective height of the car is 8cm (the diameter of the wheels). The car seems to function equally well whether it is right-side up or upside-down. It weighs slightly over 500 grams. I did not manage to measure the exact time it takes to charge the battery, but the battery was consistently at nearly full capacity after 15-20 minutes of charging (although the exact time might be shorter). A fully charged battery would typically last around 10-15 minutes of mixed usage (flips, spins, accelerating and decelerating), depending on how much movement the robot was doing.
Movement and Durability
One of my immediate discoveries was that despite the car's high speed, it could come to a near-instant stop by coming off of the accelerator and spinning in place (like a hockey stop). Without this technique the car would roll for a long time after coming off the accelerator. Another thing I discovered about the car is its incredible durability. The car seemed to take whatever I could throw at it, including crashing into walls at full speed and driving off a table.
In addition to its durability, the car could handle pretty much any surface, although it would drive better on flatter hard surfaces, like a hard floor or a tabletop. On a carpet it felt like the car was subject to drifting, and would handle less accurately and less responsively. This was most notable when spinning the car around its own axis. When spinning on a carpeted floor, the robot would bounce around a little, and would skid slightly during the stunt. On a hard tabletop, the robot stayed anchored in place, only shifting slightly because the table was not level.
Tricks
Getting the car to flip was incredibly easy. The key to flipping was to send the car forward at a high speed, followed by a quick burst of backwards motion. This would result in the car doing a frontflip. Getting the robot to jump around in place was also relatively easy, achieved by just spamming the buttons in rapid succession.
Here are some short clips of me attempting some flips and spins with the car:
Lab 5
Introduction
The objective of lab 5 was to take apart the car and begin replacing its internals with the circuits we were building. In this lab we integrated the two motor drivers into the circuit, allowing us to control the motors using the Artemis.
Motor Drivers
The first thing we had to do was take apart the car, remove the control PCB, and sever the wires to the motors so we could replace them with our own. After stripping the internals from the robot, I hooked up the motor drivers to the motors that are built into the robot chassis, and wired them to the battery lead that came with the car. We were provided a pair of motor drivers, one for each motor in the car. Each motor in the chassis drives one pair of wheels (the wheels on each side of the robot spin together). For each motor driver, a set of connections was made to the Artemis, the motors, and the 850mAh battery.
Arduino Code
Getting the motors to spin involved using Arduino's analogWrite function to write a value to each motor driver. The speed of each motor was controlled using a pair of pins: writing a value to one pin while keeping the other at 0 sets the motor's speed, with one pin used for forward motion and the other for backwards motion. Here are the loop codes I used for spinning the robot and moving it in a straight line, respectively:
Here is a video of one of the robot's motors in action:
Control
"
Controlling the robot involved a series of straights and turns. To move the robot in a straight line, I set the "forward" arduino pins to the same value for each motor, and to spin the robot, I made the motors turn in opposite directions. Below are a few videos of the robot working. Over the course of this lab I was able to make the robot spin in place, move in a straight line, and perform a random series of movements. In later labs the robot will be able to perform more complicated tasks like turns and flips. In the straight line task, you can observe the robot moving straight based on how it moves in relation to the pattern on the tiled carpet.
Lab 6
Introduction
The objective of lab 6 was to use PID control to let the robot move based on its surroundings. In my implementation, I used data from the front time of flight sensor to achieve this.
PID
Proportional-Integral-Derivative (PID) control is a control loop feedback mechanism that calculates the error between a measured value and a desired value (called the setpoint) and uses it to correct a robotic process. In this lab, we use PID control based on the time of flight sensors to allow the robot to move more accurately and to avoid hard-coding speeds and delays. Using PID control, the robot is able to adjust its speed based on its distance from a target.
Bluetooth
Some bluetooth connectivity issues prevented me from implementing a finished bluetooth debugging system, but I was able to use bluetooth to collect sensor values from the Artemis. Sending values over bluetooth followed the same procedure as lab 2, but I added a new UUID for the time of flight sensor data. Since the time of flight sensors return float values, much of the code was drawn from the float characteristic transmission code from lab 2. Here are some snippets of the code I used to send the time of flight sensor values back to my computer:
Using the data obtained from the runs, here are some of the Error vs Time plots I was able to generate:
I was also able to determine that the highest velocity recorded (across the three runs) was 3.6m/s.
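As a sketch of how a number like this can be extracted from the logged data (assuming lists of timestamps in seconds and ToF distances in millimeters collected over bluetooth):

```python
import numpy as np

def max_speed_m_s(times_s, distances_mm):
    """Largest speed during a run, estimated from finite differences of the
    logged ToF readings."""
    d = np.asarray(distances_mm, dtype=float) / 1000.0   # mm -> m
    t = np.asarray(times_s, dtype=float)
    v = np.diff(d) / np.diff(t)                          # m/s between consecutive samples
    return float(np.max(np.abs(v)))
```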
The Lab Task
In this lab, we had a choice between three different challenges to implement. The challenge I chose was to make the robot stop at a target distance away from a wall. To accomplish this task, I used PID control to adjust the analog values being sent to the motors, and used data taken from the front-mounted time of flight sensor as the input to the PID logic. Here are some images of the robot, with the front time of flight sensor mounted, as well as some images of my PID code:
In my testing, I found that setting values for kp and kd had a strong positive impact, but I could not get a consistent response from changing the value of ki. I tested many values, and had some of my most accurate and most consistent runs with kp = 0.15 and kd = 1. Note that before I output the result of my PID code, I use min() and max() functions to threshold the result between 50 and 150, which I felt were appropriate minimum and maximum PWM values to send to the motors. In addition to the fundamental PID logic, I also included a flag that indicates the direction of the robot based on whether the error is positive or negative. Since PID can return a negative value, I take the absolute value of the PID result using the abs function, and if the result was negative I apply it in the opposite direction, since that makes much more sense than trying to send a negative value to the motor drivers.
I use the output of the PID function as the PWM signal sent to the motor driver. This means that a greater error induces a greater motor speed, and vice versa. Therefore, when the robot is far away from the target it tries to close the gap as fast as possible, and when it is close to the target it moves with more precision.
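The control law described above, sketched in Python for clarity (the robot runs the equivalent Arduino code; the gains and clamping bounds mirror the values discussed in the previous paragraphs):

```python
def pid_output(error, prev_error, integral, dt, kp=0.15, ki=0.0, kd=1.0):
    """PID law for the wall-stopping task. Returns a PWM magnitude clamped to
    [50, 150], a direction flag derived from the sign of the raw output, and
    the updated integral term for the next iteration."""
    integral += error * dt
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    raw = kp * error + ki * integral + kd * derivative
    forward = raw >= 0                      # negative output means drive backwards
    pwm = min(150, max(50, abs(raw)))       # threshold into the usable PWM band
    return pwm, forward, integral
```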
Consistency
One of the most important parts of this lab was proving that the robot responds consistently with the chosen values. Below are a few videos of my robot attempting the lab task, starting at a few different distances away from the wall and target. Given a target setpoint of about 300mm (approx. 1 foot) in each trial, the robot is able to stop within an inch of the target, which shows that its behavior is consistent.
Lab 7
Introduction
The objective of lab 7 was to implement a Kalman filter, which will allow us to refine the behavior from lab 6 and will be useful for performing the stunt in lab 8.
Kalman Filtering Concepts and Setup
Kalman filtering is an estimation algorithm that allows us to interpolate data between time of flight sensor measurements, effectively allowing us to act as though our time of flight sensor operates at a higher frequency than it really does. By using a Kalman filter, we can refine the behavior of our lab 6 code, as we are essentially giving our PID logic more data points to work with.
To set up the Kalman filter, we first needed to perform a step response to obtain the steady-state speed and 90% rise time of the robot. We then used these values to calculate the A and B matrices for the Kalman filter. For this task I chose a PWM magnitude of 80, which gave the robot adequate time to accelerate to the steady-state speed before hitting the wall. Below is a graph of the ToF values that I obtained during the step response.
After executing the step response and computing the necessary constants, these were the values I obtained for the 90% rise time, the steady-state velocity, and the Kalman filter parameters d and m.
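For reference, one way these two measurements turn into d and m, assuming the step input u is normalized to 1 (the convention used in lecture):

```python
import math

def drag_and_mass(v_steady, t_rise_90, u=1.0):
    """Back out drag d and effective mass m for the 1-D model m*v' = u - d*v
    from a step response: at steady state u = d*v_ss, and the 90% rise time
    fixes the time constant m/d."""
    d = u / v_steady
    m = -d * t_rise_90 / math.log(0.1)
    return d, m
```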
Implementing the Kalman Filter in Python
Using the data obtained from the step response, I computed all the necessary setup for the Kalman filter, including the A and B matrices and Delta_T (which I took as the average time interval between ToF measurements), and used those values to compute the discretized Ad and Bd matrices. Here is a snapshot of the code I used to produce those values:
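A minimal sketch of that setup, assuming a state of [distance, velocity] and using Delta_T as the discretization step (names are illustrative):

```python
import numpy as np

def kf_matrices(d, m, delta_t):
    """Continuous-time A and B for the drag/mass model, discretized with the
    average ToF sampling interval delta_t."""
    A = np.array([[0.0, 1.0],
                  [0.0, -d / m]])
    B = np.array([[0.0],
                  [1.0 / m]])
    Ad = np.eye(2) + delta_t * A     # first-order (Euler) discretization
    Bd = delta_t * B
    return Ad, Bd
```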
For the sigma, sigma_u, and sigma_z matrices, I found that the following values produced a Kalman filter model that I was satisfied with. In my testing I never found the time of flight sensors to be terribly inaccurate, even when nearing the top of their range (around 4 meters). Therefore I found it appropriate to choose covariance matrices that produced outputs that "trusted" the data more than the model. Here were the values that I chose:
Below I discuss more of how varying the values in the covariance matrices affects the output of the Kalman filter.
To perform the Kalman filtering, I iterated over the list of ToF values and applied the provided Kalman filtering code. Here was the code I used to do this:
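For reference, a sketch of the standard predict/update cycle this code performs, assuming numpy, a state of [distance, velocity], and the Ad/Bd, sig_u, and sig_z matrices described above (my exact sign conventions may differ):

```python
import numpy as np

C = np.array([[1.0, 0.0]])    # the ToF sensor measures the first state (distance)

def kf_step(mu, sigma, u, y, Ad, Bd, sig_u, sig_z):
    """One Kalman filter iteration: predict with the motor input u, then
    correct with the new ToF measurement y."""
    mu_p = Ad @ mu + Bd * u
    sigma_p = Ad @ sigma @ Ad.T + sig_u
    K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + sig_z)
    mu_new = mu_p + K @ (y - C @ mu_p)
    sigma_new = (np.eye(2) - K @ C) @ sigma_p
    return mu_new, sigma_new

# Iterating over a logged run (tof and pwm are equal-length lists):
# for y, u in zip(tof, pwm):
#     mu, sigma = kf_step(mu, sigma, u, y, Ad, Bd, sig_u, sig_z)
```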
Here was the result of the Kalman filter. The blue line represents the raw data taken from the time of flight sensor. The orange line is the data produced by the Kalman filter. As shown, the Kalman filter output is very similar to the measured data. I chose covariance constants that made this happen because I had never found the ToF sensors to be inaccurate.
Varying Covariance Matrices
Varying the covariance matrices produced interesting results for the Kalman filter output. Using the matrices I used above, I produced a Kalman filter output that "agreed" with the raw ToF data output. However, increasing the constant in the sigma_z covariance matrix relative to the sigma_u matrix produced a very different result. Evening out the constants in each matrix (thereby indicating lower trust in the data and higher trust in the model) produced the following graph:
Although the result was interesting to see, I would not use these constants in my Kalman filter. As I already discussed, the sensors have been accurate for me up to their advertised range, so there was no need for me to override the sensor data with my Kalman filter prediction.
Kalman Filtering on the Artemis
After understanding the Kalman filter in Python, I produced an identical implementation written in Arduino, which runs natively on the Artemis. Unlike the Python "dry run", this Kalman filter runs in real time on the Artemis, predicting ToF sensor values for use in my PID control code for a more refined stunt. Here was the code I used to implement the Kalman filter in Arduino:
I used the output of the Kalman filter directly in my PID computation code, relying on the Kalman filter predictions whenever the time of flight sensor had not generated a new value in between loop iterations. Below is a graph of the Kalman filter output for a PID run with a setpoint of 400mm. The Kalman filter output is depicted in orange, while the raw ToF sensor data is depicted in blue. It is clear that the Kalman filter is able to interpolate some of the wider gaps between ToF readings, indicating that the Kalman filter is in fact working. Not only does the Kalman filter interpolate the ToF data points, its output also stays close to the raw ToF readings.
Lab 8
Introduction
The objective of lab 8 was to combine everything we've done into the stunt!
My Stunt Implementation
My stunt implementation was simple. I programmed the robot to attempt its flip once it crossed a setpoint which I sent over bluetooth. To flip the robot, I programmed it to move at full speed toward the wall, followed by a short pulse of full speed in the opposite direction. The sudden change in acceleration causes the robot to perform a frontflip. After the short pulse, I programmed the robot to continue in the opposite direction for a set time, also sent over bluetooth. Here is the Arduino code I wrote to make the stunt work, the ToF sensor data throughout the stunt, and a video of the stunt.
Custom Stunt
My custom stunt is a simple one in which I programmed the robot to watch for a large drop in the time of flight sensor readings. If a large drop in distance is detected, the robot interprets this as me waving my hand in front of it to "retrieve" it. It's a simple stunt motivated by my own laziness and not wanting to get out of my chair to retrieve my robot. Nonetheless, it works. Below are the videos of it working and the code I used to make it happen:
Video 1, Video 2, Video 3
Bloopers!
Lab 9
Introduction
The objective of lab 9 was to use the robot to create a static scan of the demo area. The map we create of the demo area will become useful in later labs for performing localization and navigation tasks. This lab puts together everything we have learned so far. We use the ToF sensors to measure distances, the IMU to track the angle of the robot in the demo area, and the motors to spin the robot to get a complete scan of the maze.
Taking Measurements of the Maze
To create the map of the room, I used the front ToF sensor to collect distance measurements while using the IMU to measure the orientation of the robot in the demo area. To get the angle of the robot I used the IMU to measure the angular velocity of the robot around the Z-axis, and integrated this with respect to time using the time between IMU measurements to keep track of the angle of the robot at all times. I created a function to update the angle of the robot using the IMU data, and called that function at the end of my loop to make sure that the robot's angle is continuously updating. By plotting the measured distances and angles we could obtain a polar-coordinate plot of each of the corners of the demo area. Here is an image of the code I wrote for the IMU updates:
After each measurement I sent a short PWM pulse to the motors, rotating the left and right motors in opposite directions for a tenth of a second to turn the robot in small intervals. I generated a full 360 degree scan in each corner of the demo area by repeatedly turning the robot and taking new measurements approximately every 15-20 degrees. I added a new bluetooth command type called CMD.TURN to instruct the robot to turn and report data back to my laptop.
Here are the individual scans that I took in each corner of the maze, plotted in polar coordinates (the section below describes how they were spliced together into a complete map):
Spinning the Robot
To spin the robot I used a short constant PWM pulse to jostle the robot by a small amount. I was able to get 18 to 20 measurements per position, or a measurement every 18 to 20 degrees. Since my measurement intervals were small enough and my IMU readings were accurate, I was not concerned about turning at any precise angle and instead focused on the raw number of measurements. Here is a video of the robot scanning one section of the maze:
Video
Splicing together scans
By splicing together scans taken in 5 different areas of the maze, I was able to create a complete approximation of the dimensions of the maze. To splice them together I translated the polar coordinates from the measurements into Cartesian coordinates using the following formula:
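The standard conversion, with each scan taken from a known position (x_r, y_r), a ToF reading d, and the IMU heading θ (a small constant can also be added to d to account for the sensor sitting ahead of the robot's center), is:

$$x = x_r + d\cos\theta, \qquad y = y_r + d\sin\theta$$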
Lab 10
Introduction
The objective of lab 10 was to become familiar with the simulation environment and software that we will be using in later labs. The simulation environment models the robot, the time of flight sensors, and the IMU.
Concepts
In this lab we were introduced to the concepts of odometry and ground truth. Odometry is "the use of data from onboard sensors to estimate change in position over time." The robot simulator provides simulated IMU data with which we can perform odometry. Ground truth is the most accurate measurement available, and is the standard to which we compare our sensor measurements and predictions. For the virtual robot, we are able to determine ground truth from the exact position of the robot within the simulator.
Using the Simulator
The first thing we had to do in the simulator was to move the robot in a square using the provided functions. I used the set_vel() function from the simulator and Python's built-in asyncio.sleep() function to draw the square with the robot. Below are a video of the resulting movement pattern and a screenshot of the trajectory plotter. Notice that the predicted (odometry) data (in red) and the ground truth data are not consistent. We expected this, since we already knew that the simulated IMU data used to calculate position is not very accurate.
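A minimal sketch of the open loop square, assuming the simulator's commander object exposes set_vel(linear, angular) as described above (speeds and timings here are illustrative):

```python
import asyncio

async def drive_square(cmdr, edge_time=2.0, turn_time=1.0):
    """Drive the simulated robot in a rough square, open loop:
    straight edge, then a ~90 degree turn, four times."""
    for _ in range(4):
        cmdr.set_vel(0.5, 0.0)        # move forward along one edge
        await asyncio.sleep(edge_time)
        cmdr.set_vel(0.0, 1.57)       # rotate in place at ~pi/2 rad/s for ~90 degrees
        await asyncio.sleep(turn_time)
    cmdr.set_vel(0.0, 0.0)            # stop

# In the notebook: await drive_square(cmdr)
```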
Closed Loop Roaming
For closed loop roaming in the simulated environment I programmed the robot to use simple logic for when to move forward and when to turn. More specifically, when the time of flight sensors read a value less than 400mm, I told the robot to turn until the value being read was more than 400mm, thereby allowing the robot to move forward again. Here is a video of the simulated robot moving around the demo area, and the code I used to make this happen.
When it came to moving the robot around the simulation, there was always a trade-off between moving the robot fast, and making sure it doesn't hit walls. At slower speeds, like at 0.2m/s, I found that I could get the robot as close as 100mm to a wall before needing to turn it. However at faster speeds, like 1.0m/s, I had to have the turning distance setpoint higher at 500mm to prevent the robot from crashing into the wall because it couldn't "react" fast enough.
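A sketch of this kind of roaming loop, with the forward speed and turn threshold exposed as the parameters discussed above (the front ToF reading is passed in as a function, since the exact simulator call isn't shown here):

```python
import asyncio
import time

async def roam(cmdr, read_front_mm, speed=0.5, turn_thresh_mm=400, duration_s=60):
    """Closed loop roaming: drive forward until the front ToF reading drops
    below turn_thresh_mm, then spin in place until the path clears again.
    cmdr.set_vel(linear, angular) and read_front_mm() are supplied by the caller."""
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        if read_front_mm() < turn_thresh_mm:
            cmdr.set_vel(0.0, 1.5)     # obstacle ahead: rotate
        else:
            cmdr.set_vel(speed, 0.0)   # clear path: move forward
        await asyncio.sleep(0.05)
    cmdr.set_vel(0.0, 0.0)
```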
Although the virtual obstacle avoidance code works well, there are still times when it fails to see an obstacle, most notably when the robot is extremely close to a wall, or when the ToF sensor misses a corner but the robot's width causes one of the wheels to clip the corner while passing. Running the simulation is a good way to get a sense of different types of problematic scenarios without using the actual robot. Within a few minutes of using the simulation I was able to identify those two movement issues that I might face when driving the actual robot around the maze, whereas testing the robot in the real maze would have been much more time consuming.
Lab 11
Introduction
The objective of lab 11 was to implement grid localization using Bayes filter and the robot in the simulated environment. Robot localization is "the process of determining where a mobile robot is located with respect to its environment." As we observed in lab 10, non-probabilistic models for localization yield inaccurate results. Thank you to Ronin Sharma for collaborating with me on this lab assignment.
Bayes Filter
The Bayes filter is a robot localization technique that maintains a probability distribution over the robot's possible locations. To set up the filter, we divided the demo area into a 3D grid, where each cell spans 0.3048 meters in the X and Y directions and 20 degrees around the rotational axis of the robot. Instead of predicting a single definite location for the robot, the Bayes filter computes the probability that the robot is in each cell of the 3D grid. We were provided with the framework for the lab, and were tasked with completing the following five functions: compute_control, odom_motion_model, prediction_step, sensor_model, and update_step.
compute_control()
compute_control was the first of the five incomplete functions in the lab. compute_control takes the previous and current poses and returns the control information, expressed as an initial rotation, a translation, and a final rotation, that transforms the previous pose into the current pose. Here are the formulas associated with this and my implementation in Python:
Note that the provided normalize_angle function returns an angle between -180 and 180 degrees. Normalizing the result is necessary for later steps in the Bayes filter.
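A sketch of that implementation, with a self-contained stand-in for the provided normalize_angle helper (poses are (x, y, theta) in meters and degrees):

```python
import math

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180), mirroring the provided helper."""
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    """Split the motion from prev_pose to cur_pose into an initial rotation,
    a translation, and a final rotation."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    delta_rot_1 = normalize_angle(math.degrees(math.atan2(dy, dx)) - prev_pose[2])
    delta_trans = math.hypot(dx, dy)
    delta_rot_2 = normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2
```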
odom_motion_model()
odom_motion_model calculates the probability that the robot arrived at the current pose from the previous pose, given the measured control (rotation, translation, rotation). Summing these probabilities over all previous cells for a given cell is what allows us to predict the robot's location in the maze. We use the compute_control function mentioned above to calculate the control needed to go from one pose to another, and then use the following formulas and implementation to calculate the aforementioned probability:
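A sketch of that computation, reusing compute_control from the sketch above and a local Gaussian helper (the course framework provides an equivalent one; the noise parameters here are illustrative):

```python
import math

def gaussian(x, mu, sigma):
    """1-D Gaussian density evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def odom_motion_model(cur_pose, prev_pose, u, sigma_rot=20.0, sigma_trans=0.1):
    """Probability of arriving at cur_pose from prev_pose, given the measured
    control u = (rot1, trans, rot2) reported by the odometry."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(rot1, u[0], sigma_rot)
            * gaussian(trans, u[1], sigma_trans)
            * gaussian(rot2, u[2], sigma_rot))
```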
prediction_step()
This function is the body of the Bayes filter. The prediction step iterates over every pair of cells (one from the previous grid, one from the new grid) and uses the odom_motion_model function to calculate the probability that the robot is in each cell of the new grid.
One key note about the prediction_step function is that it ignores all cells in the grid for which the belief is lower than 0.0001. This speeds up the algorithm significantly, as it allows us to avoid calculating transition probabilities from cells that are unlikely to be the location of the robot in the first place. This is important because the double loop over all grid cells is expensive.
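Sketched here with generic containers (the actual implementation indexes into the course's 3D grid arrays, but the structure is the same):

```python
def prediction_step(bel, grid_poses, u, min_belief=0.0001):
    """Bayes filter prediction step. bel maps each discrete pose to its current
    belief, grid_poses lists every pose in the 3D grid, and u is the control
    from compute_control. Cells with negligible belief are skipped entirely."""
    bel_bar = {pose: 0.0 for pose in grid_poses}
    for prev_pose, p_prev in bel.items():
        if p_prev < min_belief:
            continue                       # unlikely starting cell: skip the inner loop
        for cur_pose in grid_poses:
            bel_bar[cur_pose] += odom_motion_model(cur_pose, prev_pose, u) * p_prev
    return bel_bar
```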
sensor_model()
sensor_model is a simple function that iterates over the 18 sensor measurements taken at the robot's location and calculates the likelihood of each measurement given the robot's state. Here is the code I used to make this happen:
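A sketch of this step, reusing the Gaussian helper from the odom_motion_model sketch (the sensor noise value is illustrative):

```python
def sensor_model(obs, expected, sensor_sigma=0.11):
    """Likelihood of each of the 18 ToF readings in obs, given the readings the
    map predicts (expected) for the pose being evaluated."""
    return [gaussian(obs[i], expected[i], sensor_sigma) for i in range(len(obs))]
```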
update_step()
This function updates the probability that the robot exists in each grid cell using the predicted and computed probabilities from the earlier functions. This step executes the complete Bayes filter algorithm and puts together everything we created in the previous few functions. Most importantly, because the prediction step omits probabilities less than 0.0001, we need to make sure that the new probabilities still sum to 1, so we normalize the probabilities on the last line of the function. Here is the code I used to make this happen:
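Sketched with the same generic containers as the prediction step (expected_views maps each pose to the 18 readings the map predicts from that pose):

```python
import numpy as np

def update_step(bel_bar, obs, expected_views):
    """Weight each predicted belief by the product of its sensor likelihoods,
    then renormalize so the distribution sums to 1 (needed because cells below
    the 0.0001 threshold were skipped during prediction)."""
    bel = {}
    for pose, p_bar in bel_bar.items():
        bel[pose] = np.prod(sensor_model(obs, expected_views[pose])) * p_bar
    total = sum(bel.values())
    return {pose: p / total for pose, p in bel.items()}
```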
Video
Here is a video of my complete Bayes filter in action. The filter is accurate for most of the simulation, but towards the end it begins to lose track of the robot's location in the demo area. Higher opacity (whiter boxes) indicates a higher probability that the robot is in that location in the maze.
Lab 12
Introduction
The objective of lab 12 was to perform the update step of the Bayes filter on the robot. Thank you to Ronin Sharma for collaborating with me on this lab assignment.
Bayes Filtering
Using the same logic as lab 9, I spun the robot in 20 degree increments and sent a ToF sensor reading back to the base station with each rotation. Overall my turns and ToF readings were accurate. After collecting the data, I ran the update step of the provided Bayes filter on the resulting observations.
The task in this lab was to repeat the localization from lab 9, where we take 360 degree measurements at four specified points in the maze. The four marked poses in the lab were (-3, -2), (0, 3), (5, -3), and (5, 3), where the coordinates are specified in floor tiles.
Simulation Verification
First I ran the simulation with the provided optimized Bayes Filter implementation to verify that the code was working. Overall the results were as expected. Here were the results.
Turning
I used a short constant PWM pulse to move the robot in small turns. This was nearly identical to the implementation that I used in lab 9 during the mapping task. In lab 9 I was okay with having a few extra points in the data, but here I had to tune the turning slightly to make sure I was getting exactly 18 points, not 19 or 20. Structurally the code was the same as in lab 9, and I used the same command on the Python side. Here was the code for the turns:
Results
When I ran the procedure, I received predictions that were quite close to the points from which the measurements were taken. Here are images of the predicted locations plotted against the actual locations from which the measurements were taken.
Lab 13
Introduction
The objective of this lab was to navigate through a set of waypoints in the lab area using all the techniques we have learned so far. In particular, feedback control, localization, and offboard computation were the most useful techniques for us. Thank you to Ronin Sharma for collaborating with me on this lab assignment.
Planning and Strategy
Our strategy was to use a PD controller for straight line movements, to use integrated gyroscope values when turning, and to develop a set series of straight line and turn commands to traverse the maze. We would then apply localization when necessary to correct the robot if it ended up too far from the target point.
PD Controller
We developed a PD controller to perform straight-line movements throughout the maze. Using a PID command, we were able to send a P value, a D value, a setpoint, and a cushion to the robot. On the Arduino side, a case in our finite state machine called UPDATE_PID would receive the PD constants and update them in our Arduino code, and a performPID() function would compute the PID output on every iteration of the loop. Here is the Arduino and Python code we used for this:
Turning
In addition to the PID command, we also used a bluetooth command to turn the robot, which sent a turn speed for each pair of wheels, a turn direction, a turn angle, and a turn buffer to the robot. To turn the robot accurately, we maintained an updated value of the gyroscope at all times, and used a simple turning function that would stop the robot once the gyroscope read a certain angle. Here is the function for how we updated the gyroscope, and the function that we used for turning, as well as the python side functions to send the turn command to the robot.
Localization
We designed our localization code to be used as a tool to correct the robot if it missed the targets during the final demo. Our logic for localization was as follows. When the robot misses the target, send the target point to the robot over bluetooth. The robot will perform localization to determine its current pose. Using information about the current pose, we used the following python-side function to determine the angle and distance that needed to be covered by the robot. The robot would then turn by the required angle using our turn function, and cover the required distance using our PD controller. Here is the code we used:
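A sketch of the geometry behind that function, assuming the localized pose is (x, y, theta) with theta in degrees and the target is an (x, y) point in the same coordinate frame:

```python
import math

def turn_and_distance(cur_pose, target_xy):
    """Angle the robot should turn (degrees, counter-clockwise positive) and the
    distance it should then drive with the PD controller to reach the target."""
    dx = target_xy[0] - cur_pose[0]
    dy = target_xy[1] - cur_pose[1]
    heading_to_target = math.degrees(math.atan2(dy, dx))
    turn = (heading_to_target - cur_pose[2] + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    distance = math.hypot(dx, dy)
    return turn, distance
```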
Our localization command proved to be accurate in controlled scenarios, where the robot was on the exact centers of the floor tiles. Localization also proved accurate in geometrically distinctive areas of the maze, like corners. However, we were not able to integrate localization into our final demo due to its inconsistency when the robot was not at standard locations, and inconsistencies in motor strength when turning due to battery charge. Despite this, we were able to generate good results when using localization for a few point-to-point movements. Here are two videos of our localization working to move the robot from (-2, -1) to (0, 0), and from (5, 3) to (0, 3).
Challenges
Throughout the lab we encountered a few challenges. One of the major challenges was the inconsistency of our bluetooth connectivity: sometimes our robot would disconnect from the base station during a demo, and we would have to wait while it reconnected. Another challenge was diminishing battery charge, which resulted in variable turn distances. Despite these challenges, we were able to generate a functional demo run.
Final Demo
Overall, in our final demo we used open loop control on the Arduino side to perform micro-movements in the maze, like turning and moving forward between points, and offboarded the navigation to the Python side by sending those PID and turning commands to the robot from the base station. Here is a video of our robot traversing the maze using those techniques: