Eli Zhang
This is where all my lab work and progress will be documented.
This first lab was very straightforward. There was no code I wrote for it (all of the code was included in example Arduino sketches). The purpose of this lab was to test the Artemis board to make sure it's working and to get familiar with the Arduino IDE.
The first part of this lab was to ensure that the built-in LED was working on the Artemis Nano. In the video, you can see that it blinks in a regular interval every second.
The second part of the lab was to ensure that the Serial port works on the board. By running the Serial demo and making sure it echoed the Serial input to the Serial output, I could ensure it was working. In the video I type, "wii tennis" and it echoes it back in the Serial monitor, which shows that the Arduino is successfully reading and writing through the Serial port.
The third part of the lab was to test the temperature sensor and ADC on the board. The example sketch does an analog read and prints a count corresponding to the internal die temperature. As the temperature rises, the count printed increases. I verified these components were working by placing my hand on the sensor on the board and ensuring that the serial output increased. In the beginning, the count was just below 33,000, but after a few seconds it was around 33,200, showing a clear (but gradual) increase.
The final part of the lab was to test the microphone on the board. The example sketch performs an FFT and prints out the loudest frequency. As you can see in the video, when there is only ambient noise the loudest frequency is around 200 Hz, but as I hum a slowly rising note, it goes from 80 Hz to 250 Hz. The FFT output does fluctuate quite a bit, but this is probably because I was not humming very loudly and there was a fair amount of ambient noise in the background.
In this lab, I learned about how to use the Bluetooth module on the Artemis Nano to communicate with my computer (and vice versa). I learned about how data is formatted and how it is parsed when it is received. After implementing some simple functions to test transmitting and receiving data, I then learned how to set up a listener to constantly check for updates on a Bluetooth channel without constantly calling a function to poll.
The first part of this lab was to set up an ECHO command on the Artemis board. The goal for this command was to receive a message from the computer, augment the message by prepending/appending to it, and then send it back to the computer. Doing so was pretty straightforward, since there was already a command to parse the next value of the message to a character array.
Prepending/appending to the message just required calling the append function before and after appending the actual message received on the Arduino. Then the final string could be written back onto the TX line, similar to the PONG command.
The second part of the lab was to set up the SEND_THREE_FLOATS handler on the Arduino's side. This was also very straightforward, since it was almost identical to the SEND_TWO_INTS function. There was already a method in the RobotCommand file to get the next value as a float, so all I needed to do was change the data type to float and add an additional value for the third float.
The third part of the lab was to set up a notification handler that would update a global variable in a callback function. This would make it so I wouldn't have to call receive_float() constantly in order to get the most recent float value. Instead, I could just reference the global variable. This was the most complicated step so far, but I was able to find the bleak Python Bluetooth library documentation online, which helped me figure out how the start_notify and callback function was supposed to be written. The library also already had a function to transform a byte array to a float, so I didn't have to implement it.
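For reference, here's a minimal sketch of the notification pattern using bleak directly (the course wrapper hides the async plumbing, and the address and characteristic UUID below are just placeholders):

```python
import asyncio
import struct
from bleak import BleakClient

ARTEMIS_ADDRESS = "AA:BB:CC:DD:EE:FF"   # placeholder address for the board
FLOAT_CHAR_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder characteristic

latest_float = 0.0  # global updated by the callback

def float_callback(sender, data: bytearray):
    # Unpack the 4-byte little-endian float sent by the Artemis
    global latest_float
    latest_float = struct.unpack("<f", data)[0]

async def main():
    async with BleakClient(ARTEMIS_ADDRESS) as client:
        # Register the callback; it fires whenever the board notifies new data
        await client.start_notify(FLOAT_CHAR_UUID, float_callback)
        await asyncio.sleep(10)          # just read latest_float whenever needed
        await client.stop_notify(FLOAT_CHAR_UUID)

asyncio.run(main())
```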
There are a few differences between using receiveFloat() on a BLEFloatCharacteristic and using receiveString() on a BLECStringCharacteristic and then casting the result to a float. They should produce the same value, since receiveFloat() unpacks the byte array directly as a float, while receiveString() decodes the byte array into a string that yields the same precision once cast to a Python float.
However, if the data throughput is limited by the Arduino's end, it may be better for the Arduino to do fewer conversions to strings due to its limited memory. In Arduino, a char is 1 byte and a float is 4, so a string that is too long (e.g. 3.1415926) may take up more space than the float. This seems like it would make sending and receiving floats with receiveFloat() more efficient. Strings are also terminated by the null character, so if data size is extremely important, then receiving a string would be slightly less optimal.
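A quick way to see the size difference, assuming the float is sent as a raw 4-byte value versus a null-terminated ASCII string:

```python
import struct

value = 3.1415926
raw = struct.pack("<f", value)          # the raw 4-byte float representation
text = str(value).encode() + b"\x00"    # ASCII digits plus the null terminator

print(len(raw))   # 4 bytes
print(len(text))  # 10 bytes for "3.1415926"
```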
In this part of the lab, I connected two ToF sensors to the Arduino Nano and measured how effective they were with different testing parameters.
The first part of this lab was to find the I2C address of both ToF sensors. I did this by running the i2c_scanner.ino sketch after connecting all the I2C lines using the Qwiic connector. Although the datasheet specifies that the default address is 0x52, when I initially ran it with one sensor, I got 0x29, which is the expected value shifted right by one bit: the datasheet quotes the 8-bit address that includes the read/write bit, while the scanner reports the 7-bit address.
When I ran the address scanner sketch after daisy-chaining the two ToF sensors, it ended up printing all addresses. It's possible that this strange behavior was caused by the two ToF sensors having the same address and not throwing errors as expected when scanning different addresses. Regardless, after talking to the TAs, it seemed like this was a common and expected problem when daisy-chaining multiple sensors.
The second part of the lab was comparing the different possible distance sensing modes. Each mode trades off timing budget with sensing distance; in other words, the farther away you want the ToF sensor to sense targets, the longer it will take (and the more power it will consume). For short distance sensing, the timing budget is only 20ms but the max distance is 1.36m. For long distance sensing, the timing budget is 140ms but the distance increases to 3.6m. This is described in the datasheet, as shown below.
The short distance mode also holds up better under strong ambient light; its maximum sensing distance doesn't change, while the medium and long distance modes fall off rapidly to around 75 cm. Since it's better to sense and react quickly to obstacles (and not have to worry about ambient light), the short distance mode is probably optimal. I definitely don't think the robot will need more than 3 m to react to any obstacles, but this is something that can be tested later.
The next part of the lab was testing different characteristics of the ToF sensor: its error and variance at different distances, etc. I decided to make measurements in the short distance mode. In order to capture the mean and standard deviation, I put the current measurements in a running values array of size 100. After around 100 successful measurements, I would record the mean and standard deviation.
Even in the short distance mode, the percent error in distance measured doesn't grow too much even when leaving the specified ideal range of operation, as seen below. However, the standard deviation of measurements does start to increase dramatically after around 130 cm.
For the short distance, the ranging time was consistently 23 milliseconds. I measured this by incrementing a counter every time the data wasn't ready after ranging began and I had to delay for a millisecond.
There was no noticeable difference between white, blue, yellow, green, purple, and pink colors. Surprisingly, there was no noticeable difference between a reflective surface (mirror), cardboard, and paper. The sensor also performed the same in the dark.
The final step in setting up the ToF sensors was getting both ToF sensors to work together after daisy-chaining them. Since both ToF sensors have the same address, you can enable/disable them using their shutdown pins or change their addresses programmatically. I ended up doing a combination of the two: I turned off one of them on startup and changed the address of another, then turned the other one on again. This is better than switching each one on and off whenever I need to use them, since it would take longer.
I initially thought I could solder the ToF sensor's GPIO1 pin to the XSHUT pin and control shutdown from the sensor's GPIO1 directly, but I wasn't able to figure it out, so I eventually drove XSHUT from a GPIO pin on the Artemis Nano. The remaining code is just duplicated from the single ToF code, pictured below.
In this part of the lab, I connected the IMU to the Artemis Nano and recorded its response to movement. I placed the two ToFs on the front and side of the car, since the car can only drive into obstacles when it is going forward or making a turn. When connecting my two ToF sensors, my IMU, and the Artemis together, I also measured the wires to make sure they could fit in the car according to the diagram in Lab 5.
The AD0_VAL is the last bit of the I2C address; it's supposed to be 0 when the ADR jumper is closed on the breakout board. The jumper was closed on the board (and it didn't work with value 1) so I set it to 0.
When looking at the raw accelerometer and gyroscope data, it's not very clear how the sensor values change directly. The accelerometer values go from positive to negative upon flipping the board, and the gyroscope values seem to correspond somewhat to angular velocity.
The effects of rotation become much more evident when computing the pitch and roll using the accelerometer data.
As shown above, as I rotated the IMU from 90 to 0 to -90 degrees, the corresponding pitch and roll also moved roughly between -90 and 90 degrees (with small exceptions due to accidental shaking). Pitch is blue and roll is red.
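For reference, the accelerometer-only angles come down to two atan2 calls; here's a minimal sketch of that math under one common sign convention (ax, ay, az are the raw accelerometer axes):

```python
import math

def accel_pitch_roll(ax, ay, az):
    """Pitch and roll in degrees from a single accelerometer sample."""
    pitch = math.degrees(math.atan2(ax, az))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Flat on the table: gravity is all in z, so both angles are ~0
print(accel_pitch_roll(0.0, 0.0, 1.0))   # (0.0, 0.0)
# Tilted 90 degrees about the pitch axis
print(accel_pitch_roll(1.0, 0.0, 0.0))   # (90.0, 0.0)
```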
Overall, the magnetometer was surprisingly accurate. If there were more inaccuracies, I would have adjusted using the Arduino map function, but since everything was within less than a degree I didn't.
In order to determine the specifications for the low pass filter I should use to cut out noise, I plotted the FFT of my angle readings after perturbing/tapping on the sensor. I copied the roll and pitch data into MATLAB and followed their FFT tutorial, generating the frequency responses shown below.
The frequency responses above were not very informative about what frequencies should be filtered out, so instead I swept alpha values until I found one that was effective at filtering out noise (for me it was alpha = 0.1).
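The low pass filter itself is just an exponential moving average; here's a small sketch of what alpha = 0.1 does to a one-sample spike from tapping the board:

```python
def low_pass(new_reading, prev_filtered, alpha=0.1):
    # Small alpha trusts the history more, so spikes from taps are smoothed out
    return alpha * new_reading + (1 - alpha) * prev_filtered

filtered = 0.0
for raw in [0, 0, 45, 0, 0]:        # a single-sample 45-degree spike from a tap
    filtered = low_pass(raw, filtered)
    print(round(filtered, 2))       # the spike only moves the output by ~4.5 degrees
```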
The next step of the lab was to compute the pitch, roll, and yaw from the gyroscope. The gyroscope angles are calculated by reading the sensor values at every small time step and integrating them over time. However, naive integration (as shown below) causes significant drift, since small errors accumulate over time. As you can see below, the values for yaw, pitch, and roll all drift almost linearly without some correcting factor.
Unlike the accelerometer, the gyroscope is dependent on initial conditions (the orientation of the board when you start the code) but isn't as noisy.
Increasing the sampling frequency did marginally lower the drift, but not by a significant amount. But by adding a complementary filter, we can eliminate the drift. The complementary filter uses the accelerometer data and alpha to prevent drift; in my case, alpha = 0.1 entirely stops the values from drifting. Even after several minutes, the values still remain around 0 when the board is put back into its default position. You can also tell on the graph that the measurements are more robust to quick vibrations, since they do not peak up randomly above the actual angle.
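Here's a sketch of the complementary filter update (in Python for readability; the Arduino code does the same arithmetic), assuming gyro_rate is in deg/s and accel_angle is the accelerometer-derived angle:

```python
def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.1):
    # Integrate the gyro for the short-term estimate...
    gyro_angle = angle + gyro_rate * dt
    # ...then pull it toward the (noisy but drift-free) accelerometer angle
    return (1 - alpha) * gyro_angle + alpha * accel_angle

# Example: a gyro with a constant +1 deg/s bias while the board sits still
angle = 0.0
for _ in range(1000):
    angle = complementary_update(angle, gyro_rate=1.0, accel_angle=0.0, dt=0.01)
print(round(angle, 3))   # settles near 0.09 deg instead of drifting to 10 deg
```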
In this lab, I tested different features of my car! The primary purpose of this lab was to get a better understanding of how my car's mobility would permit/restrict it from doing future stunts. Unfortunately, I ran out of battery after testing stunts, so I wasn't able to use the ToFs and IMU to actually test the car's mobility.
First, I measured the car dimensions with a ruler. From the front end of one tire to the back end of another, the car is 18 cm long. It is 14.5 cm wide and 8.5 cm tall. The weight of the car is around 520 grams, and the additional hardware is about 30 extra grams.
The battery lasts about 7 minutes (with active use) before it runs out of power.
The car can turn about its own yaw axis relatively well on carpeted surfaces. However, in order to turn it in place you have to do it in short bursts, otherwise it gets off the center axis. It is pretty easy to manually control when moving forward and backwards, but very difficult to control (precisely) when turning unless you use short bursts.
The simplest stunt the car can do is a front flip or backflip; you can easily do these by inputting forward and then backwards after the car has accelerated (or vice versa).
When approaching a wall at a high enough speed, the car can flip off the wall without needing to brake in the opposite direction.
The car struggles to climb ledges; its wheels do not have much traction, its center of gravity may be too high, and since the chassis is bulky and its wheels are close together the front of the car cannot make it very far up a ledge before the back reaches it.
However, on the same ledge with a ramp, the car is able to easily accelerate to be fast enough to get over.
The purpose of this lab was to prepare the car to execute a series of commands (without manual control).
Each motor controller board has 4 PWM pins. Since we are parallel-coupling the inputs in this lab, we only need 2 PWM pins per board, so we need 4 in total for both motor controllers. Pins 11, 12, 13, and A15 are located close together on the board, and their only other use is for ICSP, which we probably won't be using. I decided to use these pins because they're located close to the edge and support PWM!
After wiring up the motor controller, I tested to make sure it worked with a controlled power source. Initially, since I didn't want anything to short, I used banana plugs to connect to my board. However, it turned out that the banana plugs were current-limited, so the car wheels moved extremely slowly. In the video, you can see a rotating wheel test that I did initially to ensure that a single motor controller was working with a controlled power source. For the power source, I used 3.7 volts (approximating a single-cell LiPo battery voltage) with a 3 amp current limit. The motor controllers are rated for around 1.2 A of continuous current per channel (so ~2.4 A combined). Then I ran both motor controllers with the same controlled power source to make sure both of them worked.
Since I didn't have access to an oscilloscope, I was able to qualitatively test the varying power output of the motors by sending different PWM signals to the motor controller inputs. For example, in the video at 0:20, you can see a lower PWM value moves the wheels a lot slower than higher values at other points in the video. You can also see in the rotating wheel test how changing the PWM input for the first and second inputs of the motor controller allows you to reverse the direction of the motors.
The next part of the lab was deconstructing the car and fitting all the components inside the vehicle. This was pretty straightforward since in the previous labs I had already measured out the optimal lengths of the wires for the ToF sensors, IMU, and batteries. I had to flip the JST connector on my battery because the one we were provided was backwards.
First, I tested the left and right wheels individually, and then I tried to find the lowest PWM value that would cause the car to move forward. The lowest I could get the car to move on carpet was 50 (out of 255), but the wheels move much faster off the ground than on the ground. I needed to add a calibration factor since my car tended to drift towards the right; I ended up using a ratio of 19:12 for the left:right power in order to get the car to go forward in a straight line. I didn't have tape for my video to demonstrate that the corrected path was straight, but as you can see the car drives parallel to my bed for about 2 meters before slowing down at the wall.
Next, I reconfigured the bluetooth lab to allow me to send commands to the car to drive it remotely. In the video below, you can see how I move forward, then turn, then move forward again. Overall, this was a very useful feature to add, since I didn't have to connect my cable to the car every time to program it again.
The purpose of this lab was to use closed-loop PID control to get the car to complete certain tasks. The task I chose was getting the car to drive as fast as possible up to a wall and stop at a certain distance away from it.
Most of the time I spent on this lab was actually setting up the prelab, which required integrating all the previous labs together. I had to combine all of my bluetooth, motor control, ToF sensor and IMU code together in order to set up a good system to execute commands remotely and log data back to the computer.
The first thing I had to do was add additional bluetooth commands to support logging and on-demand parameter adjustment. This meant adding new commands to the CommandTypes enum in my Arduino code and in my cmd_types.py script in my Jupyter Notebook. I also added commands to adjust the right-to-left power ratio (whenever the car would turn to the side instead of going straight) and the PID values for later on.
Next, I had to set up ways to log data back to the computer. Data is sent on RX/TX_STRING line, but the actual act of logging data during closed-loop control would introduce significant latency. Instead, I chose to store all the data in large data arrays while the closed-loop system was running, and then after it reached its target position it would transmit all of its data over Bluetooth.
While a command is executed, I write to several arrays that share the same index. Each array represents one category: timestamps, the accelerometer's x axis, the ToF sensor's distance reading, etc. Then, when I write to the Bluetooth serial line, I concatenate all the information together into one EString and transmit it. One problem I had was that including all the data in one message ended up exceeding the maximum packet size and the program would crash without printing out any errors. After I made the logged information more concise, there were no more issues.
After I combined all the code for the IMU, ToF sensors, and motor controllers, the next step was to set up the PID function. I initially tried using several PID libraries: the first was ArduPID. ArduPID had some deprecated functions and it was harder to view internal computations that it used. It also used doubles for all its inputs and outputs, which was a little annoying in terms of casting since the bluetooth library only supported loading in floats. As a result, I eventually implemented my own PID function, which was very short.
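The whole controller boils down to a few lines; here's the same logic sketched in Python (the Arduino version is a direct translation using floats and millis() for dt):

```python
class SimplePID:
    """Minimal PID controller: returns a control effort given a setpoint and measurement."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # The caller clamps this to the motor's PWM range and picks a direction
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = SimplePID(kp=0.3, ki=0.0, kd=0.3)   # the gains I settled on later in this lab
```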
Next, I had to modify a lot of my functions to be non-blocking. The original distance sensor and IMU code had delays to wait for the data to be ready. However, we want to continue calculating with our PID controller and updating other functions even when the ToF or IMU is not ready to update its value.
As you can see in the screenshot above, the lastTimeIMU and imuUpdateInterval make it so I can constantly check to see if the IMU is ready to fetch updates.
In order to get the car to work correctly, there were several parameters that I had to tune. First were the motor parameters: sometimes my robot skewed to the left and sometimes it skewed to the right, so I added a command I could run to tune it to whatever ratio I wanted on the spot. The next was the motor power. Since the battery level changed, which affected the motor speed required to get the robot moving, I added another command to affect the max power. Finally, I added a command to tune PID parameters.
P: Affects how fast the car ramps up in speed. If P is too low, it will never reach max speed and reach the wall at a slow speed. If P is too high, it will accelerate way too fast in the beginning due to the large error and crash into the wall.
I: As the error gets smaller, the I term will keep the speed high while the P term diminishes.
D: Dampens oscillations after reaching the target and decelerates when getting close to the setpoint.
As you can see in the video, too high of a P value resulted in the car accelerating extremely fast in the beginning and not being able to slow down in time. Too low of a P value meant the car would remain too slow and never reach a higher speed. With just P = 0.3 and I, D = 0, the car did reach its target, but it kept oscillating back and forth after it did. I found that increasing D ended up reducing these oscillations and making the car come to a stop faster. I found that changing the I term only made the car accelerate far too fast, so I kept it at 0.
I eventually ended up with P = 0.3 and D = 0.3. I tested this on bare floors and carpeted surfaces, and it worked well for both. Even when it did overshoot (from too far away it would sometimes hit the door if I made the max power too high), it would quickly reverse and return to the designated set point.
The purpose of this lab was to set up a Kalman Filter to give a more accurate prediction of the car's location, which would allow it to stop sooner and be closer to its target.
In order to set up the Kalman filter, the first step is determining its A and B matrices. These matrices are dependent on the values of the d and m variables, which are proportional to the steady-state speed and 90% rise time, respectively.
In order to determine these parameters, I executed a 1 second step response with maximum power over a long distance and plotted the data.
From my step response data, I calculated the steady-state speed between around 0.85 seconds and 1 second, where the speed curve seemed to plateau. The slope of the distance curve there (i.e., the steady-state speed) was around 1112 mm/s.
Next, based on the steady state equation and equations from Lecture 12 (pictured below), I found d to be around 0.0009. Then I found the 90% rise time by looking at when the speed was 90% ramped up, which seemed to take around 0.8 seconds. From this, I found the value for m to be around 0.0003125.
With these d and m values, I could make my A and B matrices. Afterwards, I discretized my A and B matrices according to my average sample time.
Next, I needed to estimate the noise parameters to use in my Kalman filter. I receive an updated position measurement every 35 - 40 milliseconds (0.035 s to 0.04 s). Based on my lab 3 data, measurements up to 1 meter away with the ToF sensor still only had a standard deviation of around 1 mm. The resulting sigma value for position noise was sqrt(1/0.035) ≈ 5.345. For speed, since my speed estimate is derived directly from position, I used the same value of 5.345.
Using the values I calculated above, I was able to plug in my noise approximations into the provided lab 7 functions.
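Concretely, the setup looks roughly like this (a sketch using the numbers above; sigma_4 is shown with a placeholder value since I don't quote it here):

```python
import numpy as np

# Parameters estimated from the step response above
steady_state_speed = 1112.0          # mm/s at full power
rise_time_90 = 0.8                   # seconds to reach 90% of that speed
d = 1.0 / steady_state_speed         # ~0.0009 (drag term, with u normalized to 1)
m = -d * rise_time_90 / np.log(0.1)  # ~0.0003125 (mass term)

dt = 0.035                           # ~35 ms between ToF updates

# Continuous-time model for the state x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, -d / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])           # only position is measured

# Discretize with a first-order approximation
Ad = np.eye(2) + dt * A
Bd = dt * B

# Noise covariances from the sigma values above
sigma_1 = sigma_2 = 5.345224838      # process noise on position and speed
sigma_4 = 5.0                        # placeholder measurement-noise sigma
sig_u = np.diag([sigma_1**2, sigma_2**2])
sig_z = np.array([[sigma_4**2]])
```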
Then, I was able to run the Kalman filter function on data I measured before!
I ended up not having to adjust my covariance matrices because the Kalman filter graph looked good. However, if I were to increase sigma_1 and sigma_2 (affecting sigma_u) or decrease sigma_4 (affecting sigma_z), it would have made the Kalman filter prediction more reliant on the sensor readings than the Kalman model, which would make the Kalman filter graph closer to the original distance graph.
As you can see in the graph above, the Kalman filter predicts a little ahead of the ToF reading for the car, which is exactly as intended. Using this data, I should be able to predict more accurately where the robot actually is instead of lagging behind with only the ToF readings.
The final part of this lab was implementing the Kalman filter on the Arduino itself. This just involved translating the Python code to use the BasicLinearAlgebra library to work with matrices.
All that was left was setting the initial conditions. Whereas before I had set the initial condition for mu based on the first reading from the ToF sensor, this time I set it to a fairly arbitrary starting distance, since the filter should quickly correct itself.
Finally, I tested the Kalman filter on the Arduino and logged the results. Below you can see a snippet of what was logged:
After negating the distance readings and graphing the results, it was pretty clear that the Kalman filter was behaving as expected. As you can see, the Kalman filter predicts that the robot is actually closer than the readings suggest, which makes sense if the distance readings aren't updating instantaneously.
The purpose of this lab was to prepare the robot to perform stunts! This lab was the culminating work of all the previous labs (PID, Kalman filter, etc.). Most of this lab was extremely dependent on having a fully charged battery, which was very frustrating for me because I came in to try out my stunt before the lab was open (to avoid waiting for other people).
The first part of this lab was executing a controlled stunt. Initially, I chose Controlled Stunt A. In this stunt, the robot drives towards a wall, and when it is half a meter away from it, it flips and turns around.
In order to set up my robot for the stunt, I added quite a few new commands:
Before this lab, I was mainly using coasting to slow down. I added active braking to test whether stopping suddenly would cause the car to flip. The BRAKE_THEN_FORWARD and BRAKE_THEN_BACKWARD commands allowed me to immediately move after actively braking briefly, which I also wanted to test to see if the car would flip earlier. Since there was a bit of latency introduced when sending a bluetooth command, I added this command so the car could immediately execute the next command after the first.
The most important command that I added was MOVE_TOWARDS_WALL_AND_FLIP, which gets the estimated distance using the Kalman filter, accelerates at maximum speed until it reaches the target distance, and then accelerates in the opposite direction at maximum power. Initially, I used PID for the forward movement to reach the target, but since I wasn't getting enough speed and I wanted to accelerate as fast as possible anyways, I got rid of it and opted to just use maximum PWM values.
Unfortunately, for the first controlled stunt, I needed the robot to build up enough speed before it reached the sticky pad in order to flip. No matter what I tried, I was unable to get the robot to flip even at the maximum speed. I even tried moving all my batteries to the top of my robot to change where the center of mass was; I thought if I had a higher center of mass, the car might be more unstable and prone to flipping.
Because I couldn't get the first stunt to work, I decided to work on Controlled Stunt B. The goal of this stunt is to drive forward starting from ~4 meters away, execute a rapid drift for 180 degrees, and then return back to the starting line as fast as possible. For this stunt, I combined the Kalman filter model with PID control and IMU readings from earlier labs. First, I wrote a new command, MOVE_TOWARDS_WALL_AND_TURN.
The first part of my code moves forward at a specified max power until it exceeds a custom set point that can be set through a bluetooth command parameter. When the Kalman filter predicted distance to the wall is below a certain threshold (with a small margin of error), I switch to the next stage.
The next portion is very similar to the work I did in lab 9: I reset the set point to 180 degrees, then apply the active brake (to stop moving forward and start the drift). In reality, I included the distance updates from the ToF only for debugging the logs later on. The important part of this step comes later, where I use PID updates that are based on yaw calculations using the IMU.
As shown above, in this step I constantly use PID to calculate the motor speed based on the change in yaw from the initial starting point. I also update the IMU constantly to get new gyroscope readings. Finally, when I reach the target angle (with a small margin for error), I stop and go to the next stage.
The final stage is simply driving forward as fast as possible, since the car has now turned 180 degrees.
Overall, this controlled stunt went much, much better for me. I was able to reuse parts of the first attempt as well as parts of lab 7 and 9 which I had already completed. The Kalman filter did actually seem to help make the car act properly before it was too late. When I looked at the data, it did seem like the PID was indeed updating properly.
When I begin the spin, I set the Kalman filter distance logs to 0, since the predicted distance is pretty arbitrary after the drift begins. You can see in the data how the PID control modifies the power of the turn while it is drifting to adjust for the angle. Then, when it reaches the correct angle (now KF dist is -1 as an indicator), I set power to the maximum again and go straight. So everything seems to be working well!
At this point in the lab, everything went wrong. For some reason, I stopped being able to program my Artemis board, and my ToF sensors and Bluetooth stopped working. I also kept getting data underflow errors. I did everything I could to try and debug the situation, like commenting out my code and changing the memory allocated for all my logging, but since the Artemis bootloader no longer worked from my computer, I couldn't flash these fixes to the board.
Luckily, for some reason I was still able to program the board on Aryaa's computer. We worked together for the open loop stunts, but since my board's bluetooth wasn't working, we had to program them manually.
We did two simple stunts. The first stunt was spinning in a figure 8, which we achieved by varying the ratio of the left wheels to the right wheels and swapping the ratio when turning in the opposite direction. The second was spinning in place, which is useful for lab 9! I'm still interested in doing cooler stunts (with ramps and stuff), but I might have to come back to it when I figure out how to actually program my microcontroller again.
The purpose of this lab was to map out a small space using the ToF sensor and IMU.
This lab involves calculating yaw values using the IMU. At first, this seemed pretty easy, since we did most of the setup in lab 3. However, yaw was tricky to work with: yaw is a rotation about the gravity axis, so it doesn't change the gravity vector the accelerometer measures. Because of this, I couldn't use the accelerometer values from the IMU.
As a result, I opted to use the gyroscope, which detects only changes in yaw. As mentioned in earlier labs, gyroscope values are prone to drift over time, since they only track differences and sum them over time. For pitch and roll, this was easy to counteract using a complementary filter with the accelerometer. Since the accelerometer wasn't usable for yaw, I tried using the magnetometer instead. However, the magnetometer readings were often garbage values, maybe because of the proximity to the motors (which generate magnetic fields) or just because the sensor wasn't working consistently.
I decided to use only the gyroscope yaw values instead of a complementary filter, since the drift was quite minor (only a degree every few seconds) and I was rotating the robot quite quickly relative to the drift rate.
To get the robot to spin in place, I had to make a new bluetooth command, since in earlier labs I had only driven it straight towards a wall. My new command, called SPIN_AND_MEASURE, also used PID, which meant I had to adjust the set point for my PID function (I set it to 30 degrees).
For my SPIN_AND_MEASURE command, I also added maxPower, totalAngle, and duration parameters that you can pass in. The maxPower allowed me to adjust the speed of the rotation, which I had to do depending on how charged the battery was. The totalAngle let me change the rotation from 360 to 720 to see whether the rotated amount and readings are consistent. The duration determined how long to pause after every angle increment; in my case, every 30 degrees, I stopped for a short time.
The rest of my code was pretty similar to the PID code I wrote earlier. However, this time I had an additional outer while loop that compared the cumulative angle with the total target angle. The cumulative angle increased in small increments; I used PID control to move towards these incremental set points.
For the rest of the code execution, I continuously append power, yaw, and raw data to my data arrays and update my IMU. I also check to see if I arrived at the target angle; if so, I brake and wait the specified duration.
Since my robot rotated slowly, I was able to make continuous measurements, which often gave me over 300 measurements per scan.
I measured the entire space two times; you can find my data here.
For each measurement, I placed the robot with its ToF sensor pointed down and had it rotate in a full 360 degree circle.
When looking at the distance versus time graph alone, it's hard to understand the surrounding space since the car was not moving continuously.
The raw distance versus yaw graph is a bit better, but the raw yaw values are slightly offset due to drift inaccuracies accumulated over time.
Here are all the individual distance versus raw yaw values.
Since yaw values should lie between 0 and 360 degrees, I mapped the raw yaw readings into that range. Normalized:
In polar coordinates:
Visually, the PID controller seemed to work very well; every small increment, I would reset the target angle according to the current angle, which mitigated any yaw drift accumulation. I was able to always get around 360 degrees of turning (overshooting maybe a maximum of 30 degrees or so, even if the final yaw reading was not 360 degrees from the initial starting point.) I also tried testing how consistent my rotation and measurements were by using a larger total angle. To do this, I did a 720 degree rotation. As you can see in the graph below, the measurements are pretty much exactly periodic, which shows that my PID was working correctly.
Overall, given the speed of my rotations, the drift of the sensor was a maximum of 5 degrees off (this is a pretty lenient margin given my actual observations, since I reset the target every 30 degrees). On average, the drift was around 1 or 2 degrees. In a 4x4m room with my car at the center, the maximum distance I could measure is around 2*sqrt(2)=2.83m, which would be at the corners of the room. That means the max I could be off would be around 2830 - 2000/cos(45 - 5) = 2830 - 2610 = ~220 mm in either the x or y direction. With only 2 degrees of drift, it goes down to 2830 - 2000/cos(45 - 2) = 95 mm. Not too bad!
Translating to Cartesian coordinates from each starting point is pretty straightforward; since the car always started with the ToF facing downward, I could use the following equations:
x = -distance * sin(theta) + x_origin * 304.8 (where 304.8 is the conversion factor from feet to mm)
y = -distance * cos(theta) + y_origin * 304.8
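In code, the conversion for a single scan looks like this (before the y/theta flip described below; distances in mm, yaw in degrees, origin in feet):

```python
import numpy as np

MM_PER_FOOT = 304.8

def scan_to_cartesian(distances_mm, yaws_deg, x_origin_ft, y_origin_ft):
    """Convert one scan's (distance, yaw) pairs into map-frame x/y in mm."""
    theta = np.radians(yaws_deg)
    x = -distances_mm * np.sin(theta) + x_origin_ft * MM_PER_FOOT
    y = -distances_mm * np.cos(theta) + y_origin_ft * MM_PER_FOOT
    return x, y

# Example: two readings from a scan taken at the (5, 3) spot
x, y = scan_to_cartesian(np.array([500.0, 800.0]),
                         np.array([0.0, 90.0]), 5, 3)
```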
For the translation portion of the matrix, I simply need to shift the value to the point where my robot is placed for each scan. For the rotation, I needed to flip my y and x (explained below). There was no other angular rotation necessary. This matrix can be multiplied by a vector of the distance * sin/cos(theta) values, like in the equation above.
I had to negate my y and theta values to get an interpretable cartesian mapping because I think my ToF sensor is installed on the back of the car. The final mapping is as follows:
I decided to calculate the values again with my first set of data, which I skipped before because I thought it was less accurate.
Most of the data looks good (and looks like how I remember it in lab). However, I was still a bit unhappy with the bottom right corner, which seems to have failed in the second pass (first graph) and looks a bit ambiguous in the first pass (second graph). I ended up graphing it with the data I collected from when I tried to do a 720 degree rotation to see how consistent my PID and rotation was. This one looked slightly better!
And finally, here is everything combined:
Here are the points I manually filtered. I based the inner box border on the scans I did at the origin and at the bottom-right and top-right corners. The points I used are below.
The purpose of this lab was to set up a virtual simulator environment and learn how to control and plot information about the virtual robot.
This lab had a pretty straightforward setup (just downloading and installing the files from the repository). I initially had some problems installing PyQt5 since I have an Apple Silicon (M1) Mac, but I installed PyQt6 and changed one line of the source code to use PyQt6 instead of PyQt5, and it worked. The simulator is started through a simple GUI. Once it's running, you can control a virtual robot using the arrow keys and adjust the view accordingly.
The first task was getting the simulated robot to drive in a square. This was pretty straightforward and just involved setting the linear and angular velocity to different values.
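Here's a sketch of the square-driving loop, assuming the commander exposes set_vel(linear, angular) and that awaiting asyncio.sleep controls how long each velocity command runs:

```python
import asyncio
import math

async def drive_square(cmdr, side_time=2.0, turn_time=1.0):
    """Drive four sides of a square: straight segment, then a ~90 degree turn in place."""
    for _ in range(4):
        cmdr.set_vel(0.5, 0.0)                        # forward at 0.5 m/s
        await asyncio.sleep(side_time)
        cmdr.set_vel(0.0, math.pi / (2 * turn_time))  # rotate 90 degrees over turn_time
        await asyncio.sleep(turn_time)
    cmdr.set_vel(0.0, 0.0)                            # stop at the end
```

I call this from a notebook cell with `await drive_square(cmdr)`.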
After making small modifications to my code to log the data after every delay, I was able to graph the odometry and ground truth.
As you can see in the two runs above, even though I was setting the angular velocity to make the robot turn in perfect right angles, the ground truth was not actually a perfectly accurate square. This could either reflect errors in the angular velocity or the async timer sleep function, which determines the duration of the velocity command. Since the odometry position estimate is based on simulated on-board data (like an IMU), you also see how prone it is to building up errors over time.
I ran the simulation multiple times with the same commands. You can see how the ground truth is relatively consistent but still differs slightly, whereas the odometry data is extremely unreliable.
The second task was getting the simulated robot to navigate the simulated space without crashing into obstacles. This was also pretty easy to do with closed loop obstacle avoidance.
The first technique I tried to avoid obstacles was simply getting a distance reading (equivalent to a ToF sensor reading) with cmdr.get_sensor(), then turning if it was below a certain threshold, otherwise going straight.
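A sketch of that first avoider (the threshold and speeds are illustrative):

```python
import asyncio

async def avoid_obstacles(cmdr, threshold_m=0.3):
    """Go straight until the front distance drops below the threshold, then turn."""
    while True:
        distance = float(cmdr.get_sensor()[0])  # front ToF-equivalent reading
        if distance < threshold_m:
            cmdr.set_vel(0.0, 1.5)              # stop and rotate away from the obstacle
        else:
            cmdr.set_vel(0.5, 0.0)              # clear ahead: drive straight
        await asyncio.sleep(0.05)               # give the simulator time to update
```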
This worked okay, but it was very slow and still grazed walls when the ToF sensor just barely passed by a corner. Next, I tried with smaller distance thresholds and greater angular velocity.
Next, I wrote a simple program to back up and turn 90 degrees, which reduced the frequency of grazing walls and worked pretty well with the 90 degree layout of the room (0:54 in the video). However, the 90 degree turns were still inconsistent, like before.
With some tweaking, it was possible to get the robot to go up to 2 m/s and faster without any crashes (1:29 in the video)! I ran it for over 16 turns and there were no crashes. I needed a 0.4 m distance threshold at this speed to avoid crashing into walls, but for the slower speeds 0.2 m was enough.
The purpose of this lab was to implement grid localization using Bayes Filter.
The first function I had to implement was compute_control, which calculates the control information necessary to move the robot from one pose to another. Implementing this was easy: first, you figure out how the robot needs to turn to get into the direction to drive straight towards the destination point, then drive it and turn it so it's facing the correct destination direction.
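Here's a sketch of compute_control under that rotate-translate-rotate decomposition (angles in degrees, wrapped to [-180, 180)):

```python
import numpy as np

def normalize_angle(a):
    """Wrap an angle in degrees to [-180, 180)."""
    return (a + 180) % 360 - 180

def compute_control(cur_pose, prev_pose):
    """Split the motion from prev_pose to cur_pose into (rot1, translation, rot2)."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    heading = np.degrees(np.arctan2(dy, dx))            # direction of the straight-line move
    delta_rot_1 = normalize_angle(heading - prev_pose[2])
    delta_trans = np.hypot(dx, dy)
    delta_rot_2 = normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2
```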
Next, I wrote my odom motion model function, which calculates how probable it is to transition from prev_pose to cur_pose given actual_u, based on the following equation:
I used the provided localization Gaussian functions and provided noise parameters:
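Sketched out, the motion model multiplies three Gaussians, one per control component, using compute_control to recover the control that would explain the pose transition (gaussian() below is a stand-in for the provided helper):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Stand-in for the provided localization Gaussian helper."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def odom_motion_model(cur_pose, prev_pose, u, rot_sigma, trans_sigma):
    """P(cur_pose | prev_pose, u): how well does u explain this transition?"""
    # compute_control is the function sketched above
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    p1 = gaussian(rot1, u[0], rot_sigma)
    p2 = gaussian(trans, u[1], trans_sigma)
    p3 = gaussian(rot2, u[2], rot_sigma)
    return p1 * p2 * p3
```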
The next part was writing the prediction step. In this step, I update the probabilities of moving to a certain area given the previous estimate of position and the motion model.
I spent a while trying to come up with ways to optimize the prediction step with numpy, since a 6-nested for loop is used to update the predictions. I decided to precompute the grid_to_map values, since the function is called many times with identical arguments and always returns the same result. However, I couldn't figure out a way to do better optimizations, since the odom_motion_model function can't be vectorized easily. Instead, I ended up also doing what the lab assignment suggested and checking whether the probability of being in a given cell was above a certain threshold, since extremely low probabilities are negligible but still computationally expensive to propagate.
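Here's a condensed sketch of the thresholded prediction step; I've flattened the two 3-D grid loops into loops over precomputed cell poses, and the array names are illustrative rather than the lab skeleton's:

```python
import numpy as np

def prediction_step(cur_odom, prev_odom, bel, poses, rot_sigma, trans_sigma,
                    prob_threshold=0.0001):
    """Compute bel_bar over all grid cells, skipping negligibly likely priors.

    `poses` is a flattened array of the map-frame pose of every grid cell (the
    precomputed grid_to_map values); `bel` is the prior belief over those cells.
    """
    # Control implied by the two odometry poses (compute_control sketched above)
    u = compute_control(cur_odom, prev_odom)
    bel_bar = np.zeros_like(bel)
    for i, prev_pose in enumerate(poses):
        if bel[i] < prob_threshold:          # negligible prior: skip the inner loop
            continue
        for j, cur_pose in enumerate(poses):
            bel_bar[j] += odom_motion_model(cur_pose, prev_pose, u,
                                            rot_sigma, trans_sigma) * bel[i]
    return bel_bar / np.sum(bel_bar)         # renormalize after thresholding
```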
Finally, for my update step, I calculated the likelihood of observing all 18 measurements from the current raycast for each location, then updated the bel values to reflect the likelihood calculations. This assumes that each observation from the raycast is independent, otherwise I wouldn't be able to multiply the probabilities. The update step is based on the sensor model, which calculates probabilities based on a gaussian distribution that uses the observed and expected readings from the sensor.
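And a sketch of the update step under the same assumptions (expected_per_cell stands in for the precomputed raycasts, and gaussian is the helper from the motion model sketch):

```python
import numpy as np

def update_step(bel_bar, observed, expected_per_cell, sensor_sigma):
    """Weight bel_bar by the likelihood of the 18 observed ToF readings.

    `observed` has shape (18,); `expected_per_cell` has shape (num_cells, 18)
    and holds the precomputed raycast for every grid cell.
    """
    bel = np.copy(bel_bar)
    for i, expected in enumerate(expected_per_cell):
        # Treat the 18 readings as independent, so their likelihoods multiply
        likelihood = np.prod(gaussian(observed, expected, sensor_sigma))
        bel[i] *= likelihood
    return bel / np.sum(bel)     # normalize so the belief sums to 1
```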
The next part of the lab was actually testing the Bayes filter in the simulated environment. In the simulation, the car moves throughout the environment to certain points, then spins around and makes distance measurements, which is essentially doing a raycast. This was very reminiscent of the work we did in the earlier mapping lab.
Overall, the Bayes filter seemed to work extremely well! As you can see below (I included multiple trials), when running the simulation, the blue line (the belief) follows the ground truth quite closely. This is far more useful than the prediction that can be made with only the odometry data.
In the last plot, you can even see how the predicted location gets off for a little bit but then immediately corrects itself. It's likely that it got off because it was near multiple regions that could yield similar sensor readings. This shows the power of the Bayes filter: errors don't propagate very far because they're corrected by future observations!
I also wanted to see how the Bayes filter prediction changed based on the number of observations made per each raycast. When I changed the ray count to 4, the predicted values were way off, but surprisingly with more rays the prediction didn't actually get that much closer to the ground truth. In fact, at some points, it seems to have gotten pretty off. This seems to suggest that the ray count is pretty optimal; given the noise and uncertainty of the measurements, it's possible that adding more rays would have very slight but negligible returns.
The purpose of this lab was to implement localization in the real environment. This involved configuring the robot so it would receive a bluetooth command to rotate in place, make 18 measurements in a 360 degree circle, and then return the distance measurements over bluetooth to the computer. From there, the computer runs the update step for localization, predicting the most likely location and orientation the robot is in given the true data.
The first part of the lab was testing the simulator localization, which was essentially what we had done in lab 11. This was just so I could familiarize myself with the simulator environment and plotter and Bayes filter, which we end up using later in the lab. The provided simulator Jupyter Notebook moves the simulated robot around the obstacle course, plotting the true location, the predicted location, and the odometry data. The resulting output looked extremely similar to the simulated localization from lab 11.
The next part of the lab was to update the robot to turn in a circle and make distance measurements every 20 degrees. This was essentially the same as the raycasting that the simulated robot did. It was also relatively straightforward to implement because I had already done it in lab 9 to map out the obstacle course space.
There were two important differences in the implementation this time: in lab 9 the robot only stopped to measure around 14 times per rotation, but here we were recommended to take at least 18 measurements. Additionally, in this lab, in order to correctly match the data used in the update step, the robot needs to rotate counterclockwise while measuring.
In order to implement these two functions, I added a new command to the robot's Arduino code, SPIN_AND_MEASURE_ACCURATE. As you can see in the screenshot above, the command has several parameters you can pass in to tune the rotation information, including: maximum power, total angle, angle increment, duration after each angle, and angle tolerance. I also had to change my set point and target angle to be negative, since the spinning was counterclockwise. Initially, I tried rotating clockwise and then reversing the list in the Python code, but I couldn't get it to work. Afterwards, I just had to change my logging function to only output the distance readings from the ToF sensor, since I didn't need any IMU values. Everything else on the Arduino was identical to lab 9.
In the Jupyter Notebook, I needed to implement only one function, called perform_observation_loop. The purpose of this function is to tell the robot to make its 360 degree rotation and then receive the data back over bluetooth. I accomplished this with two simple bluetooth commands (one to tune the PID for better results and the other which I described earlier).
Initially, I tried setting a global "done" variable when the bluetooth receiver line contained a certain end flag message, but this caused my code to hang (probably some incorrect asynchronous logic). Since it was just as easy to wait a certain duration to be sure the robot had finished making measurements and then combine all the messages, I did that instead. I did have to filter out all the non-numeric debug messages that were also sent over the bluetooth line, but that was pretty trivial.
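Structurally, my perform_observation_loop ended up looking something like this (the send_command call, command parameters, and collected_messages buffer are paraphrased placeholders for the actual BLE wrapper details):

```python
import time
import numpy as np

def is_numeric(msg):
    """Filter out the debug strings that also arrive over the RX string line."""
    try:
        float(msg)
        return True
    except ValueError:
        return False

def perform_observation_loop(self, rot_vel=120):
    """Spin in place, collect the 18 ToF readings, and hand them to the update step."""
    # Kick off the on-board spin. The parameter string is illustrative
    # (max power | total angle | angle increment | pause duration | tolerance).
    self.ble.send_command(CMD.SPIN_AND_MEASURE_ACCURATE, "150|360|20|500|5")

    # Instead of waiting on an end-flag message, sleep long enough for the full spin
    time.sleep(30)

    # self.collected_messages stands in for wherever the notification handler
    # appends the strings received over Bluetooth
    distances_mm = [float(m) for m in self.collected_messages if is_numeric(m)]

    # The update step expects column arrays: ranges in meters, bearings in degrees
    sensor_ranges = np.array(distances_mm)[np.newaxis].T / 1000.0
    sensor_bearings = np.arange(0, 360, 20)[np.newaxis].T
    return sensor_ranges, sensor_bearings
```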
For the next part of the lab, I tested the localization by putting it in the real life environment, running the update step, and viewing the predicted location. I used a uniform prior, since the robot was not moving from a previous location.
Here are the predicted locations when placed at the following spots, respectively: (-3, -2), (0, 3), (5, -3), (5, 3)
As you can see, the predicted location was actually quite accurate, even just with 18 distances and a uniform prior. For (-3, -2) and (5, -3), the location was actually exactly where I placed it. However, for (0, 3) and (5, 3), it was offset, but only slightly.
Below you can see the Bayes Filter probability for (-3, -2). The units are in meters, but when converted to feet, it is pretty much exactly -3 ft and -2 ft.
Below is the belief for (0, 3). The y coordinate is off by 1 foot (4 feet up instead of 3 feet). However, everything else is correct.
The rest of the values for the remaining two points are similar.
Here is a video showing the localization of all four preset points in the lab space:
Overall, localization with only 18 points worked surprisingly well for my robot. I was impressed by how accurately it predicted, even with a uniform prior. It was clear that some spots worked way more consistently than others (bottom left and right), probably because there were very few other locations that could have yielded similar readings. The localization seemed to perform the worst when there were long stretches of open space, where it's possible the ToF readings may have also become more inconsistent. For the most part, it worked super well! This lab also set a lot of good foundation for lab 13.
This lab was the culmination of all the previous labs I did. I got to test out a ton of different techniques to get path planning to work with high fidelity and repeatability. It was really fun to play around with the entire toolkit we had set the foundation for earlier in the semester, but it was also very time consuming and frustrating at times because the real environment was very tricky to work with.
Since this lab was open-ended, there were a lot of different ideas to consider. In the recent lab 12, we had done real localization, which was a very appealing candidate for the waypoint navigation. Beforehand, we had done precise PID control for moving quickly to reach a certain point away from the wall and rotating in place. In our localization labs, we also had written the foundation code to calculate the steps necessary to navigate between two poses. With all these ideas in mind, here were the approaches I mapped out:
The idea of this approach is to spin 360 degrees, localize in place, and then calculate the turns/distance to travel in order to move from one waypoint/pose to another. All of this would be repeated in a loop, so even if an angle or distance were to be off, I could just repeat the localization and slowly converge on the correct waypoint. This would require an Arduino function to turn a certain angle and a function to move forward a certain distance.
With this approach, I can pre-compute the turns and distances away from the wall to hit each waypoint quickly. Then, after starting in the proper spot, I would execute all the bluetooth commands in series necessary to hit the waypoints. With precise PID and tolerance tuning, localization would be unnecessary, since the starting angle would be known and the path would be the same every time.
With this approach, instead of having to fully localize, I could still get a pretty decent idea of my robot's angle with respect to the walls. Using the second ToF sensor on the side of my car, I could rotate the car back and forth until the side ToF's distance reading was minimized. As you can see in the diagram below, minimizing the side ToF reading means the car is consistently parallel to the wall.
Having consistent right angles means I would be able to do a very rapid path navigation with fast straight line movement and consistent 90 degree turns.
If everything were to go wrong, I wanted to have a contingency plan. I could always resort to open loop timed controls to quickly navigate the course. Luckily I didn't have to!
Since I had so many possible ways to approach the problem, I wanted to do a comparison of which ones would be the most feasible and realistic.
Approach 1 seemed the most interesting to me, since having the robot approximate its own location and automatically navigate between waypoints seemed ideal. However, it would be slow, since it would have to frequently turn 360 degrees to localize. In my experience, my robot didn't get accurate localizations when turning quickly, so it would be hard to speed up the process.
Approach 2 would sacrifice the benefits of localization (being able to orient and drive the robot towards a waypoint no matter where it was) but it would also be much faster and potentially more accurate. The localization update step only returned predicted angles in increments of 20 degrees, which means the robot could actually get off. This was a large consideration in favor of the second approach compared to the first.
Approach 3 would be faster, but also could be prone to more errors. For some waypoints in more open spaces, using just the two ToF sensors to determine orientation and location would yield a much worse prediction for the robot than a full 360 degree localization.
Approach 4 would most likely be the easiest and quickest to implement, but it would also use the fewest things we learned in class! So it was the least desirable choice.
The first thing I attempted was approach 1. I started by writing a function to convert the waypoint feet to meters, since the update step outputs predicted location in meters. I also wrote a simple function to extract the update step prediction to use in my code. In order to rotate the car, I used the same function as before, in lab 12.
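The conversion helper is tiny (1 ft = 0.3048 m):

```python
FT_TO_M = 0.3048   # the update step works in meters; the waypoints are given in feet

def ft_to_m(point_ft):
    """Convert an (x, y) waypoint given in feet to meters."""
    return (point_ft[0] * FT_TO_M, point_ft[1] * FT_TO_M)

print(ft_to_m((-3, -2)))   # (-0.9144, -0.6096): the bottom-left marked point
```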
In the rest of my code, I do the following:
The waypoints I used were calculated earlier and stored in an array of tuples.
The localization steps are also identical to those in lab 12.
The problem: Although the localization ended up yielding pretty accurate positions, the predicted angle was very inconsistent. Firstly, localization angles came in 20 degree increments, which meant the pose-to-pose navigation was not very precise. Sometimes the angle was completely off, too, as you can see in the video below. In the video, the position is roughly correct (it predicts (-5, -4) when the robot is around (-4.5, -3.5)), but because the angle is wrong, the robot continuously rotates according to the instructed changes, localizes, and has to re-adjust its angle again.
Testing localization was very annoying because I had to wait for my car to constantly rotate 360 degrees and make update step calculations. This was exacerbated by the fact that several people in the class were also waiting to use the same obstacle area. I decided to move on to PID control for both rotation and forward movement, since it was faster and more consistent.
The functions I used for PID control were both from earlier labs. In lab 6, I had written a bluetooth command to drive at the wall and stop a certain distance away. In lab 9, I had written a function to rotate any given angle, and in lab 12 I had added parameters to make it more flexible. Getting the car to work was therefore just a matter of calculating the proper angles and distances away from the wall to drive.
It was easiest to use right angle turns in this course, especially at the beginning to line up. I actually tried purposefully backing up against the wall to straighten any small angle offsets, which you can see later in the video. For earlier attempts, sometimes my PID control would get stuck because the minimum power was too low to actually inch the car forward to its set point. I eventually ended up fixing this error by adding a custom timeout duration that would move on to the next command if a command was taking too long.
Overall, I was very pleased with how the PID control went. I didn't use the Kalman filter data because the waypoints weren't sufficiently far apart to accelerate to a very high speed. However, the angle rotation PID code ended up working very consistently and the command to approach the wall until a certain distance also worked well once I tuned the PID parameters to not oscillate. I was also able to get pretty consistent results (hitting at least the first 6 waypoints) towards the end in consecutive runs. Each successful run took around a minute.
Since I got my PID control to work, I decided it wouldn't be necessary to do approach 3, which would essentially only add more accuracy to the turning component of the waypoint navigation. I was really sad and frustrated that I wasn't able to get my localization code to work, especially because the positions seemed to be accurate but the angles were always off. I felt as though the best localization could do for me was calculate the initial position of the robot and sacrifice a lot of known angle information. After I had rotated 360 degrees, I never felt like the angle output was reliable. If I had more time I would go back and try reconfiguring my sensor_sigma to change my sensor noise dependency. I was lucky (and happy) that my PID control worked so well though!
I was happy to have learned so much in this class that there were so many ways to approach the problem. Localization, PID control, and graph search algorithms are very powerful tools and I hope to have the opportunity to use this knowledge elsewhere!