Hi! My name is Sophia and I am from New York City. I am an MEng student in Electrical and Computer Engineering. I also studied ECE at Cornell for my undergrad, with a minor in Mechanical Engineering. I have interests in power electronics and embedded systems.
The prelab for this section of the lab was to install the Arduino IDE, learn about the Artemis, and review the lab instructions. I also needed to install the CH340 driver on my Windows machine so that my computer could recognize the board.
The first task of the lab was to successfully connect the Artemis board to the computer and the Arduino IDE.
The first sample program blinks the onboard LED. In the following video, the LED is blinking at a rate of 0.5 Hz.
This second sample program interacts with the serial monitor in the Arduino IDE. The program mimics the echo command, where any string that is entered into the command line will be displayed in the serial monitor. This task verifies that communication between the computer and the Artemis is functional. To obtain readable data in the serial monitor, I needed to set the baud rate to 115200.
The analogRead program utilizes an onboard ADC on the Artemis to collect data from the temperature sensor. This example outputs the temperature in raw units. In the video below, it can be seen that when blowing on the chip, the temperature reading increased from about 32780 units to about 33400 units.
The final test was with the microphone. The purpose was to output the most dominant observed frequency in real time. I tested this function with a variable frequency generator, and the microphone was able to successfully pick up and detect changes in frequency.
After interacting with a few basic functionalities of the Artemis board, we were tasked with creating a program that blinks the onboard LED when a C note is detected. A large portion of the code can be derived from the microphone sample code. The two main additions I made were a no-delay blink and a helper function to detect the note C.
The helper function isC() checks whether the loudest frequency is a C by testing whether it falls within a specific range. To account for different octaves of C, I checked power-of-two multiples of the middle C frequency, since each C note is twice the frequency of the C in the octave below.
Additionally, I implemented a no-delay blink so that the moment the microphone no longer detects a C, the blinking stops. With a delay-based blink, the program could only respond 2 seconds after the C is played. Blink without delay is much more responsive to changes and can turn off the LED as soon as the C note is no longer detected.
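As a rough sketch of these two additions (the function names, the 3% tolerance, and the 500 ms half-period here are my own assumptions, not the exact lab code):

    // Sketch: detect any octave of C within a small tolerance
    bool isC(float freq) {
      float c = 261.63;                          // middle C (C4) in Hz
      for (int octave = -2; octave <= 3; octave++) {
        float target = c * pow(2, octave);       // each octave doubles the frequency
        if (fabs(freq - target) < 0.03 * target) return true;
      }
      return false;
    }

    // Sketch: blink without delay() so the LED can react immediately
    unsigned long lastToggle = 0;
    bool ledOn = false;
    void updateLed(bool cDetected) {
      if (!cDetected) {                          // stop blinking the moment C disappears
        digitalWrite(LED_BUILTIN, LOW);
        ledOn = false;
      } else if (millis() - lastToggle >= 500) { // toggle every 500 ms
        ledOn = !ledOn;
        digitalWrite(LED_BUILTIN, ledOn ? HIGH : LOW);
        lastToggle = millis();
      }
    }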
The prelab for this section of the lab was to set up the virtual environment. I needed to install Python on my device, activate the virtual environment, and then install the necessary packages. I downloaded the lab codebase into my project directory and started the Jupyter server, where all the Python code for this lab was written. I also set up the Artemis board with the Arduino file from the codebase. The final part of the prelab was to read through the Python and Arduino files and attempt to understand the functions used and how the program works.
Running the ble_arduino file reveals that the MAC address of my Artemis is c0:81:75:26:ab:64.
The codebase for this project included the demo.ipynb file and the ble_arduino.ino file. The Python notebook controls how the computer sends commands to and receives messages from the Artemis. The Arduino file uses BLE UUIDs, BLE characteristics, EStrings, and various command types to process commands on the Artemis board and interact with the computer over Bluetooth.
The ECHO function sends a string from the computer to the Artemis, and the board sends a modified message back. In this case, the returned messages will have the form: "Sophia says -> [message] :)". The string modification is done in the Arduino code, before sending the message back to the computer.
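On the Arduino side, the handler looks roughly like this (a sketch modeled on the lab codebase's EString helpers; the exact buffer and characteristic names may differ):

    case ECHO: {
        char char_arr[MAX_MSG_SIZE];
        // Extract the string sent from the computer
        success = robot_cmd.get_next_value(char_arr);
        if (!success) return;
        // Build the augmented reply and send it back over the string characteristic
        tx_estring_value.clear();
        tx_estring_value.append("Sophia says -> ");
        tx_estring_value.append(char_arr);
        tx_estring_value.append(" :)");
        tx_characteristic_string.writeValue(tx_estring_value.c_str());
        break;
    }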
On the computer side, the following is how the command is sent and the message is received.
The SEND_THREE_FLOATS function sends three floats from the computer to the Artemis. These three floats are then extracted and printed to the serial monitor of the Arduino IDE.
The following shows how the function is called from the Python script, and how the information is received on the Arduino serial monitor.
The GET_TIME_MILLIS function outputs the time in milliseconds and sends it from the Artemis to the computer.
On the Arduino side:
On the Python side:
The purpose of the notification handler is to process received strings without having to manually call the receive_string function each time. The callback function extracts time and temperature data and builds two arrays of the corresponding information.
To determine how fast messages can be sent, I created a loop that continuously sent the time in milliseconds for 5 seconds.
I was able to send 168 messages in 5 seconds, which is a data transfer rate of 33.6 messages per second.
On the Arduino side:
On the Python side:
Modifying the loop from Task 5, the time in milliseconds is now stored in a global array instead of being sent directly back to the computer.
There is a separate function called SEND_TIME_DATA with the purpose of looping through the array and sending each individual time data back to the computer. After each element in the array is sent, a counter is incremented. The counter is printed to the serial monitor at the end when the entire array is sent, to confirm that all timestamps were sent to the computer.
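A sketch of SEND_TIME_DATA (the array and variable names are my assumptions):

    case SEND_TIME_DATA: {
        int count = 0;
        for (int i = 0; i < num_samples; i++) {       // loop over the stored timestamps
            tx_estring_value.clear();
            tx_estring_value.append("T:");
            tx_estring_value.append((int) time_data[i]);
            tx_characteristic_string.writeValue(tx_estring_value.c_str());
            count++;                                   // one BLE message per element
        }
        Serial.print("Sent ");
        Serial.print(count);
        Serial.println(" timestamps");
        break;
    }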
In addition to time data, the temperature at the corresponding time is also collected. Both sets of data are stored in arrays on the Arduino, and the function GET_TEMP_READINGS loops through both arrays and sends corresponding time and temperature elements together to the computer. The notification handler in Python then extracts the time and temperature data from each received string into two lists.
In the first method, where data is sent to the computer one sample at a time, new data reaches the computer sooner. However, this process is limited by the data transmission rate of Bluetooth. The second method of storing information in an array first and transmitting later is much faster in the sense that data points can be collected much closer together in time; there were many points in my arrays that were collected within the same millisecond. The downside to this method is that data arrives at the computer with some delay, since the program needs to collect all the data into an array before sending the computer any information.
The Artemis has 384 kB of RAM. Assuming that each datapoint can be represented by a float (4 bytes), the maximum number of datapoints that can be stored and sent is 96,000.
I created a program that sends a message from the computer to the Artemis and then receives an echoed reply back. I calculated the data transfer rate for replies from 5 bytes to 120 bytes, in intervals of 5 bytes, and plotted the data rate against reply size. The graph is essentially linear, showing that larger packets achieve a higher data rate. Each packet carries a fixed amount of overhead, so sending a message as many small packets incurs that overhead many times, while sending it as a single larger packet amortizes the overhead and increases the effective transmission rate.
The computer is still able to read all the data published when the Artemis sends data at a higher rate. I tested this by sending 500 integers (0-499) to the computer, and the computer was still able to receive each integer without dropping any.
Overall, this lab had many components that provided a good introduction to using a microcontroller, interacting with some of its peripherals, and using Bluetooth to communicate between the Artemis and the computer. I became more familiar with JupyterLab and learned how to interface between Arduino and Python code.
The AD0_VAL definition in the code represents the last bit of the peripheral device's I2C address. This address determines which peripheral on the bus the microcontroller's commands are directed to. By default, AD0_VAL is set to 1; it can be set to 0 to change the I2C address. This is useful, for example, if there are two identical IMUs on the same bus and you need to communicate with each one separately.
In the following video, the program starts with the blue onboard LED blinking. The accelerometer measures linear acceleration, while the gyroscope measures the angular velocity of the IMU. As the acceleration along a given axis changes, the accelerometer value changes with it. When I wiggle the IMU back and forth along an axis, the Serial Plotter shows the motion captured by the accelerometer. I can also see that the gyroscope outputs vary as I change how fast I am turning the board.
The two following formulas were used to compute roll and pitch angles from accelerometer values.
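In the usual convention (with a_x, a_y, a_z the accelerometer readings; sign conventions vary with how the IMU is mounted):

$$\theta_{pitch} = \mathrm{atan2}(a_x,\, a_z)\cdot\frac{180}{\pi}, \qquad \phi_{roll} = \mathrm{atan2}(a_y,\, a_z)\cdot\frac{180}{\pi}$$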
To achieve outputs of {-90,0,90} pitch and roll, I aligned the IMU with edges on my laptop keyboard. Although not the most reliable anchor, it can be seen that the angles displayed in the serial monitor are approximately the desired angles. This can also be seen on the serial plotter. This means that the accelerometer has pretty good accuracy, with a tolerance of only a few degrees.
Arduino Code:
The following is an example of roll from the accelerometer. The time-domain graph shows some noise, and the FFT makes the other noise frequencies present in the system more obvious. I chose a cutoff frequency of 10 Hz because, according to the graph, most of the larger peaks were at the lower end of the frequency range; I wanted to preserve the desired frequencies while muting unwanted noise. The cutoff frequency affects the output: if it is too low, wanted signal content may be excluded, and if it is too high, too much noise may remain.
Python FFT Code:
The following formula was used to evaluate the low-pass-filtered data points. The value of alpha was determined as T/(T+RC), where T = 1/sample_rate and RC = 1/(2*pi*cutoff_frequency). Using a sample rate of about 200 Hz and a cutoff frequency of 10 Hz, my alpha value for the filter was about 0.24.
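Written out, this is the standard first-order IIR low-pass recurrence:

$$y[n] = \alpha\,x[n] + (1-\alpha)\,y[n-1], \qquad \alpha = \frac{T}{T+RC} = \frac{1/200}{1/200 + 1/(2\pi\cdot 10)} \approx 0.24$$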
I implemented a low pass filter for the accelerometer data. The comparison graphs of raw vs. filtered data for roll and pitch show less random noise in the filtered line. The FFT of the pitch also shows that the peaks of the filtered signal are lower in amplitude than those of the raw data.
Raw vs LPF: Time Domain
Raw vs LPF: Frequency Domain
Knocking on the table:
Arduino Code:
The following formula was used to derive angle measurements from gyroscope data:
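$$\theta[n] = \theta[n-1] + \omega[n]\,dt$$

where $\omega$ is the measured angular rate and $dt$ is the time between consecutive IMU readings.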
With this formula, the following graph was generated, with roll, pitch, and yaw data from the gyroscope.
The two graphs below compare the data for roll and pitch from the accelerometer and the gyroscope.
It can be seen that the gyroscope data is less noisy than the accelerometer data. Additionally, in the motion test that I ran, I tilted the board about each of the x, y, and z axes once, so there should be only one peak for roll/pitch/yaw over the duration of the test. The accelerometer shows more than one large peak, while the gyroscope shows only the intended peak. This is why I chose a low alpha value for the complementary filter.
An increase in sampling frequency improves the accuracy of the integrated angle, since each gyroscope reading is assumed constant over a shorter interval and less integration error accumulates.
Arduino Code:
The following formula is used to implement a complementary filter. This filter is essentially a weighted average of the data from the gyroscope and the accelerometer. An alpha value is chosen so that if this value is low, the filtered data would be more similar to the gyroscope data, and if alpha is large, the filtered data would be more like the accelerometer data. I chose a low alpha value so that quick vibrations from the accelerometer are ignored but also the drift from the gyroscope is compensated for.
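$$\theta[n] = \alpha\,\theta_{acc}[n] + (1-\alpha)\big(\theta[n-1] + \omega[n]\,dt\big)$$

With a small $\alpha$, the integrated gyroscope term dominates; with a large $\alpha$, the accelerometer term dominates, matching the tuning behavior described above.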
Graphs of Roll and Pitch comparing accelerometer, gyroscope, and complementary filtered data:
Arduino Code:
After removing delays and print statements from the code, I was able to sample 1553 data points in 5 seconds, a sampling rate of about 310 samples per second. The main loop on the Artemis runs faster than the IMU produces new values because the Artemis does not wait for the IMU to have data ready before continuing with the loop.
I thought it would make more sense to create separate arrays for each type of IMU data. I had 9 arrays in total, one for each: accelerometer roll and pitch (raw and LPF), gyroscope roll, pitch, and yaw, and complementary filter roll and pitch. I thought this made the most sense because it would be easy to compile corresponding data to be sent together through BLE to the computer. It is also easier to keep track of which data points correspond to what data type. I used floats to store my data because they have more resolution than ints and take up less storage than doubles.
This image shows the population of the 5 accelerometer and gyroscope arrays containing raw data. The remaining 4 filtered data arrays are populated in the LPF and complementaryFilter helper functions.
I had 9 arrays of IMU data, and assuming one more array for timestamps, that is 10 total arrays. With each element a 4-byte float, 384 kB / 4 B = 96,000 values can be stored in total. Split across 10 arrays, each array can hold at most 9,600 elements. Approximating the sample rate as 310 samples/second (where each sample fills one element in each of the 10 arrays), it would take 9600 / 310 ≈ 31 seconds to use up the available memory.
My code was able to capture 5 seconds worth of IMU data and output 1553 datapoints. These datapoints were then sent over to the computer via BLE. The data was extracted into arrays and then made into graphs, like the ones for Gyroscope and Complementary Filtering.
This picture shows the lengths of the received arrays after 5 s of data was sent over from the Artemis. This verifies that the entire set of data was successfully transferred via BLE.
I played around a bit with the RC car and recorded a video of it driving forwards and backwards, doing flips, rotating in place, and rotating while translating.
Overall, this lab was a good introduction to using the IMU and the sensors nested within it. I learned how to filter out noise and vibrations while also compensating for sensor drift. I worked with Steven Sun and Benjamin Liao to determine the best method to implement FFTs in Python, as well as how best to test the IMU so that relevant data can be seen in the graphs.
The I2C address of the time-of-flight sensors is programmable, but the default address is 0x52. The last bit of the address determines whether the controller is reading from or writing to the peripheral: a 0 indicates a write and a 1 indicates a read.
To control the two ToF sensors individually, I think it is better to change the address of one sensor programmatically. This keeps control of the two sensors better in sync and minimizes the delay between them.
The two sensors will be placed one at the front of the car and one at the rear. If the car is moving fast and there is a small obstacle directly to the side of the car, the sensors might miss it.
Below is a block diagram of the setup for this lab. There are 2 ToF sensors and 1 IMU connected to the Artemis via the QWIIC breakout board. The Artemis is then connected to the computer.
In lab, I soldered a JST connector to a 700 mAh battery. I connected the battery to the Artemis board and verified that the board was able to run and receive power without being connected to the computer.
Below is a video of the Artemis sending BLE messages back and forth with my laptop while powered only by the battery.
I soldered a QWIIC connect cable to the pins of the ToF sensor, with the blue wire being SDA and the yellow wire being SCL. After connecting a single ToF sensor to the Artemis via the QWIIC breakout board, I ran the example code Wire_I2C and saw an I2C address of 0x29. This is unexpected, since the datasheet lists the ToF sensor's address as 0x52. However, it still makes sense: in 8-bit binary, 0x52 is 01010010, where the last bit is the R/W bit. Dropping that bit leaves the 7-bit device address 0101001, which is 0x29.
The ToF sensor has 3 distance modes: short, medium, and long. The short mode has the lowest distance range but the best ambient immunity, meaning it is the best at detecting the desired target signal while minimizing interference from the surrounding environment. The long distance mode is able to operate with the largest range of distances but is more susceptible to environmental interference. The medium mode is a compromise of the two extremes. Given the small size and speed of the robot car, it does not need to sample large distances in very short amounts of time. Thus, the short mode with a maximum distance of 1.3 m but with better ambient immunity would be the best option.
To test the absolute range of the sensor in short mode, I positioned the ToF sensor facing the ceiling. The maximum value I could get from the ToF sensor was about 7 ft before it returned 0.0.
Although the sensor can still take measurements out to about 7 ft, its accuracy drops near the limit of the short-mode range (1.3 m), so I took distance measurements at 10 cm intervals to observe the behavior. The graph below shows that at distances under 1300 mm, the ToF measurements are quite close to the expected value (within 10 mm). As distances increased beyond 1300 mm, however, the accuracy dropped quickly.
The following is a closer look at the accuracy of the ToF sensor measurements.
Arduino code to send ToF data to computer for graphing:
To test repeatability, I kept the sensor a fixed distance from the wall and took 100 measurements. I plotted these measurements and observed that the difference between the largest and smallest measurement was 4 mm. This shows that the sensor has pretty good repeatability and doesn't deviate much between measurements.
To calculate ranging time, I took the timestamp in ms before gathering any data points, took 100 sensor measurements, and then took the timestamp again at the end. I divided the difference between the two timestamps by the 100 data points: it takes about 51.52 ms to gather each data point, a rate of about 19.4 measurements per second.
Arduino Code:
To configure the I2C address of the second ToF sensor, I utilized the XSHUT pin. I soldered a wire between the XSHUT pin on the ToF sensor and the A2 pin on the Artemis. To change the I2C address, I shut off one ToF sensor, programmed the other sensor to have an address of 0x50, and then turned the off sensor back on. This way the two ToF sensors have different I2C addresses of 0x50 and 0x52 and it would be possible to communicate with both sensors simultaneously.
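A sketch of that sequence with the SparkFun VL53L1X library (the object names, pin constant, and simplified begin/boot handling are my assumptions):

    #include <SparkFun_VL53L1X.h>

    SFEVL53L1X tof1;                    // sensor to be re-addressed
    SFEVL53L1X tof2;                    // sensor wired to XSHUT
    const int XSHUT_PIN = A2;

    void setupToF() {
      pinMode(XSHUT_PIN, OUTPUT);
      digitalWrite(XSHUT_PIN, LOW);     // hold sensor 2 in reset
      tof1.begin();
      tof1.setI2CAddress(0x50);         // sensor 1 now answers at 0x50
      digitalWrite(XSHUT_PIN, HIGH);    // release sensor 2; it boots at the default 0x52
      tof2.begin();
    }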
I confirmed that both the ToF sensors had different I2C addresses and were able to output distance measurement data.
Next I wrote a loop that prints the timestamp as fast as possible and only prints messages from the ToF sensors when both have data ready. The loop runs about once every 6 ms, or 167 Hz, while it takes about 68 ms for both ToF sensors to have new data ready. The loop clearly runs much faster than the ToF data becomes ready, so the ToF sensors are the limiting factor.
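A sketch of the polling loop (sensor object names assumed, and both sensors are assumed to already be ranging):

    void loop() {
      Serial.println(millis());                       // timestamp printed every iteration
      if (tof1.checkForDataReady() && tof2.checkForDataReady()) {
        int d1 = tof1.getDistance();                  // distance in mm
        int d2 = tof2.getDistance();
        tof1.clearInterrupt();                        // arm the sensors for the next reading
        tof2.clearInterrupt();
        Serial.print("ToF1: "); Serial.print(d1);
        Serial.print("  ToF2: "); Serial.println(d2);
      }
    }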
Arduino Code:
Once I connected all three sensors (2 ToF and 1 IMU) to the Artemis via the breakout board, I recorded data for 8 seconds. I did one rotation each in roll, pitch, and yaw, and then waved my hand in front of each ToF sensor. Each IMU plot shows one large spike, and the ToF plots each show a section of oscillations at the end. I collected the data on the Arduino and sent it to my computer via BLE, where the notification handler in Python parsed the data into arrays for plotting.
Arduino Code:
Some IR distance sensors include the amplitude-based IR sensor, the IR triangulation sensor, and the IR time-of-flight sensor. The amplitude-based sensor is used for short-distance applications (<10 cm). It is cheap ($0.50 USD), has a small form factor, and has a high sample rate; however, it depends on target reflectivity and is highly sensitive to ambient light. The IR triangulation sensor works at distances under 1 m. It has the benefit of being insensitive to surface color and texture, but it is more expensive, bulky, sensitive to ambient light, and has a low sampling rate. Finally, the time-of-flight sensor works over a larger range of 0.1 - 4 m. It has a small form factor and is insensitive to surface color, texture, and ambient light, but it is more expensive than the amplitude-based sensor, requires complicated processing, and also has a low sample rate.
In order to isolate the effect of the target object's surface qualities, I did each test twice: once with ToF 1 on Object 1 and ToF 2 on Object 2, and once with the objects swapped. This makes sure that any difference in results is not due to inherent differences in accuracy between the ToF sensors. I placed the objects 50 mm away from each sensor.
To test sensitivity to color, I used a white box and a black box. They had a similar texture and were opposite in color. The graphs show that the sensors responded better to the black box, as the measured values were closer to 50 mm.
To test sensitivity to texture, I used a white box and a white towel. They had the same color and different textures. The graphs show that the sensors responded better to the white towel, as the measured values were closer to 50 mm.
It can be seen that even with the slight variations due to color or texture, the measurements were still fairly accurate and still within a 10 mm deviation.
I used Nila Narayan's website as a guide on how to solder the XSHUT pin to the Artemis.
Below is a block diagram of the connections between the Artemis, the batteries, the motor drivers, and the motors. The inputs of the motor drivers are connected to the Artemis, and the outputs of the motor drivers are connected to the motors. This makes sense because each motor driver takes in a PWM signal and then outputs a corresponding voltage that causes the motor to spin.
It is better to have two separate batteries for the Artemis and the two motors. The motors drain their battery quickly, and it is easier to recharge a separate motor battery than to disable the entire car because the Artemis has also lost power. Additionally, the motors draw a lot of current, which could cause instability on the Artemis side if the two shared the same battery.
After soldering together the motor drivers to the Artemis, I connected power to the system using the power supply.
I chose a voltage on the power supply that is close to the output voltage of the battery (3.7 V). I figured that would best simulate how well the motors would perform when plugged into the actual battery. This voltage is well below the maximum voltage rating of the motor driver so it is safe to use.
To test the soldered motor drivers and their connection to the Artemis, I wrote the following code in Arduino:
I connected the motor driver outputs to two different channels on the oscilloscope and made sure that I was seeing the correct PWM signals in both. I first started by testing a single PWM value to check that I could get a signal at all. Then I swept through the range of possible PWM values (0-255) to make sure that the motor driver could detect the full range of signals.
After checking that the motor drivers were able to successfully receive PWM signals from the Artemis, I soldered the outputs of the motor drivers to the terminals of the two motors. I wrote Arduino code to test that the wheels spin in response to the PWM signals.
At first I tested spinning one set of wheels.
Then I wanted to check that both sets of wheels could turn individually, forwards and backwards.
Arduino Code:
After confirming the motors run on the power supply, I hooked up the motors to the battery instead. Similarly, I first tested that one of the motors was able to run before testing both motors simultaneously.
After verifying that both motors run on battery, I secured the components into the car on the ground and ran it for 5 seconds.
After experimentally changing the PWM sent to the motors, I found that for my car, the lowest PWM value at which the robot moves forward is about 27. For on-axis turns, the PWM value is higher, at about 120. At 120, the car attempts to very slowly make a turn, though only one pair of wheels moves at a time. When I raised the PWM to 150, all 4 wheels were able to spin together.
Forward at PWM = 27:
Turn at PWM = 120:
Turn at PWM = 150:
When running the car on the ground, I noticed that it drifted slightly to the left, meaning the right wheels were spinning faster than the left. I experimentally found a constant multiplier of 0.88 that reduced the PWM of the right wheels and made the car drive straight. I also added a slight 50 ms delay because one set of wheels was reacting before the other. I put tape on the "starting line" and lined up my robot behind it, with another piece of tape 6 squares (6 feet) away. The video shows that after calibration, the robot stops over the ending piece of tape.
Arduino Code:
Finally, for open loop control, I ran a repeated combination of the robot going forward and then turning.
Arduino Code:
According to the PWM oscilloscope graphs, the PWM frequency is about 182.8 Hz. This frequency is adequately fast for these motors: experimentally, the motors react well to the PWM signals they receive. Manually configuring the timers to generate a faster PWM signal could allow finer control of the motors as well as greater stability.
From before, the lower PWM limit to get the robot moving in a straight line is 27. In my Arduino code, I set the PWM to 27 for the first two seconds to overcome static friction. Then I was able to lower the PWM to 26 and have the robot run at its slowest steady-state speed. I also tried PWM = 25; the robot moved for a bit but stopped soon after, which I decided could not count as steady-state motion. Thus I concluded that PWM = 27 is the lowest PWM to overcome static friction and PWM = 26 is the lowest PWM to maintain the slowest steady-state speed.
I used Nila Narayan's website as a guide to how to solder the pins together on the motor drivers, as well as which pins on the Artemis to avoid using (PWM incompatible).
I was able to send and receive data over Bluetooth mainly by following the code used in Lab 1. On the Arduino side, I added another case command called PID. This command takes in three float values: Kp, Ki, and Kd, and populates the corresponding variables on the Artemis. I did this so that I could tune my controller without having to re-upload the Arduino code to the Artemis every time. I called ble.send_command(CMD.PID, "Kp|Ki|Kd") from my computer to run the PID. On the Python side, I modified the notification handler to extract all the data sent over into arrays so that I could create graphs from the data.
Sending PID Command in Python:
Receiving PID Gains in Arduino:
Sending Controller Data in Arduino:
I tuned my PID controller by tuning Kp first, then Kd, and lastly Ki, since it is typical to start with the proportional gain. I initially explored Kp values on the order of 0.01 - 0.1, given the speed and distance the robot travels. After the robot reached a steady state close enough to the setpoint, I tuned Kd so that the robot would slow down more as it approached the goal. Finally, I set a small Ki value because my robot did not have big issues with steady-state error at this point. My final coefficient values for the simple setup were: Kp = 0.25, Ki = 5e-4, Kd = 0.09.
Arduino Code for PID Control and Motor Input Calculation:
The suggested PWM calculated by PID used this formula, where dt is the time between two consecutive ToF readings:
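$$u[n] = K_p\,e[n] + K_i\sum_{k=0}^{n} e[k]\,dt + K_d\,\frac{e[n]-e[n-1]}{dt}$$

where $e[n]$ is the error between the measured ToF distance and the setpoint.
I included deadband compensation by adding 27 to the magnitude of the PWM value produced by the PID terms; this deadband value was determined in Lab 4. In my controller, I also capped the PWM value so that the robot travels at a maximum of 100 PWM, forwards or backwards.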
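A sketch of how the deadband and cap combine with the PID output (the variable names and the moveForward/moveBackward motor helpers are hypothetical):

    float u = Kp * e + Ki * e_integral + Kd * e_derivative;  // raw PID output
    int pwm = (int) fabs(u) + 27;        // shift past the deadband found in Lab 4
    if (pwm > 100) pwm = 100;            // cap the speed at 100 PWM
    if (u > 0) moveForward(pwm);         // sign of u picks the direction
    else       moveBackward(pwm);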
Code implemented on Robot (5 ft):
Code implemented on Robot (8 ft):
Three Repeated Trials:
For the version of the controller where the PID loop is running at the same rate as the ToF sensor's data readings, I saw a rate of about 10.7 Hz.
PID Loop waiting for ToF Data in Arduino:
Data Rate Calculation in Python:
For the version of the controller where the PID loop does not wait for the ToF sensor's readings, I saw a much higher rate of about 126.2 Hz. This means the PID loop can run much faster than the rate at which the ToF sensor produces data.
PID Loop not waiting for ToF Data in Arduino:
Data Rate Calculation in Python:
Code implemented on Robot:
In addition to reusing the previous distance value when the ToF sensor has no new data, it is also possible to extrapolate values in those time frames. Every time the sensor has a new reading, I calculate the slope (newToF - oldToF)/dt_ToF, where dt_ToF is the time between two consecutive ToF readings, not PID loops. I then multiply the slope by the duration of one PID loop and add it to the previous distance to get the new extrapolated distance. In the distance graph, the span between two ToF data points is no longer a constant line, but one with a specific slope.
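A sketch of the extrapolation logic inside the PID loop (variable names are my own; dt_tof and dt_pid are assumed to be measured elsewhere):

    if (distanceSensor.checkForDataReady()) {
      int newToF = distanceSensor.getDistance();
      distanceSensor.clearInterrupt();
      slope = (float)(newToF - oldToF) / dt_tof;  // mm/ms between two real readings
      dist = newToF;                              // use the fresh measurement directly
      oldToF = newToF;
    } else {
      dist = dist + slope * dt_pid;               // extend the last trend over one PID period
    }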
Extrapolation Code in Arduino:
Code implemented on Robot:
The large peaks seen in the d term of the PWM graph occur when the ToF sensor gets a new reading that differs greatly from the previous extrapolated data point. A low-pass filter on the derivative term could be useful here to smooth out the peaks.
Integrator windup protection is necessary when something prevents the robot from moving the way it wants to, especially close to the setpoint. In my tests, I ran the robot with and without windup protection, on floor and on cloth. I held the robot close to the setpoint and prevented it from moving forward; during the stall, the integrator term would be rapidly increasing. Without windup protection, the robot is expected to shoot forward when released and have a hard time correcting back to the setpoint quickly. Integrator windup is most clearly a problem when stalled close to the setpoint, because if the car is stalled far from the goal, the proportional term dominates the PWM anyway.
In my solution, I set an upper and lower bound on the accumulator variable, which ensures that the integrator term cannot indefinitely increase the PWM while the robot is stalled. I also reset the accumulator to 0 whenever the robot crossed the setpoint, which helps the controller react to steady-state error faster.
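A sketch of the accumulator handling (the bound ACC_MAX and the variable names are assumptions):

    e_integral += e * dt;                                // accumulate error
    if (e_integral >  ACC_MAX) e_integral =  ACC_MAX;    // clamp so the I term can't wind up
    if (e_integral < -ACC_MAX) e_integral = -ACC_MAX;
    if ((e > 0) != (prev_e > 0)) e_integral = 0;         // reset when the setpoint is crossed
    prev_e = e;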
Integrator Wind Up Protection Arduino Code:
Floor - No Windup Protection:
Floor - With Windup Protection:
Cloth - No Windup Protection:
Cloth - With Windup Protection:
It can be seen in these examples that the tests with windup protection overshoot less and correct steady-state error faster.
Ben Liao helped me maximize my PID loop rate. I also took inspiration from him while implementing the accumulator reset.
The setup of Bluetooth was similar to the process seen in Lab 1 and Lab 5. Please reference those sections for detail on Bluetooth implementation.
The main difference in Bluetooth usage between Labs 5 and 6 is how the controller is started and stopped. In Lab 5, a timer limited how long the controller could run. In contrast, Lab 6 uses a trigger to activate and stop the PID controller. The case command TOGGLE_PID takes in an enabler variable ("1" to start and "0" to stop), a desired angle setpoint, and the PID gains. The ANGULAR_PID case command is responsible for sending the relevant data arrays over to the computer via Bluetooth.
ANGULAR_PID:
Notification Handler:
Data Transmitted via BLE:
I initially started with only a P controller, adjusting Kp on the order of single-digit values. The controller with just the P term worked surprisingly well: it quickly turned toward the set angle and returned to it after disturbances, with little oscillation and no steady-state error. This was when I tested the controller on the smooth table. However, when I tried it on carpet, it was apparent that the P term alone was not enough due to steady-state error. I added the I and D terms to correct steady-state error and minimize oscillations. My final PID gains were: Kp = 3, Ki = 0.8, Kd = 0.2. I tuned these values on the carpet, and then tested the robot again on the smooth floor.
Small Push:
Big Push:
It can be seen that the car accurately returns to the initial orientation after being disturbed.
In my code, I waited for the IMU to have data ready before continuing with the PID loop. The sampling rate for the data collection was around 132 samples per second: I received 546 data points over 4.123 seconds.
Digital integration accumulates error and causes gyroscope drift: small errors in each measurement add up over time, so the integrated angle becomes more inaccurate the longer it runs. This problem can be minimized by fusing data from other sensors such as the accelerometer or magnetometer (sensor fusion). For instance, for roll and pitch, gyroscope drift can be combated by implementing a complementary filter with low-passed accelerometer data.
The gyroscope in my IMU is pretty good in regards to drift, and I did not have to adjust for accumulating error. The graph below shows the drift of my gyroscope over 4 seconds: the yaw drifts by about 0.25 degrees, or about 0.0625 degrees per second, which is insignificant.
One limitation of the sensor to be aware of is the value of the maximum angular velocity that the IMU can process in a singular data point. According to the datasheet, the full-scale range of the gyroscope defaults to +/- 250 dps. The sensor also has the ability to be adjusted for +/- 500 dps, +/- 1000 dps, and +/- 2000 dps. The default of +/- 250 dps is not sufficient for our applications because at high PWMs, our car is able to exceed 250 dps. To combat this problem, I increased the maximum angular velocity value to +/- 1000 dps.
IMU Datasheet:
Code addition for full-scale range adjustment:
In my implementation, yaw is obtained by integrating the raw gyroscope data, so taking the derivative of the yaw signal is essentially equivalent to using the original gyroscope signal itself. The math works out either way, but the raw gyroscope signal could be used directly in place of differentiating the yaw.
Derivative kick did show up in my implementation when I changed the setpoint while the controller was active. The PWM graphs of the tests show the d term spiking at each change of the setpoint. However, this did not cause a problem in the actual behavior of the robot: the angle and setpoint graphs for each of the tests above show no strange movements of the robot when the setpoint changed.
Because the derivative kick is not really an issue for my robot, I did not need to implement a low pass filter before my derivative term.
In Lab 5, I had a timer to dictate when my controller was running. In this lab, I implemented a TOGGLE_PID case command to start and stop the controller. I also moved the calculation of the PID from inside a case command to a separate function run_angular_pid() that runs in the main loop. This way, when commands are received from the computer and executed, the loop to run the controller continues.
TOGGLE_PID:
run_angular_pid:
Main Loop:
TOGGLE_PID takes in an enabler flag, an angle setpoint, and the three PID gains. This was helpful for tuning PID gains without having to re-upload code to the Artemis. When the controller is enabled, it is possible to send the Artemis different setpoint values while the controller is still running. Below is a video of the car turning 360 degrees as the setpoint changes through the following angles: 0, 90, 180, 360.
Python Command:
360 Test - Floor:
In the future, when implementing stunts or navigation, it will be important that the setpoint can be updated in real time. Paths that the robot travels through will likely not be in a singular direction, especially in regards to search algorithms or avoiding obstacles. In those situations, it is crucial that the robot be able to turn in different directions while the controller is running.
Controlling the orientation while driving forward or backward will be important for keeping the car on a straight line for a certain distance. A good implementation would require solid PID control of both distance and orientation, with two setpoints: the distance from the car to the nearest obstacle, and the orientation of the car. To make this smoother later, I could translate my Lab 5 code for activating linear control into a format similar to the one in this lab: a case command for linear control that takes in an enabler flag, a setpoint value, and PID gains. Two separate functions for linear and angular control would make it easier to call each operation later.
Integrator windup protection was implemented similarly to Lab 5; please refer to the Lab 5 section for more details. Windup protection helps ensure that the controller behaves as desired on different surface textures such as floor vs. carpet.
Integral windup Arduino code:
Below is a video of the car running the same pushing and 360 rotation tests but on the carpet.
Pushing Test - Carpet:
360 Test - Carpet:
It can be seen in these examples that the controller reacts well to tests performed on the carpet. There are still no extraneous spikes or oscillations in the angle, and both the angle and PWM graphs for carpet testing look similar to the graphs for testing done on the floor.
Ben Liao helped me figure out how to set the full-scale range to a higher dps value.
The Kalman filter is important as it provides an efficient and accurate way to estimate the state of a system from noisy or incomplete measurements. It is especially useful in real-time applications where the system evolves over time and there are uncertainties surrounding the sensors and the system.
The Kalman Filter requires a dynamic model of the robot, which can be represented by the following equations:
Using these system equations, we can create a state space model with matrices A and B. In order to derive these matrices, it is necessary to find the drag coefficient d, and the mass m, of the system.
The first step in building a state space model for my system is to estimate the drag and mass terms. d and m can be found using the following equations:
To get d and m, we first need the steady-state velocity and the 90% rise time. To get steady-state data, I drove the robot open loop towards a wall at PWM = 100, the maximum PWM that I allow in my PID controller. I made sure to have a foam cushion at the wall to protect the robot from collisions. A run of 2 s was enough to reach steady state.
Below are graphs of the ToF sensor output, the motor input, and the computed velocity. The velocity graph was obtained by differentiating the distance data.
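With the step input normalized to u = 1 (the standard derivation for this drag-mass model):

$$d = \frac{u}{\dot{x}_{ss}} = \frac{1}{v_{ss}}, \qquad m = \frac{-d\,t_{0.9}}{\ln(0.1)}$$

These plug into the state space model

$$\begin{bmatrix}\dot{x}\\ \ddot{x}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & -d/m\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}u$$

where the states are position and velocity.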
I used an exponential fit of the speed graph to find the estimate of the steady state velocity.
The following image shows the steady state speed, the 90% rise time, and the speed at 90% risetime calculated in Python. The steady state speed is 2.69 m/s, the 90% rise time is 1.6 s, and the speed at 90% rise time is 2.42 m/s.
The Kalman Filter consists of two main parts: the prediction step and the update step. In the prediction step, the filter uses a mathematical model of the system to predict the next state based on the previous state estimate and any control inputs. It also predicts the uncertainty (covariance) associated with this estimate. In the update step, the filter incorporates a new measurement to correct the prediction. It calculates a weighted average between the predicted state and the measured value, where the weights are determined by the relative uncertainties of the prediction and the measurement. This results in an updated state estimate with reduced uncertainty. By repeating these steps over time, the Kalman filter continuously refines its estimates, providing accurate data points even when there are no sensor measurements available.
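In equations (the standard Kalman filter form, with discretized dynamics $A_d$ and $B_d$, process noise covariance $\Sigma_u$, and measurement noise covariance $\Sigma_z$):

Prediction step:

$$\bar{\mu}_t = A_d\,\mu_{t-1} + B_d\,u_t, \qquad \bar{\Sigma}_t = A_d\,\Sigma_{t-1}A_d^{T} + \Sigma_u$$

Update step:

$$K = \bar{\Sigma}_t C^{T}\big(C\bar{\Sigma}_t C^{T} + \Sigma_z\big)^{-1}, \qquad \mu_t = \bar{\mu}_t + K\,(z_t - C\bar{\mu}_t), \qquad \Sigma_t = (I - KC)\,\bar{\Sigma}_t$$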
Kalman Filter Python Code:
The sampling time that I recorded was 80 ms. This is used in discretizing the A and B matrices.
To test my Kalman filter in Python, I used a set of data from Lab 5 of a straight run towards the wall. I looped through the ToF data while calling the Kalman filter. I adjusted my covariance matrices by setting low values for the elements I wanted to trust more; a lower covariance signifies less noise. For instance, when I set sigma_3 to a low value (high trust and low noise in the sensor), the graph below shows the Kalman filter output strongly overlapping the sensor data. When I adjusted the covariances to put more trust in the system dynamics, or to assume more noise in the states and the sensor, the Kalman filter output starts to deviate from the original sensor dataset.
The parameters that affect the Kalman filter's performance are the covariance matrix and the A, B, and C matrices. For a system with a good dynamic model but noisy sensor measurements, it would be useful for the Kalman filter to put more trust in the model. For this system, it is acceptable to put higher trust in the sensor because it is not too noisy.
Matrix and Variable Initializations:
Kalman Filter function:
Executing Kalman Filter with Linear PID:
I tried executing the Kalman filter with different covariance matrices. With high trust in the sensor, the KF output aligned well with the ToF data points. However, when instructed not to trust the sensor and instead rely only on the system equations, the KF deviated greatly from the ToF data.
The drift stunt consists of 3 states: drive forward, turn 180 degrees, and drive forward again. The first drive-forward state is open loop: the robot repeatedly checks whether the ToF reading is less than 3 ft. If the robot is more than 3 ft from the wall, it keeps moving forward; otherwise, it moves on to the next state and initiates a 180 degree turn. The 180 degree turn is done in closed loop using the angular PID control implemented in Lab 6. When the robot senses that it is within a few degrees of the desired angle (180 degrees), it moves to the final state of going straight again. The final state is also under closed-loop control: I used the angular PID function to make sure that, while driving back, the robot maintained the correct orientation the entire way. I kept track of the current state using a variable. To make the robot turn 180 degrees no matter the orientation it is placed down at, I set the desired angle to the current angle + 180 degrees.
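A sketch of the three-state logic (the state names, the 914 mm ≈ 3 ft threshold, and the 5 degree tolerance are my own stand-ins):

    switch (state) {
      case DRIVE_FORWARD:
        moveForward(drive_pwm);                  // open loop toward the wall
        if (tof_distance < 914) {                // ~3 ft in mm
          setpoint = current_yaw + 180;          // turn relative to wherever we started
          state = TURN_180;
        }
        break;
      case TURN_180:
        run_angular_pid();                       // closed-loop turn from Lab 6
        if (fabs(current_yaw - setpoint) < 5) state = DRIVE_BACK;
        break;
      case DRIVE_BACK:
        run_angular_pid();                       // hold the heading while driving straight back
        break;
    }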
Arduino Code:
Please refer back to lab 6 for run_angular_pid function code.
After implementing the drift as three states in the main loop, I needed to slightly retune my PID gains. I ended up using Kp = 4, Ki = 0.8, and Kd = 0.5. Just like Lab 6, the PID was activated using the TOGGLE_PID Bluetooth case command.
In the last video, my robot was able to travel 6 ft in about 3.82 s.
SHAKIRA SHAKIRA
I also tried using angular closed-loop PID in the initial forward drive to make the drive straighter, but it ended up making the car do a shimmy.
the wheels on the bus go round and round
Out of curiosity, I tried drifting on the rubber mat, but the robot ended up spinning in place.
Runaway Princess
I tried adding tape to the wheels to make the stunt go faster, but the car ran away.
After running away, one of the wheels got decorated (tied up) in a pretty pink bow.
The goal of this lab is to use angular PID control and ToF sensor readings to obtain a map of the robot's environment.
I chose to control my robot using orientation control. I used the same code from Lab 6's angular PID control to turn the robot in increments of 25 degrees. In the main loop, after calling the run_angular_pid function, I check whether the current angle of the robot is within 2 degrees of the desired angle; if so, the robot stops and records ToF measurements. The graph below of setpoint and gyroscope yaw data over time shows the two lining up very well, which means the angular PID controller is tracking the angle correctly.
In the main loop of the Arduino code, if the PID is enabled, the angular PID function runs. If the gyroscope yaw angle is measured to be within 2 degrees of the desired angle, the robot stops and takes ToF measurements.
I ran the following Python code to make the robot turn a full circle and collected the data via BLE. I incremented the goal by 25 degrees each time and sent it to the Artemis. I then called the ANGULAR_PID command to send the stored time, setpoint, angle, and distance values over to the computer.
Robot turning 360 degrees on its axis:
Robot turning 720 degrees on its axis:
Historically, in the previous labs completed with this robot, the gyroscope drift has been minimal. The graph above also shows that the robot accurately turns to the setpoint orientation with minimal error. The most likely error sources are the ToF sensor readings and the fact that the robot does not rotate exactly in place when spinning 360 degrees. Assuming that the robot's rotations are confined within a 304 mm square, and that the ToF gives an error of up to 30 mm, the average error in the map would be 167 mm and the maximum error would be 334 mm.
Gyro drift test from Lab 6:
I took about 4-5 distance datapoints per setpoint angle. To make my maps look cleaner, I averaged the ToF measurements at each setpoint before plotting the data.
Finally, I converted all the angles to radians and the distances to feet before creating each polar plot.
I executed 5 total turns of the robot, one each at (0,0), (0,3), (5,3), (5,-3), and (-3,-2). I plotted the ToF data against each setpoint on a polar plot.
(0,0)
(0,3)
(5,3)
(5,-3)
(-3,-2)
You can see the corners in each scan, which show the walls of the maze as expected.
In order to plot my map in Cartesian coordinates, I performed a matrix transformation on my ToF data. I followed this formula,
where phi is the gyroscope yaw angle and r is the ToF measured distance. R(phi) is the rotation matrix, v is the input vector, and T is the translation vector. I translated each point by the coordinates of the robot's position relative to the origin. Below is the matrix multiplication executed in Python code.
I transformed every datapoint to Cartesian coordinates using this transformation and plotted it.
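$$\begin{bmatrix} x_w \\ y_w \end{bmatrix} = R(\phi)\,v + T = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix}\begin{bmatrix} r \\ 0 \end{bmatrix} + \begin{bmatrix} x_{robot} \\ y_{robot} \end{bmatrix}$$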
After obtaining the full cartesian map, I outlined the scatter plots with estimates of the walls of the map.
The following arrays contain the start and end points of the line segments that I drew. The first set of two arrays represent the outer trace of the map. The second set of two arrays is for the outline of the inner square.
Steven Sun helped me implement transformation matrices and matrix multiplication in Python.
In this lab, I implemented the Bayes filter in Python by writing the functions compute_control(), odom_motion_model(), prediction_step(), sensor_model(), and update_step().
The compute_control function takes in the current pose and previous pose and returns the control information based on the odometry motion model. Each pose is a tuple of (x, y, theta). The odometry model depicts a movement by splitting it into three separate motions: an initial rotation towards the goal, a translation to the destination, and a final rotation to correct the orientation. See the formulas below.
Next I implemented the odom_motion_model function, which returns the probability that the robot is in a certain state given its previous state and a control input. Each motion component is assumed to be corrupted by Gaussian noise.
The prediction_step function represents the prediction step of the Bayes filter. It takes in the current and previous odometry poses and, for every previous state, computes the probability of transitioning to every possible current state. This computation requires iterating through multiple nested for loops, which is computationally intensive; states with belief less than 0.0001 were disregarded to speed up the loop.
The sensor model determines the probability of a measurement given the robot's state; essentially it calculates p(z|x). Sensor measurements were modeled with Gaussian noise.
Finally, the update step of the Bayes filter updates the belief based on sensor values. The code iterates through each possible state, finds the probability of the measurement from that state, and multiplies it by the predicted belief that the robot is at that location. Lastly, the belief is normalized.
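Concretely, for a previous pose $(x, y, \theta)$ and current pose $(x', y', \theta')$:

$$\delta_{rot1} = \mathrm{atan2}(y'-y,\;x'-x) - \theta, \qquad \delta_{trans} = \sqrt{(x'-x)^2 + (y'-y)^2}, \qquad \delta_{rot2} = \theta' - \theta - \delta_{rot1}$$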
These are the results of running my Bayes Filter algorithm. The green line is where the robot actually is, the blue line is where it believes it is, and the red line is the odometry measurements. It can be seen that the blue and green lines trace each other decently well while the red line is quite off.
Below is the most probable state of each iteration along with its probability, as well as the ground truth pose.
It can be seen that the probability the robot is in the belief state is almost 1.0 for all steps, and errors were minimal. I would predict that the robot would estimate its position slightly worse in regions where it sees obstacles in many different directions, compared to a single continuous wall. Generally, though, the robot performs quite well in determining its current position.
I ran the initial simulator of the Bayes filter from the provided code. It can be seen that this graph is similar to the one shown in lab 10.
Below is the python code for the perform_observation_loop function:
It utilizes the TOGGLE_PID and ANGULAR_PID case commands from the Arduino; detailed information on these commands can be found in the previous lab writeups. The target angle is reset at every data point, and with 18 total data points, the target increments by 20 degrees each time. Finally, when all the data is collected, it is sent over to the computer and extracted. There are sleep functions between commands to ensure that each command has enough time to execute.
With the python code, I performed localization at four points: (5, -3), (5, 3), (0, 3), and (-3, -2).
(5, 3)
(5, -3)
(0, 3)
(-3, -2)
Overall, the robot did a decent job of localizing its position. It localized at (0, 3) and (-3, -2) quite well, with the predicted position directly aligned with the actual position. These two points were localized better than the other two. This could be because the closer obstacles surrounding (5, 3) and (5, -3) are measured less accurately with the ToF sensors in long range mode. Additionally, there was some deviation in the rotation angles: a full 360 turn didn't end exactly at the starting orientation, which could also have contributed to the deviation from the actual position.
Aravind Ramaswami and Ben Liao helped me with determining where to place delays in the code to ensure complete data transfer before running update step.
The objective of this lab is to have the robot navigate through the world along the path shown above. The ultimate goal is to hit these points in order:
In a purely open loop system, no closed loop control is used at all. Essentially, I could manually adjust the PWMs and time the system to turn or move forward for a certain angle or distance. This method is highly unreliable: it is prone to variations in battery level and floor friction and could easily change in behavior from run to run. It would be very annoying to tune and largely inconsistent.
Closed Loop Orientation Control: With closed loop orientation control, it is possible to control the exact angle the robot turns, which stabilizes the system's turns considerably. With this method, I would calculate the ideal angle to turn in order to move from one point to the next, and set that angle as the setpoint for the angular PID control. In addition to making turns, using angular control while moving in a straight line helps the robot drive straight. With angular control, the controllability and consistency of the robot's performance are improved.
Closed Loop Distance Control: In order to improve consistency in linear movement, it is possible to use the linear PID function to determine when to stop the robot as it moves forward. By calculating how far from the nearest obstacle the robot needs to stop, I can program a distance setpoint for the robot. This linear control makes sure the robot isn't travelling too far or not far enough when moving in a line.
Localization: In theory, using localization would be the most robust method of execution. In practice, the robot needs to turn slowly while collecting distance data points, process those points, and finally predict where it is in the world. By using localization, it is possible to correct for previously incurred errors and discrepancies, minimizing compounding error. However, localization is much more involved in both implementation and execution, and if the localization is inaccurate, it would hurt the execution and the robot would take the wrong steps.
The final chosen methodology was a combination of linear and angular PID control. I did not think purely open loop was a good idea: it would take too long to tune and still wouldn't work very well for either orientation or linear movement. I also chose not to implement localization because, in my Lab 11, the worst localization point was essentially in a different square on the floor; assuming worst-case error, the robot would wrongly determine the step to take and would follow a badly off path. My final method was to use angular PID to make accurate turns given a calculated turn angle, and then use linear PID to stop the robot at a set distance from the nearest obstacle.
All turns were implemented using angular PID control. Straight drives were done using timing, linear PID, or a combination of both, depending on how far away the nearest wall in the direction of motion was. For points far from a wall, only timing was used, because the ToF sensor reads very large distances inconsistently. For points a medium distance from a wall, timing was used first to shorten the distance, and then linear PID took over. Finally, for points with a short distance to the nearest wall in the direction of motion, only linear PID was used.
Both the angular and linear PID functions are implemented under a single function called run_pid(). This function is called in the main loop only when the enabler variable run_pid_loop is set to true, which happens when START_PID_MVMT is called. The SET_TUNING_CONSTS command sets the variables Kp, Ki, and Kd and the LPF alpha for both PID controls. The SET_SETPOINTS command takes in 5 parameters to configure the controllers. The fourth and fifth arguments set the PWM deadbands. The third argument is an enabler for the linear PID function: when it is 0, the linear PID is not used, and when it is 1, the linear PID is activated. This toggle is necessary because linear PID was not used for all forward movements. The second argument is the setpoint angle for the angular PID. Lastly, the meaning of the first argument depends on the linear PID enabler: when the enabler is 0, it is the PWM for driving straight forward, and when the enabler is 1, it is the distance setpoint from the wall at which the robot should stop.
run_pid()
START_PID_MVMT
SET_TUNING_CONSTS
SET_SETPOINTS
With these bluetooth commands it was possible to start the robot from the computer. I manually programmed the path I wanted the robot to follow in Python.
Below are two videos of the robot following the path in the world and reaching each of the desired setpoints.
It can be seen that in the final execution, the robot was able to take the correct path around the world and pass through each of the desired setpoints.
I did not use localization or Bayes Filter so I did not have inference data, or separate planning and execution steps.
I worked with Ben Liao for this lab, and we worked together on tuning the Python instructions to make the robot move along the path we wanted. Thanks to Prof. Helbling and the course staff for setting up the second world for us to use.