I am a senior in computer science minoring in robotics. In my free time I enjoy hiking, rock climbing, and sewing.
I started by installing the Arduino IDE and the Artemis board support, as detailed here. To confirm I had installed the software correctly I uploaded the example file Blink.ino, which blinks the built-in LED. The successful result is seen in the video below.
Next I tested communication between my computer and the Artemis by uploading Example2_Serial.ino. This file demonstrates sending messages from the Artemis over a serial line to a computer. It also demonstrates how messages can be sent from the computer and handled by the Artemis. In this simple example, the Artemis simply echoes back each message it receives. The output of the serial monitor is shown below.
To test the temperature sensor (and thereby the built-in analog sensors) I ran the Artemis analog read example code, which defaults to outputting the reading from the built-in temperature sensor. I controlled the temperature by holding the Artemis Nano in my closed hand.
Finally I used the built-in microphone to detect sounds. This code (based on an example) takes the reading from the microphone and performs a Fast Fourier Transform to identify the most prominent frequencies. If the loudest frequency is within a range around 440 Hz ("A"), the Artemis turns on the built-in LED. I tested this by playing several tones from my laptop speakers, including multiples of 440 Hz. The Artemis successfully identified the correct times to turn on. There were a few times that it flickered unexpectedly, but this is likely due to the ambient noise coincidentally containing a 440 Hz component for a moment.
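The detection logic is simple to sketch outside the Arduino code; below is a minimal Python/numpy version of the idea, where the sample rate and tolerance are assumptions rather than the values from the example.

```python
import numpy as np

SAMPLE_RATE = 16000   # assumed microphone sample rate
TARGET_HZ = 440       # "A"
TOLERANCE_HZ = 15     # assumed detection window

def loudest_frequency(samples, sample_rate=SAMPLE_RATE):
    """Return the frequency bin with the largest magnitude."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def should_light_led(samples):
    """True when the dominant frequency is close enough to 440 Hz."""
    return abs(loudest_frequency(samples) - TARGET_HZ) < TOLERANCE_HZ
```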
This lab focused on getting familiar with using BLE to communicate between the Artemis and our computer, as well as laying the groundwork for data transfer in future labs.
I started by installing the ArduinoBLE library through the Arduino library manager. Next I uploaded the starter code to find my board's MAC address and saved it in the connection.yaml file. The ArduinoBLE and bleak libraries handle most of the BLE communication, which allows us to focus on application-specific features. In the BLE protocol, devices advertise the services they offer. Within each service there can be many characteristics, which hold data. Depending on how a characteristic is configured it can allow or disallow reading and writing, and it can support a variety of ways to subscribe to changes. Each characteristic is identified by a UUID, which guarantees uniqueness in a way that a non-unique field such as a human-readable name could not.
The Arduino sets up and advertises a service and then waits for a computer to connect to it. In the starter code a variety of commands are implemented to demonstrate how to work with the ArduinoBLE library. I compiled and uploaded the code, and tested that the commands worked as expected.
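From the computer's side the starter code wraps all of this, but a minimal bleak sketch of the same idea, connecting by MAC address and writing/reading a characteristic by UUID, could look like the following. The address and UUID are placeholders, and the single characteristic stands in for the separate command and response characteristics in the starter code.

```python
import asyncio
from bleak import BleakClient

ARTEMIS_MAC = "c0:00:00:00:00:00"                        # placeholder; real value lives in connection.yaml
STRING_CHAR_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder characteristic UUID

async def demo():
    async with BleakClient(ARTEMIS_MAC) as client:
        # Write a command string to the characteristic...
        await client.write_gatt_char(STRING_CHAR_UUID, b"PING", response=True)
        # ...then read back whatever the board last stored there.
        reply = await client.read_gatt_char(STRING_CHAR_UUID)
        print(reply.decode())

asyncio.run(demo())
```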
I went on to implement commands to get temperature data and the current time from the Artemis board. I confirmed that I could get well in excess of 50 temperature readings using my GET_TEMP_5s_RAPID command.
If 16-bit values were taken at 150 Hz for 5 seconds, they would total 1500 bytes. Considering we have 384 kB of RAM, the Artemis could store around 256 such variables for 5 seconds (each updating at 150 Hz). In a more typical example with only 10 relevant variables, the Artemis could store 128 seconds of activity. Assuming each variable requires 16 bits per value, the following graph shows the combinations of variable count and data frequency which could theoretically be sustained for a full sixty seconds.
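The arithmetic behind these estimates is easy to sketch; the 384 kB figure is used as an upper bound, before accounting for the program's own memory use.

```python
RAM_BYTES = 384 * 1000     # Apollo3 RAM, as used in the estimate above
BYTES_PER_SAMPLE = 2       # 16-bit values

def seconds_of_storage(n_vars, rate_hz, ram=RAM_BYTES):
    """How long n_vars variables sampled at rate_hz fit in RAM."""
    return ram / (n_vars * BYTES_PER_SAMPLE * rate_hz)

def max_rate_for_60s(n_vars, ram=RAM_BYTES):
    """Highest sample rate n_vars variables can sustain for a full minute."""
    return ram / (n_vars * BYTES_PER_SAMPLE * 60)

print(seconds_of_storage(256, 150))   # -> 5.0 s, the case above
print(seconds_of_storage(10, 150))    # -> 128.0 s, the "typical" case
print(max_rate_for_60s(10))           # -> 320.0 Hz for 10 variables over 60 s
```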
I found I could achieve high reliability when using a subscribe/notify characteristic. I set up a Python script to simply wait for and print notifications of changes to a characteristic. Then I set the Artemis to increment a counter and write the result to that same characteristic. This led to an incredibly fast counter being written to the characteristic, and while I expected that the Python script might miss some numbers, testing showed that it reliably output every integer. This does suggest that if I were willing to sacrifice reliability I could improve speed, but it is fast enough as is that I do not currently see a need to do so.
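A minimal version of that subscribe-and-print script with bleak might look like the sketch below, with a placeholder address and UUID, and assuming the counter arrives as a little-endian integer.

```python
import asyncio
from bleak import BleakClient

ARTEMIS_MAC = "c0:00:00:00:00:00"                       # placeholder
COUNTER_UUID = "00000000-0000-0000-0000-000000000000"   # placeholder characteristic UUID

def on_notify(sender, data: bytearray):
    # The Artemis writes an incrementing counter to this characteristic.
    print(int.from_bytes(data, "little"))

async def watch_counter(seconds=10):
    async with BleakClient(ARTEMIS_MAC) as client:
        await client.start_notify(COUNTER_UUID, on_notify)
        await asyncio.sleep(seconds)      # notifications arrive in the background
        await client.stop_notify(COUNTER_UUID)

asyncio.run(watch_counter())
```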
My tests for throughput were less conclusive. I wrote a script to send and receive messages of various lengths, and while the first run seemed to show an interesting spike near the middle, further tests did not reproduce it. I hypothesize that much of the variance is actually a result of background processes on my computer while I run the test. Each data point represents the average of 100 transmissions and receipts, and while I thought this would be enough to give accurate results, I believe a better strategy might involve single tests of random-length messages spread over a long period of time, to average out the effects of fluctuating resource availability on my computer.
This lab focuses on getting sensor data from two infrared time-of-flight (TOF) sensors. The TOF sensors communicate using the I2C protocol, and scanning with the Wire I2C example shows that they use the default address 0x29.
While the default I2C address is hardwired into the IC, a new address can be set temporarily in software. This allows multiple sensors to operate on the same I2C line so long as they each have a unique address. The addresses can only be set individually if all but one sensor is disabled or disconnected. Conveniently the IC has a dedicated disable pin, so I connected one sensor's disable pin to GPIO8. This allows me to disable one sensor, set the other sensor's address, and then re-enable the first sensor. An interesting feature of note is that once a sensor's address has been changed it persists until the sensor loses power. This means that resetting the Artemis through the built-in switch does not reset the sensor's address, and if a program expects the default address this can cause issues.
I have elected to keep the TOF sensors at the front and back of the robot. Since the robot cannot move sideways, it is most at risk of running into obstacles ahead of or behind itself. A sensor aimed sideways would be helpful for some tasks, such as following a side wall. I elected to use the medium range mode as a balance between short and long: I thought the short mode's maximum range could limit some robot movements, while the long mode's extra range likely was not needed.
As discussed earlier, setting up a second sensor requires a disable pin to be connected and pulled low while the other sensor's address is changed. Otherwise, it is very similar to setting up the first sensor. Using this script it becomes clear that the sensors are the limiting factor. The loop runs in around 4 ms unless one of the sensors has a reading ready, in which case it takes closer to 8 ms. Regardless, the loop always runs multiple times before a sensor has a new value ready, which shows that the sensors' refresh rate is the limiting factor.
IR sensors are commonly used due to their relatively low cost and their safety. In addition to TOF sensors, there are also IR proximity sensors, IR data transmission, IR temperature sensors, and IR-based encoders. All IR-based sensors are vulnerable to external light to some degree, though in many applications this can be minimized or is irrelevant; they also benefit from using a wavelength that humans do not perceive. Encoders can be designed to measure wheel speeds without physical contact, which minimizes the risk of wear and tear. IR-based thermometers are becoming more common for the same reason: from minimizing the risk of contagion to measuring items that are too hot for traditional thermometers, non-contact measurement is helpful in many settings. Proximity sensors are not very precise, but are incredibly cheap.
The color of a surface can impact the intensity of reflected light, but usually has a minimal impact on the TOF calculation. The surface material can have more of an impact, but readings usually stay within usable ranges. For example, a perfect mirror could reflect the IR pulse such that it never returns to the sensor. Similarly, materials or textures designed to capture light could interfere with sensor readings. However, most common environments allow for use of IR TOF sensors.
This lab focused on the inertial measurement unit (IMU). I started by installing the SparkFun 9DOF IMU Breakout - ICM 20948 - Arduino Library. Then I ran the example code and plotted the results in the Serial Plotter.
Data with the IMU stationary, translating, rotating, and near a running motor.
The raw measurements from the IMU needed to be converted to usable roll, pitch, and yaw values. All three values can be approximated by integrating the gyroscope readings. However, this method is prone to drifting over time as small errors accumulate. To combat this I also use accelerometer and magnetometer data to estimate roll, pitch, and yaw. Then I use a complementary filter to combine the two estimates, resulting in a new estimate which is less prone to noise than the accelerometer values and less prone to drift than the gyroscope values.
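The filter itself runs on the Artemis, but each update is just a weighted blend of the two estimates; the sketch below shows one update for roll in Python, with the weighting alpha as a tuning choice rather than my exact value.

```python
import math

ALPHA = 0.9   # assumed weighting; closer to 1 trusts the gyro more

def complementary_roll(prev_roll, gyro_rate, accel_y, accel_z, dt, alpha=ALPHA):
    """One complementary-filter update for roll (degrees); pitch is analogous,
    and yaw uses the magnetometer instead of the accelerometer."""
    gyro_estimate = prev_roll + gyro_rate * dt                   # integrated rate: smooth but drifts
    accel_estimate = math.degrees(math.atan2(accel_y, accel_z))  # absolute but noisy
    return alpha * gyro_estimate + (1 - alpha) * accel_estimate
```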
I will use pins 11, 12, 13, and 14 to control the motors. These pins are PWM-enabled and do not overlap any function I foresee needing in the future. I routed all of the motor controller wires along one side of the robot and the sensor wires along the other in an attempt to mitigate EMI, and avoided passing sensor wires near the motors when possible. I used electrical tape both to isolate wires from each other and to secure them in place so they do not shift over time.
I set the DC power supply to 3.7 V to match the robot's batteries. Both batteries are 3.7 V, so the important factors to consider are the total capacity and the discharge rate. Higher capacity is needed for power-intensive components like the motors, and allows for longer debugging sessions between charges. The 850 mAh battery has a sufficient discharge rate and better capacity, so it will be the primary motor battery.
I implemented a simple motor controller class to handle both motors so that this code could easily be reused in future labs.
I then used the motor controller class to write a simple test script. By ramping through speed settings I was able to determine the approximate point at which each motor turns on. My motor controller class scales values from [0-100] to [0-255], and adds an offset so that low commands still overcome the deadband where the motor would not turn.
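The mapping is a linear rescale onto the usable part of the PWM range; a small Python sketch of the idea is below (the exact formula in my class may differ slightly).

```python
def speed_to_pwm(speed, deadband):
    """Map a 0-100 speed command onto the usable PWM range [deadband, 255]."""
    if speed <= 0:
        return 0                     # fully off rather than stalled below the deadband
    return round(deadband + (255 - deadband) * speed / 100)

# Example with the minimum values found below (65 left, 40 right):
print(speed_to_pwm(50, 65))   # -> 160
print(speed_to_pwm(50, 40))   # -> 148
```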
After some additional trial and error I settled on minimum values of 65 and 40 for the left and right motors respectively. Afterwards I ran the same script with those min values set and plotted the mean voltage and duty cycle for both the GPIO PWM signal, and the motor output.
GPIO output voltage and duty cycle
motor voltage and duty cycle
Despite a linear increase in the GPIO output, the motor's output is nonlinear. This is due to other elements in the circuit: the motor itself acts as an inductor, and there is a capacitor in parallel with it to help protect the circuit.
Once in motion, the minimum values can be decreased by 5 while still maintaining motion.
While my initial configuration did not drive perfectly straight, I was able to tune the controller so the robot stays on a line for a long distance.
The frequency of the PWM signal does have an impact on the motor output. The motor and its parallel capacitor act as a low-pass filter, which means that at high frequencies the motor terminal's voltage will not reach its target before the next PWM pulse, causing inaccuracies. A lower frequency results in a voltage that tracks the target more closely, but the pulsing may become perceptible in the motion if it is too slow. I spent some time investigating clock selection on the Artemis Nano to demonstrate the effects, but eventually settled on manually controlling the PWM in software due to lack of documentation.
5 Hz, 50 Hz, and 500 Hz GPIO and motor signals
The goal for this lab was to use closed-loop PID control of the robot's movements.
I chose to work on controlling my robot's orientation with PID, and the lab culminated in setting the robot to drive at a wall, then turn and drift 180 degrees to face the way it came.
I also reorganized my code to shift much of my past code into a library. This makes my codebase neater, and makes it easier to reuse between labs without copying it over each time.
It was also important that I could efficiently communicate with the robot over Bluetooth. I created a data storage template class that creates two-dimensional arrays of a specified size. It has functions to simplify storing data, as well as functions to perform operations on that data. Primarily, a "ForEach" function allows me to send each stored data point to my computer over Bluetooth. This simplified the process of storing data during runs and retrieving it afterwards, since sending data as it was collected would consume too much processing power.
In my initial PID tuning, I found that an integral term was very important to ensure that my robot could perform small-angle turns. This was in part due to the lower speed and the stationary initial state I used while tuning. When I worked on the stunt, the integral term was slightly less impactful, as the robot was moving fast enough that it usually did not have a steady-state bias. However, the derivative component was very helpful in avoiding overshoot. I settled on PID coefficients of 5, 2, and 0.5 respectively.
Initially I included a simple state machine to track the robot's current action (heading towards the wall, turning, heading away). To shift between states I waited until the desired sensor readings were seen multiple times in a row, to avoid erroneous state changes.
However, I quickly found that this slowed the response too much when driving at the wall. I also realized that I did not need a separate state for turning versus driving away from the wall. I could simply keep running the orientation control even as I drive away, which should both ensure that I remain on a straight path and avoid the need to detect when I have completed and stabilized in the correct orientation.
While I did not observe any problems with my IMU data, I chose to set the gyroscope's sensitivity to 1000 degrees per second as I believed that should be enough to avoid maxing out the sensor, while still retaining accuracy.
To combat integrator windup, I zero the integral accumulator when the error crosses zero. This ensures that the integral term does not continue to drive the output after passing the setpoint.
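Putting the pieces together, one controller update with this reset looks roughly like the sketch below (written in Python for readability; the real controller runs on the Artemis, and the gains are the ones mentioned above).

```python
class OrientationPID:
    def __init__(self, kp=5.0, ki=2.0, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Reset the accumulator whenever the error changes sign, so the
        # integral term cannot keep pushing the robot past the setpoint.
        if error * self.prev_error < 0:
            self.integral = 0.0
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```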
This lab focused on implementing grid localization. Since our robot only has two distance sensors, we have limited ability to localize from a single pose. However, by rotating in place we can obtain sensor readings from a variety of angles before returning to our original pose. By default this rotation gathers 18 distance readings at evenly spaced bearings.
To start this lab we must first install the simulator as seen here.
Now we can implement the helper functions for the Bayes filter. 'compute_control()' calculates the approximate control that would move the robot between two poses. We can then compare this to the actual control that was sent to determine the probability of arriving in the second pose; this comparison is performed in 'odom_motion_model()'.
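A sketch of these two helpers is shown below; poses are (x, y, yaw in degrees), and the noise standard deviations are placeholders rather than my tuned values.

```python
import numpy as np

def normalize_angle(a):
    """Wrap an angle in degrees to [-180, 180)."""
    return (a + 180) % 360 - 180

def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def compute_control(cur_pose, prev_pose):
    """Recover the (rot1, trans, rot2) control that maps prev_pose to cur_pose."""
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    delta_trans = np.hypot(dx, dy)
    delta_rot_1 = normalize_angle(np.degrees(np.arctan2(dy, dx)) - prev_pose[2])
    delta_rot_2 = normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2

def odom_motion_model(cur_pose, prev_pose, u, rot_sigma=15.0, trans_sigma=0.1):
    """P(cur_pose | prev_pose, u): compare the implied control against the actual one."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(normalize_angle(rot1 - u[0]), 0, rot_sigma)
            * gaussian(trans - u[1], 0, trans_sigma)
            * gaussian(normalize_angle(rot2 - u[2]), 0, rot_sigma))
```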
Another important helper function calculates the probability of a sensor reading given a robot pose. I initially used loops to perform the calculations, but I later shifted to numpy functionality for efficiency. The loop structure averaged around 0.4 seconds per update step, but with numpy I was able to decrease that to 0.06 seconds. This speedup will help significantly with online localization, especially since the robot will rely on an external computer to perform it. If localization takes too long, then by the time the updated pose is sent back to the robot it will be too outdated to use. The robot would either need to stop and wait for the updated pose, or risk moving with inaccurate state information.
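The vectorized version scores a full 18-reading sweep against every grid cell at once by broadcasting the observation over the precached views; a sketch, with an assumed sensor-noise value:

```python
import numpy as np

SENSOR_SIGMA = 0.1   # assumed range-noise standard deviation

def sensor_model_all_cells(obs_ranges, cached_views, sigma=SENSOR_SIGMA):
    """
    obs_ranges:   (18,) distances measured during the rotation
    cached_views: (nx, ny, na, 18) precomputed expected distances per grid cell
    Returns the likelihood of the observation for every cell at once.
    """
    diff = cached_views - obs_ranges                       # broadcasts over the grid dims
    per_ray = np.exp(-diff ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return np.prod(per_ray, axis=-1)                       # multiply the 18 independent rays
```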
The core of the Bayes filter is the pair of predict and update steps. First, 'prediction_step()' shifts the belief in accordance with the control sent to the robot: by modifying each possible state's probability based on the likelihood that the robot moved there under that control, a more up-to-date belief is obtained. Then 'update_step()' narrows that belief by comparing the sensor readings with the expected readings at each pose.
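In outline, the two steps look like the sketch below (simplified from the notebook code; the threshold for skipping negligible states is an assumption).

```python
import numpy as np

def prediction_step(bel, poses, u, motion_model):
    """bel_bar(x') = sum_x p(x' | x, u) * bel(x).
    poses is an (nx, ny, na, 3) array of cell-center poses; motion_model is
    e.g. odom_motion_model from above."""
    flat_bel = bel.ravel()
    flat_poses = poses.reshape(-1, 3)
    bel_bar = np.zeros(len(flat_poses))
    for i, prev_pose in enumerate(flat_poses):
        if flat_bel[i] < 1e-4:          # skip negligible prior states to save time
            continue
        for j, cur_pose in enumerate(flat_poses):
            bel_bar[j] += motion_model(cur_pose, prev_pose, u) * flat_bel[i]
    bel_bar /= bel_bar.sum()
    return bel_bar.reshape(bel.shape)

def update_step(bel_bar, obs_likelihood):
    """Weight the predicted belief by the sensor likelihood (same grid shape) and renormalize."""
    bel = bel_bar * obs_likelihood
    return bel / np.sum(bel)
```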
Finally, we can use those functions to implement the Bayes filter. For the sake of simplicity and reproducibility we use a preset trajectory and run the Bayes filter at each step along the way. Each trajectory step is made up of an initial rotation, followed by driving forward, and then a final rotation. With small steps this can approximate continuous control, and it is significantly easier to plan and observe. To start, we initialize and reset everything so previous state won't interfere.
Next we loop through the trajectory list. After each movement we collect our sensor readings by rotating in place. Then we perform the predict and update steps. We assume that the grid cell with the highest probability is the robot's position, and plot all of the relevant data.
In the first video the robot follows a simple square trajectory. However, it does struggle at the end when it seems to miss a command and crashes.
The next video follows a more complex trajectory around the map, and completes successfully. There is some error between the green true pose and our teal localization estimate. Some of the error can be attributed to the discrete nature of the grid we use to localize: since the true pose is continuous, it is unlikely to match the grid exactly, so the localization can only approximate it to within the size of a grid cell. Some error is also possible due to sensor noise, or to features that look the same from multiple positions.
I chose to modify the configuration to decrease the size of the grid cells, hopefully increasing the accuracy of localization. I doubled the number of cells in each dimension, increasing the number of possible states by a factor of 8. This increased the time it took to precalculate the views the robot would see at each grid position from 10 seconds to over 2 minutes. It also increased the time to calculate each step, but not as significantly.
The results are much closer to the ground truth on average, although in some areas it diverges by more. This is likely also a combination of the discrete grid and noise.
This lab focused on building a map from robot sensor readings. An arena was built in the lab for us to map.
First I implemented the ability to take repeated 15 degree turns. I used PID control to turn to a target angle, although I had to tune my values differently depending on which part of the floor I was working on, the battery level, and possibly other variables.
To collect data, the robot was placed at known positions within the arena and the code was run. The starting positions are shown below. At each location the robot slowly turned in a circle in 15 degree increments, collecting distance measurements along the way. The points are labeled with their positions measured in feet, while the axes are measured in mm. My robot has distance sensors placed on the front and back, aiming in opposite directions along the line the robot drives in.
I used the orientation calculated by my IMU code instead of assuming that my turns were consistently spaced. While my robot can achieve relatively accurate turns, I allow a slight tolerance on either side of the target so the robot can finish each turn faster. If two consecutive turns overshoot and undershoot in opposite directions, the change in angle can differ from the target significantly. Additionally, relying on orientation allows for potentially changing the speed of the robot's turn. I could also take measurements more frequently when the measured distance is larger, for more precision: when a wall is further away, the distance change per degree of rotation can be much higher even along a continuous wall.
The raw data collected from each of those locations is shown in the polar plots below.
Examining the individual data runs, I manually offset the angles from each run. The primary indicator I used when identifying each offset was making sure that any straight lines I could identify were vertical or horizontal. The corrected plots are shown below.
When compared to the arena, the raw data clearly matches expectations, although it becomes less reliable at points further from the robot. The other area the robot struggled to identify was the rightmost side of the floating box. There were two positions that should have measured it, but only a couple of data points lined up. I believe this is likely because that side was close to parallel with the sensor's line of sight.
Next I converted the measurements from the front sensor to cartesian coordinates and superimposed them on each other. By examining this plot I estimated likely lines to fit each wall and box.
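The conversion is a standard polar-to-Cartesian transform offset by the robot's known position and the sensor's mounting offset; a sketch, with the offset value as a placeholder:

```python
import numpy as np

SENSOR_OFFSET_MM = 70   # placeholder distance from the robot's center to the front sensor

def to_world(distances_mm, yaws_deg, robot_x_mm, robot_y_mm):
    """Convert front-sensor ranges taken at known yaws into world-frame x, y points."""
    yaw = np.radians(np.asarray(yaws_deg))
    r = np.asarray(distances_mm) + SENSOR_OFFSET_MM
    x = robot_x_mm + r * np.cos(yaw)
    y = robot_y_mm + r * np.sin(yaw)
    return x, y

# Example: readings from the spot labeled (5, 3) in feet, converted to mm (1 ft = 304.8 mm)
# xs, ys = to_world(distances, yaws, 5 * 304.8, 3 * 304.8)
```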
I then did the same with the sensor that pointed backwards, correcting for the 180 degree offset of measurements.
The measurements from the second sensor did not match up with the map found from the first. I attribute this to the second sensor's line of measurement not being perfectly parallel to the floor. Since there was no flat surface to mount it on, I did my best to align it when I mounted it, but on later examination it seems to have been angled slightly upward. I believe this led to it measuring past/over the walls, producing inconsistent readings anywhere the robot wasn't very close to the wall.
Finally I compared my map to the map provided with the simulator to see how similar they are.
Given the differences in the placement of the floating box, I manually measured the arena to confirm my mapping.
As in the mapping lab, I implemented repeated 15 degree turns using PID control to reach each target angle, again tuning my values for the floor surface, battery level, and possibly other variables. I implemented a state machine with three states: sensorSweep, turn, and pause. Pause waits long enough for the distance sensor to take a reading, turn rotates the robot to a specified angle, and sensorSweep handles the logic for counting measurements and setting the target angles.
Since I could not connect to my Artemis from the Jupyter notebook, I had to pass data between the two Python instances. My BLE control script writes the data it receives from the robot into a CSV file. The localization code watches for that file, and when it appears, reads it to get the distance measurements. Once it has localized, it reverses the process to pass the calculated pose back to the BLE control script.
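A sketch of the file-watching side of that handoff (file names are placeholders):

```python
import time
from pathlib import Path
import numpy as np

OBS_FILE = Path("observations.csv")   # written by the BLE script, read by the notebook
POSE_FILE = Path("pose.csv")          # written back by the localization code

def wait_for_observations(poll_s=0.5):
    """Block until the BLE script has dumped a sweep, then consume the file."""
    while not OBS_FILE.exists():
        time.sleep(poll_s)
    data = np.loadtxt(OBS_FILE, delimiter=",")
    OBS_FILE.unlink()                  # remove it so the next sweep is detected cleanly
    return data

def publish_pose(pose):
    """Write the localized (x, y, yaw) back for the BLE script to pick up."""
    np.savetxt(POSE_FILE, np.atleast_2d(pose), delimiter=",")
```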
I increased the number of grid cells to improve the precision my filter was capable of. I increased the x and y cell counts by a factor of 4, but left the angles the same, since my robot's turn accuracy made finer rotation granularity less beneficial. While this increased the time it took to pre-process the map to 10 minutes, utilizing numpy kept the update step running in under a second. To avoid the 10 minute wait each time I started up, I added functionality to save the mapper's precomputed views to files and load them back in a future run.
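The caching itself is just numpy serialization; roughly:

```python
import numpy as np

VIEWS_FILE = "mapper_views.npy"   # placeholder filename

def load_or_compute_views(compute_fn):
    """Reuse the cached ray-cast views if present; otherwise pay the ~10 minute cost once."""
    try:
        return np.load(VIEWS_FILE)
    except FileNotFoundError:
        views = compute_fn()          # e.g. the mapper's precaching routine
        np.save(VIEWS_FILE, views)
        return views
```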
I then placed the robot at various points on the map and collected sensor data.
Using that data I performed the update step of my Bayes localization to identify the robot's pose. While not perfect, my filter consistently found a point near the robot's actual position.
The robot localized better in some locations than others. The two marked locations on the far right (x coordinate 5) were particularly difficult. I believe this is due to similarities between these parts of the map, as well as the long distances the robot was sensing at. Extremely long distance measurements seem more error prone, and at those ranges even a slight error in the robot's angle contributes to significant distance errors.
In this lab we used the robot to navigate the arena, reaching specific waypoints shown in the image above.
Since I encountered difficulties with the long diagonal distance measurements that the first movement required, I elected to add a waypoint above the start position.
I chose to simplify the turn-move-turn method we were given even further by removing the final turn. I then implemented robot states to move the robot a set distance or turn it in place. My laptop could then calculate a command based on the robot's position and the next waypoint, and send each command to the robot.
For my first implementation I attempted to navigate blindly, using only PID on my sensor readings. I used my distance sensor to measure the linear distance traveled during move commands, and the IMU both to keep the robot from turning during move commands and to perform rotations in place. I used PID for both of these types of movement. For move commands, I performed PID on the difference between the initial distance and the current distance (accounting for the sensor's update rate). I simultaneously applied opposite PID-calculated offsets to each motor to keep the robot's yaw the same as at the start of the motion. Turns simply used PID on the yaw to rotate in place.
I then attempted to navigate through the waypoints without localizing.
While I was able to reach the waypoints most of the time, it was not entirely consistent, so I chose to also use localization. I initially localized after every turn-move. However, it was incredibly slow to gather so many range sweeps, and it was often unneeded. As such, I selected specific waypoints to navigate to using localization and used those as a baseline to keep the rest of the path accurate. I chose those points based on the distance travelled, the number of waypoints visited, and the difficulty of navigating to each waypoint.
Unfortunately, the many trial runs spent tuning localization and mapping led to hardware issues cropping up. While I was able to repair or compensate for the most part, I was unable to complete a full successful run with localization. The most persistent issue I encountered was repeated Bluetooth disconnections.
The following videos demonstrate my robot's ability to return to the path when it went off course.
The final video demonstrates how even after starting well off from the intended path, the robot was able to localize and navigate to its target.