ECE 3400 Team 1 Lab and Milestone 4
Maze storage on the Arduino
Our team's implementation of Dijkstra's algorithm represents each node with three bytes.
Recall from lab 3 that a node captures the qualities of one intersection in the maze. A
competition maze is at most a 9x9 array of nodes, but our robot can handle a 10x10 array.
Byte 0 stores the node type (wall information), pointback bits, and explored/searched bits.
Byte 1 stores whether the node is valid, whether it is a start node, whether another
robot occupies it, and a count of the turns required to reach the node, used by Dijkstra's algorithm.
Byte 2 stores a distance count, also used by Dijkstra's algorithm. Because we cap the maze
at a 10x10 node array, at most 300 bytes store everything the robot knows about the maze.
We reuse part of the FFT input buffer for that purpose: it can hold up to 1024 bytes and
is no longer needed once the start tone has been identified. Here is where all of the information resides:
Node->Byte0 = {nodeType[3:0],pointback[1:0],searched,explored}
Node->Byte1 = {is_start_node, is_valid, (free bit), is_occupied, turnCounter[3:0]}
Node->Byte2 = {distanceCounter[3:0]}
Above, turnCounter, distanceCounter, pointback, and searched change frequently, since Dijkstra's
algorithm recalculates the node weights every time the robot visits a new unexplored node.
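As a concrete (and hedged) illustration, here is a minimal sketch of how those bytes could be packed and unpacked with bit masks. It assumes the fields above are listed most-significant-bit first; the struct and helper names are shorthand for this write-up, not necessarily the identifiers used in our repository.

#include <stdint.h>

typedef struct {
  uint8_t byte0;  // {nodeType[3:0], pointback[1:0], searched, explored}
  uint8_t byte1;  // {is_start_node, is_valid, (free bit), is_occupied, turnCounter[3:0]}
  uint8_t byte2;  // distance count used by Dijkstra's algorithm
} Node;

// Byte 0 accessors
static inline uint8_t nodeType(const Node *n)  { return n->byte0 >> 4; }
static inline uint8_t pointback(const Node *n) { return (n->byte0 >> 2) & 0x03; }
static inline uint8_t searched(const Node *n)  { return (n->byte0 >> 1) & 0x01; }
static inline uint8_t explored(const Node *n)  { return n->byte0 & 0x01; }

// Byte 1 accessors
static inline uint8_t isOccupied(const Node *n)  { return (n->byte1 >> 4) & 0x01; }
static inline uint8_t turnCounter(const Node *n) { return n->byte1 & 0x0F; }

static inline void setOccupied(Node *n, uint8_t occupied) {
  n->byte1 = (n->byte1 & ~(1 << 4)) | ((occupied & 0x01) << 4);
}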
Inter-Arduino Communication Scheme
We don't need to send all of the node information between the two Arduinos. At its core,
the base station only needs the location of a node and what type of node to draw there.
If that sounds a lot like how the base station Arduino sends information to the FPGA
graphics card, it's because Alex Coy wanted the 8-bit graphics interface to be as simple
yet powerful as possible. Here is code copied from our lab 3 website:
assign yCoord = sw_reg[15:12];
assign xCoord = sw_reg[11:8];
assign nodeType = sw_reg[7:0];
From an RF24 library perspective, sw_reg is a uint16_t. We send sw_reg to the base station
every time we discover something new and interesting about the maze.
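As a hedged sketch of what that looks like on the robot side (the pin numbers, pipe address, and function names below are placeholders for illustration, not the values from our actual code):

#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);                        // CE and CSN pins are placeholders
const uint64_t basePipe = 0xF0F0F0F0E1LL; // placeholder pipe address

void setupRadio() {
  radio.begin();
  radio.openWritingPipe(basePipe);
  radio.startListening();
}

// Pack a node update into the 16-bit layout the FPGA expects:
// {yCoord[3:0], xCoord[3:0], nodeType[7:0]}.
uint16_t packNode(uint8_t x, uint8_t y, uint8_t type) {
  return ((uint16_t)(y & 0x0F) << 12) | ((uint16_t)(x & 0x0F) << 8) | type;
}

void sendNodeUpdate(uint8_t x, uint8_t y, uint8_t type) {
  uint16_t sw_reg = packNode(x, y, type);
  radio.stopListening();                  // switch the radio to transmit
  radio.write(&sw_reg, sizeof(sw_reg));   // push the update to the base station
  radio.startListening();                 // resume listening for replies
}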
Updating the display
The information that the screen displays resides in a 14x10 array, which lets us draw a
10x10 maze alongside a 3x10 sidebar. The 14th column is not drawn in full; it appears only
as a stripe on the right edge of the screen, though we can still set it to different colors
and patterns. When the base station receives sw_reg, it decomposes the word into its parts
and then updates the screen array:
if (radio.available()) {
  // Keep reading until the radio reports the payload has been read.
  int done = 0;
  while (!done) {
    done = radio.read( &instruction, sizeof(unsigned int) );
  }
  // Decompose the word: y in bits [15:12], x in [11:8], node type in [7:0].
  byte x = ((instruction & 0x0F00) >> 8);
  byte y = ((instruction & 0xF000) >> 12);
  byte type = (instruction & 0x00FF);
  // Only drawable coordinates update the screen array.
  if ((y < 10) && (x < 14)) {
    writeCharacter(x, y, type);
    asciiRedraw = 1;
    // Echo the instruction back to the robot as an acknowledgment.
    radio.stopListening();
    radio.write(&instruction, sizeof(unsigned int));
    radio.startListening();
    break;  // exit the enclosing receive loop
  }
}
//... Code, code, and more code
void writeCharacter(int x, int y, int type) {
  if (type == ' ') type = 16;  // substitute 16 for the space character
  screen[x][y] = (uint8_t)type;
}
The Arduino sends the screen array to the graphics card once per iteration of the main loop.
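We have not reproduced the FPGA transport here; as a rough sketch only, the refresh amounts to walking the array and handing each cell to whatever routine drives the graphics card, reusing the 16-bit word format from lab 3 (sendWordToFPGA is a stand-in name, not a function from our code):

extern uint8_t screen[14][10];       // the display array described above
void sendWordToFPGA(uint16_t word);  // stand-in for the actual FPGA interface

void refreshScreen() {
  for (uint8_t x = 0; x < 14; x++) {
    for (uint8_t y = 0; y < 10; y++) {
      // Same packing as sw_reg: {y[3:0], x[3:0], type[7:0]}
      uint16_t word = ((uint16_t)y << 12) | ((uint16_t)x << 8) | screen[x][y];
      sendWordToFPGA(word);
    }
  }
}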
Robotic Integration
The "override button" resides on the base station. The base station can send instructions
to the robot before it starts mapping the maze. Non-drawing conversations between the
robot and the base station use x and y locations that do not map to physical positions on
the screen. See the Github code for specifics.
Currently, instructions exist for the following:
- Arm the robot for competition (start listening for the 933 Hz B-flat tone and start
mapping when the trigger tone is detected)
- Stop listening and disarm (don't do anything until instructed)
- Stop listening and start mapping the maze
- Start mapping the maze without listening at all
- Start mapping the maze without listening, in a mode geared toward debugging
- Spew sensor information continuously
- Manually drive the robot in between intersections
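To illustrate the off-screen-coordinate convention mentioned above: a word whose y coordinate can never be drawn (y >= 10) can be treated as an instruction rather than a screen write. The opcode values and the y = 15 choice below are invented for this example; the real ones live in the Github code.

// Illustrative only: these opcodes and the y = 15 convention are invented here.
enum {
  CMD_ARM       = 0x01,  // arm for competition, wait for the start tone
  CMD_DISARM    = 0x02,  // stop listening, do nothing until instructed
  CMD_START_NOW = 0x03   // start mapping without listening at all
};

// Pack an instruction using a y coordinate outside the drawable 0-9 range.
uint16_t packCommand(uint8_t cmd) {
  return ((uint16_t)15 << 12) | cmd;
}

// The receiver can tell instructions from drawing words by the y field alone.
bool isCommand(uint16_t word) {
  return ((word >> 12) & 0x0F) >= 10;
}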
Wall sensing works, and robot sensing works in theory. We navigate with Dijkstra's
algorithm: when we detect another robot, the algorithm marks the node that robot occupies
and finds another route to an unexplored node. If a robot blocks us off in an already-explored
dead end, our robot simply goes back and forth until the chokepoint opens again.
A green LED lights when the robot knows it has run out of maze to explore.
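A rough sketch of that avoidance behaviour, again assuming the MSB-first bit layout from the storage section and using placeholder names for the maze array and the replanning routine:

#include <stdint.h>

extern uint8_t maze[10][10][3];          // placeholder for the 3-bytes-per-node storage
void planRouteToNearestUnexplored(void); // placeholder for the Dijkstra replanning call

// When another robot is seen at (x, y), flag that node as occupied and replan.
void handleRobotDetected(uint8_t x, uint8_t y) {
  maze[x][y][1] |= (1 << 4);             // set the is_occupied bit in byte 1
  planRouteToNearestUnexplored();        // Dijkstra recomputes weights, picks a new target
}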
The following video demonstrates the robot's maze communication and drawing scheme as well
as its navigation ability.
Milestone 4
The robot integration tasks laid out in Milestone 4 (display and updating, communication,
and robot detection) are demonstrated in the videos above.