This website documents my lab projects for ECE 5160 (Spring 2025).
Hello! I’m Shuchang Wen (温书畅), currently an MEng student in Cornell’s ECE program. Welcome to my ECE 5160 project website, where I document how I tackled each lab and ultimately built a fast, autonomous robot for the course “Fast Robots.” Here, you’ll find my reports, insights, and step-by-step progress as I explore sensor integration, dynamic controls, and the balance between on-board and off-board computation. I hope this site serves as both a personal portfolio and a resource for anyone excited about designing and implementing high-speed autonomous systems.
The main goal of Lab 1 is to set up the Arduino IDE and the Artemis development board and become familiar with both.
In Lab 1A, I programmed the development board, experimented with controlling the onboard LED, read and wrote serial messages via USB, and utilized the onboard temperature sensor and pulse density microphone. Additionally, I completed the additional tasks for 5000-level students.
In Lab 1B, I established Bluetooth communication between the computer and the Artemis board. I implemented the ECHO, SEND_THREE_FLOATS, and GET_TIME_MILLIS commands, among a total of eight tasks, and analyzed the communication performance.
Parts Required: 1 x SparkFun RedBoard Artemis Nano and 1 x USB cable
In the prelab, I downloaded and installed the latest Arduino IDE. I also added the JSON link to the Arduino settings (as shown in the image below).
In Task 1, I connected the Artemis development board to my computer and selected the correct board and port in the Arduino IDE.
However, I found that two ports named RedBoard Artemis Nano appeared. To enable data transmission via USB, I needed to select the first port (/dev/cu.usbserial-120).
In Task 2, I uploaded the 01.Basics code to my Artemis board and ran it. The process completed without any errors, and the blue LED on the Artemis started blinking, indicating that the board model and port I selected in the prelab were correct.
Then, I changed the delay in the code from 1000 to 500, uploaded it again, and observed that the blue LED on the Artemis blinked faster.
In Task 3, I uploaded and ran the Example4_Serial code on my Artemis board. The process completed without any errors. However, when I checked the serial monitor, I noticed garbled text.
I realized that the baud rate in Example4_Serial was set to 9600, while the serial monitor was set to 38400, causing a mismatch—essentially, "the Artemis was speaking a language my computer couldn't understand." After setting both baud rates to the same value, everything worked correctly.
In Task 4, I uploaded the Example2_analogRead code to the Artemis board. I tried blowing air onto the Artemis, and I observed that the temperature values in the output increased accordingly.
I also tried the getTempDegF() function to read the die temperature in degrees Fahrenheit. Below is a screenshot of the serial monitor output while running the code.
In Task 5, I tested the microphone on the Artemis board using the Example1_MicrophoneOutput code.
In the additional task, my main goal was to make the blue LED on the Artemis light up when musical C is played and turn off when the sound stops.
After multiple tests, I found that the frequency of musical C was 526 Hz. So, I modified the Example1_MicrophoneOutput file by adding the following code and uploaded it to the Artemis.
ui32LoudestFrequency = (sampleFreq * ui32MaxIndex) / pdmDataBufferSize;

if (ui32LoudestFrequency >= 524 && ui32LoudestFrequency <= 527)
{
  digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on (HIGH is the voltage level)
  Serial.println("turn the light on");
}
else
{
  digitalWrite(LED_BUILTIN, LOW);   // turn the LED off by making the voltage LOW
  Serial.println("turn the light off");
}
To avoid interference from background noise, I set the LED to turn on only when the detected frequency was within the 524–527 Hz range. Finally, I successfully achieved my task objective.
Before starting the tasks in Lab 1B, I needed to download Python, create a virtual environment, and activate it.
The next and most important step was properly configuring BLE (Bluetooth Low Energy) communication to ensure a stable connection between my computer and the Artemis development board.
To achieve this, I needed to change the Artemis's MAC address and generate a new UUID. The purpose of this step was to ensure that my Artemis board had a unique BLE identifier (determined by its MAC address and UUID) and to prevent accidental BLE connections to the wrong device.
These two screenshots show the files in which I changed the MAC address and UUID of my Artemis.
In Task 1, I implemented the ECHO command, which sends a string from the computer to the Artemis board and then receives the same string back.
Inspired by the PING command, my strategy was to first store the received string in an array using tx_estring_value.append, and then send it back to the computer using tx_characteristic_string.writeValue.
Below is the Artemis-side code for Task 1 (the ECHO command).
From the two screenshots below, we can see that my computer successfully received the message "Robot says -> Hihello:)". Additionally, the expected prompt was correctly printed on the serial monitor of the Artemis board.
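For reference, here is a minimal sketch of how such an ECHO case can be structured, assuming the names provided by the course's base BLE sketch (robot_cmd, tx_estring_value, tx_characteristic_string, MAX_MSG_SIZE); my actual code is the one shown in the screenshot above.

```cpp
case ECHO:
{
    char char_arr[MAX_MSG_SIZE];

    // Extract the string sent from the computer
    if (!robot_cmd.get_next_value(char_arr)) return;

    // Build the augmented reply and write it back over the string characteristic
    tx_estring_value.clear();
    tx_estring_value.append("Robot says -> ");
    tx_estring_value.append(char_arr);
    tx_characteristic_string.writeValue(tx_estring_value.c_str());

    Serial.print("Sent back: ");
    Serial.println(tx_estring_value.c_str());
    break;
}
```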
The main goal of Task 2 was to implement the SEND_THREE_FLOATS command, which sends three floating-point numbers to the Artemis board and prints them on the Artemis side.
After completing the ECHO command, this task was relatively straightforward. My approach was to call robot_cmd.get_next_value() three times to retrieve the three floating-point numbers sent from the PC and then print them using Serial.print().
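A minimal sketch of that approach, under the same assumptions about the course's RobotCommand helper as above:

```cpp
case SEND_THREE_FLOATS:
{
    float float_a, float_b, float_c;

    // Pull the three values out of the command string, in order
    if (!robot_cmd.get_next_value(float_a)) return;
    if (!robot_cmd.get_next_value(float_b)) return;
    if (!robot_cmd.get_next_value(float_c)) return;

    Serial.print("Three floats: ");
    Serial.print(float_a);
    Serial.print(", ");
    Serial.print(float_b);
    Serial.print(", ");
    Serial.println(float_c);
    break;
}
```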
In Task 3, the main goal was to add the GET_TIME_MILLIS command, which makes the robot respond with a string containing the current time in milliseconds and write it to the string characteristic. This task was very similar to the PONG command and was primarily designed to help us get familiar with the millis() function.
One small issue I encountered was forgetting to perform explicit type conversion. The function tx_estring_value.append only accepts int, float, double, or string, but millis() returns an unsigned long int. As a result, directly uploading the code caused the following error.
The issue was resolved by explicitly casting millis() to an int before appending it to tx_estring_value.
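Something like the following (a sketch, not my exact code) resolves it:

```cpp
// millis() returns an unsigned long, which EString::append() does not accept,
// so cast it explicitly before appending.
tx_estring_value.clear();
tx_estring_value.append("T:");
tx_estring_value.append((int) millis());
tx_characteristic_string.writeValue(tx_estring_value.c_str());
```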
The main task in Task 4 was to write a notification handler in Python to receive the string value from the Artemis board and extract the timestamp from the received content. This task helped me become familiar with ble.bytearray_to_string() and deepened my understanding of UUIDs and how data is stored and transmitted between the PC and Artemis.
In Task 5, I wrote a loop to continuously obtain the current time in milliseconds and send it to the computer. As shown in the results below, 68 timestamps were sent to the PC in 2 seconds, meaning the transmission speed was 34 timestamps per second. Since each timestamp contains 7 bytes, the data transfer rate was approximately 238 bytes per second.
In Task 6, I defined a constant array_size with a value of 100 to set the size of an array for storing timestamps. I then sent the entire array back to the computer and printed the timestamps in Jupyter Notebook to verify that they were indeed stored in the array.
Task 7 was very similar to Task 6. I added another array of the same size to store temperature readings. Using the GET_TEMP_READINGS command, I stored both timestamps and temperature values. Finally, I sent both arrays back to the computer and paired each temperature reading with its corresponding timestamp.
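The overall shape of that case is roughly like the sketch below; array_size comes from the report, while the "T:...|F:..." message format and the array names are just illustrative assumptions.

```cpp
// At file scope:
const int array_size = 100;
unsigned long time_arr[array_size];   // timestamps
float temp_arr[array_size];           // die temperatures (deg F)

// Inside handle_command()'s switch:
case GET_TEMP_READINGS:
{
    // Fill both arrays as fast as the loop allows
    for (int i = 0; i < array_size; i++) {
        time_arr[i] = millis();
        temp_arr[i] = getTempDegF();
    }

    // Send each (timestamp, temperature) pair back over BLE
    for (int i = 0; i < array_size; i++) {
        tx_estring_value.clear();
        tx_estring_value.append("T:");
        tx_estring_value.append((int) time_arr[i]);
        tx_estring_value.append("|F:");
        tx_estring_value.append(temp_arr[i]);
        tx_characteristic_string.writeValue(tx_estring_value.c_str());
    }
    break;
}
```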
In the tasks above, we have two approaches:
Method 1 Real-time Transmission of Timestamps: This method is suitable for applications that require low latency, such as real-time monitoring. The advantage is that data is immediately transmitted to the computer, but it may be limited by BLE transmission speed. Frequent transmission of small data packets can lead to packet loss or higher power consumption.
Method 2 Batch Storage + Transmission: This method is better suited for large-scale data collection, such as experimental data recording. It first stores data in memory and then transmits it all at once, which improves transmission efficiency and reduces BLE connection overhead. However, its disadvantages include increased storage management complexity and the inability to access data in real time.
Data Recording Speed Comparison: With an ideal BLE transmission speed of approximately 1 Mbps, Method 2 can record timestamps and temperature readings at a maximum rate of about 7,812 data points per second (assuming 16 bytes per data point). Storing data in the Artemis's 384 KB of RAM, it can hold approximately 24,576 data points before requiring transmission.
In this task, I implemented the Effective_Data_Rate function to measure the Round-Trip Time (RTT) for different message sizes.
The results are as follows:
We can see that message size and RTT are positively correlated: as the message size increases, it takes more time to send the data back to the PC, so larger messages take longer per round trip.
During the reliability stress test, when I set the message size to 480, I was unable to obtain an RTT, and the connection to Artemis was forcibly disconnected. This indicates that if data is sent too quickly, it may overflow, causing some data to be dropped.
Through Lab 1, I learned how to establish communication between the PC and the Artemis board.
Additionally, I learned how UUIDs and MAC addresses ensure the uniqueness of the Artemis board.
Most importantly, I gained insights into how data transmission size and speed affect response time, which will help me approach data transmission challenges more effectively in the future.
The purpose of this lab is to become familiar with the calculation and transmission of IMU-related data: obtaining IMU readings to compute roll, pitch, and yaw and transmitting them to a computer. It also aims to build an understanding of how to reduce noise in the final output.
First, the Artemis board and IMU need to be properly connected, as shown in the diagram below.
Next, I installed the "SparkFun 9DOF IMU Breakout - ICM 20948 Arduino Library" from the Arduino Library Manager and connected the IMU to the Artemis board using the QWIIC connector.
To verify if the IMU is working properly, I ran the example program located at: ..\Arduino\libraries\SparkFun_ICM-20948\SparkFun_ICM-20948_ArduinoLibrary-master\examples\Arduino\Example1_Basics.
I also added the code shown below so that the LED blinks three times slowly on start-up.
The video above demonstrates how the roll and pitch values change when I place the IMU in a horizontal position and different vertical orientations.
I used myICM.dataReady() to check if the IMU data is ready and myICM.getAGMT() to update the accelerometer values for the x, y, and z axes.
Then, I implemented the formulas shown in the classroom slides on Arduino to calculate the Roll and Pitch values.
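A sketch of those formulas, using the SparkFun ICM-20948 library's accessor functions; the exact sign and axis convention depends on the slides and on how the IMU is mounted:

```cpp
if (myICM.dataReady()) {
    myICM.getAGMT();   // refresh accelerometer (and gyro/mag) readings

    float ax = myICM.accX();
    float ay = myICM.accY();
    float az = myICM.accZ();

    // Roll and pitch from gravity alone, in degrees
    float roll_acc  = atan2(ay, az) * 180.0 / M_PI;
    float pitch_acc = atan2(ax, az) * 180.0 / M_PI;
}
```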
As shown in the video, the final output differs from the expected output. The table below illustrates this difference.
Rotation | +90° (expected) | 0° (expected) | −90° (expected) |
---|---|---|---|
Roll (measured) | 88.4 | −0.3 | −91.2 |
Pitch (measured) | 87.6 | −0.8 | −88.7 |
To better observe roll and pitch, I added a new case, GET_Pitch_Roll, to the Lab 1 code, allowing the IMU to transmit data to my computer. Using this, I collected 150 data points at 50 ms intervals.
It can be observed that the data is not perfectly smooth and contains noise. To analyze the noise, I studied how to apply the Fourier Transform in Python by referring to the following resource: Fourier Transform in Python - Vibration Analysis.
Using this method, I obtained the following noise spectrum.
From the two images above, I observed that the noise is mainly concentrated in the 0–2 Hz range. Therefore, I selected 2 Hz as the cutoff frequency.
Below is the Python code for implementing a low-pass filter and the result after applying the low-pass filter:
It can be observed that after applying the low-pass filter, the data graph becomes much smoother and a significant amount of noise is removed.
For the gyroscope, the general approach is similar to that of the accelerometer.
First, based on the following calculation formulas, I added a new case, GET_PRY_Gyroscope, to the Lab 1 code to compute roll, pitch, and yaw using the gyroscope.
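The core of that computation is simply integrating the angular rates over the loop period; a sketch (variable names are hypothetical, and last_gyro_time should be initialized in setup()):

```cpp
// Integrate the gyroscope rates (deg/s) over the loop period dt to get angles.
// Any small bias also gets integrated, which is why gyro angles drift over time.
float roll_g = 0, pitch_g = 0, yaw_g = 0;
unsigned long last_gyro_time = 0;

void update_gyro_angles() {
    if (!myICM.dataReady()) return;
    myICM.getAGMT();

    unsigned long now = millis();
    float dt = (now - last_gyro_time) / 1000.0;   // seconds
    last_gyro_time = now;

    roll_g  += myICM.gyrX() * dt;
    pitch_g += myICM.gyrY() * dt;
    yaw_g   += myICM.gyrZ() * dt;
}
```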
As a result, I obtained the following output images:
Then, I combined the gyroscope and accelerometer code to obtain a comparison graph of roll and pitch from both sensors. I also obtained the output of yaw.
It can be observed that the Accelerometer is more sensitive to noise compared to the Gyroscope. Even minor noise can cause significant fluctuations in the Accelerometer readings.
On the other hand, the Gyroscope tends to exhibit a continuous increase or decrease over time. This phenomenon is not very noticeable in the graph above because the dataset is relatively small and the time intervals are short.
However, in the following experiment on the complementary filter, this issue becomes more evident.
Based on the content from the classroom slides, I got the formula for the complementary filter and implemented it in the Arduino code.
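In sketch form (reusing roll_acc/pitch_acc and dt from the snippets above; here alpha weights the accelerometer term, matching the discussion of alpha that follows):

```cpp
// Complementary filter: the accelerometer angle is drift-free but noisy,
// while the gyro-integrated angle is smooth but drifts. Blend the two.
const float alpha = 0.7;

roll  = alpha * roll_acc  + (1.0 - alpha) * (roll  + myICM.gyrX() * dt);
pitch = alpha * pitch_acc + (1.0 - alpha) * (pitch + myICM.gyrY() * dt);
```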
The most important step was determining the value of alpha. I experimented with multiple alpha values and finally settled on 0.7.
Below are the output graphs for alpha = 0.7 and alpha = 0.2.
From the graphs above, it can be observed that if the value of alpha is too low, the final result will overly rely on the Gyroscope, causing a continuous increase or decrease over time.
However, with alpha = 0.7, the filter effectively combines the advantages of both the Gyroscope and the Accelerometer, producing a more accurate and stable result. Therefore, alpha = 0.7 is a better choice.
First, I commented out all Serial.print statements in the main loop to eliminate any delays they might cause.
As shown in the code snippet below, I used ActiveFlag as a marker to indicate the start of data recording. The function record_process() is responsible for transmitting the x-axis data from both the Gyroscope and Accelerometer to my PC.
Although this function could also transmit the previously computed roll, pitch, and yaw, this part of the lab primarily focuses on discussing data transmission methods and efficiency. Therefore, it has been simplified to only send the x-axis data of the Gyroscope and Accelerometer.
Additionally, I introduced the variable MissCase to detect instances where the IMU data is not ready.
As shown in the image below, I used case Loop_Start and case Loop_End to control the start and end of the loop.
Additionally, I configured the LED on the Artemis to remain lit while recording data and to blink three times upon completion. This visual indication helps me better determine the current state of the process.
Using the code shown in the image below, I collected data over a period of 5 seconds.
During this time, I recorded 1,131 samples, and the value of MissCase was 0. This indicates that the IMU generates new data faster than the main loop executes.
I originally hoped that the car would dash onto the cardboard and perform a cool backflip, but that didn’t quite work out.
However, the high-speed spin was still a pretty awesome stunt!
In this lab, I learned how to obtain data from the accelerometer and gyroscope and calculate the corresponding roll, pitch, and yaw.
More importantly, I learned to use the low-pass filter and complementary filter to obtain reliable data. This prepares me for ensuring accurate navigation of the car in future tasks.
The goal of this lab is to equip the robot with distance sensors. By optimizing the sampling speed and reliability of sensor readings, the robot can improve its driving performance.
According to the datasheet, the I2C sensor address should be 0x52. The detail is shown in the picture below.
To use two ToF sensors simultaneously, I need to use the XSHUT pin. By first disabling one sensor via the XSHUT pin, only one active sensor remains on the I²C bus. Then, an I²C command is used to assign a new address to this available sensor. After that, the other sensor is re-enabled (it still retains the default address at this point).
Now that the two sensors have different addresses, the host can communicate with both sensors separately, allowing them to be used simultaneously.
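A sketch of that sequence using the SparkFun VL53L1X library; the XSHUT wiring to A7 is described later in this report, and the new address 0x30 is an arbitrary choice:

```cpp
#include <Wire.h>
#include "SparkFun_VL53L1X.h"

#define XSHUT_PIN A7          // XSHUT of one sensor is wired to A7

SFEVL53L1X distanceSensor0;   // this one will be moved to a new address
SFEVL53L1X distanceSensor1;   // this one stays at the default 0x29

void setup() {
    Wire.begin();

    // Hold the XSHUT-wired sensor in reset so only the other one answers on 0x29
    pinMode(XSHUT_PIN, OUTPUT);
    digitalWrite(XSHUT_PIN, LOW);

    distanceSensor0.begin();
    distanceSensor0.setI2CAddress(0x30);   // move it off the default address

    // Re-enable the second sensor; it powers up at the default address 0x29
    digitalWrite(XSHUT_PIN, HIGH);
    distanceSensor1.begin();
}
```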
After watching the 2024 Spring students' Lab 12 video, I noticed that our final autonomous driving experiment will take place on a flat surface without slopes, and the obstacles are not difficult-to-detect objects like tilted poles.
Therefore, my approach is to position one ToF sensor facing forward and the other facing backward to detect obstacles in both movement directions.
However, I will not simply place the two sensors directly at the front and rear of the car. Instead, I will tilt both sensors slightly at a 30-degree angle (the line connecting the two sensors forms a 30-degree angle with the car's central axis). This way, I can also achieve side obstacle detection.
First, I soldered all the parts according to the Sketch of the wiring diagram, as shown in the figure below.
To prepare for future wireless connectivity, I also soldered the battery wires to the JST jumper.
Additionally, I tested whether the Artemis board could function properly with battery power alone by running the test code from Lab 1.
From the video, it can be seen that the Artemis board continues to function normally with battery power alone.
In this section, I ran Example1_wire_I2C to check whether the I2C addresses matched my expectations. The results are shown in the figure below.
I noticed that the scanned I2C address was 0x29, which differs from the address (0x52) provided in the datasheet. Why is this?
The reason is that the datasheet provides an 8-bit address (including the read/write bit), while I²C scanning tools (such as Example1_wire_I2C.ino) typically report the 7-bit address. The 8-bit address 0x52 is 0b01010010; dropping the read/write bit (shifting right by one bit) gives 0b0101001, which is 0x29. This explains why my scan detected 0x29.
In this section, since .setDistanceModeMedium() is only applicable to the Pololu VL53L1X library, I chose to test two modes: setDistanceModeShort() and setDistanceModeLong().
I also visualized the obtained data to analyze the differences between these two modes.
According to the data, within 1.15 meters both modes exhibit nearly identical accuracy. However, beyond 1.2 meters, DistanceModeShort shows significant errors. I believe this may be partially related to the measurement environment: since the measurements were conducted in a dimly lit room at night, the lack of light could have contributed to the errors in DistanceModeShort mode.
This indirectly confirms that the sensor is somewhat sensitive to light intensity.
As shown in the figure, I connected the XSHUT pin to pin A7 of the Artemis Board.
The following code demonstrates that by controlling the voltage level of the A7 pin, I successfully assigned a new address to the second sensor.
With this setup, both sensors can now operate simultaneously.
To determine the performance bottleneck of the ToF sensors, I made the following modifications to the code. In the setup() function, I started ranging with distanceSensor0.startRanging() and distanceSensor1.startRanging() without waiting for the measurements to complete. In the loop() function, I printed millis() on every iteration to measure how fast the loop executes. If the data is ready, the distance is read and printed; if it is not yet ready, the iteration is skipped without blocking the main loop.
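A sketch of this non-blocking pattern, with the sensor objects configured as above:

```cpp
void loop() {
    // Print a timestamp every iteration to see how fast loop() itself runs
    Serial.print("Time(ms): ");
    Serial.print(millis());

    // Only read a sensor when its data is ready; never block waiting for it
    if (distanceSensor0.checkForDataReady()) {
        Serial.print("  ToF0: ");
        Serial.print(distanceSensor0.getDistance());   // mm
        distanceSensor0.clearInterrupt();
    }
    if (distanceSensor1.checkForDataReady()) {
        Serial.print("  ToF1: ");
        Serial.print(distanceSensor1.getDistance());   // mm
        distanceSensor1.clearInterrupt();
    }
    Serial.println();
}
```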
From the output results, we can observe the following: the Time(ms) value fluctuates between 30 and 50 ms, indicating that the loop() function runs approximately every 15–50 ms. However, new sensor data is not available on every iteration; for example, at 38415 ms and 38627 ms, no new data was recorded. This suggests that the sensor data updates only every 30–50 ms. Therefore, the current bottleneck is the sensor's own ranging time.
Based on the functions implemented in Lab 2, I created the function GET_Pitch_Roll_Distance_10s.
The primary function of this implementation is to simultaneously start two ToF sensors and the IMU, record data for 10 seconds, and transmit the collected data to my computer via Bluetooth. The following is a partial screenshot of the code:
Finally, I obtained a graph showing the relationship between ToF data and time, as well as a graph illustrating the relationship between IMU data and time.
Sensor Type | Functionality | Pros | Cons |
---|---|---|---|
Time-of-Flight (ToF) Sensor | Measures the time taken for infrared light to travel to an object and reflect back to determine distance. | Accurate over a wide range; largely insensitive to surface color and texture; compact and inexpensive. | Sensitive to strong ambient light; update rate limited by ranging time; limited maximum range. |
Infrared Proximity Sensor | Detects objects based on the intensity of reflected infrared light. | Very cheap, simple, and fast. | Strongly affected by surface color and reflectivity; short range; non-linear output that is hard to calibrate. |
Structured Light Sensor | Projects an infrared pattern onto an object and analyzes the distortion to calculate depth. | Provides dense, accurate depth maps at close range. | Performs poorly in bright sunlight; short working range; larger and computationally heavier. |
Laser Range Finder (LIDAR) | Uses a laser pulse and measures the time taken for it to return, similar to ToF but typically with higher precision. | Long range and high precision; can scan a wide field of view. | Expensive, bulkier, and consumes more power. |
In this section, I collaborated with my classmate @Zhang, Haixi to test the sensor's performance under different conditions: a black box, a white box, and a transparent glass bottle.
The results are shown in the table below.
Expected distance (cm) | 1 | 2 | 5 | 10 | 15 | 20 | 25 | 30 |
---|---|---|---|---|---|---|---|---|
Black Box (measured) | 1.8 | 3.8 | 5.3 | 9.3 | 14.1 | 24.5 | 27.5 | 29.6 |
White Box (measured) | 1.2 | 2.5 | 4.6 | 10.2 | 15.1 | 20.4 | 25.7 | 30.7 |
White Glass Bottle (measured) | 0.9 | 1.8 | 3.9 | 9.8 | 13.8 | 19.1 | 24.2 | 29.3 |
After comparing the results, we reached the following conclusions:
The ToF sensor shows relatively higher deviation in the black box, especially at shorter distances (e.g., at 1 cm, the measured distance is 1.8 cm instead of 1 cm). This is likely because black surfaces absorb more infrared light, reducing the amount of reflected light detected by the sensor. As the distance increases, the accuracy improves slightly, but errors remain noticeable.
The white box results are more accurate compared to the black box. White surfaces reflect more infrared light, allowing the sensor to receive a stronger signal. At closer distances, the error is minimal, and even at longer distances, the measurements are quite close to the expected values.
The sensor's performance with the white glass bottle is slightly worse than the white box but better than the black box. Transparent or semi-transparent materials like glass can refract or scatter infrared light, leading to some inaccuracies. At close distances, the measured values are slightly lower than expected, possibly due to partial transmission of light through the material rather than full reflection.
In this experiment, I learned how to use multiple sensors simultaneously on a single I2C bus and record data.
Additionally, I discovered that sensor performance varies under different lighting conditions and when detecting obstacles made of different materials.
This insight provides valuable guidance for my future autonomous driving experiments with the robot.
The goal of this lab is to transition the car from manual control to open-loop control. By the end of this lab, the car should be able to follow a predefined sequence of movements using the Artemis board and two dual motor drivers. Ultimately, this experiment enables the car to achieve both straight-line motion and circular movement under the control of the Artemis board.
The following image shows the expected connection diagram. I will perform the soldering based on this diagram.
As shown in the diagram, I will use A2, A3, A13, and A14 on the Artemis board as motor control pins. Additionally, I will use a 3.7V 850mAh Li-Ion battery to power both motor drivers.
In Lab 3, I successfully powered the Artemis board using a 3.7V 650mAh battery. In this lab, I will continue using the same approach, allowing the Artemis board to operate without a USB power connection.
Question: I was asked to power the Artemis and the motor drivers/motors from separate batteries. Why is that?
The Artemis board and the motor drivers/motors should be powered by separate batteries to prevent electrical interference and voltage fluctuations. Motors typically draw a significant amount of current, which can cause sudden voltage drops and introduce noise into the power supply. If both the Artemis board and the motors share the same power source, these fluctuations could lead to instability in the Artemis board, causing unexpected behavior or even system resets. Using separate batteries ensures a more stable and reliable power supply for both the control system and the motors.
First, I soldered the motor driver and the Artemis board according to the previously planned connection diagram. At this stage, I did not connect BOUT and AOUT to the motors. Instead, I connected them to the oscilloscope to observe whether there was any signal output. The results are shown in the video.
Later, I realized that in the video, I had connected the oscilloscope's output and input ports incorrectly. As a result, although there was a signal output, it was not the expected rectangular waveform. The following image shows the correct signal waveform obtained after properly using the oscilloscope.
Here is my test code (the pin assignments are assumed from the wiring plan, with A2/A3 used for the ABIN pair):

#define ABIN1 A2   // motor driver input 1 (assumed pin from the wiring diagram)
#define ABIN2 A3   // motor driver input 2 (assumed pin from the wiring diagram)

void setup() {
  pinMode(ABIN1, OUTPUT);
  pinMode(ABIN2, OUTPUT);
}

void loop() {
  analogWrite(ABIN1, 255);  // full duty cycle on one input
  analogWrite(ABIN2, 0);    // hold the other input low
}
After confirming that the signal was successfully output, I connected the motor driver to the motor and supplied power. The results are shown in the video.
Additionally, I conducted a speed variation test on a single motor. The code and video are shown below:
I observed that during the transition from rest to motion, the motor voltage was relatively high. I believe this is because the motor needs to overcome significant resistance when starting from a stationary state, leading to this effect.
After successfully testing the operation of a single motor, I soldered all the parts together according to the planned connection diagram and conducted a dual motor operation test. The code and video are shown below.
From the video, it can be seen that the car can accelerate both forward and backward, indicating that all my circuit connections are correct!
Next, I will use another battery to power the Artemis board and make the car run on the ground!
From the video above, it can be seen that with two batteries powering the system, the car successfully runs. (However, setting the PWM value to 255 at the beginning was a bad idea—the car was so fast that it almost took off!)
The image below shows how I secured all the components.
In this part of the experiment, I determined the minimum PWM values required to make the car move by sweeping through PWM values. The results are:
The following is a video showing the car in motion.
In the previous experiments, I noticed that the right wheel had significantly less power than the left wheel. To achieve straight-line motion, I used the following PWM values in my first test:
The results are shown in the video. The car exhibited a leftward drift.
After multiple attempts, I adjusted the PWM values as follows and successfully achieved straight-line motion:
From the video, it can be seen that the car is able to travel in a straight line for a distance of more than 2 meters.
Next, I conducted an open-loop test on the car, making it move forward, backward, and perform left and right turns. To my surprise, in order for the car to turn in place, I had to set the PWM value to at least 120. This indicates that the car needs to overcome significant friction when turning.
I believe this is due to the car's tires being very rough and having deep treads, which create a high amount of friction against the ground. Therefore, in future labs, I might wrap the car's tires with smooth tape to reduce friction.
Returning to the video below, it can be observed that the frequency of the signal generated by analogWrite is quite high, reaching nearly 190 Hz.
However, we need to consider potential bottlenecks. If this frequency is lower than the data transmission frequency of the IMU or other sensors, manually setting a higher frequency may become necessary.
The following code represents my approach to finding the minimum PWM value for maintaining motion:
Based on previous experiments, I determined the PWM values required to start the car from a stationary state:
Next, I used a for loop to decrease the PWM value by 2 every 2 seconds until it reached 0. During this process, I observed when the car stopped moving and recorded the time at which it stopped. This allowed me to calculate the minimum PWM value required to sustain motion.
Finally, I determined that the minimum PWM value required to sustain motion is 36 - 38.
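As a rough sketch of that sweep (the starting PWM of 60 and the CDIN1/CDIN2 pin names for the second motor are placeholders; run it once, e.g. from setup() or guarded by a flag):

```cpp
// Start above the rolling threshold, then lower the PWM by 2 every 2 seconds.
// Watching when the car stops moving gives the minimum PWM that sustains motion.
for (int pwm = 60; pwm >= 0; pwm -= 2) {
    analogWrite(ABIN1, pwm);   // left motor forward
    analogWrite(ABIN2, 0);
    analogWrite(CDIN1, pwm);   // right motor forward (placeholder pin name)
    analogWrite(CDIN2, 0);

    Serial.print("PWM: ");
    Serial.print(pwm);
    Serial.print("  t(ms): ");
    Serial.println(millis());

    delay(2000);
}
```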
In this lab, I successfully assembled the car. Additionally, this lab made me think about factors that influence the car's eventual "autonomous driving," such as tire friction and the data transmission frequency of different components.
What I found most interesting was that, as a CS major during my undergraduate studies, I had never done so much soldering before. This lab significantly improved my hands-on skills and allowed me to appreciate the beauty of hardware. When the car successfully moved after assembly and soldering, I felt an incredible sense of accomplishment.
In this lab, we implement PID control to achieve precise position control. PID (Proportional-Integral-Derivative) controllers are widely used in control systems due to their ability to minimize steady-state error while improving response time and stability. The objective is to develop a PID-based position control system, considering practical implementation constraints such as sensor noise, actuator limitations, and tuning challenges.
To enable the car to start via Bluetooth, I set a pid_active flag in the Arduino code. I also created a function named pid_control() and placed it inside the loop; if pid_active = 1, the pid_control() function is executed.

Additionally, I created a case named PID_control to facilitate Bluetooth communication between the laptop and the Artemis board. The primary function of this case is to receive the values of kp, ki, kd, and target_distance from the Python side and set pid_active to 1.
Below is a screenshot of the relevant code:
The specific implementation of the pid_control() function and the usage of PID will be explained in detail in the following sections.
I reviewed the content from Lab 2 and implemented data transmission using a similar approach. First, I used the arrays P_arr[500], I_arr[500], D_arr[500], Speed[500], Error_arr[500], Tof[500], and Time[500] to store all relevant data from the pid_control() function.
Additionally, I created a Data_Send() function and a case named send_data to handle data transmission. The relevant code is shown below:
On the Python side, I used notification_handler to receive data sent from the Artemis board. This approach has been proven effective in Lab 2 and in the subsequent experiments of this lab. A portion of the relevant code is shown below:
PID (Proportional-Integral-Derivative) control is a widely used control algorithm in robotics, automation, and industrial systems. It provides precise and stable control by continuously adjusting the system’s output based on error feedback.
- The proportional gain (Kp) leads to a faster response but may cause oscillations.
- The integral gain (Ki) helps reduce offset but can introduce instability.
- The derivative gain (Kd) helps dampen oscillations and improve stability.

The control output u(t) is given by:
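For reference, the standard continuous-time form of this control law is

$$u(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{de(t)}{dt},$$

where e(t) is the difference between the target and the measured distance.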
In this section, I will explain how I implemented PID control in the pid_control() function.
Based on the formula given in the previous section, I computed the values of P, I, D, and the error. The sum of these terms (P + I + D) was used as the control output, which serves as the input to the motor.
The relevant code is shown below:
In the code above, the following two lines are particularly necessary and important:

int motor_speed = constrain(control_output, -100, 100);
integral = constrain(integral, -200, 200);
These constraints help prevent excessive values that could destabilize the system:

- motor_speed = constrain(control_output, -100, 100); ensures that the motor speed remains within a safe operating range, preventing saturation or excessive speed.
- integral = constrain(integral, -200, 200); limits the accumulation of the integral term, mitigating integral windup, which can cause instability.

In this section, I will transmit different Kp, Ki, and Kd values to the robot to achieve stopping distances of 304mm and 204mm from the wall.
Final conclusion: Kp = 0.1, Ki = 0.0001, and Kd = 8 produced the best results.
Procedure: First, I tested Kp = 0.7, Ki = 0.001, and Kd = 3. I observed an overshoot phenomenon, where the robot continuously oscillated near the target instead of stabilizing. Additionally, the deceleration was too slow, preventing the robot from stopping properly.
Below is the related data visualization:
From the visualization above, it can be observed that after hitting the wall, the sensor readings fluctuate significantly, indicating that the robot is in an oscillatory state.
To improve the slow deceleration and reduce oscillations, I adjusted the parameters to Kp = 0.5 and Ki = 0.0001, while keeping Kd unchanged.
Below are the related video and data visualization:
It can be observed that the robot no longer oscillates, but the deceleration is still too slow, causing it to hit the wall. This can also be seen in the data visualization, where the sensor values increase after the collision.
To achieve faster deceleration while preventing excessive speed, I adjusted Kp to 0.1 and Kd to 8, keeping Ki unchanged.
Below are the related video and data visualization:
It can be observed that the robot successfully stops at a distance of 304mm from the wall—experiment successful!
However, I noticed that the robot tends to drift to the left. To address this, I applied the results from Lab 4 to this experiment.
Next, I tested placing the robot at a distance of over 2 meters from the wall and aimed for it to stop at 200 mm. To achieve success in this scenario, I adjusted Kd to 25.
Below are the related video and data visualization:
It can be observed that the robot moves forward in a straight line and successfully stops very close to the target distance from the wall—experiment successful!
I tested the robot on a carpet using the parameters Kp = 0.1, Ki = 0.0001, and Kd = 25, with a target distance of 104 mm.
Below is the video of the experiment:
It can be observed that, with these parameters, the robot performs excellently on the carpet.
Initially, my strategy was to transmit data immediately after computing the current PID values and motor input, instead of separating data transmission from PID control. This approach caused slow data transmission and occasional Bluetooth congestion.
To resolve this issue, I separated data transmission and PID control into two different cases, which effectively eliminated Bluetooth congestion.
My final sampling data size was 500, with a total duration of 5 seconds.
However, in a previous sensor lab, I observed that sometimes, during the loop execution, the sensor data was not yet ready. This occasionally resulted in empty sensor readings within the loop. To address this issue, a data extrapolator needs to be implemented.
To prevent cases where the ToF sensor might lack data due to its different frequency from the loop, I implemented a data extrapolator using linear interpolation.
The relevant code is shown below:
From the code, it can be observed that when the ToF data is detected as not ready, I use the last two ToF data points to calculate the slope and estimate a new ToF value.
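A sketch of the idea; the history variables here (prev_tof, prev_prev_tof and their timestamps) are placeholders for whatever the actual code stores:

```cpp
// If no new ToF sample is ready this loop, extrapolate from the last two
// readings by assuming the distance keeps changing at the same rate.
float extrapolate_distance(unsigned long now_ms) {
    float slope = (prev_tof - prev_prev_tof) /
                  (float)(prev_time_ms - prev_prev_time_ms);   // mm per ms
    return prev_tof + slope * (now_ms - prev_time_ms);
}
```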
Below is the visualization of the obtained data:
Integral wind-up occurs when the integral term in a PID controller accumulates excessive error during long periods of saturation (when the actuator is at its limit). This can cause significant overshoot, slow response times, and instability. Without wind-up protection, the controller may take a long time to recover, especially when there are constraints such as motor speed limits or varying floor surfaces with different friction coefficients.
For example, if the robot starts on a slippery surface and then moves onto a high-friction surface, the integral term may have already accumulated a large value, leading to excessive correction that destabilizes the system. Wind-up protection helps mitigate this issue by preventing the integral term from growing beyond a certain threshold.
To prevent integral wind-up, I implemented a constraint on the integral term, ensuring it remains within a predefined range:
integral = constrain(integral, -200, 200);
Additionally, I clamp the motor speed so that the control output cannot exceed the actuator limit:
int motor_speed = constrain(control_output, -100, 100);
Thanks to TA: @Cameron Urban
My ToF data was consistently problematic, preventing me from completing Lab 5 on time. Cameron identified that some solder joints on my ToF sensor might be causing the issue and personally demonstrated better soldering techniques. Under his guidance, I resoldered both ToF sensors and other potential problem areas.
Thanks to my partner: @Haixi Zhang
Haixi and I discussed the individual effects of Kp, Ki, and Kd on the robot’s behavior. He also shared valuable tuning techniques, which helped improve the PID parameter selection process.
The goal of this lab is to gain experience with orientation PID control using the IMU. In Lab 5, PID control was applied to regulate wall distance using ToF sensors. In this lab, the focus shifts to controlling the robot's yaw using the IMU. As in the previous lab, I was free to choose the controller that best suits my system.
Lab 6 is very similar to Lab 5, so I decided to follow the same approach.
To enable the robot to start via Bluetooth, I set a flag named pid_imu_active in the Arduino code. I also created a function called pid_control_IMU() and placed it inside the loop; when pid_imu_active = 1, the pid_control_IMU() function executes.
Additionally, pid_control_IMU() receives the Kp, Ki, and Kd values, as well as target_angle, from the Python side.
Below is a screenshot of the relevant code:
calibrateGyro()
Unlike Lab 5, in this lab I added the function calibrateGyro(), which runs when executing the PID_IMUcontrol case.
The reason for this addition was that after reviewing the tutorial on the Digital Motion Processor (DMP), I successfully implemented DMP.
However, the results I obtained are shown in the following video:
It can be observed that under DMP, the robot's roll remains very stable, but pitch and yaw continuously increase (data drift). I believe this is due to the continuous increase in Z-axis data, which causes yaw to keep increasing and subsequently affects pitch.
To address this issue, I implemented calibrateGyro() for calibration. The relevant code is shown below:
I sampled the Z-axis data 50 times and calculated the average. This average value represents the offset of the Z-axis that needs to be calibrated.
Question: Why is only Z-axis (Yaw) drift corrected in Gyro Bias Calibration, while X (Roll) and Y (Pitch) are not adjusted?
Answer: Z-axis (yaw) drift is the most severe. In the DMP's 6-axis fusion, roll and pitch are continuously corrected by the accelerometer's gravity reference, but yaw has no absolute reference, so any Z-axis gyro bias accumulates over time. Calibrating the Z-axis bias therefore gives the largest improvement.
The specific implementation of the pid_control_IMU() function and the usage of PID will be explained in detail in the following sections.
For data storage and transmission, Lab 6 fully follows the approach used in Lab 5. Therefore, detailed information can be referenced in my Lab 5 report. Here, I will only present the relevant code for data storage and transmission.
Below is the Arduino-side code:
Below is the Python-side notification code and a running example:
PID (Proportional-Integral-Derivative) control is a widely used algorithm in robotics, automation, and industrial systems. In this lab, PID is applied to IMU-based orientation control, ensuring precise and stable yaw adjustments by continuously modifying the system's output based on error feedback.
- The proportional gain (Kp) results in a faster response but may cause oscillations.
- The integral gain (Ki) helps reduce offset but can introduce instability.
- The derivative gain (Kd) helps dampen oscillations and improves stability.

The control output u(t) is given by:
In this section, I will explain how I implemented PID control in the pid_control_IMU() function.
Based on the formula given in the previous section, I computed the values of P, I, D, and the error. The sum of these terms (P + I + D) was used as the control output, which serves as the input to the motor.
The relevant code is shown below:
These constraints help prevent excessive values that could destabilize the system:

- int correction = constrain(control_output, -135, 135); ensures that the correction sent to the motors remains within a safe operating range, preventing saturation or excessive speed.
- integral = constrain(integral, -500, 500); limits the accumulation of the integral term, mitigating integral windup, which can cause instability.

In this lab, I obtained the current yaw value using the get_yaw() function, which I implemented using the DMP method. However, the readings from my get_yaw() method always had a delay of approximately 0.3 seconds. Fortunately, I successfully resolved this issue, and the detailed solution is discussed in the Discussion on Sampling Rate and Time section.
Here is the code of get_yaw():
In this section, I will transmit different Kp, Ki, and Kd values to the robot to achieve accurate orientation control, i.e., turning to and holding a target yaw angle (such as a 90-degree turn).
Final conclusion: Kp = 4, Ki = 0.0001, and Kd = 800 produced the best results.
Procedure: First, I tested Kp = 6, Ki = 0, and Kd = 0. I observed an overshoot phenomenon, where the robot continuously oscillated near the target instead of stabilizing. Additionally, the deceleration was too slow, preventing the robot from stopping properly.
Below are the related video and data visualization:
From the video, it can be seen that the car keeps oscillating back and forth and is unable to stop at the 90-degree angle.
To improve the slow deceleration and reduce oscillations, I adjusted the parameters to Kp = 4, Ki = 0.0001, and Kd = 800.
After multiple attempts, I found that the car performed best when Kd = 800. The reason for such a large Kd value is that I wrapped the car's wheels with tape, which significantly reduces friction and makes it harder for the car to hold a specific direction. However, without the tape, the car struggles to overcome friction when turning in place, so using the tape is necessary.
Below are the related video and data visualization:
From the video, it can be seen that the car rapidly rotated several times before stabilizing at a 90-degree angle. I believe there are several reasons for this result:
- get_yaw() sometimes generates noise (sudden jumps in the data), causing the car to misjudge the angle.

After limiting the motor speed, the car became more stable. I believe that using Kalman filtering in Lab 7 will help improve the noise issue.
The following shows the car's performance and data visualization after optimization, using the same parameters with target_angle = 0.
It can be seen that even when an external force forces the car to change direction, it can promptly correct itself. The experiment was successful!
Earlier in this lab, I encountered an issue with DMP reading delays: after changing the car's direction, the DMP would take about 0.3 seconds to update the output angle. This issue troubled me for nearly three days. Later, after checking the tutorial, I found that the following code was crucial:
success &= (myICM.setDMPODRrate(DMP_ODR_Reg_Quat6, 4) == ICM_20948_Stat_Ok);
In the example code, the original line was:
success &= (myICM.setDMPODRrate(DMP_ODR_Reg_Quat6, 0) == ICM_20948_Stat_Ok);
However, with that setting the DMP produced data faster than my loop could read it, so the DMP FIFO queue filled up, leading to severe delays. By reducing the DMP output rate, the Artemis loop processes data faster than the DMP generates it, preventing the queue from becoming overloaded.
The relevant code is shown below:
Integral wind-up occurs when the integral term in a PID controller accumulates excessive error during long periods of saturation (when the actuator is at its limit). This can cause significant overshoot, slow response times, and instability. Without wind-up protection, the controller may take a long time to recover, especially when there are constraints such as motor speed limits or varying floor surfaces with different friction coefficients.
For example, if the robot starts on a slippery surface and then moves onto a high-friction surface, the integral term may have already accumulated a large value, leading to excessive correction that destabilizes the system. Wind-up protection helps mitigate this issue by preventing the integral term from growing beyond a certain threshold.
To prevent integral wind-up, I implemented a constraint on the integral term, ensuring it remains within a predefined range:
integral = constrain(integral, -500, 500);
Additionally, I clamp the correction term so that the control output cannot exceed the actuator limit:
int correction = constrain(control_output, -135, 135);
In this lab, I applied PID to direction control and learned how to minimize the impact of noise on the output.
The goal of this lab is to implement a Kalman Filter to enhance distance estimation using the ToF sensor. This enables the robot to move faster towards a wall while still making accurate stop/turn decisions.
Below is the function code on the Arduino side and the command sending code on the Python side.
From the images above, it can be seen that I drove the robot toward the wall with a fixed PWM input (130), and recorded both the ToF distance and estimated speed. Using this data, I plotted raw distance and computed speed over time. I then estimated the steady-state speed and the 90% rise time, which are used in Step 2 below.
The data was saved locally in array format for further analysis.
The speed curve shows a downward trend because when the current distance falls below 100 mm, I reverse the car's motor at full PWM (255) to protect it instead of letting it crash into the wall.
In this section, I walk through how I calculated the drag coefficient d and the mass-like parameter m based on the data shown in Step 1.
At steady state, Time = 1870261.0 μs and Speed = 2509.0 mm/s → steady-state speed v_ss = 2509.0 mm/s.

90% of the steady-state speed: v₀.₉ = 0.9 × 2509.0 = 2258.1 mm/s

The two samples that bracket this speed are:
Time (μs) | Speed (mm/s) | Note |
---|---|---|
1869949.0 | 2192.0 | ❌ Below |
1870053.0 | 2278.0 | ✅ Above |
So v₀.₉ = 2258.1 mm/s lies between these two timestamps. Interpolating linearly:

t₁ = 1869949.0 μs, v₁ = 2192.0 mm/s
t₂ = 1870053.0 μs, v₂ = 2278.0 mm/s
Δv = 2278.0 − 2192.0 = 86.0 mm/s
Δt = 104.0 μs
v_diff = 2258.1 − 2192.0 = 66.1 mm/s

t₀.₉ = t₁ + (v_diff / Δv) × Δt ≈ 1869949.0 + (66.1 / 86.0) × 104.0 ≈ 1870029.0 μs
Start time = 1869013.0 μs, so the 90% rise time is t₀.₉ − t_start = 1870029.0 − 1869013.0 = 1016.0 μs = 0.001016 s.

Input PWM = 130 → u = 130 / 255 ≈ 0.5098. From u = d ⋅ v_ss ⇒ d = u / v_ss = 0.5098 / 2509.0 ≈ 0.0002031
m = −d ⋅ t₀.₉ / ln(0.1) = −0.0002031 × 0.001016 / (−2.3026) ≈ 8.96 × 10⁻⁸
These parameters were then used to construct my state-space model and discretize the Kalman Filter matrices.
Below is how I implemented the above process using Python code.
To verify the effectiveness of the Kalman Filter, I implemented a 1D position-only Kalman Filter in Python using my raw ToF data collected in the step response test.
My state vector was simply:
x = [position]
A = [[1]]
B = [[Δt]]
C = [[1]]
The input u was the normalized PWM value (130 / 255), and the observation y was the raw ToF measurement. The model assumes constant velocity during each small time step Δt.
I empirically set the process and sensor noise as:
Process noise variance (σ₁²) = 10² = 100
Sensor noise variance (σ₂²) = 20² = 400
Sigma_u = [[100]]
Sigma_z = [[400]]
For every ToF measurement, I called the Kalman Filter update function:
for i in range(len(tof)):                 # tof: list of raw ToF readings (hypothetical name)
    y = tof[i]                            # observed ToF distance at step i
    u = 130 / 255                         # constant normalized PWM input
    mu, sigma = kf(mu, sigma, u, y)       # one Kalman predict + update step
This produced a smoother estimate of the true position at each time step, as shown below.
Below is the code implementation of the above description.
First, I followed the content from the tutorial section "4. Implement the Kalman Filter on the Robot" (https://fastrobotscornell.github.io/FastRobots-2025/labs/Lab7.html) and Step 3: "Implement and Test Kalman Filter in Python" in my report to import the BLA library and related parameters.
Then, I implemented a Kalman filter using the kalman_update function to make the ToF data smoother. I also modified the code in the PID_control() function from Lab 5: after reading the sensor data, I use the kalman_update function to obtain the filtered distance, and I use this filtered distance to calculate the error instead of the raw current_distance.
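For clarity, here is the same predict/update step written out in scalar form for the 1-D position model; my actual Arduino code uses the BLA matrix types from the tutorial, and the noise variances mirror the values chosen in Step 3.

```cpp
// One Kalman step for the 1-D position model (A = 1, B = dt, C = 1).
// mu / sigma2 hold the state estimate and its variance; u is the normalized
// PWM input and y is the raw ToF reading in mm.
void kalman_update(float &mu, float &sigma2, float u, float y, float dt) {
    const float sig_u2 = 100.0;   // process noise variance
    const float sig_z2 = 400.0;   // ToF measurement noise variance

    // Predict
    float mu_p    = mu + dt * u;
    float sigma_p = sigma2 + sig_u2;

    // Update with the measurement
    float k = sigma_p / (sigma_p + sig_z2);   // Kalman gain
    mu     = mu_p + k * (y - mu_p);
    sigma2 = (1.0 - k) * sigma_p;
}
```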
Finally, I still used the best PID values obtained from Lab 5, namely Kp: 0.1, Ki: 0.0001, and Kd: 25, and let the car try to stop at a distance of 104 mm from the wall (which is very close—I wanted to see how well the Kalman Filter performs).
The result is visualized in the figure below.
As shown clearly in the figure, the data after applying the Kalman filter is very smooth, indicating a successful experiment.
Below is the video of my robot successfully stopping 104mm from the wall using Kalman-filtered ToF values as input to the PID controller:
In this lab, I successfully implemented and validated a Kalman Filter for ToF sensor smoothing. I integrated it with PID control on the robot and demonstrated smoother, faster, and more accurate behavior compared to using raw measurements alone.
The robot must start at the designated line (< 4m from the wall), drive forward at high speed, and initiate a 180° turn when within 3ft (914mm) of the wall. Successful execution requires fast reaction, stable turning, and accurate distance sensing. The challenge includes implementing precise timing for the drift maneuver while optionally using Kalman Filter predictions during sensor gaps.
The robot operates under a state machine with the following states:
- STATE_FORWARD: The robot moves forward at high speed.
- STATE_TURNING_AT_X: Upon reaching the target distance, the robot performs a 180° turn.
- STATE_STOPPED: The robot stops after completing the round trip.

The following is a screenshot of the code I implemented for the state-machine transitions. This code is located in the program's loop function.
Note: In the code, I also defined state_backward, but in reality it calls the same function as state_forward. Therefore, in the initial state-transition diagram, I represented both as forward states. This design allows for a clearer distinction between the pre-turn and post-turn phases, making debugging more convenient.
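A simplified sketch of how these transitions can be laid out; helper names such as robot_state, kf_distance, trip_complete(), and stop_motors() are placeholders, while the PID functions are the ones described later in this report.

```cpp
switch (robot_state) {
    case STATE_FORWARD:
        pid_control();                          // ToF PID drives toward the wall
        if (kf_distance <= target_distance) {   // within ~3 ft of the wall
            target_yaw += 180;
            robot_state = STATE_TURNING_AT_X;
        }
        break;

    case STATE_TURNING_AT_X:
        pid_control_IMU();                      // yaw PID performs the 180° turn
        if (fabs(target_yaw - get_yaw()) < 5) {
            robot_state = STATE_BACKWARD;       // same behavior as STATE_FORWARD
        }
        break;

    case STATE_BACKWARD:
        pid_control();                          // drive back toward the start line
        if (trip_complete()) {
            robot_state = STATE_STOPPED;
        }
        break;

    case STATE_STOPPED:
        stop_motors();
        break;
}
```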
In this experiment, the Arduino will receive 7 parameters from the Python side. These are:
- ToF sensor PID parameters: Kp_tof, Ki_tof, Kd_tof
- IMU sensor PID parameters: Kp_imu, Ki_imu, Kd_imu
- Target distance: target_distance

The following is a screenshot of the code.
In this experiment, the ToF module and IMU module are based on Lab 7: Kalman Filter and Lab 6: Orientation Control, respectively (URLs to be inserted). For detailed code information, please refer to these two lab reports.
The following are partial code screenshots.
It is important to note that the ToF sensor, the IMU, and the Artemis board's main loop all operate at different frequencies. Therefore, it is necessary to add delay() calls to certain data transmission segments to appropriately reduce their frequency and ensure accurate data communication.
Additionally, during the experiment, I encountered two issues: invalid initial sensor readings and IMU data drift. The following two sections explain how I addressed these problems.
During the experiment, I noticed that many times the car would directly enter the 180 degree turning state, skipping the initial forward state. This behavior is shown in the video below.
I checked the sensor data and found that the first 20 readings were extremely low (around 0–200), which caused the car to mistakenly believe it had already reached the target distance and immediately triggered the second state machine (180-degree rotation).
I believe this issue occurred because the sensor requires some time to initialize after distanceSensor.startRanging() is called before it can produce accurate values. To address this, I added the following code to the pid_control() function of the ToF module:
if (skip_counter < 25) {
  skip_counter++;   // discard the first 25 ToF readings while the sensor settles
  return;
}
These few lines of code successfully resolved the problem.
I reused the PID values obtained from Lab 6 and Lab 7 for the experiment.
Experiment 1:
The input data I used is shown in the image below.
Experimental Result:
I observed that the car's IMU caused it to keep rotating without stopping, indicating a drift issue in the sensor data. To address this, I added the following code at the beginning of the IMU PID control logic:
With this piece of code, I performed initial bias calibration on the gyroscope's Z-axis (Yaw direction), which effectively prevented data drift.
Experiment 2:
The input data I used is shown in the image below.
Experimental Result:
As shown in the video above, the car rotated several times after reaching the target position before returning, which basically achieved the goal of the experiment. However, the fact that it took many rotations to stabilize at the 180-degree heading indicates that the Kp value was too high and the Kd value was too low. Therefore, I adjusted the parameters Kp_imu and Kd_imu accordingly.
Experiment 3:
The input data I used is shown in the image below.
Experimental Result:
As shown in the video above, the car rotated 180 degrees and returned after reaching the target position, successfully achieving the goal of this experiment.
The following are the sensor data.
It can be seen that the sensor data is very smooth thanks to the Kalman filter.
In the graph, the sensor data suddenly jumps from 400 to 1600 (timestamp: 96500 to 101000) because during this period the car is in the 180-degree rotation state, during which no sensor data is returned. After the rotation is complete and the system enters the next state, new sensor data is received around timestamp 101000. Since Python automatically connects data points in the plot, a straight line appears between timestamps 96500 and 101000.
During one of the tests, incorrect PID tuning and yaw drift caused the robot to spin endlessly.
Interestingly, the robot started spinning along with me, as if we were dancing a waltz together. It was so much fun!
I quickly took out my phone to record the moment — it was definitely the most entertaining part of the entire experiment.
The background music in the video is Chopin's Waltz Op. 64 No. 3 in F major — one of my favorite piano pieces. I hope you enjoy it too.
This lab integrates BLE communication, Kalman filtering, state machines, and PID control into a full pipeline for fast and autonomous maneuvering. The robot successfully detects the wall, performs a 180° turn using IMU yaw control, and returns to the origin. The implementation strengthened my understanding of sensor fusion and real-time embedded control.
The goal of this lab is to create a 2D map of a static environment—specifically, the front room of the lab—which will be used in later localization and navigation tasks. To construct the map, the robot is placed at several pre-defined positions and performs a 360-degree rotation at each one, collecting Time-of-Flight (ToF) sensor readings throughout the turn.
To ensure accurate and evenly distributed measurements in angular space, I implemented PID control on orientation, using integrated gyroscope data (via the DMP module) to maintain and adjust heading. This closed-loop control method allowed for consistent angular speed and precise yaw estimation throughout the rotation. At each time step, the robot recorded multiple ToF distance measurements at different angles relative to its position.
In Lab 6, I implemented orientation PID control using DMP values, which allowed the robot to rotate to a specified angle and stabilize. Therefore, I decided to reuse the functionality already developed in Lab 6 to implement the mapping function. Below is the key code for my mapping function.
Through the start_mapping function, I set two flags—mapping_mode and pid_imu_active—to true, enabling the mapping operation to run inside the main loop.
For orientation PID control, I used the pid_control_IMU function. The detailed implementation and its underlying principles can be found in my Lab 6 report.
In Lab 9, my main approach was to increment a global variable target_yaw by a fixed angle step_angle after each step, allowing pid_control_IMU to determine the desired yaw for each step. I considered the mapping complete when current_step > step_total, indicating the robot had completed a full rotation.
Key parameters used:
In Lab 9, I will mainly present the following aspects:
During testing, I found that the IMU readings contained a lot of noise, such as NaN values and 0 values. I believe this issue was caused by the IMU data update rate being slower than the main loop rate. To address this problem, I needed to synchronize the IMU update rate with the main loop rate.
From the comments in the existing code, I found that the current DMP frequency is 27.5 Hz. To make the main loop slightly slower than the DMP update rate and thus reduce the occurrence of NaN and 0 values, I added the following code to the main loop:
As shown in the code, I used the condition if (now - lastPID >= 40) to make the mapping operation run every 40 milliseconds. This means the loop frequency during mapping is approximately 25 Hz, which is slightly lower than the DMP frequency. As a result, the issue of IMU values being NaN or 0 was significantly reduced.
During the experiment, I observed the following phenomena:
As shown in the video, when the wheels were wrapped with tape, the robot required a slight push to start rotating. This indicates that the robot could not overcome static friction on its own. To address this issue, I added the following code to help the robot overcome the static friction and initiate rotation.
Later, I discovered that after adding the code to overcome static friction, the robot was able to rotate successfully even without tape on the wheels, and it performed very well. This explains why the robot's wheels no longer have tape in the later videos.
I used the SEND_DATA command to receive the current angle along with the five collected ToF readings. The detailed code and a sample output are shown below:
Within the designated area of the lab, I positioned the robot at four locations: (-3, -2), (0, 3), (5, 3), and (5, -3), where it performed a full 360-degree rotation at each point. During the rotation, the robot recorded five ToF sensor readings every 24 degrees.
In this experiment, the PID values that gave the best performance were: Kp = 18, Ki = 0.0001, Kd = 15.
After performing the operation shown in the video at each position, I collected five ToF readings at every 24-degree interval. Using this data, I generated the following polar plots.
Additionally, I used the following code to process the four datasets, compute the transformation matrices, and convert the distance sensor measurements to the inertial reference frame of the room.
After plotting the data, I noticed that all the measurements were rotated 30 degrees counterclockwise in the Cartesian coordinate system. To correct this and make the final data more standardized, I added the following lines to the code:
rotation_offset_deg = 30
angle_rad = np.deg2rad(angle_deg + rotation_offset_deg)
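To make the transformation concrete, here is a minimal sketch of converting (yaw angle, ToF distance) readings taken at a known robot position into room-frame coordinates, including the 30-degree offset. The function name, the assumption that one grid cell equals one foot (304.8 mm), and the sample readings are placeholders, not my exact processing code:

```python
import numpy as np

def tof_to_world(angles_deg, dists_mm, robot_xy_ft, rotation_offset_deg=30):
    """Convert scan readings to room-frame coordinates.
    angles_deg: robot yaw at each reading; dists_mm: ToF distances in mm;
    robot_xy_ft: marked robot position, assuming 1 grid cell = 1 ft (304.8 mm)."""
    mm_per_ft = 304.8
    angle_rad = np.deg2rad(np.asarray(angles_deg) + rotation_offset_deg)
    d_ft = np.asarray(dists_mm) / mm_per_ft
    x = robot_xy_ft[0] + d_ft * np.cos(angle_rad)
    y = robot_xy_ft[1] + d_ft * np.sin(angle_rad)
    return x, y

# Example with made-up readings at position (5, -3), spaced 24 degrees apart
xs, ys = tof_to_world([0, 24, 48], [800, 950, 1200], (5, -3))
```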
The results are shown in the figure below:
In this lab, I successfully built a 2D map of the lab room by collecting ToF measurements at multiple positions while rotating the robot. I used PID control on orientation with integrated gyroscope data to ensure stable and accurate rotations. This method produced evenly spaced sensor readings, leading to a clean and reliable map for future localization and navigation tasks.
The objective of this lab was to implement a grid-based Bayes Filter to localize a mobile robot using odometry and sensor readings in a discretized environment.
In robotics, Bayes Filter is a probabilistic algorithm used for estimating a robot’s position in a known environment by continuously combining two sources of information:
At each time step, Bayes Filter performs two essential operations:
The theoretical foundation of the Bayes Filter used in this lab follows the standard formulation presented in Probabilistic Robotics by Thrun, Burgard, and Fox (2005). The prediction and update steps are as follows:
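In that standard form, the prediction step computes the prior belief by marginalizing the motion model over the previous state,
\[ \overline{bel}(x_t) = \sum_{x_{t-1}} p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}), \]
and the update step weights it by the measurement likelihood and renormalizes,
\[ bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t), \]
where \( \eta \) is the normalization constant.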
In the class slides, the core pseudocode (algorithm flow) of the Bayes Filter is presented, as shown in the figure below:
This figure is closely related to the functions I need to implement: compute_control, odom_motion_model, prediction_step, sensor_model, and update_step.
The following image is a snapshot of my notes, illustrating the connection between the pseudocode and the functions I need to implement.
The following is the detailed implementation of each function.
The purpose of this function is to determine how much the robot has moved and how it moved. It takes the current position and the previous position as input, and calculates the robot's motion as a combination of two rotations and one translation:
Notes:
I check if dx == 0 and dy == 0 to prevent errors from atan2(0, 0), and I apply normalize_angle() to rot1 and rot2 to avoid incorrect large angle differences, such as mistaking 360° and 0° as a 360° error.

This function is used to estimate how likely it is for the robot to move from one position to another. Given two positions and a control input (the expected motion), it uses a Gaussian distribution to compute the probability of this motion.
The assumption is that the robot's rotation and translation are not perfect — for example, it intends to rotate 30°, but ends up rotating only 28°. A Gaussian distribution is well-suited to model this kind of “close but with noise” behavior.
I also normalized the angles to avoid mistakes like treating 179° and -181° as a 360° difference, and used max(..., 1e-9) to prevent extremely small probabilities from collapsing the final product to zero.
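As a reference for the two functions just described (compute_control and odom_motion_model), here is a minimal, self-contained sketch. It is not my exact lab code, which plugs into the course's Localization framework; the sigma values and helper names here are placeholders:

```python
import numpy as np

def normalize_angle(a):
    """Wrap an angle in degrees to the range (-180, 180]."""
    a = (a + 180.0) % 360.0 - 180.0
    return 180.0 if a == -180.0 else a

def compute_control(cur_pose, prev_pose):
    """Recover (rot1, trans, rot2) that moves prev_pose to cur_pose.
    Poses are (x, y, theta) with theta in degrees."""
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    # Guard against atan2(0, 0) when the robot did not translate
    heading = prev_pose[2] if (dx == 0 and dy == 0) else np.degrees(np.arctan2(dy, dx))
    rot1 = normalize_angle(heading - prev_pose[2])
    trans = np.hypot(dx, dy)
    rot2 = normalize_angle(cur_pose[2] - prev_pose[2] - rot1)
    return rot1, trans, rot2

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def odom_motion_model(cur_pose, prev_pose, u, rot_sigma=15.0, trans_sigma=0.3):
    """Probability of reaching cur_pose from prev_pose given control u,
    where u = (rot1, trans, rot2); sigmas are placeholder noise std-devs."""
    rot1_hat, trans_hat, rot2_hat = compute_control(cur_pose, prev_pose)
    p1 = max(gaussian(normalize_angle(rot1_hat - u[0]), 0.0, rot_sigma), 1e-9)
    p2 = max(gaussian(trans_hat - u[1], 0.0, trans_sigma), 1e-9)
    p3 = max(gaussian(normalize_angle(rot2_hat - u[2]), 0.0, rot_sigma), 1e-9)
    return p1 * p2 * p3
```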
Notes:
rot1, rot2, rot1_hat, and rot2_hat are passed through normalize_angle() to fix large error issues, such as when rot1 ≈ -179° and rot1_hat ≈ +179°. p1, p2, and p3 are protected using max(..., 1e-9) to avoid zero values, which would cause the entire belief to collapse to zero and result in NaN issues.

This function answers the question: “Given where I was and how I moved, where am I most likely now?”
It loops through all previous states and calculates: if I apply the current control input from that previous state, is it likely that I end up in the current state? If so, it multiplies this likelihood by the belief at the previous time step and adds it to the total.
To improve efficiency, I skip previous states with extremely low probabilities (less than 0.0001). At the end, I normalize the total so that all probabilities sum up to 1. If, by any chance, the total is zero (e.g., due to a bug or a poor observation), I fall back to a uniform distribution as a safeguard.
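For reference, a minimal sketch of the prediction step along these lines is shown below. Unlike the course framework, which iterates over a 3D (x, y, θ) grid, this sketch flattens the grid into a single list of candidate poses; the threshold and argument names are placeholders:

```python
import numpy as np

def prediction_step(bel, u, poses, motion_model, skip_below=1e-4):
    """Grid Bayes filter prediction step (sketch).
    bel:   1-D array of beliefs, one entry per grid cell
    u:     control (rot1, trans, rot2) from compute_control
    poses: list of (x, y, theta) poses, one per grid cell"""
    bel_bar = np.zeros_like(bel)
    for i, prev_pose in enumerate(poses):
        if bel[i] < skip_below:            # skip negligible prior states for speed
            continue
        for j, cur_pose in enumerate(poses):
            bel_bar[j] += motion_model(cur_pose, prev_pose, u) * bel[i]
    total = bel_bar.sum()
    # Fall back to a uniform distribution if everything collapsed to zero
    return bel_bar / total if total > 0 else np.full_like(bel, 1.0 / bel.size)
```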
This function answers the question: “How similar is the current laser observation at this position to the ideal observation from the map?”
The robot rotates in place and scans distances in 18 directions. I compare each observed distance with the expected distance at this position on the map. If the values are close, it suggests that this position is a likely candidate for the robot’s current location.
To prevent a single poor match from making the entire probability drop to zero, I added a lower bound safeguard: each individual probability is protected using max(prob, 1e-9).
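A minimal sketch of this per-ray Gaussian likelihood is shown below; sensor_sigma is a placeholder, not my tuned value:

```python
import numpy as np

def sensor_model(obs, expected, sensor_sigma=0.1):
    """Per-ray likelihood of the 18 observed ranges given the expected ranges
    ray-traced from the map at a candidate pose (sketch)."""
    obs = np.asarray(obs, dtype=float)
    expected = np.asarray(expected, dtype=float)
    prob = np.exp(-0.5 * ((obs - expected) / sensor_sigma) ** 2) \
           / (sensor_sigma * np.sqrt(2 * np.pi))
    return np.maximum(prob, 1e-9)   # lower bound so one bad ray cannot zero everything
```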
This function combines the previous prediction with the current laser observation to update the final belief.
For each grid cell, it calculates: “If the robot were at this location, how likely would it be to receive the current observation?” This likelihood is then multiplied by the prior belief from the previous step.
Once again, the resulting probabilities are normalized to ensure they form a valid probability distribution. If all values become zero (due to errors or poor matches), normalization is skipped to avoid division by zero.
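Putting it together, a minimal sketch of the update step under the same flattened-grid assumption looks like this:

```python
import numpy as np

def update_step(bel_bar, obs, expected_views, sensor_model):
    """Grid Bayes filter update step (sketch).
    bel_bar:        predicted belief, one entry per grid cell
    obs:            the 18 measured ranges from the rotation scan
    expected_views: precomputed expected ranges for each grid cell"""
    bel = np.array([np.prod(sensor_model(obs, expected_views[i])) * bel_bar[i]
                    for i in range(len(bel_bar))])
    total = bel.sum()
    return bel / total if total > 0 else bel   # skip normalization if all zero
```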
After implementing the above functions, I obtained the following trajectory plot:
The following video demonstrates the entire process of localization using my implementation:
The following are the output results:
From the output results, the localization system consistently maintained high accuracy. At almost every time step, both the predicted and updated belief index (Bel index) remained very close to the ground truth index (GT index).
This experiment successfully implemented a grid-based localization system using the Bayes Filter. The system is able to accurately estimate the robot's position by continuously reducing uncertainty through the fusion of odometry and laser observations during both translational and rotational movements.
The objective of this task was to perform a full 360-degree mapping scan using a mobile robot equipped with BLE communication and ToF sensors, and to process the raw measurements into usable distance and angle data for further localization or mapping.
I obtained the localization results by running lab11_sim.ipynb. The results are as follows:
From the results, we can see that the odometry is not perfect, but the belief obtained with the Bayes filter (blue) is very close to the ground truth (green).
In Lab 11, I needed the robot to rotate 18 times at 20-degree increments. This functionality was already implemented in Lab 9 Mapping. I only had to change the step_angle to 20 to meet the experiment requirements. (See the Lab 9 report for details.)
I believe that in robot experiments, we often need to perform multiple tasks simultaneously, such as receiving sensor data (e.g., ToF distance), controlling motor movement or steering, sending/receiving commands via BLE, and performing real-time computations (like PID control).
Therefore, coroutines are highly suitable for real-time control and multitasking coordination in this lab — they allow non-blocking waits, prevent the main flow from stalling, and enable smarter handling of interactions between sensors and control logic.
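The general shape of such a coroutine is sketched below. This is an illustration rather than my final code; send_cmd, readings, and the START_MAPPING command string are hypothetical placeholders, and the return format should match whatever the localization framework expects:

```python
import asyncio
import numpy as np

async def perform_observation_loop_sketch(send_cmd, readings, n_steps=18, timeout_s=60):
    """Start the on-robot mapping routine, then wait without blocking the
    event loop until all notifications have arrived.
    readings is the list filled by the BLE notification handler."""
    send_cmd("START_MAPPING")                 # hypothetical command name
    waited = 0.0
    while len(readings) < n_steps and waited < timeout_s:
        await asyncio.sleep(1.0)              # yield instead of blocking
        waited += 1.0
    data = np.array(readings)                 # rows: (angle_deg, distance_mm)
    angles = data[:, 0].reshape(-1, 1)
    ranges_m = (data[:, 1] / 1000.0).reshape(-1, 1)
    return ranges_m, angles
```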
Based on the Lab 11 tutorial (Lab 11: Localization), I implemented perform_observation_loop using coroutines. The complete code is shown below:
The above is my final version of the complete code, which differs significantly from my initial implementation. At first, the results from my Bayes filter were extremely inaccurate. Through continuous debugging and modifying the code, I was able to achieve much better results.
I will share my full debugging process below, in hopes that it will help others facing similar issues with Bayes filter prediction accuracy.
Since my Lab 9 implementation worked very well, I initially thought Lab 11 would be easy for me. The success of the previous PID control experiments also gave me confidence that my sensors were functioning correctly.
Below is a video of my first test at position (5, -3):
From the video, we can see that my robot's rotation performance was quite good (at least in my opinion). I also successfully collected ToF data from 18 different angles, with 5 measurements for each angle.
In addition, I followed the angle convention where the positive x-axis is set as 0 degrees, and the robot rotates counterclockwise up to 360 degrees. This is illustrated in the image below (originally shared by Aidan McNay on Ed Discussion — thanks to Aidan for the clear diagram):
Figure credit: Aidan McNay
Originally, I believed that with this setup, the Bayes filter would allow me to accurately predict the robot's position and complete the experiment smoothly. However, the results were disappointing.
As shown in the figure above, there was a significant gap between my prediction and the ground truth, so I began the debugging process.
First, TA Cameron informed me that the robot’s rotation angle range should be (-180, 180] instead of [0, 360). I found this reasonable, because in the IMU lab, I recalled that the angle would suddenly jump from 180° to -179° during rotation. So, I modified my code accordingly, as shown in the figure below:
However, even after adjusting the angle range, the prediction was still inaccurate, indicating that the problem lay elsewhere.
I suspected that my ToF sensor might have been affected by interference because in his report, Steven Sun mentioned: “Also, the sensors might have periodically, sensed the ground instead of the world walls, making it think it’s in a more constrained position than it actually is.” (Source Link).
This idea was reinforced by some of my own ToF readings, such as: Ref Angle: -120.0, ToF: 10.0, 788.0, 788.0, 789.0, 0.0, which clearly suggested abnormal or unstable measurements.
To address this, I increased the time interval between each angle’s ToF readings (I set it to 2 seconds), and added a 0.2-second delay before recording each ToF value. This made the data collection process more stable and reliable.
Additionally, TA Cameron noticed that the positioning sticker at point (0, 0) was slightly raised. To ensure more accurate localization, he reattached the sticker properly.
After the adjustments above, my current ToF readings are shown in the figure below:
As we can see, the data is quite stable. However, the localization results were still not very accurate. This suggests that while I resolved the sensor interference issue, the main problem likely lies elsewhere.
After all the previous debugging steps, the prediction was still inaccurate. I began to suspect that there might be an issue with the orientation of the sensor data. To investigate further, I decided to plot polar plots to help identify the problem.
The polar plots below show the sensor readings at position (5, -3):
It is clear that the contour at (5, -3) is generally correct, but the entire plot is rotated clockwise by about 30 degrees. Therefore, I added this angle offset in my code to correct the orientation.
Additionally, I noticed that the ToF sensor readings occasionally included 0 values, which are clearly invalid. To address this, I added a constraint in the data_received_handler function to ensure that the ToF array only stores values greater than 0.
More importantly, TA Cameron pointed out that in my code, I was calculating the final ToF distance using the average of the 5 readings. This was highly unreliable, as even a single inaccurate reading could significantly distort the result.
Therefore, in my latest version of the code, I switched to using the median value instead, which is much more robust against outliers and noise in the sensor data.
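A minimal sketch of this reduction step is shown below; the function name is hypothetical, the 30-degree offset mirrors the fix described earlier, and the sample input reuses the problematic reading quoted above:

```python
import numpy as np

def reduce_tof_readings(raw_readings, angle_deg, rotation_offset_deg=30):
    """Turn the five raw ToF samples at one heading into a single robust
    distance: discard invalid zeros and take the median instead of the mean
    so a single outlier cannot skew the result."""
    samples = np.asarray(raw_readings, dtype=float)
    valid = samples[samples > 0]                 # drop invalid 0-mm readings
    if valid.size == 0:
        return None, None                        # nothing usable at this heading
    distance_mm = float(np.median(valid))
    corrected_angle = angle_deg + rotation_offset_deg
    return distance_mm, corrected_angle

# Example with the problematic reading mentioned above
print(reduce_tof_readings([10.0, 788.0, 788.0, 789.0, 0.0], -120.0))
# -> (788.0, -90.0)
```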
Additionally, to ensure that both the final ToF sensor data and the corresponding angle data were correctly passed into the Bayes Filter pipeline, I added a print statement inside the get_observation_data function in localization.py. This allowed me to confirm that the data was indeed being transmitted into the filtering process.
It was clear from the print output that the data was indeed passed in correctly. Also, since I had previously set a 30-degree offset, the first angle value appeared as 0.523 (in radians).
After all the debugging efforts above, my final prediction results improved significantly compared to the initial attempts.
After the debugging steps above, the final result is shown in the figure below:
Result at Position (-3, -2)
Result at Position (5, -3)
Result at Position (0, 3)
Result at Position (5, 3)
As we can see, the predicted position is much closer to the ground truth, indicating that the stability and accuracy of the ToF data, as well as the robot’s rotation angle, have a significant impact on prediction performance.
In particular, the result at position (-3, -2) shows high accuracy. This is likely because the position is located within a semi-enclosed space, where most ToF measurements are short-range and stable, making localization easier. In contrast, more open positions require the ToF sensor to detect distant walls, which increases the likelihood of inaccurate readings and therefore reduces prediction accuracy.
The following video was recorded during my testing process:
Special thanks to TA Cameron for his guidance and support throughout my debugging process.
Thanks to Aidan McNay for the helpful image he shared on Ed, which clarified the robot's forward direction and rotation reference.
Appreciation also goes to Steven Sun, whose report made me consider the possibility that the sensor might sometimes detect the ground instead of the walls.
The objective of this task was to enable a mobile robot to autonomously navigate through a predefined sequence of waypoints using BLE communication, PID control, and ToF sensors. The robot was expected to traverse the environment as quickly and accurately as possible, demonstrating reliable path following, precise rotation, and consistent distance estimation for real-time navigation.
To achieve accurate and efficient waypoint navigation, I adopted a modular control strategy combining relative rotation and distance-based PID control. The robot executes each leg of the path by first rotating to the target heading using tuned PID parameters for angular control, then moving forward until the desired distance is reached based on ToF sensor readings.
The reason for selecting this approach is that the results from my Lab 11 were not very accurate, while the results from Lab 5, Lab 6, and Lab 9 were significantly better. This indicates that for tasks involving the ToF sensor and PID control, my robot performs exceptionally well. Therefore, I decided to use PID control to complete the final waypoint navigation lab.
My final path planning is shown in the figure below.
At the beginning, my path planning was as shown in the figure below.
However, during the experiment, the ToF sensor was not able to reliably detect distant walls. As a result, my final path was adjusted to ensure that the robot would face closer walls whenever possible to better determine its position. This will be discussed in more detail in the following sections.
For position control, I used the PID control function implemented in Lab 5: Linear PID Control and Linear Interpolation to control the robot's movement. The following is the relevant pid_control code.
For direction control, I chose to use the code implemented in Lab 9: Mapping, because the robot's rotation was very accurate in that lab. Lab 9 demonstrated the reliability of this code.
Based on the path planning diagram (as shown below), I implemented the state machine transition logic on the Python side.
As shown in the diagram above, I need eight states, one per leg of the path, to complete the entire navigation. I use a list to store these eight states.
Then, by implementing the drive_path function, I enabled the robot to follow the process: enter a state → localize → rotate → pause → advance to the next state.
With this setup, the robot can complete the entire path by running the code just once, without needing to send commands repeatedly during execution.
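To make the sequencing concrete, below is a minimal host-side sketch under these assumptions; the leg list, command strings, and send_cmd helper are placeholders rather than my actual tuned values and BLE commands:

```python
import time

# One entry per leg of the path: (stop distance from the wall in mm,
# heading to rotate to afterwards in degrees). Placeholder values only.
PATH_LEGS = [
    (300, 90), (300, 0), (300, -90), (300, 0),
    (300, -90), (300, 0), (300, 90), (300, 0),
]

def drive_path(send_cmd, legs=PATH_LEGS, pause_s=2.0):
    """Sequencer sketch: for each state, position the robot using the ToF
    distance setpoint (localization), rotate to the next heading, then pause
    so PID errors settle before advancing to the next state."""
    for stop_dist_mm, heading_deg in legs:
        send_cmd(f"DRIVE_TO_DISTANCE|{stop_dist_mm}")   # linear PID leg
        time.sleep(pause_s)
        send_cmd(f"SET_TARGET_YAW|{heading_deg}")       # orientation PID leg
        time.sleep(pause_s)
        send_cmd("STOP")                                # clear integral/error terms
        time.sleep(pause_s)
```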
On top of the PID control code I implemented in Lab 6 and Lab 9, I added a stop command.
This command is mainly used to stabilize the robot and clear the integral and error terms accumulated during rotation and localization.
It keeps each state independent and prevents residual data from the previous state from interfering with the current one.
Before starting the experiment, I needed to determine the PID values for both movement and rotation of the robot.
Additionally, I had to check whether executing the sequence of move → rotate → pause would cause any unexpected behavior.
My idea was that if the robot operated normally after sequentially executing move and rotate commands, then it would confirm that the robot had a functioning basic localization and control system.
The following two videos demonstrate the robot performing two tasks: rotating 90 degrees and stopping at a distance of 200 mm from the wall.
As shown, the robot performed both tasks exceptionally well.
The PID values I selected for the robot's movement and rotation are as follows:
Initially, I designed the path with the robot's sensor facing the forward direction, allowing it to complete the entire route moving forward.
As a result, I encountered the outcome shown in the following video.
From the video, it can be seen that after moving forward for a short distance, the robot suddenly accelerated and failed to stop, indicating that it was unable to detect the wall at a certain point.
After multiple tests, I found that the robot's sensor was inaccurate when detecting walls more than 2500 mm away.
Haixi Zhang suggested that I could try having the robot move backward for the entire path. This way, the sensor would always face the closer wall, which would improve detection accuracy.
After modifying the plan, the robot's performance is shown in the following video.
From the video, it is clear that the robot's localization has improved significantly, and it can now stop correctly at the specified distance from the wall.
However, I noticed a new issue: sometimes the robot mistakenly believes it has already hit the wall or reached the target position, causing it to stop moving prematurely.
For example, in the video, the robot is supposed to move forward from position (2, -3) and stop around 150 mm from the wall, reaching (5, -3). But instead, it stops at (2, -3), indicating that the sensor has already reported a distance of about 150 mm at that point.
This led me to realize that I had overlooked a problem when initially deciding on the sensor placement. In some cases, the sensor readings suddenly drop, which I will explain in detail in Issue Resolved 2.
In the previous section, I explained that the robot's sensor occasionally reports a sudden drop in distance readings. This issue is caused by some flaws in the placement of the sensor.
First, it's important to understand that the ToF sensor has a cone-shaped field of view: like the human eye, it detects not only to the left, right, and straight ahead, but also upward and downward.
With this understanding, let's take a look at the position of the sensor on my robot, as shown in the image below.
As shown, my sensor is positioned in an area surrounded by the wheels and the front part of the robot. The most significant interference comes from the protruding section of the robot’s front, located directly beneath the sensor.
Sometimes, the sensor scans this part of the chassis, which causes the reported distance to suddenly drop. I had previously assumed this was caused by ToF sensor noise, but I only realized the true cause today.
This also explains why sudden zero values appeared in my ToF sensor readings during Lab 11.
To solve this issue, I adjusted the placement of the ToF sensor, as shown in the image below.
I used tissue paper to extend the ToF sensor outward until it aligned with the front edge of the robot. This adjustment helps minimize errors in the ToF sensor readings and prevents sudden drops to very small values.
During the experiment, I found that even when using DMP and after repeatedly tuning the PID values, the robot still couldn't rotate to the exact target angle 100% of the time.
For example, when the robot was supposed to rotate 90 degrees, it would sometimes rotate to around 95 degrees instead.
This slight error could cause the robot to deviate from its path, potentially resulting in a collision with the wall or the ToF sensor detecting the wrong wall.
The following video demonstrates this issue.
As shown, my robot successfully completed 80% of the path navigation. However, during the transition from (5, -3) to (5, 3), a slight rotation error caused the sensor to detect the wrong wall.
To address this issue, I measured the robot's rotation error multiple times and compensated for it when sending rotation commands.
For example, if I want the robot to rotate 90 degrees and I know the typical error is around 3–5 degrees, I set the target angle to 86 degrees instead.
Since most of my later experiments were conducted on the third floor, where the lighting is relatively dim, I made slight adjustments to my path design.
Instead of having the robot immediately rotate 45 degrees and move along the diagonal of a triangle to the target point, I changed the plan to have it rotate 90 degrees, localize, and then rotate another 90 degrees to reach the destination by following the triangle's right-angle sides.
The following is my final path planning diagram.
The following video shows the process of my robot completing the entire path.
From the video, it can be seen that only the point (1, -1) was slightly off due to a minor rotation error, while all other points were reached correctly.
This issue is difficult to fully resolve, as the only solution is to continuously fine-tune the PID values to minimize the error. Factors such as wheel friction, motor input power, and the current state of the battery all influence the result.
Overall, the robot successfully achieved path navigation through state machine transitions.
Fast Robots was undoubtedly the course I dedicated the most time and effort to this semester. I spent almost every weekend in the lab during office hours. When I saw my robot successfully complete the full path navigation, I felt that all the effort was truly worthwhile.
I am deeply grateful to Professor Farrell and all the TAs—without your support, I couldn’t have completed all the labs successfully. I also want to thank all the classmates who helped me throughout the lab sessions—you showed me the true power of collaboration.
I feel very fortunate to have taken this course. With the experience I've gained from Fast Robots, I believe I’ll be much more confident when facing future challenges in embedded systems.
Keep building. Keep debugging. Keep pushing. Every breakthrough starts with persistence!!!