ROB550: Maebot Lab

Project Goal

We build a mobile ground robot (MaeBot) that can autonomously explore an a priori unknown bounded region, accurately reach specific points of interest (a key and a treasure), and finally detect an opening in the boundary and escape. The challenges include designing an efficient motion controller that accurately follows a desired trajectory, a robust Simultaneous Localization and Mapping (SLAM) module that corrects drift in the odometry, a fast motion-planning algorithm that computes paths to target destinations, and a fail-safe exploration algorithm that guides the robot to completely explore the bounded region on its own.



The MaeBot is assumed to follow the unicycle model. Dead reckoning does not account for errors due to quantization and measurement noise; these errors accumulate over time and cause the odometry estimate to drift away from the true pose. Therefore, a localization method such as SLAM is required to correct the drift.
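As a rough sketch of how dead reckoning integrates encoder ticks under the unicycle (differential-drive) model, consider the update below. The `WHEEL_BASE` and `METERS_PER_TICK` constants are hypothetical calibration values, not the MaeBot's actual parameters:

```python
import math

WHEEL_BASE = 0.15        # distance between wheels [m] (assumed value)
METERS_PER_TICK = 0.0001 # wheel travel per encoder tick [m] (assumed value)

def dead_reckon(x, y, theta, left_ticks, right_ticks):
    """Integrate encoder ticks since the last update into a new pose (x, y, theta)."""
    d_left = left_ticks * METERS_PER_TICK
    d_right = right_ticks * METERS_PER_TICK
    d_center = 0.5 * (d_left + d_right)        # forward travel of the robot center
    d_theta = (d_right - d_left) / WHEEL_BASE  # heading change
    # Integrate about the midpoint heading to reduce discretization error.
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    new_theta = theta + d_theta
    # Wrap the heading to (-pi, pi].
    theta = math.atan2(math.sin(new_theta), math.cos(new_theta))
    return x, y, theta
```

Because each update quantizes wheel travel to whole ticks and trusts noisy measurements, the small per-step errors it ignores are exactly what accumulates into the drift described above.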

The drift of the odometry estimate away from the true pose

We implement two PID-family controllers, one for heading and one for steering. The robot's current position is determined by fusing information from the odometry and SLAM modules. Although SLAM is accurate, it does not run fast enough for real-time control, so we use encoder odometry to estimate the robot's position between consecutive SLAM poses. For heading we use a PI controller, because the intrinsic damping of the motors is already large and the accumulated error is negligible. For steering we use only a P controller: the intrinsic damping is large, and an integral term would induce unexpected overshoot.
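The controller structure above can be sketched with a minimal PI class, where the steering controller is simply the PI controller with its integral gain zeroed. The gain values here are illustrative placeholders, not the tuned gains used on the robot:

```python
class PIController:
    """Minimal PI controller; with ki = 0 it degenerates to a pure P controller."""

    def __init__(self, kp, ki):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0

    def update(self, error, dt):
        """Return the control command for the current error and timestep dt [s]."""
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Heading uses PI; steering uses P only (ki = 0), mirroring the text.
heading_pid = PIController(kp=1.0, ki=0.1)   # gains are assumed, not tuned values
steering_pid = PIController(kp=2.0, ki=0.0)
```

Dropping the integral term for steering avoids integrator windup during long turns, which is one common source of the overshoot the text mentions.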

The result of the square-driving task under PID control

SLAM is the task of building a map of an a priori unknown area while simultaneously using the map constructed so far to localize the robot. In this project, we represent the map as an occupancy grid and use a 2D lidar as our sensor. We develop an action model (a sampling-based odometry motion model), a sensor model (for particle weight computation), and a particle filter to accurately localize the robot. Finally, we implement the A* algorithm to plan paths to targets, together with a map-exploration strategy.
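To make the planning step concrete, here is a small A* sketch over a binary occupancy grid with a 4-connected neighborhood and a Manhattan-distance heuristic. This is a simplified illustration, not the project's implementation, which additionally inflates obstacles into an unsafe region before planning:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied), 4-connected moves.
    Returns a list of (row, col) cells from start to goal, or None if no path."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tiebreaker so the heap never compares cells/parents
    open_set = [(h(start), next(tie), 0, start, None)]       # (f, tie, g, cell, parent)
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:       # already expanded via a cheaper entry
            continue
        came_from[cell] = parent
        if cell == goal:            # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1          # uniform step cost of 1 per cell
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), cell))
    return None
```

With an admissible heuristic like Manhattan distance on a unit-cost grid, A* returns a shortest path while expanding far fewer cells than uninformed search.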

Comparing our SLAM implementation against the staff's implementation and the ground-truth poses from the motion-capture system
Obstacle distances and the A*-planned path: the red area is the unsafe region, black is the obstacle region, white is free space, yellow marks the MaeBot's SLAM pose, and the green line is the planned path.
Ci-Jyun Polar Liang