Introduction

In recent FRC games, having a precise pose estimate and high-quality path following has become increasingly critical to success. Unfortunately, this is one of the hardest software problems we face: too many variables affect wheel odometry for it to be trusted on its own. One of the foundational techniques for addressing this challenge is pose estimation, which means determining the position and orientation of a robot within a given space. However, pose estimation on its own, particularly when based on external references like tags or markers, is susceptible to inaccuracies due to varying detection quality. To improve reliability and performance, integrating pose estimation with robust odometry data offers a promising solution.

Odometry, the use of data from motion sensors to estimate the change in position over time, provides a continuous and smooth state estimate that is crucial for trajectory following and other navigation tasks. This smoothness ensures that control systems, such as PID controllers, do not react to abrupt, erroneous jumps in estimated position, which could lead to unstable or unpredictable behavior. Despite these benefits, odometry has a key flaw: it drifts over time, gradually introducing errors into the position estimate that accumulate and significantly degrade accuracy.
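Concretely, dead-reckoning odometry just integrates measured velocities over small timesteps. Here is a minimal sketch of that idea in plain Python (real FRC code would typically use WPILib's odometry classes fed by encoder and gyro readings rather than raw chassis speeds):

```python
import math

def integrate_odometry(pose, vx, vy, omega, dt):
    """Advance a field-relative (x, y, theta) pose by robot-relative
    chassis speeds (vx, vy in m/s, omega in rad/s) over one small
    timestep, using simple first-order integration."""
    x, y, theta = pose
    # Rotate the robot-relative velocity into the field frame.
    dx = (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    dy = (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    return (x + dx, y + dy, theta + omega * dt)
```

Driving "forward" at 1 m/s for one second of 20 ms steps lands the estimate near x = 1.0; any error in the measured speeds or heading accumulates in exactly the same way, which is the drift described above.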

To mitigate these limitations, our approach combines the strengths of odometry with selective corrections from pose estimation based on tag detections. By carefully choosing when to incorporate these corrections, we can maintain the smoothness and reliability of odometry while periodically realigning the robot's estimated position with more accurate external references. This methodology underpins a sophisticated navigation and task execution system that operates efficiently in complex environments.

So what’s a frame?

Understanding Reference Frames with Familiar Examples

Imagine you're sitting in a car at a stoplight, and next to you is another car. When the light turns green, both cars start moving. From your perspective (inside your car), it might appear as if the other car is stationary and the world outside is moving backward. This is because, in your reference frame (the interior of your car), you are at rest, and everything else is moving relative to you.

Now, consider observing these two cars from a third perspective: someone standing on the sidewalk. To this observer, both cars are moving forward. This shift in perspective changes the observed motion, demonstrating how the reference frame alters our perception of movement.

In code, the odometry frame functions similarly. As the robot moves, it tracks its own movement relative to its starting position, much like tracking the movement of another car from your car's perspective. However, just as the perception of motion changes when observed from the sidewalk, the robot's movement would look different when measured from a stationary point outside the robot, like the FIELD frame in robotics. This analogy highlights that the odometry frame itself shifts along with the robot, not just its state estimate within that frame.

The point being underscored here is that the odometry frame is not static; it shifts as the robot moves. This shift is fundamental to understanding the robot's movement and positioning, especially when integrating data from external sources (like vision systems) to correct for drift or inaccuracies. Recognizing that the odometry frame, and our perception of movement within it, changes with the robot's motion is key to developing effective navigation and control strategies in robotics.
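The car analogy can be made concrete with a little transform arithmetic. Below is a plain-Python sketch of 2D pose composition: a pose (x, y, theta) measured in the odometry frame only becomes a field-relative pose once it is composed with a transform between the two frames. The specific offset used here is hypothetical, and WPILib expresses the same idea with its Pose2d and Transform2d types:

```python
import math

def compose(a_t_b, b_t_c):
    """Chain two 2D rigid transforms (x, y, theta): aTb with bTc gives aTc."""
    ax, ay, ath = a_t_b
    bx, by, bth = b_t_c
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def inverse(a_t_b):
    """Invert a 2D rigid transform: aTb in, bTa out."""
    x, y, th = a_t_b
    c, s = math.cos(th), math.sin(th)
    return (-(x * c + y * s), x * s - y * c, -th)

# A pose measured in the odometry frame...
odom_t_robot = (1.0, 0.0, 0.0)
# ...looks different in the field frame when the odometry frame itself
# sits somewhere else on the field (hypothetical offset):
field_t_odom = (2.0, 1.0, math.pi / 2)
field_t_robot = compose(field_t_odom, odom_t_robot)  # roughly (2.0, 2.0, pi/2)
```

The same robot state yields two different numbers depending on which frame you ask in, which is exactly the sidewalk-versus-driver's-seat distinction from the analogy.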

Following Paths with Reference Frames in Mind

Let's consider the analogy of a human walking through a city. As you walk, you rely primarily on your sense of direction and the distance you've traveled—similar to odometry in robotics. This internal sense provides a continuous and smooth estimate of your position, allowing you to navigate the streets without constantly checking a map. However, like odometry, this internal navigation system is prone to drift; the longer you walk without external reference points, the more likely you are to veer off course.

Now imagine you occasionally use landmarks (e.g., a distinctive building, a park) as external reference points to correct your course. Each time you recognize a landmark, you momentarily refine your sense of location by aligning it with this external point. This is similar to how tag detections can be used to correct the drift in a robot's odometry-based navigation system. However, just as you wouldn't adjust your path for every sign or storefront (especially if you're unsure of its reliability as a landmark), the system selectively accepts corrections based on the quality and relevance of tag detections.
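The "don't trust every storefront" idea translates directly into a gate on vision measurements. The sketch below is hypothetical in its specifics (the parameter names and threshold values are illustrative, not taken from any particular vision library), but it captures the shape of the filter: only detections that look trustworthy are allowed to correct the pose.

```python
def should_accept_vision(ambiguity, tag_distance_m,
                         max_ambiguity=0.2, max_distance_m=4.0):
    """Accept a tag detection only if it looks reliable.

    ambiguity: a 0-1 score of how confusable the tag's pose solution is
    (lower is better); tag_distance_m: distance to the tag in meters.
    Both thresholds are illustrative placeholders, not tuned values.
    """
    return ambiguity < max_ambiguity and tag_distance_m < max_distance_m
```

In practice the thresholds come from tuning, and many teams scale a pose-estimator measurement covariance by distance or ambiguity instead of using a hard accept/reject.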

This balance between relying on a continuous, smooth internal navigation system (odometry) and periodically correcting it with external references (tag detections) enables precise navigation that is both stable and adaptable. In robotics, we implement this concept through the maintenance of multiple reference frames.

Integrating with Vision and Path Following

Vision

Vision systems play a crucial role in enhancing navigation and task execution. They provide vital data that helps correct the drift inherent in odometry over time, a concept we can encapsulate as odomTField. This transform represents the adjustment needed for your odometry based on the collective vision measurements from tags or other pose sources. Essentially, it's the bridge between what your robot thinks it's doing and what it's actually doing, as seen through the lens of external references.
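Here is a plain-Python sketch of how odomTField might be recovered, assuming the naming convention that aTb maps frame-b coordinates into frame a; an actual implementation would use WPILib's geometry classes, and your codebase's convention may differ:

```python
import math

def compose(a_t_b, b_t_c):
    """Chain two 2D rigid transforms (x, y, theta): aTb with bTc gives aTc."""
    ax, ay, ath = a_t_b
    bx, by, bth = b_t_c
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def inverse(a_t_b):
    """Invert a 2D rigid transform: aTb in, bTa out."""
    x, y, th = a_t_b
    c, s = math.cos(th), math.sin(th)
    return (-(x * c + y * s), x * s - y * c, -th)

def update_odom_t_field(odom_t_robot, field_t_robot):
    """Given the robot's pose in the odometry frame and a vision-derived
    pose of the same robot in the field frame, recover odomTField:
    odomTField = odomTRobot * robotTField."""
    return compose(odom_t_robot, inverse(field_t_robot))

def odom_pose_to_field(odom_t_field, odom_t_robot):
    """Convert a subsequent odometry pose into the field frame:
    fieldTRobot = fieldTOdom * odomTRobot."""
    return compose(inverse(odom_t_field), odom_t_robot)
```

A typical pattern is to latch a fresh odomTField only when a vision measurement passes the quality gate, then keep converting smooth odometry poses through it between corrections, so the commanded path never sees a sudden jump.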