#Inertial Odometry #Drone #Autonomous Systems #Machine Learning #Robotics
Extended Model-Based Learned Inertial Odometry - ICRA 26
Publication
This project is under peer review at the IEEE International Conference on Robotics and Automation (ICRA).
Introduction
Accurate state estimation is at the heart of agile drone flight. While cameras are commonly used for localization, they fail in low light, during high-speed motion, or in textureless environments. Inertial odometry — using only IMU (Inertial Measurement Unit) data — offers a lightweight and robust alternative, but it suffers from drift over time due to sensor noise and bias.
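The drift problem is easy to see in a toy dead-reckoning computation. The sketch below (a hypothetical illustration, not code from this project; the bias value and IMU rate are assumed) double-integrates the readings of a stationary IMU with a small constant accelerometer bias — the bias turns into a position error that grows roughly quadratically with time.

```python
# Hypothetical illustration of IMU drift: dead-reckoning a *stationary*
# sensor whose accelerometer has a small constant bias.
dt = 0.005     # 200 Hz IMU (assumed rate)
bias = 0.02    # accelerometer bias in m/s^2 (assumed value)
t_end = 60.0   # integrate for one minute

v, p = 0.0, 0.0
for _ in range(int(t_end / dt)):
    a = 0.0 + bias   # true acceleration is zero; the measurement is biased
    v += a * dt      # integrate acceleration -> velocity
    p += v * dt      # integrate velocity -> position

print(f"position drift after {t_end:.0f} s: {p:.1f} m")  # ~36 m from a 0.02 m/s^2 bias
```

A bias of just 0.02 m/s² accumulates tens of meters of position error within a minute, which is why raw integration is unusable and learned or filtered corrections are needed.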
In this project, I explored how far we can push IMU-only odometry by extending a recent learning-based inertial odometry (IMO) framework for autonomous drone racing. The core idea was to make the learned model “aware” of the drone’s underlying physics — not just what it feels through accelerations and rotations, but also the forces and torques acting on its body.
Method
By incorporating the full quadrotor dynamics model, including body-frame torques, into the learning pipeline, our system generalizes significantly better to flight trajectories not seen during training. A Temporal Convolutional Network (TCN) predicts short-term positional displacements from IMU and thrust data; these displacements are then fused in an Extended Kalman Filter (EKF) for continuous pose estimation.
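To make the dynamics input concrete, here is a minimal sketch of the standard quadrotor rigid-body mapping from per-rotor thrusts to the collective thrust and body-frame torques. This is generic textbook quadrotor modeling, not the project's actual implementation; the "x" rotor configuration, arm length, drag coefficient, and mass values are all assumptions for illustration.

```python
import numpy as np

# Assumed parameters for an illustrative "x"-configuration quadrotor.
l = 0.08       # arm length [m] (assumed)
kappa = 0.016  # rotor drag-to-thrust ratio (assumed)
m = 0.75       # mass [kg] (assumed)
g = 9.81       # gravity [m/s^2]

def wrench(f):
    """Map per-rotor thrusts f = [f1, f2, f3, f4] (N) to collective
    thrust along body z and the body-frame torque vector."""
    f1, f2, f3, f4 = f
    thrust = f1 + f2 + f3 + f4                        # collective thrust
    tau_x = l / np.sqrt(2) * ( f1 - f2 - f3 + f4)     # roll torque
    tau_y = l / np.sqrt(2) * (-f1 - f2 + f3 + f4)     # pitch torque
    tau_z = kappa * (f1 - f2 + f3 - f4)               # yaw torque from rotor drag
    return thrust, np.array([tau_x, tau_y, tau_z])

# Sanity check: at hover, equal rotor thrusts balance gravity and
# all body torques cancel.
thrust, tau = wrench([m * g / 4] * 4)
```

Feeding these torque terms to the network, alongside the raw IMU and thrust signals, is what makes the learned model "aware" of the forces acting on the body rather than only their inertial effects.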
Results
We validated our approach on the Blackbird and DIDO drone flight datasets and deployed it on a real racing quadrotor. The results show up to a 61% reduction in relative error compared to the original IMO on unseen trajectories — demonstrating that embedding physical insight into learned models enhances robustness and transferability.
This work contributes to the ongoing effort to make autonomous drones faster, smaller, and more resilient — capable of navigating even when vision fails.