A Comprehensive Study of Deep Learning Visual Odometry for Mobile Robot Localization in Indoor Environments
Abstract
This paper investigates the characteristics and performance of deep learning-based visual odometry for mobile robot localization in small-scale 2D indoor environments. Our study begins by developing a highly accurate multi-sensor fusion localization method that integrates data from several sensors: cameras, inertial measurement units (IMUs), an indoor positioning system (Marvelmind), and wheel encoders. Using this method, we created a comprehensive dataset that captures the robot's movements in controlled indoor settings. We then conducted an extensive comparative study of several deep learning-based visual odometry methods, evaluating their strengths and weaknesses on public datasets. From this comparison, we identified Deep Patch Visual Odometry (DPVO) as the most effective approach. We subsequently made several enhancements to DPVO to further improve its localization accuracy, and applied the improved method to our newly created dataset, enabling rigorous testing and validation in realistic indoor scenarios. The results were compared with localization data obtained from the other sensing modalities.