MAM3SLAM: Towards underwater robust multi-agent visual SLAM
Abstract
Some underwater applications involve deploying multiple underwater Remotely Operated Vehicles (ROVs) in a common area. Such applications require the localization of these vehicles, not only with respect to each other but also with respect to a previously unknown environment. To this end, this work presents MAM3SLAM, a new fully centralized multi-agent and multi-map monocular Visual Simultaneous Localization And Mapping (VSLAM) framework. Multi-agent evaluation metrics are introduced to provide an extensive evaluation of the proposed approach against state-of-the-art multi-agent visual SLAM methods on four two-agent scenarios, including one standard airborne dataset and three new underwater datasets recorded in a pool and in the sea. The results show that MAM3SLAM is robust to underwater visual conditions and tracking failures, and that it outperforms the other evaluated methods both in estimating the individual and relative poses of the agents and in collaborative mapping accuracy. Indeed, MAM3SLAM reaches an accuracy of less than 3 cm on three of the four test sequences, and it is the only algorithm able to produce a consistent output on all the test sequences. MAM3SLAM's source code is made available, as are the underwater datasets.