Improving Semantic Mapping with Prior Object Dimensions Extracted from 3D Models
Abstract
Semantic mapping in mobile robotics has gained significant attention recently for its important role in equipping robots with a comprehensive understanding of their surroundings. This understanding involves enriching metric maps with semantic data covering object categories, positions, models, relations, and spatial characteristics. Such augmentation enables robots to interact with humans, navigate semantically using high-level instructions, and plan tasks efficiently. This study presents a novel real-time RGBD-based semantic mapping method designed for autonomous mobile robots. It focuses specifically on 2D semantic mapping in environments where prior knowledge of object models is available. Leveraging RGBD camera data, our method generates a primitive object representation using convex polygons, which is then refined by integrating prior knowledge: predefined bounding boxes derived from real 3D object dimensions are used to cover the actual object surfaces. An evaluation conducted in two distinct office environments (a simple and a complex setting) with the MIR mobile robot demonstrates the effectiveness of our approach. A comparative analysis shows that our method outperforms a similar state-of-the-art approach that relies on RGBD data alone for mapping. Our approach accurately estimates the occupancy zones of partially visible or occluded objects, resulting in a semantic map closely aligned with the ground truth.
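The core idea of completing a partially observed object footprint with a bounding box of known real dimensions could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `fit_prior_footprint`, the PCA-based orientation estimate, and the anchoring heuristic (extending the prior-sized box away from the observed edge) are all assumptions made for the example.

```python
import numpy as np

def fit_prior_footprint(observed_xy, width, depth):
    """Estimate the full 2D occupancy zone of a partially observed object.

    observed_xy : (N, 2) array of map-frame points belonging to the object
                  (e.g. projected RGBD detections forming a convex polygon)
    width, depth: prior footprint dimensions taken from the 3D object model

    Returns the 4 corners of an oriented rectangle of the prior size that
    covers the observed points.  (Hypothetical helper for illustration.)
    """
    pts = np.asarray(observed_xy, dtype=float)
    centroid = pts.mean(axis=0)
    centred = pts - centroid

    # Assumed heuristic: the dominant PCA axis of the observed points
    # approximates the object's yaw in the map frame.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])        # unit vector along width
    normal = np.array([-axis[1], axis[0]])      # perpendicular (depth) axis

    # Project points onto the two axes and anchor the prior-sized box at the
    # observed near edges, so the unseen part extends past the data.
    u = centred @ axis
    v = centred @ normal
    u0, v0 = u.min(), v.min()                   # near edges (observed side)
    u1, v1 = u0 + width, v0 + depth             # far edges from prior dims

    corners_local = np.array([[u0, v0], [u1, v0], [u1, v1], [u0, v1]])
    # Map local (u, v) corners back to map-frame coordinates.
    return centroid + corners_local @ np.vstack([axis, normal])


if __name__ == "__main__":
    # Only one corner region of a 1.2 m x 0.6 m table is visible to the camera.
    partial_view = np.array([[0.0, 0.0], [0.4, 0.0], [0.4, 0.2], [0.0, 0.2]])
    print(fit_prior_footprint(partial_view, width=1.2, depth=0.6))
```

In this toy case the estimated rectangle extends the visible 0.4 m x 0.2 m patch to the full 1.2 m x 0.6 m footprint, which mirrors the abstract's claim of recovering occupancy zones for partially visible or occluded objects.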
Domains
Computer Science [cs]