We present MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems, which output a purely geometric map of a static scene. MaskFusion recognizes, segments and assigns semantic class labels to different objects in the scene, while tracking and reconstructing them even when they move independently of the camera.

As an RGB-D camera scans a cluttered scene, image-based instance-level semantic segmentation produces semantic object masks that enable real-time object recognition and an object-level representation of the world map. Unlike previous recognition-based SLAM systems, MaskFusion does not require known models of the objects it can recognize, and it can handle multiple independent motions. And unlike recent semantics-enabled SLAM systems that perform voxel-level semantic segmentation, MaskFusion takes full advantage of instance-level segmentation, fusing semantic labels into an object-aware map. We show augmented-reality applications that demonstrate the unique features of the map output by MaskFusion: it is instance-aware, semantic and dynamic.
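To make two of these ideas concrete, below is a minimal, hypothetical Python sketch of an object-level map (one model per recognized instance) and of semantic label fusion, where per-frame detection scores are accumulated per object so that labels stabilize over time instead of flickering frame to frame. All names here (Detection, ObjectModel, fuse_frame) and the class ids are illustrative assumptions, not MaskFusion's actual API; the real system additionally tracks each object's independent 6-DoF motion and fuses its geometry into a per-object model.

# Minimal, runnable sketch: an object-aware map with semantic label fusion.
# All names are hypothetical; this is not the MaskFusion implementation.

from dataclasses import dataclass, field

@dataclass
class Detection:
    """One instance mask produced by the segmentation network for a frame."""
    instance_id: int   # data-association result: which map object this mask matches
    class_id: int      # semantic class predicted for this mask (arbitrary ids here)
    score: float       # detection confidence

@dataclass
class ObjectModel:
    """One entry of the object-aware map (geometry and pose omitted)."""
    class_scores: dict = field(default_factory=dict)

    def fuse_label(self, class_id: int, score: float) -> None:
        # Accumulate scores across frames so a single-frame misclassification
        # does not change the object's label.
        self.class_scores[class_id] = self.class_scores.get(class_id, 0.0) + score

    def label(self) -> int:
        return max(self.class_scores, key=self.class_scores.get)

def fuse_frame(objects: dict, detections: list) -> None:
    """Fuse one frame's instance detections into the object-level map."""
    for det in detections:
        obj = objects.setdefault(det.instance_id, ObjectModel())
        obj.fuse_label(det.class_id, det.score)

if __name__ == "__main__":
    objects = {}
    # Three frames; object 0 is misclassified (class 7) in the second frame.
    frames = [
        [Detection(0, 41, 0.9), Detection(1, 47, 0.8)],
        [Detection(0, 7, 0.5),  Detection(1, 47, 0.7)],
        [Detection(0, 41, 0.8)],
    ]
    for dets in frames:
        fuse_frame(objects, dets)
    for oid, obj in objects.items():
        print(f"object {oid}: fused label {obj.label()}")  # 0 -> 41, 1 -> 47

Running the example prints the fused label for each map object; note that object 0 keeps its majority label despite the single-frame misclassification, which is the point of fusing labels at the object level rather than trusting each frame's prediction.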


Code

Coming soon.

BibTeX

@article{DBLP:journals/corr/abs-1804-09194,
  author    = {Martin R{\"{u}}nz and
               Lourdes Agapito},
  title     = {MaskFusion: Real-Time Recognition, Tracking and Reconstruction of
               Multiple Moving Objects},
  journal   = {CoRR},
  volume    = {abs/1804.09194},
  year      = {2018},
  url       = {http://arxiv.org/abs/1804.09194},
  archivePrefix = {arXiv},
  eprint    = {1804.09194},
  timestamp = {Mon, 13 Aug 2018 16:48:14 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1804-09194},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Acknowledgements

This work has been supported by the SecondHands project, funded by the EU Horizon 2020 Research and Innovation programme under grant agreement No. 643950.