
Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To gain a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene.

We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes.
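
The sketch below illustrates this local-to-local mapping in Python. It is a simplified illustration of the idea rather than the paper's implementation: the raw depth patches used as features, the fixed-size local voxel grid output, and the scikit-learn RandomForestRegressor are all stand-in assumptions, and fusing the overlapping local predictions into a single scene volume is omitted.

# Minimal sketch (not the authors' code): a supervised model maps a local
# observation in a single depth image to an occupancy estimate for the
# voxels in the surrounding neighborhood.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PATCH = 16  # side of a local depth patch in pixels (illustrative)
VOX = 8     # side of the predicted local voxel grid (illustrative)

def local_patches(depth, step=8):
    """Yield flattened local observations from a single depth image."""
    h, w = depth.shape
    for r in range(0, h - PATCH + 1, step):
        for c in range(0, w - PATCH + 1, step):
            yield depth[r:r + PATCH, c:c + PATCH].ravel()

def train(patches, local_voxel_grids):
    """Learn a map from depth patches to VOX**3 occupancy vectors, using
    training pairs rendered from already available volumetric models."""
    model = RandomForestRegressor(n_estimators=50)
    model.fit(np.asarray(patches), np.asarray(local_voxel_grids))
    return model

def predict_neighborhoods(model, depth):
    """Estimate a local occupancy grid around every observed patch; a full
    system would fuse these overlapping estimates into one scene volume."""
    X = np.asarray(list(local_patches(depth)))
    return model.predict(X).reshape(-1, VOX, VOX, VOX)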

@INPROCEEDINGS{FirmanCVPR2016,
 author    = {Firman, Michael and Mac Aodha, Oisin and Julier, Simon and Brostow, Gabriel J.},
 title     = {{Structured Prediction of Unobserved Voxels From a Single Depth Image}},
 booktitle = {CVPR},
 year      = {2016},
}