README

====================================================================================
====  Deep Blending for Free-Viewpoint Image-Based Rendering - Dataset Release  ====
====  Peter Hedman, Julien Philip, True Price,                                  ====
====  Jan-Michael Frahm, George Drettakis and Gabriel Brostow                   ====
====  SIGGRAPH Asia 2018                                                        ====
====================================================================================

  The datasets have been split in two:
    1) Reconstruction inputs & outputs.
      These zip files contain everything related to 3D reconstruction. You will find 
      the input images, the COLMAP reconstruction (SfM & MVS), the global textured mesh
      from RealityCapture, and our refined depth maps.
      
    2) IBR inputs & outputs.
      These zip files contain everything needed to render the scene with IBR. You will
      find a slimmed-down SfM reconstruction of the scene (to make things fit in memory),
      the global textured mesh from RealityCapture, our per-view meshes, and the
      camera poses plus our rendered results for a test camera path.

=====================================
== Reconstruction Inputs & Outputs ==
=====================================

  Folder structure:

  colmap/
    This folder contains the input SfM & MVS reconstruction, computed by COLMAP 3.
    You can load and inspect this data with COLMAP (https://colmap.github.io/).
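
    If you want to access the reconstruction programmatically rather than through the
    COLMAP GUI, a minimal sketch along the following lines should work. It assumes the
    pycolmap Python bindings are installed and that the sparse model lives in a
    subfolder such as colmap/sparse (an assumption; adjust it to this scene's layout):

      # Minimal sketch: load and summarize the COLMAP sparse (SfM) model.
      # The "colmap/sparse" path is an assumption; adjust it to the scene's layout.
      import pycolmap

      reconstruction = pycolmap.Reconstruction("colmap/sparse")

      print("registered images:", len(reconstruction.images))
      print("cameras:", len(reconstruction.cameras))
      print("sparse 3D points:", len(reconstruction.points3D))

      # List the registered image names.
      for image_id, image in reconstruction.images.items():
          print(image_id, image.name)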

  input_images/
    The (distorted) input images used to reconstruct this scene. For scenes taken from
    other datasets, we do not include the input images; instead, input_images.txt
    contains a link where you can download them.

  realitycapture/
    The textured, global mesh computed by the commercial RealityCapture tool
    (https://www.capturingreality.com). This folder contains both an obj file
    (with textures) and a ply file (with only vertex colors). Both meshes can be
    inspected using MeshLab (http://www.meshlab.net/).
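
    If you would rather inspect the mesh in code than in MeshLab, a small sketch using
    the Open3D library could look like this (the .ply file name below is a placeholder;
    use the file shipped in this folder):

      # Minimal sketch: load the vertex-colored PLY mesh with Open3D and print
      # some basic statistics. "realitycapture/mesh.ply" is a placeholder path.
      import numpy as np
      import open3d as o3d

      mesh = o3d.io.read_triangle_mesh("realitycapture/mesh.ply")

      print("vertices:", np.asarray(mesh.vertices).shape[0])
      print("triangles:", np.asarray(mesh.triangles).shape[0])
      print("has vertex colors:", mesh.has_vertex_colors())

      # Open an interactive viewer window.
      o3d.visualization.draw_geometries([mesh])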

  refined_depth_maps/
    Our refined depth maps, stored in the COLMAP .bin format. These can be loaded
    and inspected using COLMAP (https://colmap.github.io/). Note that you will need
    to replace the depth maps in colmap/stereo/depth_maps with these .bin files to
    display them in the COLMAP GUI. Alternatively, you can load these .bin files using
    this function: https://github.com/colmap/colmap/blob/dev/src/mvs/mat.h#L157
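
    If you prefer to read the .bin files directly in Python, the sketch below follows
    the layout used by the mat.h function linked above (an ASCII
    "width&height&channels&" header followed by float32 values, width-fastest),
    mirroring COLMAP's own read_write_dense.py helper. The example file name is a
    placeholder:

      # Minimal sketch of a reader for COLMAP's dense .bin maps (e.g. depth maps):
      # an ASCII "width&height&channels&" header followed by float32 data,
      # stored width-fastest. The example file name is a placeholder.
      import numpy as np

      def read_colmap_bin(path):
          with open(path, "rb") as f:
              # Read the ASCII header up to (and including) the third '&'.
              header = b""
              while header.count(b"&") < 3:
                  header += f.read(1)
              width, height, channels = (int(v) for v in header.split(b"&")[:3])
              data = np.fromfile(f, dtype=np.float32)
          # Reshape to (height, width, channels); drop the channel axis if it is 1.
          data = data.reshape((width, height, channels), order="F")
          return np.transpose(data, (1, 0, 2)).squeeze()

      depth = read_colmap_bin("refined_depth_maps/some_image.jpg.geometric.bin")
      print(depth.shape, depth.min(), depth.max())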

==========================
== IBR Inputs & Outputs ==
==========================

  Folder structure:

    global_mesh_with_texture/
      The textured, global mesh computed by the commercial RealityCapture tool
      (https://www.capturingreality.com). This folder contains both an obj file
      (with textures) and a ply file (with only vertex colors). Both meshes can be
      inspected using MeshLab (http://www.meshlab.net/).

    input_camera_poses_as_nvm/
      The input cameras used for rendering and their corresponding camera poses. 
      Note that we use fewer cameras for rendering to make sure that the IBR scene
      fits within memory. We use the NVM file format, which can be read by VisualSFM
      (http://ccwu.me/vsfm/).
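
      If you want to parse the camera poses without VisualSFM, the sketch below reads
      the camera list of a plain NVM_V3 file: each camera line stores the image name,
      the focal length, a w-first rotation quaternion, the camera center, a radial
      distortion coefficient and a trailing zero. It assumes the simple "NVM_V3" header
      without a fixed-calibration string, and the .nvm file name is a placeholder:

        # Minimal sketch: parse the camera list of a plain NVM_V3 file.
        # Assumes the simple "NVM_V3" header (no fixed-calibration string) and
        # image names without spaces. The file name is a placeholder.
        def read_nvm_cameras(path):
            with open(path, "r") as f:
                tokens = f.read().split()
            assert tokens[0] == "NVM_V3"
            num_cameras = int(tokens[1])
            cameras = []
            idx = 2
            for _ in range(num_cameras):
                cameras.append({
                    "name": tokens[idx],
                    "focal": float(tokens[idx + 1]),
                    "quat_wxyz": [float(v) for v in tokens[idx + 2:idx + 6]],
                    "center": [float(v) for v in tokens[idx + 6:idx + 9]],
                    "radial_distortion": float(tokens[idx + 9]),
                })
                idx += 11  # 10 values above plus the trailing zero
            return cameras

        cameras = read_nvm_cameras("input_camera_poses_as_nvm/scene.nvm")
        print(len(cameras), "input cameras")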

    per_input_view_meshes/
      Our refined, per-input-view meshes used for rendering. They are defined in
      world space, aligned with the global textured mesh, and stored in the PLY
      file format, which can be loaded and displayed by MeshLab
      (http://www.meshlab.net/).
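
      Because the per-view meshes live in world space, you can sanity-check the
      alignment by overlaying one of them on the global mesh, for example with Open3D
      (both file names below are placeholders):

        # Minimal sketch: overlay one per-view mesh on the global mesh to check
        # that they align in world space. Both file names are placeholders.
        import open3d as o3d

        global_mesh = o3d.io.read_triangle_mesh("global_mesh_with_texture/mesh.ply")
        per_view_mesh = o3d.io.read_triangle_mesh("per_input_view_meshes/view_0000.ply")

        per_view_mesh.paint_uniform_color([1.0, 0.0, 0.0])  # tint the per-view mesh red
        o3d.visualization.draw_geometries([global_mesh, per_view_mesh])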

    test_camera_paths_with_output_images/
      The test camera path used for the results in the paper. This folder also contains
      the result images for this path, computed with our deep blending network. The
      camera path is stored in the Bundler file format
      (https://www.cs.cornell.edu/~snavely/bundler/bundler-v0.4-manual.html#S6),
      which can be loaded and displayed by MeshLab (http://www.meshlab.net/).
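
      If you want to read the path in code, the sketch below parses the camera block
      of a Bundler v0.3 file as described in the manual linked above: a header comment,
      the camera and point counts, then per camera a focal length with two radial
      distortion coefficients, a row-major 3x3 rotation R, and a translation t (in
      Bundler's convention, the camera center is -R^T t). The .out file name is a
      placeholder:

        # Minimal sketch: parse the cameras of a Bundler v0.3 .out file.
        # Per camera: <f k1 k2>, a row-major 3x3 rotation R, and a translation t.
        # The file name is a placeholder.
        import numpy as np

        def read_bundler_cameras(path):
            with open(path, "r") as f:
                tokens = " ".join(l for l in f if not l.startswith("#")).split()
            num_cameras, num_points = int(tokens[0]), int(tokens[1])
            cameras = []
            idx = 2
            for _ in range(num_cameras):
                values = [float(v) for v in tokens[idx:idx + 15]]
                cameras.append({
                    "focal": values[0],
                    "k1": values[1],
                    "k2": values[2],
                    "R": np.array(values[3:12]).reshape(3, 3),
                    "t": np.array(values[12:15]),
                })
                idx += 15
            return cameras

        path_cameras = read_bundler_cameras("test_camera_paths_with_output_images/path.out")
        print(len(path_cameras), "cameras in the test camera path")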