Augmented ICL-NUIM Dataset
The dataset is based on the original ICL-NUIM dataset, which provides two synthetic models of indoor scenes---a living room and an office---along with complete infrastructure for rendering color and depth videos.
We have augmented the dataset to support the evaluation of complete scene reconstruction pipelines. First, we have created four camera trajectories (two for each scene) that model thorough handheld imaging aimed at comprehensive reconstruction; the average trajectory length is 36 meters and the average surface area coverage is 88%. Second, we have integrated a noise model that combines disparity-based quantization, realistic high-frequency noise, and a low-frequency distortion model estimated on a real depth camera. Third, we have generated a dense point-based surface model for the office scene, which enables measurement of surface reconstruction accuracy. If you use this data, please cite both our paper and the paper that introduced the original dataset.
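As an illustration of how the dense point cloud can serve as ground truth, the sketch below (not the evaluation code used in the paper) computes the mean closest-point distance from a reconstructed point cloud to the ground-truth point cloud. It assumes Open3D is available for PLY loading, and the file names are placeholders.

```python
# Minimal sketch of a surface-accuracy measure: mean distance from each
# reconstructed point to its nearest ground-truth point. Not the paper's
# exact evaluation protocol; file names below are hypothetical.
import numpy as np
import open3d as o3d                  # assumed available for PLY I/O
from scipy.spatial import cKDTree

def mean_surface_error(reconstructed_ply, ground_truth_ply):
    rec = np.asarray(o3d.io.read_point_cloud(reconstructed_ply).points)
    gt = np.asarray(o3d.io.read_point_cloud(ground_truth_ply).points)
    # Nearest-neighbor distance from every reconstructed point to the ground truth.
    dists, _ = cKDTree(gt).query(rec)
    return dists.mean()

if __name__ == "__main__":
    print(mean_surface_error("my_reconstruction.ply", "office_ground_truth.ply"))
```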
Scene | Dense Point Cloud | RGB Sequence | Clean Depth Sequence | Noisy Depth Sequence | ONI Sequence | Ground-truth Trajectory |
---|---|---|---|---|---|---|
Living Room 1 | PLY (21M) | JPG (128M) | PNG (115M) | PNG (249M) | ONI (251M) | TXT (1M) |
Living Room 2 | | JPG (118M) | PNG (94M) | PNG (198M) | ONI (219M) | TXT (1M) |
Office 1 | PLY (29M) | JPG (176M) | PNG (125M) | PNG (238M) | ONI (288M) | TXT (1M) |
Office 2 | | JPG (166M) | PNG (121M) | PNG (232M) | ONI (278M) | TXT (1M) |
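The ground-truth trajectory files can be loaded with a few lines of Python. The sketch below is a minimal parser that assumes the files use the common camera-pose log format (a metadata line followed by a 4x4 camera-to-world matrix for each frame); verify the layout against the downloaded files before relying on it.

```python
# Minimal trajectory parser, assuming each pose is stored as one metadata
# line followed by four lines containing a 4x4 camera-to-world matrix.
import numpy as np

def read_trajectory(path):
    poses = []
    with open(path) as f:
        lines = f.read().splitlines()
    for i in range(0, len(lines) - 4, 5):
        # lines[i] holds frame metadata; the next four lines hold the matrix rows.
        mat = np.array([[float(v) for v in lines[i + j].split()]
                        for j in range(1, 5)])
        poses.append(mat)
    return poses

poses = read_trajectory("livingroom1-traj.txt")   # file name is illustrative
print(len(poses), poses[0])
```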
Code
Task | File | Description |
---|---|---|
Custom trajectory generation | interpolate.py | Use this code to generate a custom trajectory from a sequence of key camera poses (a minimal interpolation sketch is given after this table). |
Color/depth image rendering | ICL-NUIM Code | Refer to the link for the scene models and the POV-Ray commands that render color and depth images from the trajectory generated in the previous step. |
Noise model | simdepth.py | Use this code to produce noisy depth images from clean depth images and a distortion model. Download the distortion model we used; more details on the distortion model are given here (a simplified noise-model sketch is given after this table). |
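For orientation, the following is a minimal sketch of the kind of key-pose interpolation used for custom trajectory generation; it does not reproduce the interface of interpolate.py. Translations are interpolated linearly and rotations with spherical linear interpolation (SLERP), using SciPy. The step count is an assumed parameter.

```python
# Sketch of dense trajectory generation from key camera poses:
# linear interpolation of translations, SLERP of rotations.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(key_poses, steps_per_segment=30):
    """key_poses: list of 4x4 camera-to-world matrices at the key frames."""
    key_poses = np.asarray(key_poses, dtype=float)
    key_times = np.arange(len(key_poses))
    query_times = np.linspace(0, len(key_poses) - 1,
                              (len(key_poses) - 1) * steps_per_segment + 1)
    # Rotations: spherical linear interpolation between key orientations.
    slerp = Slerp(key_times, Rotation.from_matrix(key_poses[:, :3, :3]))
    rotations = slerp(query_times).as_matrix()
    # Translations: component-wise linear interpolation.
    translations = np.array([np.interp(query_times, key_times, key_poses[:, i, 3])
                             for i in range(3)]).T
    out = np.tile(np.eye(4), (len(query_times), 1, 1))
    out[:, :3, :3] = rotations
    out[:, :3, 3] = translations
    return out
```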
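The simplified sketch below illustrates two components of the noise model described above, disparity-based quantization and additive high-frequency noise. The full model in simdepth.py also applies the calibrated low-frequency distortion; the camera constants and noise level used here are assumptions, not the calibrated values.

```python
# Simplified depth-noise sketch: convert depth to disparity, add Gaussian
# high-frequency noise, quantize disparity, and convert back to depth.
# Baseline, focal length, and noise sigma are assumed values.
import numpy as np

def simulate_noisy_depth(depth_m, baseline=0.075, focal=585.0, sigma_disp=0.5):
    """depth_m: clean depth image in meters (zeros treated as missing)."""
    valid = depth_m > 0
    disparity = np.zeros_like(depth_m)
    disparity[valid] = baseline * focal / depth_m[valid]
    # High-frequency noise in disparity space.
    disparity[valid] += np.random.normal(0.0, sigma_disp, disparity[valid].shape)
    # Quantize disparity to 1/8-pixel steps.
    disparity = np.round(disparity * 8.0) / 8.0
    noisy = np.zeros_like(depth_m)
    good = valid & (disparity > 0)
    noisy[good] = baseline * focal / disparity[good]
    return noisy
```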