pynif3d.datasets¶
- class pynif3d.datasets.BaseDataset(data_directory, mode)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Base dataset class. All custom datasets should inherit from this class and override the required functions.
- Parameters
data_directory (str) – The dataset root directory.
mode (str) – The dataset usage mode (“train”, “val” or “test”).
- download(url, save_directory, archive_format, md5=None)¶
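The download helper fetches an archive and checks it against an MD5 checksum (the md5 argument) before extraction. A minimal sketch of just the verification step, using only the standard library — the helper name check_md5 is illustrative, not part of the pynif3d API:

```python
import hashlib

def check_md5(file_bytes, expected_md5):
    # Compare the MD5 digest of the downloaded bytes against the
    # expected checksum; a mismatch signals a corrupt download.
    return hashlib.md5(file_bytes).hexdigest() == expected_md5

# MD5 of b"hello" is a well-known fixed digest.
assert check_md5(b"hello", "5d41402abc4b2a76b9719d911017c592")
assert not check_md5(b"hello", "0" * 32)
```

Each dataset class below ships a dataset_md5 attribute that plays the role of expected_md5 here.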
- class pynif3d.datasets.Blender(data_directory, mode, scene, half_resolution=False, white_background=True, download=False)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Implementation of the synthetic dataset (Blender).
Please refer to the following paper for more information: https://arxiv.org/abs/2003.08934
Note
This implementation is based on the code from: https://github.com/bmild/nerf
Usage:
mode = "train"
scene = "chair"
dataset = Blender(data_directory, mode, scene)
- Parameters
data_directory (str) – The dataset base directory (see BaseDataset).
mode (str) – The dataset usage mode (see BaseDataset).
scene (str) – The scene name (“chair”, “drums”, “ficus”…).
half_resolution (bool) – Boolean indicating whether to load the dataset at half resolution (True) or full resolution (False). Default is False.
white_background (bool) – Boolean indicating whether to set the dataset’s background color to white (True) or leave it as is (False). Default is True.
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
- dataset_md5 = 'ac0cfb13b1e4ff748b132abc8e8c26b6'¶
- dataset_url = 'https://drive.google.com/u/0/uc?id=18JxhpWD-4ZmuFKLzKlAw-w5PpzZxXOcG'¶
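The white_background option composites the RGBA renders onto a white canvas, as in the reference NeRF code (rgb * alpha + (1 - alpha)). A sketch of that alpha-over blend on a single normalized pixel — the function name is illustrative, not part of the API:

```python
def composite_on_white(rgb, alpha):
    # Alpha-blend a normalized RGB value over a white (1.0) background:
    # out = rgb * alpha + 1.0 * (1 - alpha)
    return tuple(c * alpha + (1.0 - alpha) for c in rgb)

# A fully transparent pixel becomes pure white.
assert composite_on_white((0.0, 0.0, 0.0), 0.0) == (1.0, 1.0, 1.0)
# A fully opaque pixel keeps its color.
assert composite_on_white((0.2, 0.4, 0.6), 1.0) == (0.2, 0.4, 0.6)
```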
- class pynif3d.datasets.DTUMVSIDR(data_directory, mode, scan_id, download=False, **kwargs)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Implementation of the DTU MVS dataset, as used in the IDR paper:
Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance, Yariv et al., NeurIPS 2020.
Please refer to the following paper for more information: https://arxiv.org/abs/2003.09852
Usage:
mode = "train"
scan_id = 110
dataset = DTUMVSIDR(data_directory, mode, scan_id)
- Parameters
data_directory (str) – The dataset base directory (see BaseDataset).
mode (str) – The dataset usage mode (see BaseDataset).
scan_id (int) – ID of the scan.
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
kwargs (dict) –
calibration_file (str): The name of the calibration file. Default is “cameras_linear_init.npz”.
- dataset_md5 = 'b1ad1eff5c4a4f99ae4d3503e976dafb'¶
- dataset_url = 'https://www.dropbox.com/s/ujmakiaiekdl6sh/DTU.zip?dl=1'¶
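The calibration_file keyword falls back to "cameras_linear_init.npz" when the caller does not supply it. The usual pattern behind such an optional keyword looks like this — a sketch of the mechanism, not the actual implementation:

```python
def resolve_calibration_file(**kwargs):
    # Fall back to the documented default when the caller does not
    # override the calibration file name.
    return kwargs.get("calibration_file", "cameras_linear_init.npz")

assert resolve_calibration_file() == "cameras_linear_init.npz"
assert resolve_calibration_file(calibration_file="cameras.npz") == "cameras.npz"
```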
- class pynif3d.datasets.DTUMVSPixelNeRF(data_directory, mode, scan_ids_file, download=False)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Implementation of the DTU MVS dataset, as used in the pixelNeRF paper:
pixelNeRF: Neural Radiance Fields from One or Few Images, Yu et al., CVPR 2021.
Please refer to the following paper for more information: https://arxiv.org/abs/2012.02190
- Parameters
data_directory (str) – The dataset base directory (see BaseDataset).
mode (str) – The dataset usage mode (see BaseDataset).
scan_ids_file (str) – The path to the file that contains the IDs of the scans that need to be processed.
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
- dataset_md5 = '02af85c542238d9832e348caee2a6bba'¶
- dataset_url = 'https://drive.google.com/uc?id=1aTSmJa8Oo2qCc2Ce2kT90MHEA6UTSBKj'¶
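Unlike the other classes, DTUMVSPixelNeRF selects its scans through a file rather than a single ID. Assuming a plain one-entry-per-line layout (the exact file format is an assumption here), parsing it reduces to a sketch like:

```python
import io

def parse_scan_ids(file_obj):
    # Read one scan ID per line, skipping blank lines. The
    # one-ID-per-line layout is an assumption about the file format.
    return [line.strip() for line in file_obj if line.strip()]

ids = parse_scan_ids(io.StringIO("scan1\nscan21\n\nscan103\n"))
assert ids == ["scan1", "scan21", "scan103"]
```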
- class pynif3d.datasets.DeepVoxels(data_directory, mode, scene, download=False)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Loads DeepVoxels data from a given directory into a Dataset object.
Please refer to the following paper for more information: https://arxiv.org/abs/1812.01024
Project page: https://vsitzmann.github.io/deepvoxels
Note
This implementation is based on the code from: https://github.com/bmild/nerf
Usage:
mode = "train"
scene = "bus"
dataset = DeepVoxels(data_directory, mode, scene)
- Parameters
data_directory (str) – The dataset base directory (see BaseDataset).
mode (str) – The dataset usage mode (see BaseDataset).
scene (str) – The scene name (“armchair”, “bus”, “cube”…).
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
- dataset_md5 = 'd715b810f1a6c2a71187e3235b2c5c56'¶
- dataset_url = 'https://drive.google.com/u/0/uc?id=1lUvJWB6oFtT8EQ_NzBrXnmi25BufxRfl'¶
- class pynif3d.datasets.LLFF(data_directory, mode, scene, factor=8, recenter=True, bd_factor=0.75, spherify=False, path_z_flat=False, download=False)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Loads LLFF data from a given directory into a Dataset object.
Please refer to the following paper for more information: https://arxiv.org/abs/1905.00889
Note
This implementation is based on the code from: https://github.com/bmild/nerf
Usage:
mode = "train"
scene = "fern"
dataset = LLFF(data_directory, mode, scene)
- Parameters
data_directory (str) – The dataset base directory (see BaseDataset).
mode (str) – The dataset usage mode (see BaseDataset).
scene (str) – The scene name (“fern”, “flower”, “fortress”…).
factor (float) – The factor by which to reduce the image size. Default is 8.
recenter (bool) – Boolean flag indicating whether to re-center poses (True) or not (False). Default is True.
bd_factor (float) – The factor to rescale poses by. Default is 0.75.
spherify (bool) – Boolean flag indicating whether the poses should be converted to spherical coordinates (True) or not (False). Default is False.
path_z_flat (bool) – (TODO: Add explanation). Defaults to False.
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
- dataset_md5 = '74cc8bd336e9a19fce3c03f4a1614c2d'¶
- dataset_url = 'https://drive.google.com/u/0/uc?id=16VnMcF1KJYxN9QId6TClMsZRahHNMW5g'¶
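The bd_factor option rescales the pose translations and depth bounds so the scene sits in a convenient depth range; in the reference NeRF code the scale is 1 / (nearest bound × bd_factor), so the nearest bound lands at 1 / bd_factor. A sketch of that bound rescaling under the same assumption:

```python
def rescale_bounds(bounds, bd_factor=0.75):
    # Following the reference NeRF code: scale everything so that the
    # nearest depth bound lands at 1 / bd_factor. Pose translations are
    # rescaled by the same factor (omitted here).
    scale = 1.0 / (min(bounds) * bd_factor)
    return [b * scale for b in bounds]

bounds = rescale_bounds([2.0, 6.0], bd_factor=0.75)
# The nearest bound becomes 1 / 0.75.
assert abs(bounds[0] - 1.0 / 0.75) < 1e-9
```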
- class pynif3d.datasets.Shapes3dDataset(data_directory, mode, download=False, **kwargs)¶
Bases: Generic[torch.utils.data.dataset.T_co]

Loads ShapeNet and Synthetic Indoor Scene data from a given directory into a Dataset object.
Please refer to the Convolutional Occupancy Networks (CON) paper for more information: https://arxiv.org/abs/2003.04618
Note
This implementation is based on the original one, which can be found here: https://github.com/autonomousvision/convolutional_occupancy_networks
Usage:
mode = "train"
dataset = Shapes3dDataset(data_directory, mode)
- Parameters
data_directory (str) – The parent directory of the dataset.
mode (str) – The subset of the dataset. Has to be one of (“train”, “val”, “test”).
download (bool) – Flag indicating whether to automatically download the dataset (True) or not (False).
kwargs (dict) –
categories (list): List of strings defining the object categories. Default is None.
points_filename (str): The name for the points file. Default is “points.npz”.
pointcloud_filename (str): The name for the pointcloud file. Default is “pointcloud.npz”.
unpackbits (bool): Boolean flag indicating whether bit unpacking is applied when loading point cloud and occupancy data. Default is True.
gt_point_sample_count (uint): The number of point samples used as ground truth. Default is 2048.
in_points_sample_count (uint): The number of point samples used as input to the network. Default is 3000.
in_points_noise_stddev (float): The standard deviation of the noise added to the input points. Setting it to 0 disables noise addition. Default is 0.005.
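The unpackbits flag exists because, in the CON data layout, the boolean occupancy labels are stored packed eight to a byte; loading expands them back to one value per point with numpy. A sketch of that round trip (assuming numpy, as in the original implementation):

```python
import numpy as np

# Eight boolean occupancies packed into a single byte on disk...
occupancies = np.array([1, 0, 1, 1, 0, 0, 0, 1], dtype=np.uint8)
packed = np.packbits(occupancies)
assert packed.shape == (1,)

# ...and expanded back to one value per point at load time.
# Slicing trims padding bits when the count is not a multiple of 8.
unpacked = np.unpackbits(packed)[: len(occupancies)]
assert (unpacked == occupancies).all()
```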