Challenge 2 (FlatCam) Datasets
Track 2: FlatCam for Faces
About the Dataset
We provide two types of data for Challenge 2. For each type, we provide both the raw FlatCam sensor measurements and the corresponding FlatCam Tikhonov reconstructions. You need not use all of the data for an individual subchallenge, and some data may be more appropriate for some subchallenges than others. We do not explicitly split the data into training and validation sets; that split is left to the participants.
- The first type of data is the FlatCam Face Dataset (FCFD), a dataset of 87 subjects captured with a FlatCam prototype under a variety of conditions. The test dataset for all subchallenges is composed of images captured in the same manner as the FCFD (but with different subjects); the FCFD therefore bears the closest resemblance to the challenge's test images.
- The second type of data is display-captured FlatCam images. These are images captured by the FlatCam prototype of a computer monitor displaying images from popular datasets. While they are not captures of real scenes, the original images displayed on the monitor provide corresponding ground-truth images.
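For reference, FlatCam reconstruction is commonly posed as a separable Tikhonov-regularized least-squares problem solved via the SVDs of the calibration matrices. The snippet below is only a minimal sketch of that general approach, not the exact code used to produce the provided reconstructions; the calibration matrices `phi_l` and `phi_r` and the regularization weight `lam` are assumed inputs.

```python
import numpy as np

def tikhonov_reconstruct(meas, phi_l, phi_r, lam=1e-4):
    """Separable Tikhonov reconstruction of a single-channel FlatCam measurement.

    Solves  min_X || phi_l @ X @ phi_r.T - meas ||_F^2 + lam * ||X||_F^2
    using the SVDs of the (assumed) calibration matrices phi_l and phi_r.
    Apply per channel for color measurements.
    """
    u_l, s_l, vt_l = np.linalg.svd(phi_l, full_matrices=False)
    u_r, s_r, vt_r = np.linalg.svd(phi_r, full_matrices=False)

    # Project the measurement into the left/right singular bases.
    b = u_l.T @ meas @ u_r

    # Closed-form elementwise Tikhonov solution in the transformed domain.
    num = (s_l[:, None] * s_r[None, :]) * b
    den = (s_l[:, None] ** 2) * (s_r[None, :] ** 2) + lam
    z = num / den

    # Map back to the scene domain.
    return vt_l.T @ z @ vt_r
```

In practice the calibration matrices come with the FlatCam prototype, and `lam` trades off noise suppression against detail; the value above is purely illustrative.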
Training & Evaluation
In all three subchallenges, participating teams are allowed to use external training data not mentioned above, including self-synthesized or self-collected data, but they must state so in their submissions.
FlatCam Face Dataset (FCFD)
The FCFD can be obtained via this LINK.
Display-captured CASIA Dataset
A subset of the CASIA-WebFace dataset [1] containing ~380,000 images of different face identities (organized into subfolders). Note that not all of the original CASIA images were display-captured by the FlatCam.
- Original Images: LINK
- FlatCam Measurements (choose one of the following two options):
  - Option 1 -- Whole dataset (792 GB): LINK
  - Option 2 -- Whole dataset split into smaller files: LINK
- Tikhonov Reconstructions (30 GB): LINK
- Filenames (only those that were display-captured): LINK
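Because only part of CASIA-WebFace was display-captured, the filenames list is what ties the original images to their FlatCam counterparts. The pairing sketch below is illustrative only: the directory names, the one-relative-path-per-line format of the filenames file, and the assumption that the measurement folders mirror the original folder structure are all guesses, not the released layout.

```python
from pathlib import Path

# Hypothetical paths; the actual layout of the downloaded archives may differ.
CASIA_ROOT = Path("casia_webface")          # original images, e.g. <identity>/<image>.jpg
FLATCAM_ROOT = Path("casia_flatcam_meas")   # display-captured FlatCam measurements
FILENAMES_LIST = Path("casia_captured_filenames.txt")

# Assumes one relative path per line in the filenames list.
captured = [line.strip() for line in FILENAMES_LIST.read_text().splitlines() if line.strip()]

# Build (ground-truth image, FlatCam measurement) pairs, skipping anything missing on disk.
pairs = []
for rel in captured:
    gt = CASIA_ROOT / rel
    meas = FLATCAM_ROOT / rel
    if gt.exists() and meas.exists():
        pairs.append((gt, meas))

print(f"{len(pairs)} display-captured pairs found")
```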
Reference [1]: D. Yi, Z. Lei, S. Liao, and S. Z. Li, "Learning face representation from scratch," arXiv preprint arXiv:1411.7923, 2014.
Display-captured WIDER Dataset
A subset of the WIDER FACE dataset [2] containing various crops of the dataset's images captured by the FlatCam. Bounding box information is also provided.
- Original Images: LINK
- FlatCam Measurements (28 GB): LINK
- Tikhonov Reconstructions (1 GB): LINK
- Original Images Bounding Boxes (xywh format): LINK
- Tikhonov Reconstruction Bounding Boxes (xywh format): LINK
- Filenames: LINK
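The bounding boxes for both the original images and the Tikhonov reconstructions are given in xywh format, conventionally the top-left corner followed by the box width and height. The sketch below loads such boxes and converts them to corner (x1, y1, x2, y2) format, which most detection toolkits expect; the annotation file layout, the example path, and the top-left-corner convention are assumptions for illustration.

```python
import numpy as np

def load_xywh_boxes(path):
    """Load bounding boxes stored as whitespace-separated `x y w h` rows.

    The row format is an assumption; check the downloaded annotation files.
    """
    return np.loadtxt(path, ndmin=2, dtype=float)

def xywh_to_xyxy(boxes):
    """Convert [x, y, w, h] boxes (top-left corner + size, assumed) to
    [x1, y1, x2, y2] corner format."""
    x, y, w, h = boxes.T
    return np.stack([x, y, x + w, y + h], axis=1)

# Example with a hypothetical annotation file for one Tikhonov reconstruction:
# boxes = load_xywh_boxes("wider_tikhonov_boxes/0001.txt")
# print(xywh_to_xyxy(boxes))
```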
Reference [2]: S. Yang, P. Luo, C. C. Loy, and X. Tang, "WIDER FACE: A face detection benchmark," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 2016.
If you have any questions about this challenge track, please feel free to email cvpr2020.ug2challenge.track2@gmail.com.