Bridging the Gap Between Computational Photography and Visual Recognition:
7th UG2+ Prize Challenge
CVPR 2024

The rapid development of computer vision algorithms increasingly allows automatic visual recognition to be incorporated into a suite of emerging applications. Some of these applications operate in less-than-ideal circumstances, such as low-visibility environments, causing captured images to suffer degradations. In other, more extreme applications, such as imagers for flexible wearables, smart clothing sensors, ultra-thin headset cameras, and implantable in vivo imaging, standard camera systems cannot even be deployed, requiring new types of imaging devices. Computational photography addresses these concerns by designing new computational techniques and incorporating them into the image capture and formation pipeline. This raises a set of new questions. For example, what is the current state of the art in image restoration for images captured under non-ideal circumstances? And how can inference be performed on novel kinds of computational photography devices?

Continuing the success of the 1st (CVPR'18), 2nd (CVPR'19), 3rd (CVPR'20), 4th (CVPR'21), 5th (CVPR'22), and 6th (CVPR'23) UG2 Prize Challenge workshops, we present the 7th edition at CVPR 2024. It inherits the successful benchmark datasets, platform, and evaluation tools of the previous UG2 workshops, while also examining brand-new aspects of the overall problem, significantly augmenting the existing scope.

Original high-quality contributions are solicited on the following topics:
  • Novel algorithms for robust object detection, segmentation or recognition on outdoor mobility platforms (UAVs, gliders, autonomous cars, outdoor robots etc.), under real-world adverse conditions and image degradations (haze, rain, snow, hail, dust, underwater, low-illumination, low resolution, etc.)
  • Novel models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks and various high-level computer vision tasks, and for modeling the underlying degradation and recovery processes of real-world images captured under complicated adverse visual conditions.
  • Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since for most real outdoor images captured under adverse visual conditions it is hard to obtain any clean "ground truth" to compare against.
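As one concrete illustration of a no-reference metric, the variance of the Laplacian is a classic sharpness proxy that requires no clean reference image. The NumPy sketch below (the checkerboard input and 3x3 kernel are illustrative choices, far simpler than the learned no-reference quality metrics the topic above refers to) shows the idea of scoring an image without ground truth:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """No-reference sharpness score: variance of the Laplacian response.

    Higher values indicate sharper images; degraded (blurred/hazy)
    captures typically score lower. Expects a 2-D grayscale array.
    """
    img = img.astype(np.float64)
    # 3x3 Laplacian applied via shifted sums (no SciPy needed)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A sharp checkerboard scores higher than its blurred counterpart.
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 255.0
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)
           + np.roll(sharp, (1, 1), axis=(0, 1))) / 4.0
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

Real no-reference metrics (e.g., NIQE or learned quality predictors) model natural-image statistics rather than a single cue, but the interface is the same: a score computed from the degraded image alone.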

Available Challenges

The UG2+ Challenge seeks to advance the analysis of "difficult" imagery by applying image restoration and enhancement algorithms to improve analysis performance. Participants are tasked with developing novel algorithms to improve the analysis of imagery captured under problematic conditions.

Atmospheric Turbulence Mitigation

The theories of turbulence and the propagation of light through random media have been studied for the better part of a century. Yet progress on the associated image reconstruction algorithms has been slow, as the turbulence mitigation problem has not yet been thoroughly given the modern treatment of advanced image processing approaches (e.g., deep learning methods) that have positively impacted a wide variety of other imaging domains (e.g., classification).

This challenge aims to promote the development of new image reconstruction algorithms for incoherent imaging through anisoplanatic turbulence.
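As a rough illustration of the degradation this track targets, a toy anisoplanatic forward model can be sketched in plain NumPy: each small image block receives its own random tilt (local pixel shift), followed by a blur standing in for the short-exposure point spread function. The block size, tilt statistics, and box blur below are illustrative placeholders, not the challenge's actual turbulence simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_anisoplanatic(img, tilt_std=1.5, grid=8):
    """Toy forward model for anisoplanatic turbulence:
    a spatially varying tilt (one random shift per block),
    followed by a blur approximating the short-exposure PSF.
    """
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    for by in range(0, h, grid):
        for bx in range(0, w, grid):
            # one tilt vector per "isoplanatic" patch
            dy, dx = np.rint(rng.normal(0, tilt_std, 2)).astype(int)
            ys = np.clip(np.arange(by, min(by + grid, h)) + dy, 0, h - 1)
            xs = np.clip(np.arange(bx, min(bx + grid, w)) + dx, 0, w - 1)
            out[by:by + grid, bx:bx + grid] = img[np.ix_(ys, xs)]
    # simple 3x3 box blur standing in for the blur kernel
    p = np.pad(out, 1, mode='edge')
    out = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return out

clean = rng.random((64, 64))
warped = degrade_anisoplanatic(clean)
```

Reconstruction methods for this track must invert both effects jointly; because the tilt field varies across the frame, a single global deconvolution is insufficient.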

WeatherProof Dataset Challenge: Semantic Segmentation on Paired Real Data

Images captured in adverse weather conditions significantly degrade the performance of many vision tasks. Common weather phenomena, including but not limited to rain, snow, and fog, introduce visual degradations into captured images and videos. These degradations may include partial to severe occlusion of objects, illumination changes, noise, etc. Because most vision algorithms assume clear weather, free of interference from airborne particles, their performance suffers due to the domain gap.

This challenge aims to spark the development of novel semantic segmentation algorithms for images captured under adverse weather conditions.
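Degradations such as fog are commonly modeled with the standard atmospheric scattering equation, I = J * t + A * (1 - t) with transmission t = exp(-beta * depth). A minimal NumPy sketch follows; the depth map, beta, and airlight values are illustrative assumptions, not the parameters behind the WeatherProof data:

```python
import numpy as np

def add_fog(img, depth, beta=1.0, airlight=0.9):
    """Synthesize fog via the atmospheric scattering model:
        I = J * t + A * (1 - t),  t = exp(-beta * depth)
    img: clean image in [0, 1]; depth: per-pixel scene depth.
    """
    t = np.exp(-beta * depth)
    if img.ndim == 3:            # broadcast transmission over channels
        t = t[..., None]
    return img * t + airlight * (1.0 - t)

# Distant pixels (large depth) wash out toward the airlight value.
clean = np.zeros((4, 4))
depth = np.linspace(0.0, 5.0, 16).reshape(4, 4)
foggy = add_fog(clean, depth)
```

The same equation explains why segmentation boundaries fade with distance: contrast decays exponentially in depth, which paired clean/degraded data lets a model learn to undo.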

UAV Tracking and Pose Estimation

Recently, UAVs have become a major concern in frontline combat, contraband smuggling, and the endangerment of commercial aircraft, among other areas. Due to their compact size and quiet electric motors, they are difficult for soldiers, law-enforcement personnel, and vehicle-mounted sensors to detect.

This challenge is positioned to play a pivotal role in advancing UAV threat detection, classification, trajectory estimation capabilities, and more.
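Trajectory estimation of the kind this track calls for is often bootstrapped with a constant-velocity Kalman filter over noisy position detections. The NumPy sketch below shows the predict/update loop; the noise covariances and the synthetic track are illustrative assumptions, not part of the challenge:

```python
import numpy as np

# Constant-velocity Kalman filter over 2-D position measurements.
# State x = [px, py, vx, vy]; measurements are noisy positions.
dt = 1.0
F = np.eye(4); F[0, 2] = F[1, 3] = dt   # constant-velocity motion model
H = np.eye(2, 4)                        # observe position only
Q = 0.01 * np.eye(4)                    # process noise covariance
R = 0.25 * np.eye(2)                    # measurement noise covariance

def kf_track(measurements):
    x = np.zeros(4)
    P = np.eye(4)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

rng = np.random.default_rng(1)
truth = np.stack([np.arange(20.0), 0.5 * np.arange(20.0)], axis=1)
noisy = truth + rng.normal(0, 0.5, truth.shape)
smoothed = kf_track(noisy)
```

Challenge entries would typically replace the raw detections with a learned UAV detector and extend the state with altitude or pose, but the filtering backbone is often of this form.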

Keynote speakers

  • Xiaoming Liu (Michigan State University)
  • Achuta Kadambi (University of California, Los Angeles)
  • Kristina Monakhova (Cornell University)
  • TBA

Important Dates

  • Challenge Registration: January 15 - May 1, 2024
  • Challenge Dry-run: January 15 - May 1, 2024
  • Paper Submission Deadline: April 1, 2024
  • Notification of Paper Acceptance: April 5, 2024
  • Paper Camera Ready: April 12, 2024
  • Challenge Final Result Submission: April 30 - May 1, 2024
  • Challenge Winners Announcement: May 25, 2024
  • CVPR Workshop: June (Date TBA), 2024

Advisory Committee

  • Stanley H. Chan (Purdue University)
  • Zhangyang Wang (University of Texas, Austin)
  • Achuta Kadambi (University of California, Los Angeles)
  • Alex Wong (Yale University)
  • Yung-Hsiang Lu (Purdue University)
  • Bradley Preece (US Army Program Executive Office)

Organizing Committee

  • Nick Chimitt (Purdue University)
  • Xingguang Zhang (Purdue University)
  • Ajay Jaiswal (University of Texas, Austin)
  • Wes Robbins (University of Texas, Austin)
  • Yuecong Xu (Agency for Science, Technology and Research, Singapore)
  • Yang Jianfei (Nanyang Technological University, Singapore)
  • Yuan Shenghai (Nanyang Technological University, Singapore)
  • Yang Yizhuo (Nanyang Technological University, Singapore)
  • Hyoungseob Park (Yale University)
  • Fengyu Yang (Yale University)
  • Howard Zhang (University of California, Los Angeles)
  • Rishi Upadhyay (University of California, Los Angeles)