
Mobile Intelligent Photography & Imaging
Workshop 2024


MIPI Workshop in conjunction with CVPR 2024
Tuesday, June 18, 2024, Arch 213 (East 10), Seattle Convention Center

Challenge Tracks

Overview

With the increasing demand for computational photography and imaging on mobile platforms, developing and integrating advanced image sensors with novel algorithms in camera systems has become prevalent. However, the lack of high-quality data for research and the rare opportunity for in-depth exchange of views between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI).

The workshop's main focus is on MIPI, emphasizing the integration of novel image sensors and imaging algorithms. Together with the workshop, we organize several exciting challenges and invite renowned researchers from both industry and academia to share their insights and recent work. Our challenge includes three tracks:

  • Few-shot RAW Image Denoising
  • Demosaic for HybridEVS Camera
  • Nighttime Flare Removal

Unlike previous workshops that focus on image or video manipulation, restoration and enhancement, or the efficient design of AI models for mobile devices, the central theme of our workshop encompasses new sensors and imaging systems, which are the indispensable foundation of mobile intelligent photography and imaging. As the first workshop of its kind, MIPI aims to solicit relevant solutions and to attract a focused group from both academia and industry for fruitful discussions.

Datasets and Submission

Datasets are available at the CodaLab site of each challenge track. Submissions to all phases will be made through the CodaLab site. Please register on the site and refer to the instructions on how to download the datasets and submit your results. The evaluation metrics of each track are described on the respective site.

Links to CodaLab: Few-shot RAW Image Denoising -- Demosaic for HybridEVS Camera -- Nighttime Flare Removal

Awards

The winning teams of each track will receive a certificate. Challenge participants with the most successful and innovative methods will be invited to present at the MIPI workshop.

Terms and Conditions

Link to Terms and Conditions.

In the event of any inconsistency between the terms and details on this website and those on the CodaLab competition sites, the details on the CodaLab sites shall prevail.

Contact

For more information, please contact us at mipi.challenge@gmail.com.

Call for Papers

CVPR 2024 Workshop:

Besides the challenges, we also provide a platform for researchers, scientists, and engineers from around the world to disseminate their research outcomes and push forward the frontiers of knowledge in areas related to novel image sensors and imaging systems. The MIPI workshop topics include, but are not limited to:

  • Learnable ISPs
  • ToF depth completion
  • RGBW sensor re-mosaic and fusion
  • Nighttime image restoration
  • Under-display camera image restoration
  • Event camera for image/video restoration
  • Learning-based computational imaging systems: efficient algorithms, smart imaging systems, learned data acquisition, task-driven ISPs, etc.

Papers will be peer-reviewed and must comply with the CVPR 2024 proceedings style, format, and length. The camera-ready submission deadline is the same as that of the main conference. Accepted papers must be registered and presented to ensure their inclusion in the IEEE Xplore Library. The submission format is the same as the CVPR 2024 requirements.

Paper submission and review site: Submission Site


Important Dates

Challenges (all deadlines 11:59 PM Pacific Time)
  • Site online: Jan 10, 2024
  • Release of training data and validation data: Jan 15, 2024
  • Validation server online: Jan 19, 2024
  • Release of test data, test server online: Mar 01, 2024
  • Test results submission deadline, test server closed: Mar 06, 2024
  • Fact sheets submission deadline: Mar 06, 2024, extended to Mar 08, 2024
  • Final test and rating results released to participants: Mar 08, 2024, extended to Mar 10, 2024

Workshop Papers (all deadlines 11:59 PM Pacific Time)
  • Site online: Jan 10, 2024
  • CMT online: Jan 15, 2024
  • Paper submission deadline: Mar 17, 2024
  • Supplementary materials deadline: Mar 17, 2024
  • Notification to authors: Mar 27, 2024
  • Camera-ready deadline: Apr 15, 2024
  • Workshop date: Jun 18, 2024

Schedule

Time (GMT-7)

Programme

8:30 – 8:40
Opening and Welcome
8:40 – 9:30
Keynote: How Far Are We From the Restore Any Image Model (RAIM)? [online]

Abstract: Image restoration and enhancement is a widely studied problem in low-level vision. Researchers have proposed a variety of theories and algorithms to reconstruct the desired images stably under different imaging conditions. However, we still encounter many challenging issues in practical photography scenarios. The rapid development of deep learning techniques has significantly improved the performance of computational photography; nonetheless, most trained deep image restoration models still lack sufficient generalization capability. Large AI models, especially pre-trained generative diffusion models, shed light on our dream of developing a Restore Any Image Model (RAIM). How far are we from RAIM? Can generative models address the challenges of real-world photography in complex scenarios? In this talk, we will introduce some recent advances along this line with some preliminary discussions.

Lei Zhang

The Hong Kong Polytechnic University & OPPO Research Institute

9:30 – 10:20
Keynote: Revisiting Image White Balancing [online]

Abstract: White-balance correction is an essential process for accurate color reproduction in digital imaging. Most modern cameras perform white-balance correction early in the camera ISP, where global white-balance gains are estimated from the captured scene and then applied to the raw image. In this talk, I will discuss the challenges of using learning-based methods for in-camera white-balance correction and our work to mitigate these challenges. Additionally, I will address the complexities of correcting white balance in the post-capture stage, where the camera ISP has already applied photofinishing steps to produce the final image. I will present our work on post-capture white-balance correction and editing, and then cover our prior research in spatial white-balance correction, which goes beyond the traditional global white-balance correction applied onboard cameras. Lastly, I will discuss the impact of white-balance correction on other downstream computer vision tasks.

Mahmoud Afifi

Camera Software Engineer, Google
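
The global white-balance step this abstract describes (per-channel gains estimated from the scene and applied to the raw image) can be sketched with the classic gray-world estimator. This is a minimal illustration only, not the learning-based methods presented in the talk; the function below is hypothetical.

    import numpy as np

    def gray_world_white_balance(raw_rgb):
        """Estimate global white-balance gains with the gray-world
        assumption and apply them to a linear (pre-ISP) RGB image."""
        # Gray-world: assume the average scene color is achromatic,
        # so scale each channel so its mean matches the green mean.
        means = raw_rgb.reshape(-1, 3).mean(axis=0)
        gains = means[1] / np.maximum(means, 1e-8)
        return np.clip(raw_rgb * gains, 0.0, 1.0)

    # Example: an image with a reddish cast is pulled toward neutral.
    img = np.clip(np.random.rand(64, 64, 3) * [1.4, 1.0, 0.7], 0.0, 1.0)
    balanced = gray_world_white_balance(img)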

10:20 – 11:10
Keynote: Towards A Better Camera in the App and AR Glass

Abstract: With 422 million daily active users (DAUs, Q1 2024), Snapchat's camera stands as one of the most widely used cameras today. Snapchat users rely on this camera to capture moments, express themselves, communicate visually, and enjoy shared experiences. We empower the camera to cater to these diverse needs through our cutting-edge camera technology. In this presentation, I will unveil a series of our endeavors aimed at enhancing the camera's capabilities. First, I will discuss our efforts to improve the in-app camera's image quality. Second, I will introduce how we make the camera fun through special filters. Third, I will explore how we have integrated additional hardware to create superior cameras for next-generation AR glasses. Finally, I will touch upon our innovations in transparent displays and transparent cameras.

Jian Wang

Staff Research Scientist, Snap Research

11:10 – 12:00
Keynote: Passive Ultra-Wideband Single-Photon Imaging

Abstract: High-speed light sources, fast cameras, and depth sensors have made it possible to image dynamic phenomena occurring in ever smaller time intervals with the help of actively controlled light sources and synchronization. Unfortunately, while these techniques do capture ultrafast events, they cannot simultaneously capture slower ones too.

I will discuss our recent work on passive ultra-wideband single-photon imaging, where we consider the problem of imaging a dynamic scene over an extreme range of timescales simultaneously, from seconds to picoseconds, completely passively: with no light sources under our control, no synchronization, and not much light. We call this new imaging regime passive ultra-wideband imaging and show that it enables post-capture refocusing in time, from transient to everyday timescales, using single-photon cameras. We show the potential of passive ultra-wideband imaging by experimentally demonstrating several never-seen-before abilities, including recording ultra-wideband video that can be played back at 30 Hz to show everyday motions, but can also be played a billion times slower to show the propagation of light itself.

Mian Wei

PhD Student, University of Toronto
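
The post-capture "refocusing in time" described above can be illustrated with a toy sketch: given passively recorded photon arrival timestamps, frames of any duration can be formed after capture. The function and simulated data below are hypothetical, and the spatial dimensions of a real single-photon camera are omitted to keep the timing idea in focus.

    import numpy as np

    def bin_photons(timestamps_s, frame_period_s, t_start_s, t_end_s):
        """Re-bin photon arrival times (seconds) into frames whose
        duration is chosen after capture; returns counts per frame."""
        n_frames = int(np.ceil((t_end_s - t_start_s) / frame_period_s))
        counts, _ = np.histogram(
            timestamps_s, bins=n_frames,
            range=(t_start_s, t_start_s + n_frames * frame_period_s))
        return counts

    # One simulated 2-second stream of arrivals, two playback timescales:
    photons = np.random.uniform(0.0, 2.0, size=1_000_000)
    everyday = bin_photons(photons, 1 / 30, 0.0, 2.0)        # 30 Hz video
    transient = bin_photons(photons, 1e-9, 1.0, 1.0 + 1e-6)  # ns-scale zoom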

12:00 – 12:10
Closing

Keynote Speakers

Lei Zhang

Chair Professor
The Hong Kong Polytechnic University & OPPO Research Institute

Mahmoud Afifi

Camera Software Engineer
Google

Jian Wang

Staff Research Scientist
Snap Research

Mian Wei

PhD Student
University of Toronto

Awards (Certificates)

Nighttime Flare Removal Track
  • 1st: MiAlgo_AI, Xiaomi Inc., China. Team members: Lize Zhang, Shuai Liu, Chaoyu Feng, Luyang Wang, Shuan Chen, Guangqi Shao, Xiaotao Wang, Lei Lei
  • 2nd: BigGuy, Tianjin University; Shenzhen MicroBT Electronics Technology Co. Ltd, China; Shanghai Artificial Intelligence Laboratory, China. Team members: Qirui Yang, Qihua Cheng, Zhiqiang Xu, Yihao Liu, Huanjing Yue, Jingyu Yang
  • 3rd: SFNet-FR, Computer Vision Lab, CAIDAS & IFI, University of Würzburg, Germany. Team members: Florin-Alexandru Vasluianu, Zongwei Wu, George Ciubotariu, Radu Timofte

Report of Nighttime Flare Removal Track - Methods and Results: arXiv
Few-shot RAW Image Denoising Track
  • 1st: MiVideoNR, Video Algorithm Group, Camera Department, Xiaomi Inc., China. Team members: Ruoqi Li, Chang Liu, Ziyi Wang, Yao Du, Jingjing Yang, Long Bao, Heng Sun
  • 2nd: Samsung MX (Mobile eXperience) Business & Samsung Research China - Beijing (SRC-B), Department of Camera Innovation Group, Samsung Electronics; Sun Yat-sen University. Team members: Xiangyu Kong, Xiaoxia Xing, Jinlong Wu, Yuanyang Xue, Hyunhee Park, Sejun Song, Changho Kim, Jingfan Tan, Zikun Liu, Wenhan Luo
  • 3rd: AIIA, Harbin Institute of Technology; Smart City Research Institute of China Electronics Technology Group Corporation. Team members: Mingde Qiao, Junjun Jiang, Kui Jiang, Yao Xiao, Chuyang Sun, Jinhui Hu, Weijian Ruan

Report of Few-shot RAW Image Denoising Track - Methods and Results: arXiv
Demosaic for HybridEVS Camera Track
  • 1st: USTC604, University of Science and Technology of China. Team members: Senyan Xu, Zhijing Sun, Jiaying Zhu, Yurui Zhu, Xueyang Fu, Zheng-Jun Zha
  • 2nd: lolers, Xiaomi Inc., China. Team members: Jun Cao, Cheng Li, Shu Chen, Liang Ma
  • 3rd: Lumos Demosaicker, Harbin Institute of Technology (Shenzhen); IMEC-UGent; Northwestern Polytechnical University. Team members: Shiyang Zhou, Haijin Zeng, Kai Feng, Yongyong Chen, Jingyong Su

Report of Demosaic for HybridEVS Camera Track - Methods and Results: arXiv

Organizers

Workshop Organizers


Xiaoming Li

Nanyang Technological University

Zongsheng Yue

Nanyang Technological University

Chongyi Li

Nanyang Technological University

Shangchen Zhou

Nanyang Technological University

Ruicheng Feng

Nanyang Technological University

Yuekun Dai

Nanyang Technological University

Peiqing Yang

Nanyang Technological University

Chen Change Loy

Nanyang Technological University

Challenge Organizers


Chunle Guo

Nankai University

Xin Jin

Nankai University

Yaqi Wu

SenseTime Research

Jimmy S. Ren

SenseTime Research

Dafeng Zhang

Samsung Research China

Previous MIPI
