Mobile Intelligent Photography & Imaging
Workshop 2023

MIPI Workshop in conjunction with CVPR 2023
Sunday June 18th, 2023       East 10, Vancouver Convention Center

Challenge Tracks

Overview

The increasing demand for computational photography and imaging on mobile platforms has made it common to develop and integrate advanced image sensors with novel algorithms in camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange of views between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI).

The workshop's main focus is on MIPI, emphasizing the integration of novel image sensors and imaging algorithms. Together with the workshop, we organize several exciting challenges and invite renowned researchers from both industry and academia to share their insights and recent work. Our challenge includes four tracks:

  • Nighttime Flare Removal
  • RGB+ToF Depth Completion
  • RGBW Sensor Re-mosaic
  • RGBW Sensor Fusion

Unlike previous workshops that focus on image or video manipulation, restoration and enhancement, or the efficient design of AI models for mobile devices, the central theme of our workshop is new sensors and imaging systems, which are the indispensable foundation of mobile intelligent photography and imaging. As the first workshop of its kind, MIPI aims to provide a dedicated venue to solicit relevant solutions and attract a focused group from both academia and industry for fruitful discussions.

Datasets and Submission

Datasets are available on the CodaLab site of each challenge track. Submissions to all phases are made through the CodaLab site. Please register on the site and refer to the instructions on how to download the datasets and submit your results. The evaluation metrics of each track are described on the respective site.

Link to CodaLab: Nighttime Flare Removal -- RGB+ToF Depth Completion -- RGBW Sensor Re-mosaic -- RGBW Sensor Fusion

Awards

The winning teams of each track will receive a certificate. Challenge participants with the most successful and innovative methods will be invited to present their work at the MIPI workshop.

Terms and Conditions

Link to Terms and Conditions.

In the event of any inconsistency between the terms and details on this website and the CodaLab competition sites, the details on the CodaLab sites shall prevail.

Contact

For more information, please contact us at mipi.challenge@gmail.com.

Call for Papers

CVPR 2023 Workshop:

In addition to the challenge, the workshop provides a fertile ground for researchers, scientists, and engineers from around the world to disseminate their research outcomes and push forward the frontiers of knowledge in areas related to novel image sensors and imaging systems. The MIPI workshop topics include, but are not limited to:

  • Learnable ISPs
  • ToF depth completion
  • RGBW sensor re-mosaic and fusion
  • Nighttime image restoration
  • Under-display camera image restoration
  • Event camera for image/video restoration
  • Learning-based computational imaging systems: efficient algorithms, smart imaging systems, learned data acquisition, task-driven ISPs, etc.

Papers will be peer-reviewed and must comply with the CVPR 2023 proceedings style, format, and length. The camera-ready submission deadline is the same as for the main conference. Accepted papers must be registered and presented to ensure their inclusion in the IEEE Xplore Library. Selected papers will be invited for presentation at the workshop. Some papers will also be invited to contribute to a book chapter or a special-issue article in a future publication. The submission format is the same as the CVPR 2023 requirements.

Paper submission and review site: Submission Site


Important Dates

Challenge
Event | Date (always 11:59 PM Pacific Time)
Site online | Dec 20, 2022
Release of training data and validation data | Dec 25, 2022
Validation server online | Jan 01, 2023
Release of test data, test server online | Feb 15, 2023
Test results submission deadline, test server closed | Feb 20, 2023
Fact sheets submission deadline | Feb 20, 2023
Final test and rating results release to participants | Feb 22, 2023

Workshop Papers
Event | Date (always 11:59 PM Pacific Time)
Site online | Dec 20, 2022
CMT online | Dec 20, 2022
Paper submission deadline | March 10, 2023 (extended from Feb 12, 2023)
Supplementary materials deadline | March 10, 2023 (extended from Feb 12, 2023)
Notification to authors | March 20, 2023 (extended from Feb 27, 2023)
Camera ready deadline | Apr 08, 2023
Workshop date | Jun 18, 2023

Schedule

Time (GMT-7)

Programme

8:20 – 8:30
Opening and Welcome
8:30 – 9:20
Keynote Speaker
Keynote: Low-Light Image Enhancement for Intelligent Analytics in Smart Cities
  • [Abstract]
  • [Slides]
  • Low-light enhancement has been a long-standing research problem for decades. The rapid development of deep learning has led to the prosperity of low-light enhancement algorithms. However, it remains unexplored how to develop an efficient and concise enhancement paradigm for improving machine vision analysis tasks in smart cities. This work addresses the gap by constructing relevant datasets and benchmarks and by developing exemplary methods and applications. In detail, at the data end, we construct a dataset for both human and machine vision, evaluate various methods on this dataset, and develop a low-light image enhancement method optimized with downstream face detectors. Then, on the method side, we propose lightweight methods that adjust illumination distributions to improve the performance of downstream tasks designed for low-light images. These methods do not need access to labels in low-light conditions and play a plug-and-play role when integrated with different downstream tasks, therefore offering high practical value.
Jiaying Liu

Associate Professor, Peking University

9:20 – 10:10
Keynote Speaker
Keynote: Differentiable Mobile Cameras
  • [Abstract]
Felix Heide

Professor, Princeton University

10:10 – 11:00
Keynote Speaker
Keynote: How to Build a Smart Camera System
  • [Abstract]
  • [Slides]
  • AI-empowered smart cameras are widely used in our daily life, from smart editing on mobile cameras to scene understanding in self-driving cars. A smart camera does not simply apply AI algorithms to images captured by cameras; it redesigns the entire camera pipeline with AI embedded. In this talk, I will share some principles for building such an AI-empowered smart camera pipeline, including how to strengthen visual signals by combining multiple sources of input, how to prepare training data for image processing networks, and how to design an efficient processing pipeline. Finally, I will share some of my thoughts on future directions in this area.
Tianfan Xue

Assistant Professor, The Chinese University of Hong Kong

11:00 – 11:50
Keynote Speaker
Keynote: Extending the Versatility of Neural Radiance Fields (NeRFs)
  • [Abstract]
  • Mildenhall et al. (ECCV 2020) developed a method that synthesizes novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views, which can potentially be mobile phone images. These Neural Radiance Fields (NeRFs) for scene representation have had a large impact, and much subsequent research has built on their seminal work. I will present a short overview of the original paper and then show our contributions to the NeRF literature, namely visually plausible modeling of specific object materials and utilizing the learned prior of the network to improve the underlying geometry.
Sabine Süsstrunk

Professor, EPFL

11:50 – 12:05
Closing

Keynote Speakers

Felix Heide

Professor
Princeton University

Jiaying Liu

Associate Professor
Peking University

Tianfan Xue

Assistant Professor
The Chinese University of Hong Kong

Sabine Süsstrunk

Professor
EPFL

Awards (Certificates)

Nighttime Flare Removal Track
Rank | Team Name | Affiliation | Team Members
1st | MiAlgo | Xiaomi Inc., China | Shuai Liu, Hao Wang, Chaoyu Feng, Luyang Wang, Guangqi Shao, Chenguang Zhang, Xiaotao Wang, Lei Lei
2nd | Samsung Research China - Beijing | Samsung Research China, Beijing, China | Dafeng Zhang, Xiangyu Kong, Guanqun Liu, Mengmeng Bai, Jia Ouyang, Xiaobing Wang, Jiahui Yuan
3rd | MegFR | Megvii Technology, China | Xinpeng Li, Chengzhi Jiang, Ting Jiang, Wenjie Lin, Qi Wu, Mingyan Han, Jinting Luo, Lei Yu, Haoqiang Fan, Shuaicheng Liu
Best Visualization Award | Samsung Research China - Beijing | Samsung Research China, Beijing, China | Dafeng Zhang, Xiangyu Kong, Guanqun Liu, Mengmeng Bai, Jia Ouyang, Xiaobing Wang, Jiahui Yuan
Report of Nighttime Flare Removal Track - Methods and Results: Arxiv
RGB+ToF Depth Completion Track
Rank | Team Name | Affiliation | Team Members
1st | MGTV | MangoTV | Yi Yu, Yangke Huang, Kang Zhang
2nd | MiMcAlgo | Xiaomi Inc., China | Meiya Chen, Yu Wang, Yongchao Li, Hao Jiang
3rd | DIntel | Samsung R&D Institute India - Bangalore | Amrit Kumar Muduli, Vikash Kumar, Kunal Swami, Pankaj Kumar Bajpai
Report of RGB+ToF Depth Completion Track - Methods and Results: Arxiv
RGBW Sensor Re-mosaic Track
Rank | Team Name | Affiliation | Team Members
1st | RUSH MI | Dalian University of Technology, China; Xiaomi Inc., China; Chinese Academy of Sciences, China | Yuqing Liu, Hongyuan Yu, Weichen Yu, Zhen Dong, Binnan Han, Qi Jia, Xuanwu Yin, Kunlong Zuo
2nd | HSTT | Harbin Institute of Technology, China; University of Shanghai for Science and Technology, China; Tongji University, China; Tsinghua University, China; Shanghai AI Laboratory, China | Yaqi Wu, Zhihao Fan, Fanqing Meng, Xun Wu, Jiawei Zhang, Feng Zhang, Sijie Ren
3rd | MegNR | Megvii Technology, China | Mingyan Han, Jinting Luo, Qi Wu, Ting Jiang, Chengzhi Jiang, Wenjie Lin, Xinpeng Li, Lei Yu, Haoqiang Fan, Shuaicheng Liu
Report of RGBW Remosaic Track - Methods and Results: Arxiv
RGBW Sensor Fusion Track
Rank | Team Name | Affiliation | Team Members
1st | RUSH MI | Xiaomi Inc., China; Dalian University of Technology, China; Chinese Academy of Sciences, China | Hongyuan Yu, Yuqing Liu, Weichen Yu, Lin Ge, Xiaolin Zhang, Qi Jia, Heng Zhang, Xuanwu Yin, Kunlong Zuo
2nd | MegNR | Megvii Technology, China | Qi Wu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Mingyan Han, Xinpeng Li, Jinting Luo, Lei Yu, Haoqiang Fan, Shuaicheng Liu
3rd | USTC-Zhalab | University of Science and Technology of China | Kunyu Wang, Chengzhi Cao
Report of RGBW Fusion Track - Methods and Results: Arxiv

Organizers

Chongyi Li

Nanyang Technological University

Shangchen Zhou

Nanyang Technological University

Ruicheng Feng

Nanyang Technological University

Yuekun Dai

Nanyang Technological University

Qingpeng Zhu

SenseTime/Tetras.AI

Qianhui Sun

SenseBrain

Wenxiu Sun

SenseTime/Tetras.AI

Chen Change Loy

Nanyang Technological University

Jinwei Gu

The Chinese University of Hong Kong

Previous MIPI
