Overview
Challenge Tracks
Alongside the MIPI workshop, we host several challenges that advance mobile intelligent photography and imaging across industry and academia. Explore the MIPI challenge tracks below and join the competition.
Detailed Image Quality Assessment
Achieve detailed image quality assessment (IQA) with three subtasks: fine-grained quality grounding, detailed quality perception, and descriptive quality reasoning.
Challenge Site

Deblurring for Hybrid EVS Camera
Restore high-quality RGB frames by jointly leveraging captured event data and blurred RGB images (see the first sketch after this track list).
Challenge Site

Aberration Correction for Mobile Camera
Restore high-quality RAW images by correcting spatially varying optical aberrations introduced during lens manufacturing (see the second sketch after this track list).
Challenge Site
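
As a toy illustration for the Deblurring for Hybrid EVS Camera track (the first sketch referenced above): event cameras record per-pixel log-intensity changes, and the event double integral (EDI) model from the event-camera deblurring literature ties a blurred frame to the latent sharp frame through the accumulated events. The sketch below is a simplified single-channel rendition under assumed array layouts and a made-up contrast threshold; it is not a challenge baseline.

```python
import numpy as np

def edi_deblur(blurred, event_bins, c=0.2):
    """Toy event-double-integral (EDI) deblurring for one channel.

    blurred:    (H, W) frame = time-average of the latent intensity over exposure
    event_bins: (H, W, T) signed event counts per pixel in T time bins
    c:          event contrast threshold (sensor-dependent; value assumed here)

    Model: L(t) = L0 * exp(c * E(t)) and B = mean_t L(t),
    hence the latent sharp frame is L0 = B / mean_t exp(c * E(t)).
    """
    E = np.cumsum(event_bins, axis=-1)        # E(t): events accumulated since t0
    gain = np.exp(c * E).mean(axis=-1)        # mean_t exp(c * E(t))
    return blurred / np.maximum(gain, 1e-6)   # latent sharp frame at exposure start
```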
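
The second sketch, for the Aberration Correction for Mobile Camera track: "spatially varying" correction can be pictured as deconvolving each image region with its own locally estimated point spread function (PSF). Below is a textbook per-tile Wiener filter; `psf_grid`, the tile size, and the SNR constant are assumptions, and a real pipeline would blend overlapping tiles rather than cut hard seams. It is not a method used or endorsed by the challenge.

```python
import numpy as np

def wiener_deconv(patch, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution of one patch with its local PSF."""
    H = np.fft.fft2(psf, s=patch.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * W))

def correct_aberrations(raw, psf_grid, tile=128):
    """Spatially varying correction: each tile gets its own locally estimated PSF.

    raw:      (H, W) single Bayer-plane RAW image (one channel for simplicity)
    psf_grid: dict mapping tile index (i, j) -> local PSF for that region
    """
    out = np.zeros_like(raw, dtype=np.float64)
    for i in range(0, raw.shape[0], tile):
        for j in range(0, raw.shape[1], tile):
            psf = psf_grid[(i // tile, j // tile)]
            out[i:i + tile, j:j + tile] = wiener_deconv(raw[i:i + tile, j:j + tile], psf)
    return out
```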
Datasets and Submission
Datasets are available at the CodaLab site of each challenge track. Submissions for all phases are made through the CodaLab site. Please register on the site and follow the instructions there to download the datasets and submit your results. The evaluation metrics for each track are described on the respective site.

Awards
The winning teams of each track will receive a certificate.
Terms
See our Terms and Conditions for competition rules and privacy policy.
The CodaLab sites take precedence in case of any inconsistency.
Contact
For more information, please contact us at mipi.challenge@gmail.com.
Call for Papers
ICCV 2025 Workshop
Besides the challenges, the MIPI workshop invites paper submissions from researchers, scientists, and engineers across academia and industry to share their novel work on mobile image sensing and intelligent imaging systems. Workshop topics include, but are not limited to:
- Generative models for low-level vision
- Image/video quality assessment
- Image/video enhancement and restoration
- Image/video editing and processing
- Efficient AI: model compression and acceleration
- Aberration correction
- Nighttime image restoration
- Under-display camera image restoration
- Event camera for image/video restoration
- Learning-based computational imaging systems: efficient algorithms, smart imaging systems, learned data acquisition, task-driven ISPs, etc.
Submission Details
Papers will be peer-reviewed and must comply with the ICCV 2025 proceedings style, format, and length requirements. The camera-ready deadline aligns with that of the main conference. Accepted papers must be registered and presented to ensure inclusion in the IEEE Xplore Library. For details, refer to the ICCV 2025 Author Guidelines.

Important Dates
1. The Detailed Image Quality Assessment and Aberration Correction for Mobile Camera tracks have been extended by one month! For exact dates and official notices, please see their challenge sites.
2. Submission deadline for both workshop and challenge papers: July 08, 2025
Challenge Dates
Event | Date (Always 11:59 PM HST) |
---|---|
Challenge site online | May 23, 2025 |
Release of training data and validation data | May 23, 2025 |
Validation server online | May 28, 2025 |
Release of test data, test server online | Jun 28, 2025 (Jul 28, 2025 for IQA and Aberration Correction tracks) |
Test results submission deadline, test server closed | Jul 03, 2025 (Aug 03, 2025 for IQA and Aberration Correction tracks) |
Fact sheets submission deadline | Jul 05, 2025 (Aug 05, 2025 for IQA and Aberration Correction tracks) |
Final test and rating results release to participants | Jul 07, 2025 (Aug 07, 2025 for IQA and Aberration Correction tracks) |
Workshop Paper Dates
Event | Date (Always 11:59 PM HST) |
---|---|
Submission site online | May 20, 2025 |
Paper submission deadline | Jul 08, 2025 |
Challenge paper submission deadline (only for methods from MIPI 2025 challenges) | Jul 08, 2025 |
Notification to authors | Jul 11, 2025 |
Camera-ready deadline | Aug 18, 2025 |
Workshop date | Oct 20, 2025 |
Schedule
Time (HST) | Programme |
---|---|
8:50 – 9:00 | Opening and Welcome |
9:00 – 9:50 | Keynote Talk: Photographic Style Editing with Diffusion Prior (Boxin Shi, Peking University) |
9:50 – 10:40 | Keynote Talk: Zero-Shot *Blind* Super-Resolution using “Internal Learning” (Michal Irani, Weizmann Institute of Science) |
10:40 – 11:30 | Keynote Talk: Effective Image and Video Enhancement for Mobile Devices (Ming-Hsuan Yang, University of California, Merced & Google DeepMind) |
11:30 – 12:20 | Keynote Talk: Efficient Visual Generation on the Edge (Song Han, Massachusetts Institute of Technology) |
12:20 – 12:30 | Closing |

Keynote Abstracts

Zero-Shot *Blind* Super-Resolution using “Internal Learning” (Michal Irani, Weizmann Institute of Science)
Blind-SR methods aim to be agnostic to the unknown downscaling SR-kernel with which the low-resolution (LR) image was generated. However, even advanced Blind-SR methods are still fundamentally restricted to rather simplistic downscaling SR-kernels (e.g., anisotropic Gaussian kernels), and fail on more complex (out-of-distribution) downscaling kernels. In this talk I will show how Zero-Shot Blind-SR can overcome this limitation by exploiting the strong recurrence of tiny patches inside a single natural image (the LR test image alone). In the first part of my talk I will review our series of works on Zero-Shot Blind-SR using such “Internal Learning”, and the rationale behind it. I will then present “KernelFusion”, a new Zero-Shot diffusion-based Blind-SR method which simultaneously recovers the unique image-specific SR-kernel and its corresponding high-resolution (HR) image. This is obtained by: (i) training an image-specific patch-based diffusion model on the single LR input image alone, thus capturing its unique internal patch statistics; and (ii) exploiting the principle that the correct SR-kernel is the one that maximizes patch similarity between the HR and LR images. By breaking free from predefined distributions of training datasets, KernelFusion pushes Blind-SR into a new assumption-free paradigm, handling complex downscaling kernels previously thought impossible.
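
Principle (ii) above admits a compact illustration: the correct SR-kernel is the one whose downscaled "child" of the LR image has patches that recur most strongly inside the LR image itself. The NumPy sketch below is a minimal rendition of that selection rule over a finite candidate set, not the KernelFusion implementation; all function names, the patch size, and the sampling counts are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def downscale(img, kernel, scale=2):
    """Blur with a candidate SR-kernel, then subsample."""
    return convolve(img, kernel, mode="reflect")[::scale, ::scale]

def mean_nn_patch_dist(src, ref, patch=7, n_samples=200, seed=0):
    """Mean L2 distance from random patches of `src` to their nearest patch in `ref`.
    A low distance means strong cross-scale patch recurrence."""
    rng = np.random.default_rng(seed)
    H, W = ref.shape
    ref_patches = np.stack([ref[i:i + patch, j:j + patch].ravel()
                            for i in range(H - patch + 1)
                            for j in range(W - patch + 1)])
    h, w = src.shape
    dists = []
    for _ in range(n_samples):
        i = int(rng.integers(0, h - patch + 1))
        j = int(rng.integers(0, w - patch + 1))
        p = src[i:i + patch, j:j + patch].ravel()
        dists.append(np.linalg.norm(ref_patches - p, axis=1).min())
    return float(np.mean(dists))

def select_kernel(lr, candidate_kernels, scale=2):
    """Principle (ii): pick the kernel whose downscaled 'child' image has
    patches that recur most strongly in the LR image itself."""
    scores = [mean_nn_patch_dist(downscale(lr, k, scale), lr)
              for k in candidate_kernels]
    return candidate_kernels[int(np.argmin(scores))]
```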

Effective Image and Video Enhancement for Mobile Devices (Ming-Hsuan Yang, University of California, Merced & Google DeepMind)
In this talk, I will review our work on image/video enhancement, frame interpolation, and related topics. The underlying theme is to learn from image data for these tasks based on classic approaches (e.g., depth, optical flow, and image pyramids) and recent learning methods (e.g., transformers). Specifically, I will present our recent work on segmentation, grounding, inpainting, and restoration, all of which can be applied to mobile devices. When time allows, I will also discuss our work on other vision tasks.

Efficient Visual Generation on the Edge (Song Han, Massachusetts Institute of Technology)
Generative visual models achieve stunning results but remain too compute-intensive for widespread deployment. This talk introduces recent work that tackles this challenge head-on, including:
- Efficient tokenizers and token compression: the Deep Compression Autoencoder (DC-AE) for diffusion models, which compresses tokens up to 64× to cut cost
- Linear attention: SANA, efficient high-resolution image synthesis with linear diffusion transformers
- Sparsity: Radial Attention (O(n log n) sparse attention with energy decay for long video generation) and Sparse VideoGen (accelerating video diffusion transformers with spatial-temporal sparsity)
- Quantization and NVFP4: SVDQuant, absorbing outliers by low-rank components for 4-bit diffusion models

Together, these approaches (token compression, linear attention, sparse attention, quantization) point the way toward making high-fidelity image and video generation not only cheaper and faster, but also practical on edge devices.
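
As one concrete instance of the efficiency themes above, the sketch below contrasts standard softmax attention with kernelized linear attention, the mechanism class that linear diffusion transformers build on. It is a generic NumPy formulation, not SANA's actual attention module; the ReLU feature map and the toy shapes are assumptions.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the n x n score matrix makes this O(n^2) in tokens n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    return (weights / weights.sum(-1, keepdims=True)) @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized linear attention: replace softmax(Q K^T) with phi(Q) phi(K)^T
    and reassociate as phi(Q) (phi(K)^T V), which is O(n) in the token count."""
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                   # (d, d_v), computed once over all tokens
    z = Qf @ Kf.sum(axis=0)         # (n,) per-query normalizer
    return (Qf @ kv) / z[:, None]

# toy usage: 1024 tokens, 64-dim heads
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((1024, 64)) for _ in range(3))
out = linear_attention(Q, K, V)     # (1024, 64)
```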
Keynote Speakers
The speakers are listed in alphabetical order by last name.

Song Han
Associate Professor
MIT

Michal Irani
Professor
Weizmann Institute of Science

Boxin Shi
Associate Professor
Peking University

Ming-Hsuan Yang
Professor
University of California, Merced
Organizers
Workshop Organizers
Challenge Organizers
Aberration Correction for Mobile Camera
Haiyan Wang (vivo Mobile Communication Co., Ltd)
Xiaolong Yuan (vivo Mobile Communication Co., Ltd)
Siying Liu (Nankai University)
Awards 🏆 (Certificates)
Rank | Team Name | Affiliation | Team Members |
---|---|---|---|
🏅 1st | IH-VQA | Tencent, WeChat | Jianhui Sun, Xinli Yue, Yuhao Xie, Tao Shao, Zhaoran Zhao |
🥈 2nd | MediaX | Bilibili Inc., China; Shanghai Jiao Tong University | Xinjun Ma, Lu Liu, Chunlei Cai, Qiang Hu, Shaocheng Shen, Huiyu Duan, Tianxiao Ye, Xiaoyun Zhang |
🥉 3rd | Smart vision group | Ricoh Software Research Center Beijing | Hong Yi, Yupeng Zhang |

Rank | Team Name | Affiliation | Team Members |
---|---|---|---|
🏅 1st | NoahTCV | HUAWEI Noah's Ark Lab; Shanghai Jiao Tong University | Jin Han*, Cheng Li*, Yihui Shi*, Dehua Song, Zheng Chen, Wenbo Li, Fenglong Song (*equal contribution) |
🥈 2nd | vivo Camera Research | Xidian University; vivo Mobile Communication Co., Ltd | Yihong Leng, Siming Zheng, Peng-Tao Jiang, Linxiao Shi, Jinwei Chen, Bo Li, Jiaojiao Li |
🥉 3rd | MiVideoDeblur | Xiaomi Inc., China | Cheng Li, Jinao Song, Yan Chen, Yajing Wei, Yuqiang Yang, Jian Tang, Long Bao, Heng Sun |

Rank | Team Name | Affiliation | Team Members |
---|---|---|---|
🏅 1st | MiAlgo | Xiaomi Inc., China | Shuran Qi, Yuqian Zhang, Furui Bai, Dong Pu, Yang Lu, Chaoyu Feng, Guangqi Shao, Dongqing Zou, Xiaotao Wang, Lei Lei |
🥈 2nd | AxeraAI | Axera Semiconductor Co., Ltd | Xiaoping Peng |
🥉 3rd | NKCVLab | Nankai University; OPPO Research Institute | Shihao Zhou, Dayu Li, Juncheng Zhou, Lishen Qu, Jie Liang, Hui Zeng, Jufeng Yang |
Sponsors
Previous MIPI
Previous Speakers
Copyright © MIPI Challenge 2025