Mobile Intelligent Photography & Imaging
Workshop 2025

MIPI Workshop in conjunction with ICCV 2025
Morning of Oct 20, 2025 · Room 327, Honolulu Convention Center, Honolulu, Hawai'i

Overview

With the increasing demand for computational photography and imaging on mobile platforms, developing and integrating advanced image sensors with novel algorithms in camera systems has become prevalent. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of Mobile Intelligent Photography and Imaging (MIPI). The workshop focuses on MIPI, emphasizing the integration of novel image sensors and imaging algorithms.

Challenge Tracks

Alongside the workshop, we host several challenges that advance mobile intelligent photography and imaging across industry and academia.

Explore MIPI challenge tracks and join the competition

Detailed Image Quality Assessment

Achieve detailed image quality assessment (IQA) with three subtasks: fine-grained quality grounding, detailed quality perception, and descriptive quality reasoning.

Challenge Site
Deblurring for Hybrid EVS Camera

Restore high-quality RGB frames by jointly leveraging captured event data and blurred RGB images.

Challenge Site
Aberration Correction for Mobile Camera

Restore high-quality RAW images by correcting spatially varying optical aberrations introduced during lens manufacturing.

Challenge Site
Datasets and Submission
Datasets are available on the CodaLab site of each challenge track. Submissions for all phases are made through the CodaLab site. Please register on the site and follow the instructions there to download the datasets and submit your results. The evaluation metrics for each track are described on its site.
Awards

The winning teams of each track will receive certificates.

Terms

See our Terms and Conditions for competition rules and privacy policy.

The CodaLab sites take precedence in case of any inconsistency.

Contact

For more information, please contact us at mipi.challenge@gmail.com.

Call for Papers

ICCV 2025 Workshop

Besides the challenges, the MIPI workshop invites paper submissions from researchers, scientists, and engineers across academia and industry to share their novel work on mobile image sensing and intelligent imaging systems. Workshop topics include, but are not limited to:

  • Generative models for low-level vision
  • Image/video quality assessment
  • Image/video enhancement and restoration
  • Image/video editing and processing
  • Efficient AI: model compression and acceleration
  • Aberration correction
  • Nighttime image restoration
  • Under-display camera image restoration
  • Event camera for image/video restoration
  • Learning-based computational imaging systems: efficient algorithms, smart imaging systems, learned data acquisition, task-driven ISPs, etc.
Submission Details
Papers will be peer-reviewed and must comply with the ICCV 2025 proceedings style, format, and length requirements. The camera-ready deadline aligns with that of the main conference. Accepted papers must be registered and presented to ensure inclusion in the IEEE Xplore Library. For details, refer to the ICCV 2025 Author Guidelines.

Important Dates

📢 Announcement

1. The Detailed Image Quality Assessment and Aberration Correction for Mobile Camera tracks have been extended by one month!
For exact dates and official notices, please see their challenge sites.

2. Submission deadline for both workshop and challenge papers: July 08, 2025

🎉 Best of luck to everyone!
Challenge timeline (all deadlines 11:59 PM HST)
Challenge site online: May 23, 2025
Release of training and validation data: May 23, 2025
Validation server online: May 28, 2025
Release of test data, test server online: Jun 28, 2025 (Jul 28, 2025 for the IQA and Aberration Correction tracks)
Test results submission deadline, test server closed: Jul 03, 2025 (Aug 03, 2025 for the IQA and Aberration Correction tracks)
Fact sheets submission deadline: Jul 05, 2025 (Aug 05, 2025 for the IQA and Aberration Correction tracks)
Final test and rating results released to participants: Jul 07, 2025 (Aug 07, 2025 for the IQA and Aberration Correction tracks)
Workshop timeline (all deadlines 11:59 PM HST)
Submission site online: May 20, 2025
Paper submission deadline: Jul 08, 2025 (extended from Jul 04, 2025)
Challenge paper submission deadline (only for methods from MIPI 2025 challenges!): Jul 08, 2025
Notification to authors: Jul 11, 2025
Camera-ready deadline: Aug 18, 2025
Workshop date: Oct 20, 2025

Schedule

Time (HST) · Programme

8:50 – 9:00
Opening and Welcome

9:00 – 9:50
Keynote Talk
Photographic Style Editing with Diffusion Prior
— Towards Artificial Intelligence Generated Photography (AIGP)
Abstract: Photographic style editing modifies photographic concepts such as focus, aperture, tone, and lighting while preserving the original content of an input image. This talk will present two advancements in photographic style editing that leverage the diffusion prior. To render the lens blur effect of fast apertures, we propose BokehDiff, which employs a physics-inspired self-attention module aligned with the image formation process in a one-step diffusion model, and which overcomes rendering artifacts at depth discontinuities with the diffusion prior. To transfer the personal photographic style of professional photographers, we introduce PIF (Personalized Image Filter), which learns average photographic concepts (e.g., vignetting, contrast, shadow) with the diffusion prior and achieves personalized photographic style with a content-preserving residual denoising paradigm. We hope the realistic photographic style editing results from our works take one step closer towards Artificial Intelligence Generated Photography (AIGP).
Boxin Shi

Peking University
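
For intuition on the image formation behind lens-blur rendering, here is a minimal sketch of classical depth-dependent disc-kernel bokeh, assuming a thin-lens model with a uniform circle-of-confusion PSF. It is an illustrative baseline only, not BokehDiff: the naive per-pixel gather below is exactly where artifacts appear at depth discontinuities, which the talk's diffusion prior addresses.

```python
# Minimal sketch of classical depth-dependent lens blur (bokeh) rendering.
# Illustrative assumption: a thin-lens model with a uniform disc PSF; this
# is NOT the BokehDiff method described in the talk.
import numpy as np
from scipy.ndimage import convolve

def disc_kernel(radius: int) -> np.ndarray:
    """Uniform disc (circle-of-confusion) point spread function."""
    r = max(radius, 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= r**2).astype(np.float32)
    return k / k.sum()

def render_bokeh(image: np.ndarray, depth: np.ndarray,
                 focus_depth: float, max_radius: int = 8) -> np.ndarray:
    """Blur each pixel by a disc whose radius grows with defocus.

    image: HxWx3 float array; depth: HxW float array on the same scale as
    focus_depth. Pixels are grouped into discrete blur levels, each blurred
    with its own disc kernel. This crude gather approach ignores occlusion,
    so it breaks at depth discontinuities (the failure mode BokehDiff targets).
    """
    coc = np.abs(depth - focus_depth)              # circle-of-confusion size
    radii = np.clip(coc / (coc.max() + 1e-8) * max_radius,
                    0, max_radius).astype(int)
    out = np.zeros_like(image)
    for r in np.unique(radii):
        k = disc_kernel(r) if r > 0 else np.array([[1.0]], dtype=np.float32)
        blurred = np.stack([convolve(image[..., c], k) for c in range(3)], -1)
        out = np.where((radii == r)[..., None], blurred, out)
    return out
```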


9:50 – 10:40
Keynote Talk
Zero-Shot *Blind* Super-Resolution using “Internal Learning”
Abstract: Blind-SR methods aim to be agnostic to the unknown downscaling SR-kernel with which the low-resolution (LR) image was generated. However, even advanced Blind-SR methods are still fundamentally restricted to rather simplistic downscaling SR-kernels (e.g., anisotropic Gaussian kernels) and fail on more complex (out-of-distribution) downscaling kernels. In this talk I will show how Zero-Shot Blind-SR can overcome this limitation by exploiting the strong recurrence of tiny patches inside a single natural image (the LR test image alone). In the first part of my talk I will review our series of works on Zero-Shot Blind-SR using such "Internal Learning", and the rationale behind it. I will then present "KernelFusion", a new Zero-Shot diffusion-based Blind-SR method which simultaneously recovers the unique image-specific SR-kernel and its corresponding high-resolution (HR) image. This is obtained by: (i) training an image-specific patch-based diffusion model on the single LR input image alone, thus capturing its unique internal patch statistics; and (ii) exploiting the principle that the correct SR-kernel is the one that maximizes patch similarity between the HR and LR images. By breaking free from predefined distributions of training datasets, KernelFusion pushes Blind-SR into a new assumption-free paradigm, handling complex downscaling kernels previously thought impossible.
Michal Irani

Weizmann Institute of Science
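
As a rough illustration of the kernel-selection principle the talk describes (the correct SR-kernel maximizes patch similarity between the HR and LR images), the sketch below scores candidate downscaling kernels against the LR input. All function names are hypothetical, and KernelFusion itself recovers the kernel jointly with the HR image via an image-specific patch diffusion model rather than by scoring a fixed candidate set.

```python
# Hypothetical sketch of kernel scoring by internal patch similarity; not
# the KernelFusion implementation.
import torch
import torch.nn.functional as F

def downscale(hr: torch.Tensor, kernel: torch.Tensor, scale: int) -> torch.Tensor:
    """Blur HR (1xCxHxW) with a (1x1xkxk) kernel, then subsample by `scale`."""
    c = hr.shape[1]
    k = kernel.expand(c, 1, -1, -1)                  # depthwise blur kernel
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(F.pad(hr, [pad] * 4, mode="reflect"), k, groups=c)
    return blurred[..., ::scale, ::scale]

def patch_similarity(a: torch.Tensor, b: torch.Tensor, patch: int = 7) -> float:
    """Mean best-match cosine similarity of patches of `a` against `b`."""
    pa = F.normalize(F.unfold(a, patch).transpose(1, 2), dim=-1)  # (1, Na, C*p*p)
    pb = F.normalize(F.unfold(b, patch).transpose(1, 2), dim=-1)  # (1, Nb, C*p*p)
    sim = pa @ pb.transpose(1, 2)                    # all pairwise similarities
    return sim.max(dim=-1).values.mean().item()

def score_kernels(hr_estimate, lr, kernels, scale=2):
    """Return the candidate kernel whose downscaling best explains `lr`."""
    scores = [patch_similarity(downscale(hr_estimate, k, scale), lr)
              for k in kernels]
    return kernels[int(torch.tensor(scores).argmax())]
```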


10:40 – 11:30
Keynote Talk
Effective Image and Video Enhancement for Mobile Devices
Abstract: In this talk, I will review our work on image/video enhancement, frame interpolation, and related topics. The underlying theme is to learn from image data for these tasks based on classic approaches (e.g., depth, optical flow, and image pyramids) and recent learning methods (e.g., transformers). Specifically, I will present our recent work on segmentation, grounding, inpainting, and restoration, all of which can be applied to mobile devices. Time permitting, I will also discuss our work on other vision tasks.
Ming-Hsuan Yang

University of California, Merced & Google DeepMind


11:30 – 12:20
Keynote Talk
Efficient Visual Generation on the Edge
Abstract: Generative visual models achieve stunning results but remain too compute-intensive for widespread deployment. This talk introduces recent work that tackles this challenge head-on, including: 1) efficient tokenizers and token compression: Deep Compression Autoencoders (DC-AE) for diffusion models, which compress tokens up to 64x to cut cost; 2) linear attention: SANA, efficient high-resolution image synthesis with linear diffusion transformers; 3) sparsity: Radial Attention, O(n log n) sparse attention with energy decay for long video generation, and Sparse VideoGen, accelerating video diffusion transformers with spatial-temporal sparsity; and 4) quantization and NVFP4: SVDQuant, absorbing outliers by low-rank components for 4-bit diffusion models. Together, these approaches (token compression, linear attention, sparse attention, and quantization) point the way toward making high-fidelity image and video generation not only cheaper and faster, but also practical on edge devices.
Song Han

Massachusetts Institute of Technology
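
To make the linear-attention idea concrete, here is a minimal sketch of kernelized O(n) attention, assuming a simple ReLU feature map (one common choice; SANA's exact attention design may differ). Associativity lets us compute phi(Q) (phi(K)^T V) without ever materializing the n x n attention matrix.

```python
# Minimal sketch of linear attention with a ReLU feature map (an assumed
# simplification, not SANA's exact design).
import torch

def linear_attention(q, k, v, eps: float = 1e-6):
    """q, k, v: (batch, heads, n, d). Cost is O(n * d^2), not O(n^2 * d)."""
    phi_q = torch.relu(q)                  # non-negative feature map phi(.)
    phi_k = torch.relu(k)
    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)   # d x d summary of K^T V
    z = phi_k.sum(dim=2)                             # normalizer statistics
    out = torch.einsum("bhnd,bhde->bhne", phi_q, kv)
    denom = torch.einsum("bhnd,bhd->bhn", phi_q, z) + eps
    return out / denom.unsqueeze(-1)

# Usage: 4096 tokens without a 4096 x 4096 attention matrix.
q = k = v = torch.randn(1, 8, 4096, 64)
y = linear_attention(q, k, v)              # shape (1, 8, 4096, 64)
```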


12:20 – 12:30
Closing

Keynote Speakers

The speakers are listed in alphabetical order by last name.


Song Han

Associate Professor
MIT

Michal Irani

Professor
Weizmann Institute of Science

Boxin Shi

Associate Professor
Peking University

Ming-Hsuan Yang

Professor
University of California, Merced

Organizers

Workshop Organizers


Shangchen Zhou

Nanyang Technological University

Xiaoming Li

Nanyang Technological University

Zongsheng Yue

Xi'an Jiaotong University

Kang Liao

Nanyang Technological University

Peiqing Yang

Nanyang Technological University

Jianyi Wang

Nanyang Technological University

Yuekun Dai

Nanyang Technological University

Yikai Wang

Nanyang Technological University

Xinyu Hou

Nanyang Technological University

Zhouxia Wang

Nanyang Technological University

Haoying Li

Nanyang Technological University

Ruicheng Feng

Nanyang Technological University

Yihang Luo

Nanyang Technological University

Chongyi Li

Nankai University

Chen Change Loy

Nanyang Technological University

Challenge Organizers


Detailed Image Quality Assessment

Haotian Fan (ByteDance)

Yifang Xu (ByteDance)

Meijia Song (ByteDance)

QiuFang Ma (ByteDance)

Wenjie Liao (Nankai University)

Shuhao Han (Nankai University)

Chun-Le Guo (Nankai University)

Deblurring for Hybrid EVS Camera

Wu Yaqi (Pentas Vision Pte Ltd)

Fan Zhihao (AlpsenTek)

Hirotaka Shinozaki (Pentas Vision Pte Ltd)

Frank Zhang (Pentas Vision Pte Ltd)

Xander Li (Sony Electronics Inc)

Alexis Baudron (Sony Electronics Inc)

Aberration Correction for Mobile Camera

Haiyan Wang (vivo Mobile Communication Co., Ltd)

Xiaolong Yuan (vivo Mobile Communication Co., Ltd)

Jiayi Fu (Nankai University)

Siying Liu (Nankai University)

Shuhao Han (Nankai University)

Chun-Le Guo (Nankai University)

Awards 🏆 (Certificates)

Detailed Image Quality Assessment
🏅 1st: IH-VQA (Tencent, WeChat) · Jianhui Sun, Xinli Yue, Yuhao Xie, Tao Shao, Zhaoran Zhao
🥈 2nd: MediaX (Bilibili Inc., China; Shanghai Jiao Tong University) · Xinjun Ma, Lu Liu, Chunlei Cai, Qiang Hu, Shaocheng Shen, Huiyu Duan, Tianxiao Ye, Xiaoyun Zhang
🥉 3rd: Smart vision group (Ricoh Software Research Center Beijing) · Hong Yi, Yupeng Zhang

Deblurring for Hybrid EVS Camera
🏅 1st: NoahTCV (HUAWEI Noah's Ark Lab; Shanghai Jiaotong University) · Jin Han*, Cheng Li*, Yihui Shi*, Dehua Song, Zheng Chen, Wenbo Li, Fenglong Song (*Equal contribution)
🥈 2nd: vivo Camera Research (Xidian University; vivo Mobile Communication Co., Ltd) · Yihong Leng, Siming Zheng, Peng-Tao Jiang, Linxiao Shi, Jinwei Chen, Bo Li, Jiaojiao Li
🥉 3rd: MiVideoDeblur (Xiaomi Inc., China) · Cheng Li, Jinao Song, Yan Chen, Yajing Wei, Yuqiang Yang, Jian Tang, Long Bao, Heng Sun

Aberration Correction for Mobile Camera
🏅 1st: MiAlgo (Xiaomi Inc., China) · Shuran Qi, Yuqian Zhang, Furui Bai, Dong Pu, Yang Lu, Chaoyu Feng, Guangqi Shao, Dongqing Zou, Xiaotao Wang, Lei Lei
🥈 2nd: AxeraAI (Axera Semiconductor Co., Ltd) · Xiaoping Peng
🥉 3rd: NKCVLab (Nankai University; OPPO Research Institute) · Shihao Zhou, Dayu Li, Juncheng Zhou, Lishen Qu, Jie Liang, Hui Zeng, Jufeng Yang

Sponsors

Previous MIPI

Previous Speakers



Michael S. Brown
Mohit Gupta
Peyman Milanfar
Tomoo Mitsunaga
Felix Heide
Jiaying Liu
Tianfan Xue
Sabine Süsstrunk
Lei Zhang
Mahmoud Afifi
Jian Wang
Mian Wei