4-5 Sep 2025 Fontainebleau (France)

Workshop

Ahead of the conference, a one-day workshop on score-based generative models, given by Gabriel V. Cardoso, will be held on Wednesday, September 3, 2025.


Description of the workshop content

This workshop will explore score-based generative models and their application as informative priors for addressing ill-posed inverse problems. We will begin with a comprehensive overview of the theory behind score-based generative models, presenting their various formulations (such as those in [1, 2, 3, 4]) to provide participants with a unified understanding of the field. Following this, we will discuss current statistical guarantees (such as [5, 6]) and guide participants through a hands-on Jupyter notebook tutorial, in which they will implement their own sampling procedure using a pre-trained score-based generative model.
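As a preview of this first hands-on session, below is a minimal sketch of unconditional sampling by discretising the reverse-time VP-SDE of [4] with an Euler-Maruyama scheme. The `score_model(x, t)` interface, the linear beta schedule, and all parameter values are illustrative assumptions, not the exact setup of the workshop notebook.

```python
import torch

def sample_vp_sde(score_model, shape, n_steps=1000,
                  beta_min=0.1, beta_max=20.0, device="cpu"):
    """Euler-Maruyama discretisation of the reverse-time VP-SDE.

    `score_model(x, t)` is assumed to approximate grad_x log p_t(x)
    for a batch `x` and noise levels `t` in (0, 1] (illustrative interface).
    """
    x = torch.randn(shape, device=device)            # start from the Gaussian prior
    ts = torch.linspace(1.0, 1e-3, n_steps, device=device)
    dt = ts[1] - ts[0]                               # negative: we integrate backwards in time
    for t in ts:
        beta_t = beta_min + t * (beta_max - beta_min)              # linear noise schedule
        t_batch = torch.full((shape[0],), float(t), device=device)
        drift = -0.5 * beta_t * x - beta_t * score_model(x, t_batch)
        x = x + drift * dt + torch.sqrt(beta_t * (-dt)) * torch.randn_like(x)
    return x
```

Only the loop body changes if one swaps the noise schedule or the integrator (for instance, a predictor-corrector step as in [4]); the pre-trained score network is reused unchanged.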

In the second part of the workshop, we will focus on leveraging these models as priors for tackling ill-posed inverse problems. We will examine two approaches: learning a generative model for the posterior directly, and conditioning a pre-existing generative model for posterior sampling. Our emphasis will be on the latter: we will present several algorithms for approximate posterior sampling and discuss their limitations. Participants will then implement one of these samplers using a pre-trained score-based generative model and evaluate its performance in a dedicated Jupyter notebook.
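To give a flavour of this second hands-on session, here is a minimal sketch of one guidance-style approximation to the posterior score for a linear observation y = A x0 + Gaussian noise, in the spirit of the approximate samplers mentioned above. The DDPM-style `alpha_bar_t`, the operator `A`, the noise level `sigma_y`, and the `guidance_scale` knob are illustrative assumptions rather than the specific algorithms covered in the workshop.

```python
import torch

def posterior_score(score_model, x, t, y, A, sigma_y, alpha_bar_t, guidance_scale=1.0):
    """Approximate posterior score grad_x log p_t(x | y) for y = A x0 + Gaussian noise.

    The prior score comes from the pre-trained `score_model`; the likelihood term is
    approximated by back-propagating a Gaussian data misfit through a Tweedie
    estimate of the clean signal (one of several approximations of this kind).
    """
    x = x.detach().requires_grad_(True)
    s = score_model(x, t)                                   # prior score at noise level t
    # Tweedie / minimum mean-square estimate of x0 given the noisy x_t
    x0_hat = (x + (1.0 - alpha_bar_t) * s) / (alpha_bar_t ** 0.5)
    residual = y - A(x0_hat)                                # misfit in observation space
    log_lik = -(residual ** 2).sum() / (2.0 * sigma_y ** 2)
    grad_log_lik = torch.autograd.grad(log_lik, x)[0]       # differentiate through the score network
    return s + guidance_scale * grad_log_lik
```

Plugging such a guided score into the unconditional sampler above, in place of the prior score, gives one approximate posterior sampler of the kind discussed in this part of the workshop.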

[1] Song, Yang, and Stefano Ermon. "Generative modeling by estimating gradients of the data distribution." Advances in neural information processing systems 32 (2019).
[2] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." Advances in neural information processing systems 33 (2020): 6840-6851.
[3] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." International Conference on Learning Representations (2021).
[4] Song, Yang, et al. "Score-Based Generative Modeling through Stochastic Differential Equations." International Conference on Learning Representations (2021).
[5] Conforti, Giovanni, Alain Durmus, and Marta Gentiloni Silveri. "KL convergence guarantees for score diffusion models under minimal data assumptions." SIAM Journal on Mathematics of Data Science 7.1 (2025): 86-109.
[6] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representations." The Twelfth International Conference on Learning Representations (2024).
