UCL ELLIS
UCL, a global leader in AI and machine learning, has joined the ELLIS network with a new ELLIS Unit. ELLIS is a European AI network of excellence comprising Units at 30 research institutions, focused on fundamental science, technical innovation and societal impact. The ELLIS Unit at UCL spans multiple departments: the Gatsby Computational Neuroscience Unit, the Department of Computer Science, the Department of Statistical Science and the Department of Electronic and Electrical Engineering.

“Some of the most effective learning algorithms are those that combine perspectives from many different models or parameters. This has always seemed a fitting metaphor for effective research. And now ELLIS will provide a new architecture to keep our real-life committee machine functioning: reinforcing, deepening and enlarging the channels that connect us to colleagues throughout Europe. At UCL we're excited to be a part of this movement to grow together. We look forward to sharing new collaborations, workshops, exchanges, joint studentships and more, and to the insights and breakthroughs that will undoubtedly follow.”

Prof Maneesh Sahani
Director, Gatsby Computational Neuroscience Unit

“Advances in AI that benefit people and planet require global cooperation across disciplines and sectors. The ELLIS network is a vital part of that effort, and UCL is proud to be a contributor.”

Prof Geraint Rees
UCL Pro-Vice-Provost (AI)

News


Events


Manifold-constrained diffusion models for inverse problems in imaging

Speaker: Jong Chul Ye
Event Date: 03 February 2023

Recently, diffusion models have been used to solve various inverse problems in an unsupervised manner with appropriate modifications to the sampling process. However, the current solvers, which recursively apply a reverse diffusion step followed by a projection-based measurement consistency step, often produce sub-optimal results. By studying the generative sampling path, here we show that current solvers throw the sample path off the data manifold, and hence the error accumulates. To address this, we propose an additional correction term inspired by the manifold constraint, which can be used synergistically with the previous solvers to keep the iterations close to the manifold. The proposed manifold constraint is straightforward to implement within a few lines of code, yet boosts the performance by a surprisingly large margin. With extensive experiments, we show that our method is superior to the previous methods both theoretically and empirically, producing promising results in many applications such as image inpainting, colorization, and sparse-view computed tomography. Then, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via approximation of the posterior sampling. Interestingly, the resulting posterior sampling scheme is a blended version of diffusion sampling with the manifold-constrained gradient, without a strict measurement consistency projection step, yielding a more desirable generative path in noisy settings compared to the previous studies. Our method demonstrates that diffusion models can incorporate various measurement noise statistics such as Gaussian and Poisson, and can also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring.
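
As an illustration of the mechanism the abstract describes, the sketch below shows one reverse-diffusion iteration combining a projection-based measurement consistency step with a manifold-constrained gradient correction. It is a minimal, illustrative sketch only, not the speaker's implementation: the names score_model, A, A_pinv, alphas_cumprod and step_size are assumptions, it uses the standard DDPM noise-prediction parameterisation, and the projection shown covers only the noiseless linear case.

```python
import torch

def mcg_sample(score_model, A, A_pinv, y, alphas_cumprod, x_T, step_size=1.0):
    """Illustrative sketch of manifold-constrained diffusion sampling
    for a linear inverse problem y = A(x) + noise.

    Assumed interfaces (not taken from the talk itself):
      - score_model(x_t, t) predicts the added noise eps_t (DDPM style),
      - A and A_pinv are callables for the forward operator and a
        pseudo-inverse of it,
      - alphas_cumprod is a 1-D tensor with the cumulative noise schedule.
    """
    x = x_T
    for t in reversed(range(1, len(alphas_cumprod))):
        x = x.detach().requires_grad_(True)

        # Reverse diffusion step (deterministic, DDIM-style for brevity).
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = score_model(x, t)
        # Denoised (Tweedie) estimate of the clean image x0 from x_t.
        x0_hat = (x - (1 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
        x_prev = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps

        # Manifold-constrained gradient correction: nudge x_t so that its
        # denoised estimate x0_hat agrees with the measurement y.
        data_fit = torch.sum((y - A(x0_hat)) ** 2)
        grad = torch.autograd.grad(data_fit, x)[0]
        x_prev = x_prev - step_size * grad

        # Projection-based measurement consistency (noiseless linear case):
        # overwrite the measured component of x_prev with the data.
        x_prev = x_prev - A_pinv(A(x_prev)) + A_pinv(y)

        x = x_prev.detach()
    return x
```

In the noisy setting mentioned at the end of the abstract, the projection line would be dropped and only the gradient correction retained, weighted according to the measurement noise statistics; this corresponds to the "blended" posterior sampling scheme without a strict consistency projection.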

People


Computer Science

Gatsby Computational Neuroscience Unit

Department of Statistical Science

Department of Electronic and Electrical Engineering

UCL Energy Institute