UCL ELLIS
UCL, a global leader in AI and machine learning, has joined the ELLIS network with a new ELLIS Unit. ELLIS is a European AI network of excellence comprising Units within 30 research institutions. It focuses on fundamental science, technical innovation and societal impact. The ELLIS Unit at UCL spans multiple departments: the Gatsby Computational Neuroscience Unit, the Department of Computer Science, the Department of Statistical Science and the Department of Electronic and Electrical Engineering.

“Some of the most effective learning algorithms are those that combine perspectives from many different models or parameters. This has always seemed a fitting metaphor for effective research. And now ELLIS will provide a new architecture to keep our real-life committee machine functioning: reinforcing, deepening and enlarging the channels that connect us to colleagues throughout Europe. At UCL we're excited to be a part of this movement to grow together. We look forward to sharing new collaborations, workshops, exchanges, joint studentships and more, and to the insights and breakthroughs that will undoubtedly follow.”

Prof Maneesh Sahani
Director, Gatsby Computational Neuroscience Unit

“Advances in AI that benefit people and planet require global cooperation across disciplines and sectors. The ELLIS network is a vital part of that effort, and UCL is proud to be a contributor.”

Prof Geraint Rees
UCL Pro-Vice-Provost (AI)

News


Events


High Fidelity Image Counterfactuals with Probabilistic Causal Models

Speaker: Fabio De Sousa Ribeiro
Event Date: 09 June 2023

The ability to generate plausible counterfactuals has wide scientific applicability and is particularly valuable in fields like medical imaging, where data are scarce and underrepresentation of subgroups is prevalent. Answering counterfactual queries like 'why?' and 'what if...?', expressed in the language of causality, could greatly benefit several important research areas such as: (i) explainability; (ii) data augmentation; (iii) robustness to spurious correlations; and (iv) fairness notions in both observed and counterfactual outcomes. Despite recent progress, accurate estimation of interventional and counterfactual queries for high-dimensional structured variables (e.g. images) remains an open problem. Few previous works have attempted to fulfil all three rungs of Pearl's ladder of causation, namely association, intervention and counterfactuals, in a principled manner using deep models. Moreover, evaluating counterfactuals poses inherent challenges, as they are by definition counter-to-fact and unobservable. Contrary to preceding studies, which focus primarily on identifiability guarantees in the limit of infinite data, we take a pragmatic approach to counterfactuals. We focus on exploring the practical limits and possibilities of estimating and empirically evaluating high-fidelity image counterfactuals of real-world data. To this end, we introduce a specific system and method which leverages ideas from causal mediation analysis and advances in generative modelling to engineer deep causal mechanisms for structured variables. Our experiments illustrate the ability of our proposed mechanisms to perform accurate abduction and plausible estimates of direct, indirect and total effects as measured by axiomatic soundness of counterfactuals.
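The abduction step referenced in the abstract follows Pearl's standard three-step counterfactual recipe (abduction, action, prediction). The toy one-equation model below is a minimal sketch of that recipe only; the mechanism and variable names are purely illustrative and are not the speaker's actual image model:

```python
# Sketch of Pearl's three-step counterfactual procedure on a toy
# structural causal model with a single mechanism Y := 2*X + U,
# where U is exogenous noise. (Hypothetical example, not the
# deep image mechanisms described in the talk.)

def f_y(x, u):
    """Structural mechanism: Y := 2*X + U."""
    return 2 * x + u

def counterfactual_y(x_obs, y_obs, x_new):
    # 1. Abduction: infer the exogenous noise U consistent with the
    #    observation (x_obs, y_obs). Here the mechanism is invertible.
    u = y_obs - 2 * x_obs
    # 2. Action: intervene do(X = x_new), replacing X's own mechanism.
    # 3. Prediction: push the inferred noise through the modified model.
    return f_y(x_new, u)

# Observed: X = 1, Y = 5, so abduction gives U = 3.
# Query: what would Y have been, had X been 4?
print(counterfactual_y(1, 5, 4))  # 2*4 + 3 = 11
```

For images, the abduction step is the hard part: the noise must be inferred with a deep generative model rather than by algebraic inversion, which is what the mediation-analysis and generative-modelling machinery in the talk addresses.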

People


Computer Science

Gatsby Computational Neuroscience Unit

Department of Statistical Science

Department of Electronic and Electrical Engineering

UCL Energy Institute