Learning on Aggregate Outputs with Kernels


Speaker

Dino Sejdinovic

Affiliation

University of Oxford

Date

Friday, 25 January 2019

Time

13:00-14:00

Location

Roberts 421

Link

Zoom

Event series

DeepMind/ELLIS CSML Seminar Series

Abstract

While a typical supervised learning framework assumes that the inputs and the outputs are measured at the same level of granularity, many applications, including global mapping of disease, only have access to outputs at a much coarser level than that of the inputs. Aggregation of outputs makes generalization to new inputs much more difficult. We consider an approach to this problem based on variational learning with a model of output aggregation and Gaussian processes, where aggregation leads to intractability of the standard evidence lower bounds. We propose new bounds and tractable approximations, leading to improved prediction accuracy and scalability to large datasets, while explicitly taking uncertainty into account. We develop a framework which extends to several types of likelihoods, including the Poisson model for aggregated count data. We apply our framework to a challenging and important problem, the fine-scale spatial modelling of malaria incidence.
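As a minimal sketch of the aggregated-count setup (notation chosen here for illustration; see the linked paper for the authors' exact formulation): a latent function f is given a Gaussian process prior, individual inputs x_i^a belong to a bag a with known weights p_i^a (e.g. population), and only the bag-level count y_a is observed:

\[
f \sim \mathcal{GP}(0, k), \qquad
y_a \mid f \;\sim\; \mathrm{Poisson}\!\Big(\textstyle\sum_{i \in a} p_i^a \, \lambda\big(f(x_i^a)\big)\Big),
\]

where \(\lambda\) is a positive link function such as \(\exp\). Because the Poisson rate couples all inputs within a bag inside the likelihood, the standard evidence lower bound no longer decomposes over individual inputs, which is the intractability that the proposed bounds and approximations address.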

Joint work with Ho Chung Leon Law, Ewan Cameron, Tim C. D. Lucas, Seth Flaxman, Katherine Battle, and Kenji Fukumizu.

https://papers.nips.cc/paper/7847-variational-learning-on-aggregate-outputs-with-gaussian-processes
