| Speaker | Yu Luo |
|---|---|
| Affiliation | King's College London |
| Date | Friday, 27 October 2023 |
| Time | 12:00–13:00 |
| Location | Function Space, UCL Centre for Artificial Intelligence, 1st Floor, 90 High Holborn, London WC1V 6BH |
| Link | https://ucl.zoom.us/j/97245943682 |
| Event series | DeepMind/ELLIS CSML Seminar Series |
**Abstract**

In the usual Bayesian setting, a full probabilistic model is required to link the data and parameters, and the form of this model and the inference and prediction mechanisms are specified via de Finetti's representation. In general, such a formulation is not robust to mis-specification of its component parts. An alternative approach is to draw inference based on loss functions, where the quantity of interest is defined as the minimizer of some expected loss, and to construct posterior distributions from this loss-based formulation; this strategy underpins the construction of the Gibbs posterior. We develop a Bayesian non-parametric approach: specifically, we generalize the Bayesian bootstrap and specify a Dirichlet process model for the distribution of the observables. We implement this via direct prior-to-posterior calculations and also via predictive sampling. The two updating frameworks yield the same posterior distribution under the exchangeability assumption and guarantee consistent estimation under mild conditions. We also study the assessment of posterior validity for non-standard Bayesian calculations. The methodology is demonstrated via the semi-parametric linear model.
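To make the loss-based idea concrete, here is a minimal sketch of the classical Bayesian bootstrap applied to a loss-minimization target, in the spirit of (but not identical to) the generalization the talk describes. Each posterior draw re-weights the observations with flat Dirichlet weights and reports the minimizer of the weighted loss; the data, the squared-error loss, and the single-slope model are all illustrative assumptions, not the speaker's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y = 2*x + heavy-tailed noise (true slope = 2.0).
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.standard_t(df=3, size=n)

def weighted_loss_minimizer(w):
    # theta minimizing sum_i w_i * (y_i - theta * x_i)^2, in closed form.
    return np.sum(w * x * y) / np.sum(w * x * x)

# Bayesian bootstrap: each posterior draw uses Dirichlet(1, ..., 1)
# weights on the data points, i.e. a non-parametric posterior over the
# data distribution, pushed through the loss minimizer.
draws = np.array([
    weighted_loss_minimizer(rng.dirichlet(np.ones(n)))
    for _ in range(1000)
])

print(draws.mean(), draws.std())  # posterior mean and spread for the slope
```

The point of the construction is that no likelihood for `y` given `x` is ever specified: uncertainty comes entirely from the Dirichlet re-weighting of the empirical distribution, which is why the approach is robust to mis-specification of a parametric model.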
**Biography**

Yu is currently a Lecturer in Statistics at King's College London. He completed his PhD at McGill University under the supervision of Prof. David Stephens and Dr. David Buckeridge. His principal research focus has been on developing methodology and computational tools to solve emerging problems in biomedicine and science more generally, especially in fully Bayesian settings. Through his training and collaborations, he has developed interests in biostatistics, mixture models, hidden Markov models and causal inference.