Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. Bayesian methods such as Gaussian Processes (GPs), by contrast, exploit prior knowledge to quickly infer the shape of a new function at test time, but GPs are computationally expensive and it can be hard to design appropriate priors. We propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. In this talk we will introduce CNPs and their latent-variable version, ‘Neural Processes’, through the lens of meta-learning and discuss how they relate to a variety of existing models from this area of machine learning.
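The CNP structure mentioned above (a neural network conditioned on observed context points) can be sketched as follows. This is a minimal, illustrative sketch with untrained random weights, not the authors' implementation: an encoder embeds each context (x, y) pair, the embeddings are averaged into a permutation-invariant representation, and a decoder maps that representation plus a target input to a predictive mean and standard deviation. All function and layer names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight MLP parameters (untrained; for structure only).
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Apply each layer; tanh on all but the last.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Encoder h: each context pair (x_i, y_i) -> representation r_i (8-dim).
encoder = mlp([2, 16, 8])
# Decoder g: (aggregated r, x_target) -> (mean, log sigma).
decoder = mlp([9, 16, 2])

def cnp_predict(x_ctx, y_ctx, x_tgt):
    pairs = np.stack([x_ctx, y_ctx], axis=-1)          # (n_ctx, 2)
    r_i = forward(encoder, pairs)                       # (n_ctx, 8)
    r = r_i.mean(axis=0)                                # permutation-invariant aggregate
    inp = np.concatenate([np.tile(r, (len(x_tgt), 1)),  # broadcast r to each target
                          x_tgt[:, None]], axis=-1)     # (n_tgt, 9)
    out = forward(decoder, inp)
    mu, log_sigma = out[:, 0], out[:, 1]
    return mu, np.exp(log_sigma)                        # sigma > 0 by construction

# Condition on three observed points, predict at five target inputs.
mu, sigma = cnp_predict(np.array([0.0, 1.0, 2.0]),
                        np.array([0.0, 1.0, 4.0]),
                        np.linspace(0.0, 2.0, 5))
```

In training, the same network would be optimized by gradient descent across many sampled functions, which is what gives the model its meta-learning flavour: prior knowledge lives in the shared weights rather than in a hand-designed GP kernel.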
Marta is a senior research scientist at DeepMind, where she has primarily worked on deep generative models and meta-learning. In this context she was involved in developing Generative Query Networks and led the work on Neural Processes. Recently her research interests have expanded to include multi-agent systems and game theory.