| Speaker | Gabi Teodoru |
|---|---|
| Affiliation | UCL |
| Date | Friday, 25 May 2012 |
| Time | 12:30–14:00 |
| Location | Darwin B15 Biochemistry LT |
| Event series | DeepMind/ELLIS CSML Seminar Series |
**Abstract**

Spectral learning is a novel method for learning latent variable models (e.g. hidden Markov models, Kalman filters). In the limit of infinite data, the spectral learning algorithm identifies the true model parameters, unlike the more popular Expectation Maximization (EM) algorithm, which optimizes a non-convex cost function and can therefore fail to recover the true parameters even with infinite data, because the optimization gets stuck in local minima. Previous work has applied spectral learning to HMMs and Kalman filters, as well as to tree-structured graphical models, and has established consistency of the algorithm along with finite-sample bounds. We take a step further and re-interpret the spectral learning algorithm as an optimization problem. This re-interpretation offers several advantages: it allows more efficient use of the data, makes it possible to add regularizers to the cost function or to replace it with the generalized method of moments cost function, and lets us extend the method to models for which no convex cost function exists.

Slides for the talk: PDF
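To make the contrast with EM concrete, below is a minimal sketch of the classical spectral algorithm for HMMs in the style of Hsu, Kakade & Zhang (2009), which the abstract cites as prior work; it is not the optimization re-interpretation presented in the talk. The 2-state, 3-symbol HMM, the sample size, and all helper names are invented for illustration. The algorithm sees only low-order empirical moments (unigram, bigram, and trigram frequencies), takes one SVD, and can then compute sequence probabilities without ever recovering the transition or emission matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth HMM: 2 hidden states, 3 observation symbols.
T = np.array([[0.7, 0.3],        # T[i, j] = P(next state j | state i)
              [0.2, 0.8]])
O = np.array([[0.8, 0.1, 0.1],   # O[i, x] = P(symbol x | state i)
              [0.1, 0.2, 0.7]])
pi = np.array([0.6, 0.4])
k, n_sym = O.shape

def draw(probs, states):
    """Vectorized categorical sampling: one draw per row of probs[states]."""
    u = rng.random(states.size)
    idx = (u[:, None] > probs.cumsum(axis=1)[states]).sum(axis=1)
    return np.minimum(idx, probs.shape[1] - 1)  # guard against float round-off

# Sample N independent length-3 trajectories (x1, x2, x3).
N = 500_000
s1 = np.minimum((rng.random(N)[:, None] > np.cumsum(pi)).sum(axis=1), k - 1)
x1 = draw(O, s1)
s2 = draw(T, s1)
x2 = draw(O, s2)
s3 = draw(T, s2)
x3 = draw(O, s3)

# Empirical low-order moments: this is all the algorithm ever sees.
P1 = np.bincount(x1, minlength=n_sym) / N   # P1[i] ~ P(x1 = i)
P21 = np.zeros((n_sym, n_sym))              # P21[j, i] ~ P(x2 = j, x1 = i)
np.add.at(P21, (x2, x1), 1.0 / N)
P3x1 = np.zeros((n_sym, n_sym, n_sym))      # P3x1[x][l, i] ~ P(x3=l, x2=x, x1=i)
np.add.at(P3x1, (x2, x3, x1), 1.0 / N)

# "Spectral" step: top-k left singular vectors of the bigram matrix.
U = np.linalg.svd(P21)[0][:, :k]

# Observable-operator parameters (Hsu, Kakade & Zhang parameterization).
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n_sym)]

def spectral_prob(seq):
    """Estimate Pr(x1..xt) without ever recovering T, O, or pi."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)

def true_prob(seq):
    """Exact Pr(x1..xt) by the forward recursion on the true HMM."""
    alpha = pi * O[:, seq[0]]
    for x in seq[1:]:
        alpha = (alpha @ T) * O[:, x]
    return float(alpha.sum())

print(spectral_prob([0, 2, 1]), true_prob([0, 2, 1]))
```

Unlike EM, there is no iterative likelihood ascent and no local minima: the estimate is a fixed sequence of linear-algebra operations on the empirical moments, and as the moments converge to their true values the recovered operators converge too, which is the consistency property the abstract refers to.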