Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction


Speaker

Mario Marchand

Affiliation

Université Laval

Date

Friday, 22 November 2013

Time

13:00-14:00

Location

Malet Place Engineering 1.03

Event series

Jump Trading/ELLIS CSML Seminar Series

Abstract

In contrast with classification and regression, where the learner tries to predict a scalar output, structured output prediction attempts to predict outputs that can be very complex, such as sequences, parse trees, and graphs. However, most learning algorithms, such as structured SVMs and Max-Margin Markov Networks, require prohibitive training time because they must solve an (often NP-hard) pre-image problem for each training example and for each update of the predictor produced by the learner. Here, we provide conditions under which a vector-valued regression approach, which avoids the pre-image problem during learning, is justified and has rigorous guarantees. More precisely, we show that the quadratic regression loss is a convex surrogate of the structured prediction loss when the output kernel satisfies a certain condition with respect to the structured prediction loss. We provide two upper bounds on the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the regularized least-squares predictor proposed by Cortes, Mohri, and Weston (2007), while the minimizer of the second bound is a predictor that has not been proposed before. Both predictors are compared on practical tasks.
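To make the regression approach concrete, below is a minimal sketch of the regularized least-squares predictor of Cortes, Mohri, and Weston (2007) mentioned in the abstract: regress into the output-kernel feature space (learning reduces to a linear system, with no pre-image problem), then solve the pre-image problem only at prediction time. The Gaussian input kernel, the Hamming-style output kernel, and the enumerable candidate set are illustrative assumptions, not details from the talk.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Input kernel k_X(a, b) = exp(-gamma * ||a - b||^2), for all row pairs."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def fit_rls(X_train, lam=0.1, gamma=1.0):
    """Learning step: no pre-image problem, just a regularized linear system.

    Returns C = (K_X + lam * n * I)^{-1}; the regression weights for any
    test point x are then alpha(x) = C k_x.
    """
    n = X_train.shape[0]
    KX = gaussian_kernel(X_train, X_train, gamma)
    return np.linalg.inv(KX + lam * n * np.eye(n))

def predict(C, X_train, Y_train, x, candidates, k_Y, gamma=1.0):
    """Prediction step: the pre-image problem, solved here by enumerating
    a candidate set.

    h(x) = argmin_y  k_Y(y, y) - 2 * sum_i alpha_i(x) * k_Y(y_i, y)
    """
    k_x = gaussian_kernel(X_train, x[None, :], gamma).ravel()
    alpha = C @ k_x
    def score(y):
        return k_Y(y, y) - 2.0 * sum(a * k_Y(yi, y)
                                     for a, yi in zip(alpha, Y_train))
    return min(candidates, key=score)

# Toy usage: outputs are short label sequences, compared with a
# (hypothetical) Hamming-similarity output kernel.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))
    Y = [tuple(rng.integers(0, 2, size=4)) for _ in range(20)]
    C = fit_rls(X)
    hamming_sim = lambda y, z: float(sum(a == b for a, b in zip(y, z)))
    print(predict(C, X, Y, X[0], candidates=Y, k_Y=hamming_sim))
```

Note that in this sketch the (often NP-hard) pre-image search is paid only once per test point, rather than at every training update as in structured SVMs, which is the computational motivation given in the abstract.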

Biography