
Learning Modular Control Policies in Robotics


Gerhard Neumann


TU Darmstadt


Thursday, 19 June 2014


Malet Place Engineering Building 1.02

Event series

DeepMind/ELLIS CSML Seminar Series


One big aim in robotics is to learn modular control policies that synthesize complex behaviour out of simpler elemental movements, often called movement primitives. Such a structured control policy comes with the promise of decomposing complex learning problems into simpler tasks and of easing the learning of new but similar tasks. To learn modular control policies efficiently, both the underlying learning algorithm and the movement primitive representation have to fulfil several requirements: we need simple mechanisms to adapt a primitive to new situations, and we need to learn how to sequence and combine primitives so that complex behaviour can be synthesized out of a compact set of movement primitives.

In this talk I will introduce our recent work on learning such modular control policies with information-theoretic policy search.
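As a rough, illustrative sketch (not the speaker's exact formulation): an episodic information-theoretic policy update in the style of REPS bounds the KL divergence between successive search distributions, solves the resulting dual for a temperature, and refits the policy to exponentially weighted samples. The toy quadratic reward and all parameter values below are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def reps_update(mean, cov, reward_fn, n_samples=100, epsilon=0.5):
    """One KL-bounded, REPS-style update of a Gaussian search distribution.

    epsilon bounds the KL divergence between successive sample weightings,
    which acts as the information-theoretic step size of the update.
    """
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    rewards = np.array([reward_fn(s) for s in samples])
    rewards = rewards - rewards.max()  # numerical stabilisation

    # Dual of the KL-constrained problem; minimised over the temperature eta.
    def dual(eta):
        return eta * epsilon + eta * np.log(np.mean(np.exp(rewards / eta)))

    eta = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").x

    # Exponential weights, then a weighted maximum-likelihood refit.
    w = np.exp(rewards / eta)
    w /= w.sum()
    new_mean = w @ samples
    diff = samples - new_mean
    new_cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(len(mean))
    return new_mean, new_cov

# Toy episodic task: maximise -||x - target||^2 over policy parameters x.
target = np.array([2.0, -1.0])
mean, cov = np.zeros(2), np.eye(2)
for _ in range(20):
    mean, cov = reps_update(mean, cov, lambda x: -np.sum((x - target) ** 2))
```

Because the KL bound limits how far each update can move the distribution, the search mean drifts smoothly toward the optimum instead of jumping greedily to the best observed sample, which is the "smooth and stable learning process" referred to below.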
Information-theoretic policy search uses an information-theoretic bound to determine the step size of the policy update. It exhibits several beneficial properties, such as a smooth, stable learning process and fast learning. We extended information-theoretic policy search methods so that we can efficiently generalize elemental movements to new situations, learn to select between several elemental movements, and learn how to sequence them.

Furthermore, I will present a new probabilistic movement primitive (ProMP) representation that is particularly well suited to such a modular control approach. ProMPs allow for new probabilistic operators that provide a principled way of generalizing and co-activating movement primitives.
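The probabilistic operators behind ProMPs can be illustrated with plain Gaussian algebra. A minimal sketch, assuming a one-dimensional primitive with normalised radial-basis features and a hand-picked (not learned) weight prior: adapting the primitive to a new via-point is standard Gaussian conditioning on the weight distribution.

```python
import numpy as np

def features(t, n_basis=10, width=0.02):
    """Normalised radial-basis features over phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t - centers) ** 2 / (2.0 * width))
    return phi / phi.sum()

# A ProMP maintains a distribution over weights, w ~ N(mu_w, Sigma_w);
# a trajectory point is y(t) = phi(t)^T w.  Illustrative prior, not learned.
n_basis = 10
mu_w = np.zeros(n_basis)
Sigma_w = np.eye(n_basis)

# Condition on passing through a via-point y_star at phase t_star,
# with observation noise sigma_y2 -- ordinary Gaussian conditioning.
t_star, y_star, sigma_y2 = 0.5, 1.5, 1e-4
phi = features(t_star, n_basis)
S = phi @ Sigma_w @ phi + sigma_y2
K = Sigma_w @ phi / S                          # Kalman-style gain
mu_new = mu_w + K * (y_star - phi @ mu_w)
Sigma_new = Sigma_w - np.outer(K, phi @ Sigma_w)

# The conditioned mean trajectory now passes (almost) through the via-point.
mean_traj = np.array([features(t, n_basis) @ mu_new
                      for t in np.linspace(0.0, 1.0, 101)])
```

The same Gaussian machinery gives the other operators the abstract mentions: co-activation of two primitives corresponds (up to normalisation) to a product of their weight distributions, and generalization to a new situation corresponds to conditioning on task variables.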

Short Bio:

Gerhard Neumann is currently a post-doctoral fellow at the Intelligent Autonomous Systems (IAS) Lab of Prof. Jan Peters at TU Darmstadt, where he leads the Machine Learning for Control group. He finished his PhD in 2012 at the Technical University of Graz. His research interests include Bayesian machine learning, hierarchical and structured learning for robotics, reinforcement learning, information-theoretic policy search, kernel embeddings, and movement primitive representations.