UCL ELLIS

Data-Driven Reinforcement Learning: Deriving Common Sense from Past Experience


Speaker

Sergey Levine

Affiliation

UC Berkeley

Date

Friday, 08 January 2021

Time

17:00-18:00

Location

Zoom

Link

https://ucl.zoom.us/j/99166798620

Event series

Jump Trading/ELLIS CSML Seminar Series

Abstract

Reinforcement learning affords autonomous agents, such as robots, the ability to acquire behavioral skills through their own experience. However, a central challenge for machine learning systems deployed in real-world settings is generalization, which has received comparatively little attention in recent reinforcement learning research, where many methods focus on optimization performance and rely on hand-designed simulators or closed-world domains such as games. In domains where generalization has been studied successfully -- computer vision, natural language processing, speech recognition, and so on -- good generalization invariably stems from access to large, diverse, and representative datasets. Put another way, data drives generalization. Can we transplant this lesson into the world of reinforcement learning? What does a data-driven reinforcement learning system look like, and what types of algorithmic and conceptual challenges must be overcome to devise such a system? In this talk, I will discuss how data-driven methods that utilize past experience can enable wider generalization for reinforcement learning agents, particularly as applied to challenging problems in robotic manipulation and navigation in open-world environments. I will show how robotic systems trained on large and diverse datasets can attain state-of-the-art results for robotic grasping, acquire a kind of "common sense" that allows them to generalize to new situations, learn flexible skills that allow users to set new goals at test time, and even enable a ground robot to navigate sidewalks in the city of Berkeley with an entirely end-to-end learned model.
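
For readers unfamiliar with the setting the abstract alludes to, the sketch below illustrates the basic idea of learning purely from a static dataset of past experience (often called offline or data-driven RL): a tabular fitted Q-iteration loop over a fixed set of transitions, with no further environment interaction. The toy problem, random synthetic data, and all variable names are illustrative assumptions for exposition only, not the speaker's actual method.

```python
# Minimal sketch (illustrative, not from the talk): fitted Q-iteration on a
# fixed dataset of (s, a, r, s') transitions collected by some past behaviour
# policy. No new environment interaction is used -- the "data-driven" setting.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 4, 0.95

# Static dataset of transitions (synthetic placeholders for illustration).
N = 5000
s = rng.integers(n_states, size=N)
a = rng.integers(n_actions, size=N)
r = rng.normal(size=N)                    # placeholder rewards
s_next = rng.integers(n_states, size=N)   # placeholder dynamics

Q = np.zeros((n_states, n_actions))
for _ in range(200):                      # fitted Q-iteration sweeps
    # Bellman targets computed from the dataset and the current Q estimate.
    target = r + gamma * Q[s_next].max(axis=1)
    # Tabular least-squares fit: average the targets seen for each (s, a).
    q_sum = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    np.add.at(q_sum, (s, a), target)
    np.add.at(counts, (s, a), 1.0)
    Q = np.divide(q_sum, counts, out=Q, where=counts > 0)

policy = Q.argmax(axis=1)                 # greedy policy extracted offline
print("greedy action per state:", policy)
```

In practice, the systems discussed in the talk replace the toy table with deep networks trained on large, diverse robotic datasets; the sketch only shows the core structure of extracting a policy from previously collected experience.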

Biography