
Look, Listen and Learn


Speaker

Relja Arandjelovic

Affiliation

DeepMind

Date

Friday, 08 December 2017

Time

13:00-14:00

Location

Roberts Building G08 Sir David Davies LT

Event series

DeepMind/ELLIS CSML Seminar Series

Abstract

We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself -- the correspondence between the visual and the audio streams, and we introduce a novel "Audio-Visual Correspondence" (AVC) learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task, and, more interestingly, result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also design a network that can learn to embed audio and visual inputs into a common space that is suitable for cross-modal retrieval, and a network that can localize the object that sounds in an image, given the audio signal. We achieve all of these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function.
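To make the training objective concrete: AVC reduces to binary classification over (frame, audio) pairs, where positives come from the same moment of the same video and negatives pair a frame with audio from a different video. The sketch below illustrates this setup in PyTorch; the layer sizes, input shapes, and in-batch negative sampling are illustrative assumptions, not the architecture presented in the talk.

```python
import torch
import torch.nn as nn

class AVCNet(nn.Module):
    """Two-stream network for the Audio-Visual Correspondence (AVC) task:
    predict whether an image frame and a short audio clip come from the
    same moment of the same video. Dimensions are illustrative only."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Vision subnetwork: small conv stack over RGB frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Audio subnetwork: conv stack over 1-channel log-spectrograms.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Fusion head: binary "correspond / don't correspond" classifier.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, frames, spectrograms):
        v = self.vision(frames)
        a = self.audio(spectrograms)
        return self.head(torch.cat([v, a], dim=1)).squeeze(1)

# One training step: positives are matched (frame, audio) pairs; negatives
# pair each frame with audio rolled from another video in the batch.
model = AVCNet()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 224, 224)            # dummy video frames
audio = torch.randn(8, 1, 257, 200)             # dummy log-spectrograms
neg_audio = torch.roll(audio, shifts=1, dims=0)  # mismatched pairs

logits = torch.cat([model(frames, audio), model(frames, neg_audio)])
labels = torch.cat([torch.ones(8), torch.zeros(8)])
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the label comes for free from the pairing itself, no manual annotation is needed; after training, the two subnetworks can be reused as visual and audio feature extractors.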

Biography