EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition

Vangelis Kazakos, Arsha Nagrani, Andrew Zisserman, Dima Damen

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

We focus on multimodal fusion for egocentric action recognition, and propose a novel architecture for multimodal temporal binding, i.e. the combination of modalities within a range of temporal offsets. We train the architecture with three modalities (RGB, Flow and Audio) and combine them with mid-level fusion alongside sparse temporal sampling of fused representations. In contrast with previous works, modalities are fused before temporal aggregation, with modality and fusion weights shared over time. Our proposed architecture is trained end-to-end and outperforms both individual modalities and late fusion of modalities. We demonstrate the importance of audio in egocentric vision, on a per-class basis, for identifying actions as well as interacting objects. Our method achieves state-of-the-art results on both the seen and unseen test sets of the largest egocentric dataset, EPIC-Kitchens, on all metrics of the public leaderboard.
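
To make the fuse-before-aggregation idea concrete, the following is a minimal PyTorch sketch of mid-level temporal-binding fusion. The module and parameter names, feature dimensions, single-linear-layer encoders, and mean pooling for temporal aggregation are illustrative assumptions, not the paper's exact architecture (the full model uses convolutional backbones per modality); it shows only the structural point that modalities are fused within each sampled segment, with weights shared over time, before any temporal aggregation.

import torch
import torch.nn as nn

class TemporalBindingFusion(nn.Module):
    # Mid-level fusion sketch: each sparsely sampled segment carries
    # RGB, Flow and Audio features (possibly drawn from different
    # temporal offsets within the same binding window). Modalities are
    # fused per segment, and only then aggregated over time; the
    # modality and fusion weights are shared across all segments.
    def __init__(self, feat_dim=512, fused_dim=512, num_classes=10):
        super().__init__()
        # Per-modality encoders (hypothetical placeholders for real
        # backbones), applied with shared weights at every time step.
        self.rgb_net = nn.Linear(feat_dim, feat_dim)
        self.flow_net = nn.Linear(feat_dim, feat_dim)
        self.audio_net = nn.Linear(feat_dim, feat_dim)
        # Fusion layer, also shared over time.
        self.fusion = nn.Sequential(
            nn.Linear(3 * feat_dim, fused_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, rgb, flow, audio):
        # Each input: (batch, num_segments, feat_dim).
        h = torch.cat(
            [self.rgb_net(rgb), self.flow_net(flow), self.audio_net(audio)],
            dim=-1,
        )
        fused = self.fusion(h)      # fuse modalities within each segment
        pooled = fused.mean(dim=1)  # temporal aggregation after fusion
        return self.classifier(pooled)

# Usage: batch of 2 clips, 3 sparsely sampled segments each.
model = TemporalBindingFusion()
logits = model(torch.randn(2, 3, 512),
               torch.randn(2, 3, 512),
               torch.randn(2, 3, 512))  # -> shape (2, 10)

Because the same encoders and fusion layer are applied at every segment, the parameter count is independent of the number of sampled segments; late fusion, by contrast, would aggregate each modality over time first and combine only the final per-modality scores.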
Original language: English
Title of host publication: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 5491-5500
Number of pages: 10
ISBN (Print): 978-1-7281-4803-8
DOIs
Publication status: Published - 2 Nov 2019
Event: IEEE/CVF International Conference on Computer Vision (ICCV) 2019 - Seoul, Korea
Duration: 27 Oct 2019 - 2 Nov 2019

Publication series

Name: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: IEEE
ISSN (Electronic): 2380-7504

Conference

Conference: IEEE/CVF International Conference on Computer Vision (ICCV) 2019
City: Seoul
Period: 27/10/19 - 2/11/19
