Centre Stage: Centricity-based Audio-Visual Temporal Action Detection

Research output: Contribution to conference › Conference paper › peer-review

Abstract

Previous one-stage action detection approaches have modelled temporal dependencies using only the visual modality. In this paper, we explore different strategies to incorporate the audio modality, using multi-scale cross-attention to fuse the two modalities. We also demonstrate a correlation between a timestep's distance from the action centre and the accuracy of its predicted boundaries. We therefore propose a novel network head that estimates the closeness of each timestep to the action centre, which we call the centricity score. This increases the confidence assigned to proposals with more precise boundaries. Our method can be integrated with other one-stage anchor-free architectures; we demonstrate this on three recent baselines on the EPIC-Kitchens-100 action detection benchmark, where we achieve state-of-the-art performance. Detailed ablation studies showcase the benefits of fusing audio and our proposed centricity scores.
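
The abstract names two components: cross-attention fusion of audio and visual features, and a centricity head whose score down-weights proposals far from an action centre. The following is a minimal PyTorch sketch of how such components could look; the layer sizes, the single fusion scale (the paper fuses at multiple temporal scales), and the multiplicative combination of classification and centricity scores are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """One scale of audio-visual fusion: visual features attend to audio
    features via standard multi-head cross-attention (hypothetical sizes)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual, audio: (batch, T, dim)
        fused, _ = self.attn(query=visual, key=audio, value=audio)
        return self.norm(visual + fused)  # residual connection

class CentricityHead(nn.Module):
    """Predicts, for every timestep, a score in [0, 1] indicating how
    close that timestep is to an action centre."""

    def __init__(self, dim: int = 512, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, dim, T) fused audio-visual features -> (batch, T)
        return torch.sigmoid(self.net(feats)).squeeze(1)

if __name__ == "__main__":
    B, T, D = 2, 128, 512
    visual, audio = torch.randn(B, T, D), torch.randn(B, T, D)

    fused = CrossAttentionFusion(D)(visual, audio)          # (B, T, D)
    centricity = CentricityHead(D)(fused.transpose(1, 2))   # (B, T)

    # Proposal confidence: per-timestep classification score scaled by
    # centricity, so timesteps far from an action centre are down-weighted.
    cls_scores = torch.rand(B, T)  # stand-in classification confidences
    confidence = cls_scores * centricity
    print(confidence.shape)  # torch.Size([2, 128])
```

The multiplicative scaling mirrors the abstract's claim that centricity "leads to increased confidence for proposals that exhibit more precise boundaries"; the actual scoring function used in the paper may differ.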
Original language: English
Number of pages: 14
Publication status: Published - 24 Nov 2023
Event: The 1st Workshop in Video Understanding and its Applications at the 34th British Machine Vision Conference (BMVCW) - Robert Gordon University, Sir Ian Wood Building, Garthdee Campus, Aberdeen, United Kingdom
Duration: 20 Nov 2023 – 24 Nov 2023
https://vua-bmvc.github.io/

Workshop

Workshop: The 1st Workshop in Video Understanding and its Applications at the 34th British Machine Vision Conference (BMVCW)
Country/Territory: United Kingdom
City: Aberdeen
Period: 20/11/23 – 24/11/23
Internet address: https://vua-bmvc.github.io/
