TIM: A Time Interval Machine for Audio-Visual Action Recognition
Abstract
Diverse actions give rise to rich audio-visual signals in long videos. Recent works showcase that the two modali- ties of audio and video exhibit different temporal extents of events and distinct labels. We address the interplay between the two modalities in long videos by explicitly modelling the temporal extents of audio and visual events. We propose the Time Interval Machine (TIM) where a modality-specific time interval poses as a query to a transformer encoder that ingests a long video input. The encoder then attends to the specified interval, as well as the surrounding context in both modalities, in order to recognise the ongoing action.
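The interval-as-query idea above can be sketched in a few lines: an interval's endpoints are encoded into a query vector that attends over the full sequence of audio-visual tokens, so context outside the interval still contributes to the pooled representation. This is a minimal illustrative sketch, not the authors' implementation; the function name, the random linear map standing in for a learned interval encoder, and all dimensions are assumptions.

```python
# Minimal sketch (NOT the paper's code): a time-interval query attending over
# a long clip's audio/visual feature tokens via scaled dot-product attention.
import numpy as np

def interval_query_attention(features, t_start, t_end, rng=None):
    """features: (T, d) tokens for a long clip; [t_start, t_end] is the
    queried interval (normalised times). Returns a pooled (d,) vector."""
    T, d = features.shape
    rng = rng if rng is not None else np.random.default_rng(0)
    # Encode the interval endpoints into a query vector with a random
    # linear map, a stand-in for a learned interval-encoding MLP.
    W_q = rng.standard_normal((2, d)) / np.sqrt(2)
    q = np.array([t_start, t_end]) @ W_q          # (d,) interval query
    scores = features @ q / np.sqrt(d)            # (T,) attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over all tokens
    return weights @ features                     # context-aware pooling

rng = np.random.default_rng(1)
feats = rng.standard_normal((10, 64))             # 10 tokens, 64-dim
rep = interval_query_attention(feats, 0.2, 0.6, rng=rng)
print(rep.shape)                                  # (64,)
```

In the actual model the pooled representation would feed a classifier for the action within the queried interval; attending over the whole sequence (rather than cropping to the interval) is what lets surrounding audio and visual context inform the prediction.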
We test TIM on three long audio-visual video datasets: EPIC-KITCHENS, Perception Test, and AVE, reporting state-of-the-art (SOTA) for recognition. On EPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly larger pre-training by 2.9% top-1 action recognition accuracy. Additionally, we show that TIM can be adapted for action detection, using dense multi-scale interval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics, and showing strong performance on the Perception Test. Our ablations show the critical role of integrating the two modalities and modelling their time intervals in achieving this performance.
Original language | English |
---|---|
Publication status | Published - 21 Jun 2024 |
Event | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) - Seattle, United States, 17 Jun 2024 → 21 Jun 2024 |

Conference

Conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
---|---|
Country/Territory | United States |
City | Seattle |
Period | 17/06/24 → 21/06/24 |
Internet address | https://cvpr.thecvf.com |
Projects

Visual AI (EPSRC via Oxford, EP/T028572/1)
Damen, D. (Principal Investigator)
1/12/20 → 30/11/25
Project: Research, Parent
UMPIRE: United Model for the Perception of Interactions for visual Recognition
Damen, D. (Principal Investigator)
1/02/20 → 31/01/25
Project: Research
Datasets
EPIC-KITCHENS-100
Aldamen, D. (Creator), Kazakos, E. (Creator), Doughty, H. (Creator), Munro, J. (Creator), Price, W. (Creator), Wray, M. (Creator), Perrett, T. (Creator) & Ma, J. (Creator), University of Bristol, 15 May 2020
DOI: 10.5523/bris.2g1n6qdydwa9u22shpxqzp0t8m, http://data.bris.ac.uk/data/dataset/2g1n6qdydwa9u22shpxqzp0t8m
Dataset