AMEGO: Active Memory from long EGOcentric videos

Gabriele Goletto*, Tushar Nagarajan, Giuseppe Averta, Dima Damen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

Egocentric videos provide a unique perspective into individuals' daily experiences, yet their unstructured nature presents challenges for perception. In this paper, we introduce AMEGO, a novel approach aimed at enhancing the comprehension of very-long egocentric videos. Inspired by humans' ability to retain information after a single viewing, AMEGO focuses on constructing a self-contained representation from one egocentric video, capturing key locations and object interactions. This representation is semantic-free and supports multiple queries without the need to reprocess the entire visual content. Additionally, to evaluate our understanding of very-long egocentric videos, we introduce the new Active Memories Benchmark (AMB), composed of more than 20K highly challenging visual queries from EPIC-KITCHENS. These queries cover different levels of video reasoning (sequencing, concurrency and temporal grounding) to assess detailed video understanding capabilities. We showcase the improved performance of AMEGO on AMB, surpassing other video QA baselines by a substantial margin.
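The abstract's key idea is a memory built once per video and then queried repeatedly without re-reading frames. The following is a minimal toy sketch of that pattern, not the paper's actual data structure: all names (`ActiveMemory`, `Interaction`, `before`, `concurrent`) and the record schema are hypothetical, and the sequencing/concurrency queries mirror only two of the reasoning levels that AMB tests.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interaction:
    """One recorded object interaction (hypothetical schema)."""
    object_id: int   # semantic-free track identifier, not a class label
    start: float     # interaction start time, seconds
    end: float       # interaction end time, seconds

@dataclass
class ActiveMemory:
    """Toy self-contained memory: built once, queried many times."""
    interactions: List[Interaction] = field(default_factory=list)

    def add(self, obj: int, start: float, end: float) -> None:
        self.interactions.append(Interaction(obj, start, end))

    def before(self, a: int, b: int) -> bool:
        """Sequencing query: does the first interaction with `a`
        end before the first interaction with `b` starts?"""
        first_a_end = min(i.end for i in self.interactions if i.object_id == a)
        first_b_start = min(i.start for i in self.interactions if i.object_id == b)
        return first_a_end <= first_b_start

    def concurrent(self, a: int, b: int) -> bool:
        """Concurrency query: do any interactions with `a` and `b` overlap?"""
        return any(
            ia.start < ib.end and ib.start < ia.end
            for ia in self.interactions if ia.object_id == a
            for ib in self.interactions if ib.object_id == b
        )

# Build the memory once from a (pretend) video pass...
mem = ActiveMemory()
mem.add(0, 1.0, 4.0)   # wearer handles object 0
mem.add(1, 5.0, 9.0)   # then object 1
mem.add(2, 6.0, 8.0)   # object 2 overlaps with object 1

# ...then answer multiple queries without touching pixels again.
print(mem.before(0, 1))      # True
print(mem.concurrent(1, 2))  # True
```

The point of the sketch is the access pattern: the expensive perception step runs once, and each subsequent query is a cheap lookup over compact records.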
Original language: English
Title of host publication: European Conference on Computer Vision
Publication status: Accepted/In press - 29 Sept 2024
Event: The 18th European Conference on Computer Vision ECCV 2024 - MiCo Milano, Milano, Italy
Duration: 29 Sept 2024 to 4 Oct 2024
https://eccv2024.ecva.net/

Conference

Conference: The 18th European Conference on Computer Vision ECCV 2024
Abbreviated title: ECCV 2024
Country/Territory: Italy
City: Milano
Period: 29/09/24 to 04/10/24
Internet address: https://eccv2024.ecva.net/
