SEMBED: Semantic Embedding of Egocentric Action Videos

Michael Wray*, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

7 Citations (Scopus)
365 Downloads (Pure)

Abstract

We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
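To make the idea concrete, below is a minimal Python sketch of the kind of semantic-visual label estimation the abstract describes: a query video is linked to its visually nearest annotated videos, and each neighbour's vote is spread to semantically related verbs, yielding a distribution over labels rather than a single class. All function and variable names here are illustrative assumptions, not the paper's released code, and the Gaussian visual kernel and toy semantic similarity are stand-ins for the motion/appearance features and WordNet-style relationships the paper uses.

```python
import numpy as np

def label_distribution(query_feat, train_feats, train_labels, sem_sim, k=5):
    """Estimate P(verb | query) by embedding the query in a semantic-visual
    graph: connect it to its k visually nearest training videos, then spread
    each neighbour's weight to semantically related verbs.
    (Hypothetical sketch; not the authors' implementation.)"""
    # Visual similarity: Gaussian kernel on Euclidean distance (one common choice).
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    w = np.exp(-d**2 / (2 * np.median(d)**2 + 1e-8))
    nn = np.argsort(-w)[:k]  # indices of the k most similar training videos

    verbs = sorted(set(train_labels))
    p = np.zeros(len(verbs))
    for i in nn:
        for j, v in enumerate(verbs):
            # Each neighbour votes for its own verb and, with reduced weight,
            # for semantically similar verbs.
            p[j] += w[i] * sem_sim(train_labels[i], v)
    return dict(zip(verbs, p / p.sum()))

if __name__ == "__main__":
    # Toy usage with random features and a trivial semantic similarity
    # that treats "turn-on" and "switch-on" as near-synonyms.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(6, 4))
    labels = ["open", "close", "open", "turn-on", "switch-on", "close"]
    syn = {frozenset({"turn-on", "switch-on"})}
    sim = lambda a, b: 1.0 if a == b else (0.8 if frozenset({a, b}) in syn else 0.0)
    print(label_distribution(feats[0] + 0.1, feats, labels, sim, k=3))
```

The output is a distribution over verbs in which semantically related annotations (e.g. "turn-on"/"switch-on") reinforce each other, which is the effect of embracing free-form label ambiguity rather than forcing a single hard class as an SVM would.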
Original language: English
Title of host publication: Computer Vision – ECCV 2016 Workshops
Subtitle of host publication: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I
Editors: Gang Hua, Hervé Jégou
Publisher: Springer
Pages: 532-545
Number of pages: 14
ISBN (Electronic): 9783319466040
ISBN (Print): 9783319466033
Publication status: Published - 18 Sept 2016
Event: 14th European Conference on Computer Vision, ECCV 2016 - Amsterdam, Netherlands
Duration: 8 Oct 2016 – 16 Oct 2016

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 9913
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th European Conference on Computer Vision, ECCV 2016
Country/Territory: Netherlands
City: Amsterdam
Period: 8/10/16 – 16/10/16

Keywords

  • Egocentric Action Recognition
  • Semantic Ambiguity
  • Semantic Embedding
