SEMBED: Semantic Embedding of Egocentric Action Videos

Michael Wray*, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

4 Citations (Scopus)
285 Downloads (Pure)


We present SEMBED, an approach for embedding an egocentric object-interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
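The core idea above — placing a video in a graph whose edges mix visual similarity with the semantic relatedness of free-choice verb labels, then reading off a probability distribution over labels — can be illustrated with a toy sketch. This is not the authors' implementation: the feature vectors, similarity functions, relatedness scores, and verbs below are all hypothetical stand-ins for the motion/appearance features and semantic knowledge the paper actually uses.

```python
from collections import Counter

# Hypothetical annotated videos: (free-choice verb label, toy 2-D "visual feature")
annotated = [
    ("open",  (0.9, 0.1)),
    ("pull",  (0.8, 0.2)),   # "pull" is semantically close to "open" (e.g. a door)
    ("close", (0.1, 0.9)),
    ("shut",  (0.2, 0.8)),   # "shut" is a near-synonym of "close"
]

# Hypothetical semantic relatedness between verb labels (symmetric, in [0, 1])
semantic = {
    frozenset({"open", "pull"}): 0.7,
    frozenset({"close", "shut"}): 0.9,
}

def sem(a, b):
    """Semantic relatedness of two verbs; weak default for unrelated pairs."""
    if a == b:
        return 1.0
    return semantic.get(frozenset({a, b}), 0.1)

def visual_sim(f, g):
    """Inverse-distance visual similarity between toy feature vectors."""
    d = sum((x - y) ** 2 for x, y in zip(f, g)) ** 0.5
    return 1.0 / (1.0 + d)

def label_distribution(query_feature, k=3):
    """Embed the query among its k visually nearest annotated neighbours,
    then spread each neighbour's weighted vote over semantically related
    verbs, yielding a probability distribution over all labels."""
    neighbours = sorted(annotated,
                        key=lambda v: -visual_sim(query_feature, v[1]))[:k]
    verbs = {v for v, _ in annotated}
    scores = Counter()
    for verb, feat in neighbours:
        w = visual_sim(query_feature, feat)
        for candidate in verbs:
            scores[candidate] += w * sem(verb, candidate)
    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()}

dist = label_distribution((0.88, 0.12))
print(max(dist, key=dist.get))  # prints "open"
```

The point of the sketch is that a query visually between "open" and "pull" videos ends up with probability mass on both verbs rather than a single hard label, which is how the semantic-visual graph embraces annotation ambiguity instead of suppressing it.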
Original language: English
Title of host publication: Computer Vision – ECCV 2016 Workshops
Subtitle of host publication: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I
Editors: Gang Hua, Hervé Jégou
Number of pages: 14
ISBN (Electronic): 9783319466040
ISBN (Print): 9783319466033
Publication status: Published - 18 Sep 2016
Event: 14th European Conference on Computer Vision, ECCV 2016 - Amsterdam, Netherlands
Duration: 8 Oct 2016 – 16 Oct 2016

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 14th European Conference on Computer Vision, ECCV 2016


  • Egocentric Action Recognition
  • Semantic Ambiguity
  • Semantic Embedding

