Learning Visual Actions Using Multiple Verb-Only Labels

Research output: Contribution to conference › Conference Paper › peer-review



This work introduces verb-only representations for both recognition and retrieval of visual actions in video. Current methods neglect legitimate semantic ambiguities between verbs, instead choosing unambiguous subsets of verbs along with objects to disambiguate the actions. We instead propose multiple verb-only labels, which we learn through hard or soft assignment as a regression. This enables learning a much larger vocabulary of verbs, including the contextual overlaps between these verbs. We collect multi-verb annotations for three action video datasets and evaluate the verb-only labelling representations for action recognition and cross-modal retrieval (video-to-text and text-to-video).
We demonstrate that multi-label verb-only representations outperform conventional single-verb labels. We also explore further benefits of a multi-verb representation, including cross-dataset retrieval and retrieval by verb type (manner and result verbs).
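The abstract describes learning multi-verb labels through hard or soft assignment, trained as a regression. The following is a minimal illustrative sketch (not the authors' implementation) of what those two assignment schemes could look like: soft targets keep per-verb annotator agreement scores, hard targets binarise them at a threshold, and a regression loss (here a simple mean squared error) compares model scores against either target. The verb list, scores, and threshold are hypothetical.

```python
import numpy as np

# Hypothetical multi-verb annotation for one video clip:
# per-verb scores, e.g. the fraction of annotators who chose each verb.
verbs = ["open", "pull", "turn"]
soft_target = np.array([0.9, 0.7, 0.1])

# Hard assignment: binarise the soft scores at an assumed 0.5 threshold,
# so every verb chosen by a majority of annotators counts as a label.
hard_target = (soft_target >= 0.5).astype(float)

# Model scores for the same clip (placeholder values standing in for
# the output of a video classifier with one sigmoid unit per verb).
predicted = np.array([0.8, 0.6, 0.2])

# Regression loss against either target: mean squared error over verbs.
mse_soft = float(np.mean((predicted - soft_target) ** 2))
mse_hard = float(np.mean((predicted - hard_target) ** 2))

print(hard_target.tolist())  # [1.0, 1.0, 0.0]
print(round(mse_soft, 4))    # 0.01
print(round(mse_hard, 4))    # 0.08
```

Soft assignment retains the graded overlap between verbs (e.g. "open" and "pull" can both partially apply to the same action), which a single-verb label would discard.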
Original language: English
Number of pages: 14
Publication status: Published - 12 Sept 2019
Event: 30th British Machine Vision Conference - Cardiff, United Kingdom
Duration: 9 Sept 2019 - 12 Sept 2019
Conference number: 30


Conference: 30th British Machine Vision Conference
Abbreviated title: BMVC
Country/Territory: United Kingdom


