Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings

Michael Wray, Diane Larlus, Gabriela Csurka, Dima Damen

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

7 Citations (Scopus)
142 Downloads (Pure)

Abstract

We address the problem of cross-modal fine-grained action retrieval between text and video. Cross-modal retrieval is commonly achieved by learning a shared embedding space that can embed either modality. In this paper, we propose to enrich the embedding by disentangling parts-of-speech (PoS) in the accompanying captions. We build a separate multi-modal embedding space for each PoS tag. The outputs of the multiple PoS embeddings are then used as input to an integrated multi-modal space, in which we perform action retrieval. All embeddings are trained jointly through a combination of PoS-aware and PoS-agnostic losses. Our proposal enables learning specialised embedding spaces that offer multiple views of the same embedded entities.

We report the first retrieval results on fine-grained actions for the large-scale EPIC dataset, in a generalised zero-shot setting. Results show the advantage of our approach for both video-to-text and text-to-video action retrieval. We also demonstrate the benefit of disentangling the PoS for the generic task of cross-modal video retrieval on the MSR-VTT dataset.
Original language: English
Title of host publication: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 450-459
Number of pages: 10
ISBN (Electronic): 978-1-7281-4803-8
DOIs
Publication status: Published - 2 Nov 2019
Event: IEEE/CVF International Conference on Computer Vision (ICCV) 2019 - Seoul, Korea
Duration: 27 Oct 2019 - 2 Nov 2019

Publication series

Name
ISSN (Electronic): 2380-7504

Conference

Conference: IEEE/CVF International Conference on Computer Vision (ICCV) 2019
City: Seoul
Period: 27/10/19 - 2/11/19