Play Fair: Frame Attributions in Video Models

Will Price, Dima Damen

Research output: Chapter in Book/Report/Conference proceeding > Conference Contribution (Conference Proceeding)



In this paper, we introduce an attribution method for explaining action recognition models. Such models fuse information from multiple frames within a video through score aggregation or relational reasoning. We fairly break down a model’s class score into a sum of per-frame contributions. Our method adapts an axiomatic solution to fair reward distribution in cooperative games, known as the Shapley value, to elements in a variable-length sequence; we call the result the Element Shapley Value (ESV). Critically, we propose a tractable approximation of ESV that scales linearly with the number of frames in the sequence.
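For context, the classical Shapley value that ESV adapts assigns a contribution to each frame i as follows, where N denotes the set of all frames and v(S) a model class score evaluated on a frame subset S (symbols introduced here for illustration, not notation from the paper):

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \,\bigl[\, v(S \cup \{i\}) - v(S) \,\bigr]
```

By the efficiency axiom these contributions sum to v(N) - v(∅), which is exactly the per-frame decomposition of the class score described above; the paper extends this idea to variable-length sequences and provides a tractable approximation.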

We employ ESV to explain two action recognition models (TRN and TSN) on the fine-grained dataset Something-Something. We offer a detailed analysis of supporting/distracting frames and of how ESVs relate to frame position, class prediction, and sequence length. We compare ESV to naive baselines and to two commonly used feature attribution methods: Grad-CAM and Integrated Gradients.
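As an illustration only (this is the classical Monte Carlo permutation estimator, not the authors' linear-time ESV approximation), the sketch below estimates per-frame Shapley attributions. Here score_fn is a hypothetical callable that returns the model's class score for an arbitrary subset of frames, e.g. a TRN/TSN-style model that accepts variable-length input; its behaviour on the empty subset (the baseline score) is an assumption of this sketch.

```python
import random
from typing import Callable, List, Sequence


def shapley_frame_attributions(
    frames: Sequence,                        # e.g. a list of frame tensors
    score_fn: Callable[[list], float],       # hypothetical: class score for any frame subset
    num_samples: int = 200,
    seed: int = 0,
) -> List[float]:
    """Monte Carlo estimate of per-frame Shapley values.

    Each sampled permutation adds frames one at a time and credits a frame
    with the marginal change in the class score at the moment it is added.
    Subsets are evaluated in temporal order, since frame order matters to a
    video model.
    """
    rng = random.Random(seed)
    n = len(frames)
    attributions = [0.0] * n

    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)
        included: List[int] = []
        prev_score = score_fn([])            # empty-subset baseline (assumed defined)
        for i in order:
            included.append(i)
            included.sort()                  # keep the subset in temporal order
            score = score_fn([frames[j] for j in included])
            attributions[i] += score - prev_score
            prev_score = score

    return [a / num_samples for a in attributions]
```

Summed over all frames, these estimates approximate the difference between the score on the full sequence and the empty-subset baseline, matching the per-frame decomposition of the class score; the exact sampling scheme and the linear-time approximation used for ESV are described in the paper.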
Original language: English
Title of host publication: Computer Vision - ACCV 2020
Subtitle of host publication: 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 – December 4, 2020, Revised Selected Papers, Part V
Editors: Hiroshi Ishikawa, Cheng-Lin Liu, Tomas Pajdla, Jianbo Shi
Number of pages: 18
ISBN (Electronic): 9783030695415
ISBN (Print): 9783030695408
Publication status: Published - 25 Feb 2021
Event: 15th Asian Conference on Computer Vision
Duration: 30 Nov 2020 – 4 Dec 2020

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer International Publishing
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 15th Asian Conference on Computer Vision


