We present a new model to determine relative skill from long videos, through learnable temporal attention modules. Skill determination is formulated as a ranking problem, making it suitable for common and generic tasks. However, for long videos, parts of the video are irrelevant for assessing skill, and there may be variability in the skill exhibited throughout a video. We therefore propose a method which assesses the relative overall level of skill in a long video by attending to its skill-relevant parts. Our approach trains temporal attention modules, learned with only video-level supervision, using a novel rank-aware loss function. In addition to attending to task-relevant video parts, our proposed loss jointly trains two attention modules to separately attend to video parts which are indicative of higher (pros) and lower (cons) skill. We evaluate our approach on the EPIC-Skills dataset and additionally annotate a larger dataset from YouTube videos for skill determination with five previously unexplored tasks. Our method outperforms previous approaches and classic softmax attention on both datasets by over 4% pairwise accuracy, and as much as 12% on individual tasks. We also demonstrate our model’s ability to attend to rank-aware parts of the video.
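The abstract describes temporal attention pooling over video segments combined with a pairwise ranking loss trained from video-level comparisons. The sketch below illustrates that generic setup (classic softmax attention with a margin ranking loss, the baseline the paper improves on with its rank-aware loss), not the authors' exact method; all feature values, dimensions, and parameter names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_pool(segments, w_att):
    """Score each temporal segment, softmax-normalise the scores into
    attention weights, and pool segments into one video-level vector."""
    alpha = softmax([dot(f, w_att) for f in segments])
    dim = len(segments[0])
    return [sum(a * f[d] for a, f in zip(alpha, segments)) for d in range(dim)]

def pairwise_rank_loss(score_hi, score_lo, margin=1.0):
    """Margin ranking loss: the higher-skill video of a pair should
    out-score the lower-skill one by at least `margin`."""
    return max(0.0, margin - (score_hi - score_lo))

# Hypothetical 3-segment videos with 2-dim per-segment features.
v_hi = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7]]   # higher-skill video of the pair
v_lo = [[0.1, 0.9], [0.3, 0.8], [0.2, 0.6]]   # lower-skill video of the pair
w_att = [1.0, -0.5]    # attention parameters (learned in practice)
w_rank = [1.0, -1.0]   # linear ranking head (learned in practice)

score_hi = dot(attention_pool(v_hi, w_att), w_rank)
score_lo = dot(attention_pool(v_lo, w_att), w_rank)
loss = pairwise_rank_loss(score_hi, score_lo)
```

In this framing, only the pairwise ordering of videos supervises training; the attention weights themselves receive no direct labels, which is what the paper means by learning attention with video-level supervision alone.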
|Title of host publication||2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)|
|Publisher||Institute of Electrical and Electronics Engineers (IEEE)|
|Publication status||Published - 11 Mar 2019|
|Event||Computer Vision and Pattern Recognition (CVPR) 2019 - Long Beach, California, United States|
Duration: 16 Jun 2019 → 20 Jun 2019
|Conference||Computer Vision and Pattern Recognition (CVPR) 2019|
|Period||16/06/19 → 20/06/19|
|Bibliographical note||Proxy date of acceptance added to output record.|