We propose a Temporal Voting Network (TVNet) for action localization in untrimmed videos. It incorporates a novel Voting Evidence Module to locate temporal boundaries more accurately: temporal contextual evidence is accumulated to predict frame-level probabilities of start and end action boundaries. Our action-independent evidence module is incorporated within a pipeline to calculate confidence scores and action classes. We achieve an average mAP of 34.6% on ActivityNet-1.3, notably outperforming previous methods at the highest IoU threshold of 0.95. On THUMOS14 at 0.5 IoU, TVNet achieves an mAP of 56.0% when combined with PGCN and 59.1% with MUSES, outperforming prior work at all thresholds. Our code is available at https://github.com/hanielwang/TVNet.
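As a rough illustration of the voting idea described above, the sketch below accumulates contextual evidence from neighbouring frames into a per-frame boundary probability. This is a minimal, hypothetical NumPy sketch, not the paper's implementation: the function name and the `window` and `decay` parameters are assumptions made for illustration.

```python
import numpy as np

def accumulate_boundary_votes(frame_scores, window=5, decay=0.8):
    """Toy temporal voting: each frame's boundary score is a decayed sum
    of votes from neighbouring frames within `window`.

    `frame_scores` is a 1-D array of raw per-frame boundary evidence
    (e.g. start- or end-boundary scores in [0, 1]).
    NOTE: illustrative only; TVNet's actual module is learned.
    """
    T = len(frame_scores)
    voted = np.zeros(T)
    for t in range(T):
        for dt in range(-window, window + 1):
            src = t + dt
            if 0 <= src < T:
                # nearer neighbours contribute stronger votes
                voted[t] += (decay ** abs(dt)) * frame_scores[src]
    # normalise so the output reads as a relative probability
    return voted / voted.max()

# toy example: evidence peaking near frame 3 sharpens into a boundary estimate
probs = accumulate_boundary_votes(
    np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1]))
```

In the paper, an analogous accumulation is performed by a learned module, and the resulting start/end probabilities feed a pipeline that produces confidence scores and action classes.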
Publication status: Published - 10 Jan 2022
Event: 17th International Conference on Computer Vision Theory and Applications, Online
Duration: 6 Feb 2022 → 8 Feb 2022