This paper investigates the Visual Voice Activity Detection (V-VAD) problem in unconstrained environments. A novel method for V-VAD in the wild is proposed, which exploits local shape and motion information appearing at spatiotemporal locations of interest to describe facial video segments, and the Bag of Words (BoW) model to represent them. Facial video segment classification is subsequently performed using state-of-the-art classification algorithms. Experimental results on a publicly available V-VAD data set demonstrate the effectiveness of the proposed method, which achieves better generalization performance on unseen users than recently proposed state-of-the-art methods. Additional results on a new, unconstrained data set provide evidence that the proposed method remains effective even in cases where existing methods fail.
- Voice Activity Detection in the wild
- Space-Time Interest Points
- Bag of Words model
- kernel Extreme Learning Machine
- Action Recognition
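The pipeline named by the keywords above (local descriptors at interest points, a BoW representation, and a kernel Extreme Learning Machine classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random vectors stand in for real shape/motion descriptors (e.g. HOG/HOF at space-time interest points), the codebook is built by naive descriptor sampling rather than clustering, and all sizes and hyperparameters (`n_desc`, `dim`, codebook size, `C`, `gamma`) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-segment STIP descriptors (assumption: a real
# system would extract shape/motion descriptors from each facial video segment).
def make_segment(label, n_desc=30, dim=8):
    center = np.zeros(dim) if label == 0 else np.full(dim, 5.0)
    return center + rng.normal(size=(n_desc, dim))

segments = [make_segment(l) for l in range(2) for _ in range(20)]
labels = np.array([l for l in range(2) for _ in range(20)])

# Bag of Words: build a codebook (here by sampling descriptors; k-means is
# the usual choice) and represent each segment as a normalized word histogram.
all_desc = np.vstack(segments)
codebook = all_desc[rng.choice(len(all_desc), size=16, replace=False)]

def bow(seg):
    d = ((seg[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

X = np.array([bow(s) for s in segments])

# Kernel ELM in closed form: beta = (K + I/C)^{-1} T, prediction k(x, X) @ beta.
def rbf(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

C = 100.0
T = np.eye(2)[labels] * 2 - 1            # one-hot targets in {-1, +1}
K = rbf(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)

pred = rbf(X, X) @ beta                  # training-set predictions
acc = (pred.argmax(1) == labels).mean()
```

The closed-form kernel ELM solve replaces iterative training with a single regularized linear system, which is one reason such classifiers are attractive for moderately sized video data sets.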