Abstract
Research has shown that the dynamics of facial motion are important in the perception of gender, identity, and emotion. In this paper we show that it is possible to use a multilinear tensor framework to extract facial motion signatures and to cluster these signatures by gender or by emotion. Here we consider only the dynamics of internal features of the face (e.g. the eyebrows, eyelids and mouth) so as to remove structural and shape cues to identity and gender. Such structural gender biases include jaw width and forehead shape; their removal ensures that dynamic cues alone are used. Additionally, we demonstrate the generative capabilities of the tensor framework by consistently synthesising new motion signatures.
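The paper itself details the multilinear analysis; as a rough illustration of the kind of decomposition such a framework involves, the sketch below applies a higher-order SVD (HOSVD, in the style of Tucker/TensorFaces decompositions) to a toy motion tensor. The tensor layout (people × expressions × motion features), the toy data, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal HOSVD sketch (NumPy only). The tensor layout, toy data and names
# are illustrative assumptions, not the paper's implementation.
import numpy as np


def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)


def mode_multiply(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along the given mode (n-mode product)."""
    rest = [tensor.shape[i] for i in range(tensor.ndim) if i != mode]
    result = (matrix @ unfold(tensor, mode)).reshape([matrix.shape[0]] + rest)
    return np.moveaxis(result, 0, mode)


def hosvd(tensor):
    """Higher-order SVD: per-mode factor matrices plus a core tensor."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors


# Toy data tensor: 10 people x 4 expressions x 60 motion features per sequence.
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 4, 60))

core, (U_person, U_expr, U_feat) = hosvd(D)

# In this sketch each row of U_person acts as one person's motion signature;
# rows of U_expr play the same role for expressions, so either factor could be
# fed to a clustering step (e.g. k-means over the rows).
signatures = U_person

# Generative use: mix two people's coefficients and push the new signature
# back through the core and the remaining factors to synthesise a sequence.
basis = mode_multiply(mode_multiply(core, U_expr, 1), U_feat, 2)
new_signature = 0.5 * (U_person[0] + U_person[1])
synthesised = mode_multiply(basis, new_signature[None, :], 0)[0]  # shape (4, 60)
```

Clustering the rows of a factor matrix by gender or emotion, and remixing row coefficients to generate new signatures, is the general pattern the abstract describes; the specific tensor modes and features used in the paper may differ.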
| Translated title of the contribution | Using a Tensor Framework for the Analysis of Facial Dynamics |
|---|---|
| Original language | English |
| Title of host publication | 7th International Conference on Automatic Face and Gesture Recognition (FGR06) |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Number of pages | 6 |
| ISBN (Print) | 0-7695-2503-2 |
| DOIs | |
| Publication status | Published - 24 Apr 2006 |
Bibliographical note
Conference Proceedings/Title of Journal: 7th International Conference on Automatic Face and Gesture Recognition, FG2006

Research Groups and Themes
- Cognitive Science
- Social Cognition