Abstract
We present two approaches for generating novel video textures that portray a human expressing different emotions. Training data is provided by video sequences of an actress expressing specific emotions such as anger, happiness and sadness. The main challenge in modelling these video texture sequences is the high variance in head position and facial expression. Principal Components Analysis (PCA) is used to generate so-called "motion signatures", which are shown to be complex and to have non-Gaussian distributions. The first method uses a combined appearance model to transform the video data into a lower-dimensional Gaussian space, which can then be modelled using a standard autoregressive process. The second technique extracts sub-samples from the original data using short temporal windows, some of which have Gaussian distributions and can be modelled by an autoregressive process (ARP). We find that the combined appearance technique produces more aesthetically pleasing clips but does not preserve the motion characteristics as well as the temporal window approach.
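The abstract describes a common pipeline: project video frames into a low-dimensional PCA space to obtain a "motion signature", fit an autoregressive process to that trajectory, and roll the process forward to synthesise novel motion. The sketch below illustrates that general idea only, assuming flattened greyscale frames as input; the function names, the first-order ARP, and the component count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): PCA "motion signature"
# extraction plus a first-order autoregressive model of the coefficients.
import numpy as np

def pca_reduce(frames, n_components=10):
    """Project flattened frames onto the top principal components.

    frames: (T, D) array, one flattened frame per row.
    Returns (coeffs, mean, basis) so frames ~= coeffs @ basis + mean.
    """
    mean = frames.mean(axis=0)
    centred = frames - mean
    # SVD yields the principal axes without forming the D x D covariance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]           # (k, D) principal axes
    coeffs = centred @ basis.T          # (T, k) motion signature
    return coeffs, mean, basis

def fit_ar1(coeffs):
    """Least-squares fit of x_{t+1} = A x_t + e_t, with e_t ~ N(0, Sigma)."""
    x_prev, x_next = coeffs[:-1], coeffs[1:]
    A, *_ = np.linalg.lstsq(x_prev, x_next, rcond=None)
    resid = x_next - x_prev @ A
    return A.T, np.cov(resid.T)

def synthesise(A, sigma, x0, n_steps, rng):
    """Roll the fitted ARP forward to generate a novel trajectory."""
    xs = [x0]
    for _ in range(n_steps - 1):
        noise = rng.multivariate_normal(np.zeros(len(x0)), sigma)
        xs.append(A @ xs[-1] + noise)
    return np.array(xs)

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64 * 64))    # stand-in for real video frames
coeffs, mean, basis = pca_reduce(frames)
A, sigma = fit_ar1(coeffs)
new_coeffs = synthesise(A, sigma, coeffs[0], 100, rng)
new_frames = new_coeffs @ basis + mean      # back-project to pixel space
```

Note that this only works when the coefficient distribution is roughly Gaussian, which is the abstract's point: the combined appearance model and the short temporal windows are two different ways of obtaining a space in which a standard ARP is a valid model.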
| Translated title of the contribution | Synthesising Facial Emotions |
| --- | --- |
| Original language | English |
| Title of host publication | Theory and Practice of Computer Graphics |
| Publisher | IEEE Computer Society |
| Pages | 120-127 |
| Number of pages | 8 |
| ISBN (Print) | 0769521371 |
| Publication status | Published - Jun 2004 |