Synthesising Facial Emotions

DJ Oziem, LN Gralewski, NW Campbell, CJ Dalton, DP Gibson, BT Thomas

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)


We present two approaches for the generation of novel video textures which portray a human expressing different emotions. Training data is provided by video sequences of an actress expressing specific emotions such as anger, happiness and sadness. The main challenge of modelling these video texture sequences is the high variance in head position and facial expression. Principal Components Analysis (PCA) is used to generate so-called "motion signatures", which are shown to be complex and have non-Gaussian distributions. The first method uses a combined appearance model to transform the video data into a lower-dimensional Gaussian space, which can then be modelled using a standard autoregressive process. The second technique extracts sub-samples from the original data using short temporal windows, some of which have Gaussian distributions and can be modelled by an autoregressive process (ARP). We find that the combined appearance technique produces more aesthetically pleasing clips but does not maintain the motion characteristics as well as the temporal window approach.
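The pipeline described in the abstract, projecting frames onto principal components and fitting an autoregressive process to the resulting low-dimensional trajectory, can be sketched as follows. This is a minimal illustration with NumPy only: the random `frames` array stands in for real appearance-model parameters, the order-2 AR fit is an assumption, and the paper's combined appearance model and temporal-window sub-sampling are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the video data: each row is one frame,
# flattened to a feature vector (the paper uses appearance parameters
# of face images; random data here is purely for illustration).
frames = rng.normal(size=(200, 50))

# --- PCA: project frames into a low-dimensional space -------------
mean = frames.mean(axis=0)
centred = frames - mean
# Right singular vectors of the centred data are the principal axes.
_, _, components = np.linalg.svd(centred, full_matrices=False)
k = 5                                     # keep the top-k components
signatures = centred @ components[:k].T   # "motion signature" trajectory

# --- Fit an autoregressive process of order p to the trajectory ---
p = 2
# Lagged design matrix for x_t ~ [x_{t-1}, x_{t-2}] @ A
X = np.hstack([signatures[p - i - 1:len(signatures) - i - 1]
               for i in range(p)])
Y = signatures[p:]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)  # stacked AR coefficients
noise_scale = (Y - X @ A).std(axis=0)      # residual noise level

# --- Synthesise a novel trajectory and map back to frame space ----
synth = list(signatures[:p])
for _ in range(100):
    past = np.concatenate(synth[-1:-p - 1:-1])  # most recent lag first
    synth.append(past @ A + rng.normal(scale=noise_scale))
synth = np.array(synth)
new_frames = synth @ components[:k] + mean     # back to pixel/feature space
```

With real face data the `signatures` trajectory would be non-Gaussian, which is precisely why the paper introduces the appearance-model transform and the temporal-window sub-sampling before applying the ARP.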
Translated title of the contribution: Synthesising Facial Emotions
Original language: English
Title of host publication: Theory and Practice of Computer Graphics
Publisher: IEEE Computer Society
Pages: 120 - 127
Number of pages: 8
ISBN (Print): 0769521371
Publication status: Published - Jun 2004

Bibliographical note

Conference organiser: IEEE Computer Society

