Recognise my emotions: on the automatic recognition of emotions from human speech

Student thesis: Master's Thesis (Unspecified Master's Degree)


The ability to automatically recognise human emotions will open up a range of new possibilities for human-computer interaction, and these will become increasingly valuable as autonomous computational agents reach ubiquity. Here, the recognition of emotions in human speech is considered. While methods exist for this task, they demonstrate low classification accuracy. It is argued that the poor performance of these existing solutions arises from the choice of classification features. Results from dynamical systems theory are used to develop a new classification approach: speech production dynamics are reconstructed from audio recordings; the equilibria, Lyapunov exponents, and correlation dimension of the dynamics are extracted; and a feature space is defined on these data. A classification accuracy of 74% is achieved when distinguishing between calm and angry emotional speech, reducing to 54% for fearful and sad emotions. A set of novel algorithms is also proposed for determining the embedding dimension and nonlinear equilibria of a system from time series data, and for detecting voiced speech in audio recordings.
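The reconstruction step described above is conventionally done with delay-coordinate (Takens) embedding. The following is a minimal sketch of that idea, not the thesis's own algorithm (which determines the embedding dimension itself); the signal, embedding dimension, and delay here are purely illustrative.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct state-space vectors from a scalar time series
    via delay-coordinate embedding: each row is
    [x(t), x(t + tau), ..., x(t + (dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative stand-in for an audio frame: a short sinusoid.
signal = np.sin(np.linspace(0, 8 * np.pi, 400))
states = delay_embed(signal, dim=3, tau=5)  # shape (390, 3)
```

Dynamical invariants such as Lyapunov exponents and the correlation dimension would then be estimated on the embedded trajectory `states` rather than on the raw waveform.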
Date of Award: 25 Jun 2019
Original language: English
Awarding Institution
  • University of Bristol
Supervisors: Robert Szalai & Ksenia Shalonova
