Abstract
The advent of complex machines such as robots and autonomous vehicles has opened the possibility of close interaction and cooperation with human users, for which safety becomes crucially important. By considering autonomous vehicles as robots (i.e. autonomous agents), the two areas can be bridged by bringing methods from human-robot interaction to interactions with autonomous vehicles.
Drawing on ideas from Human-Robot Interaction (HRI), the understanding and estimation of human behaviour can help verify that a human is performing a task in the way the robot/vehicle expects.
Our results provide useful estimations and task-related predictions of human behaviour that can be used in automatic control loops to ensure safe cooperative actions.
Different modelling objectives were explored to gain insight into human intention and readiness in terms of task performance: task modelling at a temporal manoeuvre level, task performance whilst multitasking, and cognitive load (i.e. mental workload) were studied.
Physiological signals from different sources such as skeletal tracking sensors, heart monitors and eye trackers were used to acquire knowledge about the user's intentions and behaviours (e.g. cognitive load). Data-driven methods, ranging from shallow to deep learning, were used to relate data from multi-modal heterogeneous sources (e.g. human driver, vehicle) to expected behaviour, in order to create meaningful relations to task intention and cognition.
The most relevant results included manoeuvre identification and prediction for both known and unknown test subjects using skeletal tracking data. Outcomes showed relations between driving behaviour and mental workload based on car dynamics, heart rate and eye-gaze metrics, predicting driving performance up to 1 second ahead of the actual actions.
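To make the "prediction ahead of the action" idea concrete, the following is a minimal, hypothetical sketch (not the thesis code): multi-modal feature streams are sliced into sliding windows, each window is paired with the manoeuvre label a few time steps into the future, and a simple data-driven classifier is fitted. All function names, the nearest-centroid classifier, and the synthetic data are illustrative assumptions standing in for the skeletal-tracking features and learning models used in the work.

```python
import numpy as np

def make_windows(features, labels, window, horizon):
    """Slice a (T, D) feature series into overlapping windows and pair
    each window with the label `horizon` steps after the window ends,
    mimicking prediction ahead of the actual action."""
    X, y = [], []
    for t in range(len(features) - window - horizon + 1):
        X.append(features[t:t + window].ravel())    # flatten window to one vector
        y.append(labels[t + window + horizon - 1])  # future label is the target
    return np.array(X), np.array(y)

class NearestCentroid:
    """Minimal stand-in for the data-driven (shallow/deep) models."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each window to each class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic stand-in data: two alternating "manoeuvres" whose
# (hypothetical) multi-modal features have different means.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 0, 1], 50)                       # manoeuvre sequence
features = labels[:, None] * 2.0 + rng.normal(size=(200, 3)) * 0.3

X, y = make_windows(features, labels, window=5, horizon=3)  # predict 3 steps ahead
clf = NearestCentroid().fit(X, y)
accuracy = (clf.predict(X) == y).mean()
```

The key design point is the `horizon` offset in `make_windows`: shifting the target label into the future is what turns manoeuvre *identification* into manoeuvre *prediction*, and the same windowing applies regardless of which classifier replaces the toy centroid model.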
Date of Award: 28 Nov 2019
Supervisors: Guido Herrmann (Supervisor) & Ute B Leonards (Supervisor)