Project Details
Description
This project will develop and validate novel ways for people to interact with the world via cognitive wearables: intelligent on-body computing systems that aim to understand the user and the context and, importantly, are prompt-less and useful. Specifically, we will focus on the automatic production and display of what we call glanceable guidance. Eschewing traditional, intricate 3D Augmented Reality approaches, whose usefulness has been difficult to demonstrate, glanceable guidance synthesizes the nuances of complex tasks into short snippets that are ideal for wearable computing systems, interfere less with the user, and are easier to learn and use.
There are two key research challenges. The first is to mine information from long, raw, unscripted wearable video of real user-object interactions in order to generate the glanceable supports. The second is to automatically detect the user's moments of uncertainty, during which support should be provided without an explicit prompt.
The project aims to address the following fundamental problems:
1. Improve the detection of the user's attention by robustly determining the periods of time that correspond to task-relevant object interactions in a continuous stream of wearable visual and inertial sensor data.
2. Provide assistance only when it is needed by building models of the user, context, and task from autonomously identified micro-interactions across multiple users, focusing on models that can facilitate guidance.
3. Identify and predict action uncertainty from wearable sensing, in particular gaze patterns and head motions.
4. Detect and weigh user expertise to identify task nuances for the optimal creation of real-time tailored guidance.
5. Design and deliver glanceable guidance that acts in a seamless and prompt-less manner during task performance with minimal interruptions, based on autonomously built models.
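Aim 3 can be made concrete with a small illustration. The following is a minimal, hypothetical sketch, not the project's actual method: all signal names, window sizes, and thresholds are assumptions chosen for demonstration. It flags candidate moments of hesitation where gaze disperses (searching behaviour) while the head stays comparatively still (hovering over the task):

```python
# Hypothetical sketch: flagging candidate "moments of uncertainty" from
# wearable gaze and head-motion signals. Thresholds and window sizes are
# illustrative assumptions only.

def rolling_std(xs, window):
    """Standard deviation over a trailing window, one value per sample."""
    out = []
    for i in range(len(xs)):
        w = xs[max(0, i - window + 1):i + 1]
        mean = sum(w) / len(w)
        out.append((sum((v - mean) ** 2 for v in w) / len(w)) ** 0.5)
    return out

def uncertainty_flags(gaze_x, head_speed, window=5,
                      gaze_disp_thresh=0.5, head_speed_thresh=0.2):
    """Flag samples where gaze dispersion is high (searching) while head
    speed is low (hovering) - a simple proxy for hesitation."""
    disp = rolling_std(gaze_x, window)
    return [d > gaze_disp_thresh and h < head_speed_thresh
            for d, h in zip(disp, head_speed)]

# Example: steady gaze with a moving head, then scattered gaze with a
# still head - only the latter samples are flagged.
gaze_x = [0.0] * 10 + [0.0, 1.0, -1.0, 1.0, -1.0, 1.0]
head_speed = [1.0] * 10 + [0.05] * 6
flags = uncertainty_flags(gaze_x, head_speed)
```

In practice such flags would come from learned models over richer gaze and inertial features, but the sketch conveys the intuition behind detecting support-worthy moments without an explicit prompt.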
GLANCE is underpinned by a rich program of experimental work and rigorous validation across a variety of interaction tasks and user groups. Populations to be tested include both skilled users and the general population, on tasks that include assembly, using novel equipment (e.g. an unfamiliar coffee maker), and repair (e.g. replacing a bicycle gear cable). The project also tightly incorporates the development of working demonstrations.
In collaboration with our partners, the project will explore high-value impact cases in health care, towards assisted living, and in industrial settings, focusing on assembly and maintenance tasks.
Our team is a collaboration between Computer Science, to develop the novel data mining and computer vision algorithms, and Behavioral Science, to understand when and how users need support.
| Acronym | GLANCE |
|---|---|
| Status | Finished |
| Effective start/end date | 1/04/16 → 1/04/20 |
Research Groups and Themes
- Jean Golding
- Mind and Brain (Psychological Science)
- Cognitive Science
Research Outputs
- Disentangling decision uncertainty and motor noise in curved movement trajectories. Chapman, W. G. & Ludwig, C. J. H., 1 Nov 2025, In: Journal of Vision, 25(13), 6, 21 p. Journal article, peer-reviewed. Open Access.
- Action Modifiers: Learning from Adverbs in Instructional Videos. Doughty, H., Laptev, I., Mayol-Cuevas, W. & Damen, D., 5 Aug 2020, In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 11 p. Conference proceeding. Open Access.
- The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines. Damen, D., Doughty, H. R., Farinella, G. M., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J. P. N., Perrett, T., Price, W. & Wray, M., 6 May 2020 (e-pub ahead of print), In: IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-16. Journal article, peer-reviewed. Open Access.