This paper presents a system intended to serve as the enabling platform for a wearable assistant. The method observes manipulations from a wearable camera and classifies activities from roughly stabilized, low-resolution images (160x120 pixels) using a 3-level Dynamic Bayesian Network and adapted temporal templates. Our motivation is to explore robust but computationally inexpensive visual methods that perform as much activity inference as possible without resorting to more complex object or hand detectors. A description of the method and the results obtained is presented, together with the motivation for further work in the area of wearable visual sensing.
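The temporal templates mentioned in the abstract are, in the classic formulation, motion history images: each pixel stores how recently motion occurred there, so a whole action leaves a single fading silhouette that is cheap to classify. The sketch below is not the authors' implementation; it is a minimal, generic motion-history-image construction over low-resolution grayscale frames, with the decay length `tau` and the motion `threshold` as assumed illustrative parameters.

```python
import numpy as np

def motion_history_image(frames, tau=10, threshold=25):
    """Build a motion history image (temporal template) from grayscale frames.

    frames: sequence of 2-D uint8 arrays (e.g. 120x160 low-resolution images).
    tau: memory length in frames; threshold: per-pixel frame-difference threshold.
    Returns a float array in [0, 1]: recent motion is bright, older motion fades.
    """
    frames = [f.astype(np.float32) for f in frames]
    mhi = np.zeros_like(frames[0])
    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr - prev) >= threshold
        # Pixels with motion are reset to full intensity; the rest decay by one step.
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau
```

The resulting template is a compact, fixed-size image summary of recent motion, which is why such features pair well with low-resolution wearable video: no object or hand detector is needed before classification.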
Translated title of the contribution: High Level Activity Recognition using Low Resolution Wearable Vision
Title of host publication: First Workshop on Egocentric Vision, in conjunction with the International Conference on Computer Vision and Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication status: Published - 2009
Bibliographical note:
Other page information: -
Conference Proceedings/Title of Journal: First Workshop on Egocentric Vision, in conjunction with the International Conference on Computer Vision and Pattern Recognition
Other identifier: 2001033