In a series of papers, we have formalized an active Bayesian perception approach for robotics based on recent progress in understanding animal perception. However, an issue for applied robot perception is how to tune this method to a task, using: (i) a belief threshold that adjusts the speed-accuracy tradeoff; and (ii) an active control strategy for relocating the sensor, e.g. to a preset fixation point. Here we propose that these two variables should be learnt by reinforcement from a reward signal evaluating the decision outcome. We test this claim with a biomimetic fingertip that senses surface curvature under uncertainty about contact location. Appropriate formulation of the problem allows the use of multi-armed bandit methods to optimize the threshold and fixation point of the active perception. In consequence, the system learns to balance speed against accuracy and sets the fixation point to optimize both quantities. Although we consider one example in robot touch, we expect the underlying principles to have general applicability.
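The optimization described in the abstract can be illustrated with a minimal sketch: treat each (belief threshold, fixation point) pair as an arm of a multi-armed bandit, reward each trial by decision accuracy minus a time penalty, and let a standard UCB1 rule find the best pair. The simulated environment below (evidence rates, tap costs, accuracy curve) is entirely hypothetical and stands in for the biomimetic fingertip, not the paper's actual setup.

```python
import math
import random

random.seed(0)

# Candidate settings: belief thresholds and (assumed) fixation offsets.
thresholds = [0.7, 0.8, 0.9, 0.99]
fixations = [0.0, 0.5, 1.0]
arms = [(t, f) for t in thresholds for f in fixations]

def simulated_reward(threshold, fixation):
    """Toy stand-in for a perception trial: a higher threshold raises
    accuracy but needs more taps; an assumed 'best' fixation of 0.5
    maximizes evidence gained per tap."""
    evidence_per_tap = 0.2 - 0.1 * abs(fixation - 0.5)
    taps = threshold / evidence_per_tap          # taps needed to cross threshold
    p_correct = min(0.5 + threshold / 2, 0.99)
    correct = random.random() < p_correct
    return (1.0 if correct else 0.0) - 0.02 * taps   # accuracy minus time cost

# UCB1 bandit over the (threshold, fixation) arms.
counts = [0] * len(arms)
values = [0.0] * len(arms)
for step in range(2000):
    if step < len(arms):
        i = step                                  # play each arm once first
    else:
        i = max(range(len(arms)),
                key=lambda a: values[a] +
                math.sqrt(2 * math.log(step) / counts[a]))
    r = simulated_reward(*arms[i])
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]      # incremental mean update

best = arms[max(range(len(arms)), key=lambda a: values[a])]
print("best (threshold, fixation):", best)
```

The reward's time penalty (here 0.02 per tap) is what couples the two variables: a stricter threshold only pays off if the fixation point delivers evidence quickly enough, which is why the paper optimizes both jointly.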
|Title of host publication||IEEE International Conference on Intelligent Robots and Systems|
|Number of pages||6|
|Publication status||Published - 2013|
|Event||2013 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon, IROS 2013 - Tokyo, Japan|
|Duration||3 Nov 2013 → 8 Nov 2013|