Abstract
Awareness of our own body is essential in everyday life: it is what allows us to move through a dark room or to grasp a complex object. These skills are equally important for robots; however, robotic bodily awareness remains an unsolved problem. In this paper we present a novel method for giving soft robots bodily awareness by integrating exteroceptive and proprioceptive sensors. We use a combination of a stacked convolutional autoencoder and a recurrent neural network to map internal sensory signals to visual information. As a result, the simulated soft robot can learn to imagine its own motion even when its visual sensor is not available.
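The abstract only names the architecture, so below is a minimal sketch of what such a pipeline might look like, assuming a PyTorch-style setup; it is not the authors' code, and all module names (`ConvAutoencoder`, `ProprioToVision`), layer sizes, and dimensions (`latent_dim`, `proprio_dim`, 64×64 frames) are illustrative assumptions. The autoencoder learns a compact visual latent, and the recurrent network regresses that latent from proprioceptive sequences so a decoded frame can be "imagined" when the camera is unavailable.

```python
# Hypothetical sketch (not the authors' implementation): a stacked convolutional
# autoencoder compresses camera frames into a latent code, and an LSTM maps
# proprioceptive sequences to that latent so the robot can "imagine" its visual
# state without the camera. All dimensions are assumed for illustration.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Stacked convolutional autoencoder for 64x64 grayscale frames."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class ProprioToVision(nn.Module):
    """Recurrent network mapping proprioceptive sequences to visual latents."""
    def __init__(self, proprio_dim=12, hidden_dim=64, latent_dim=32):
        super().__init__()
        self.rnn = nn.LSTM(proprio_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, proprio_seq):
        out, _ = self.rnn(proprio_seq)
        return self.head(out[:, -1])  # predicted visual latent at the final step

# Usage: train the autoencoder on frames, then regress latents from proprioception.
ae, p2v = ConvAutoencoder(), ProprioToVision()
frames = torch.rand(8, 1, 64, 64)   # batch of camera frames
proprio = torch.rand(8, 10, 12)     # 10-step proprioceptive sequences
recon, z_target = ae(frames)
z_pred = p2v(proprio)
imagined = ae.decoder(z_pred)       # "imagined" view with the camera unavailable
loss = (nn.functional.mse_loss(recon, frames)
        + nn.functional.mse_loss(z_pred, z_target.detach()))
```

Detaching the target latent in the regression loss is one plausible design choice: it keeps the proprioception-to-vision mapping from distorting the visual representation learned by the autoencoder.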
| Original language | English |
| --- | --- |
| Publication status | Unpublished - 15 Nov 2017 |
| Event | Conference on Robot Learning 2017, Mountain View, United States. Duration: 13 Nov 2017 → 15 Nov 2017. Conference number: 1st |
Conference
| Conference | Conference on Robot Learning 2017 |
| --- | --- |
| Abbreviated title | CoRL 2017 |
| Country/Territory | United States |
| City | Mountain View |
| Period | 13/11/17 → 15/11/17 |
Structured keywords
- Tactile Action Perception
Keywords
- soft robots
- bodily awareness
- proprioception