Abstract
Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions, and show that with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and effective real-world performance highlight the value of rich tactile information for fine manipulation.
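To make the goal-conditioned formulation mentioned in the abstract concrete, below is a minimal sketch of what such an environment can look like: the observation concatenates tactile features with the object pose relative to a per-episode goal, and the reward is the negative distance to that goal. All names and details here (`TactilePushingEnv`, the placeholder tactile signal, the success threshold) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class TactilePushingEnv:
    """Minimal goal-conditioned pushing sketch (illustrative, not the paper's code).

    Observation = [tactile features, object pose relative to a sampled goal];
    reward = negative distance from object to goal, so pushing the object
    toward the goal increases return.
    """

    def __init__(self, workspace=1.0, rng=None):
        self.workspace = workspace
        self.rng = rng or np.random.default_rng(0)

    def reset(self):
        # Object starts at the origin; a planar goal position is sampled each episode.
        self.object_pos = np.zeros(2)
        self.goal_pos = self.rng.uniform(-self.workspace, self.workspace, size=2)
        return self._observe()

    def step(self, action):
        # Action: a small planar push displacement applied to the object
        # (a stand-in for the real contact dynamics).
        self.object_pos += np.clip(action, -0.05, 0.05)
        dist = np.linalg.norm(self.goal_pos - self.object_pos)
        reward = -dist                      # dense goal-conditioned reward
        done = dist < 0.02                  # success threshold (assumed)
        return self._observe(), reward, done

    def _observe(self):
        # Placeholder "tactile" features; a real system would use processed
        # tactile-sensor readings (e.g., contact pose estimates).
        tactile = np.tanh(self.object_pos)  # hypothetical stand-in signal
        rel_goal = self.goal_pos - self.object_pos
        return np.concatenate([tactile, rel_goal])

env = TactilePushingEnv()
obs = env.reset()
for _ in range(100):
    # A trivial proportional "policy" pushing along the goal direction.
    obs, reward, done = env.step(0.05 * obs[-2:])
    if done:
        break
```

Because the goal enters the observation rather than the reward function's definition, the same trained policy can, in principle, be directed to any goal in the workspace at test time, which is what allows training on a limited sample of goals to generalize.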
| Original language | English |
| --- | --- |
| Pages (from-to) | 5480–5487 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 8 |
| Issue number | 9 |
| Early online date | 13 Jul 2023 |
| DOIs | |
| Publication status | Published - 13 Jul 2023 |
Bibliographical note
Publisher Copyright: © 2016 IEEE.
Research Groups and Themes
- Engineering Mathematics Research Group
Keywords
- Data models
- Dexterous Manipulation
- Force and Tactile Sensing
- Reinforcement learning
- Reliability
- Robot sensing systems
- Robots
- Task analysis
- Training