Deep Reinforcement Learning for Tactile Robotics: Learning to Type on a Braille Keyboard

Research output: Contribution to journal › Article (Academic Journal) › peer-review

23 Citations (Scopus)

Abstract

Artificial touch would seem well-suited for Reinforcement Learning (RL), since both paradigms rely on interaction with an environment. Here we propose a new environment and set of tasks to encourage development of tactile reinforcement learning: learning to type on a braille keyboard. Four tasks are proposed, progressing in difficulty from arrow to alphabet keys and from discrete to continuous actions. A simulated counterpart is also constructed by sampling tactile data from the physical environment. Using state-of-the-art deep RL algorithms, we show that all of these tasks can be successfully learned in simulation, and three out of four tasks can be learned on the real robot. A lack of sample efficiency currently makes the continuous alphabet task impractical on the robot. To the best of our knowledge, this work presents the first demonstration of successfully training deep RL agents in the real world using observations that exclusively consist of tactile images. To aid future research utilising this environment, the code for this project has been released along with designs of the braille keycaps for 3D printing and a guide for recreating the experiments.
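To make the task setup concrete, the sketch below shows a minimal gym-style environment interface for the simplest variant described in the abstract (discrete arrow-key presses with tactile-image observations). This is a hypothetical illustration only, not the authors' released code: the class name, observation shape, and reward scheme are assumptions, and the tactile image is faked with random noise in place of real or logged sensor data.

```python
# Hypothetical sketch (not the released code): a gym-style discrete-action
# environment for an arrow-key typing task with tactile-image observations.
# All names, shapes, and the reward scheme here are assumptions.
import numpy as np


class BrailleArrowKeyEnv:
    """Minimal stand-in for a 'press the goal arrow key' tactile task.

    Observations: a single-channel tactile image (assumed 128x128 pixels).
    Actions: discrete choice of one of the four arrow keys to press.
    Reward: +1 for pressing the goal key, 0 otherwise; episode ends after one press.
    """

    N_KEYS = 4                     # up, down, left, right
    OBS_SHAPE = (128, 128, 1)      # assumed tactile-image resolution

    def __init__(self, seed=None):
        self.rng = np.random.default_rng(seed)
        self.goal_key = None

    def _tactile_image(self, key):
        # Placeholder for a real sensor capture (or a sample drawn from logged
        # tactile data, as in a data-driven simulation of the task).
        img = self.rng.random(self.OBS_SHAPE, dtype=np.float32)
        img[key * 16:(key + 1) * 16, :, :] += 0.5   # crude key-dependent signature
        return np.clip(img, 0.0, 1.0)

    def reset(self):
        self.goal_key = int(self.rng.integers(self.N_KEYS))
        # Tactile observation of the goal key's braille cap, from which the
        # agent must infer which key to press.
        return self._tactile_image(self.goal_key)

    def step(self, action):
        reward = 1.0 if action == self.goal_key else 0.0
        done = True                                  # one key press per episode
        obs = self._tactile_image(int(action))
        return obs, reward, done, {}


if __name__ == "__main__":
    env = BrailleArrowKeyEnv(seed=0)
    obs = env.reset()
    obs, reward, done, info = env.step(action=2)
    print(obs.shape, reward, done)
```

Any standard deep RL agent that consumes image observations and discrete actions could, in principle, be trained against an interface like this; the continuous-action alphabet tasks mentioned in the abstract would instead expose a continuous action space for fingertip positioning.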
Original language: English
Pages (from-to): 6145-6152
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 4
DOIs
Publication status: Published - 20 Jul 2020

Bibliographical note

The acceptance date for this record is provisional and based upon the month of publication for the article.
