Sim-to-Real Deep Tactile Policies for Dexterous Robotic Manipulation

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

Tactile sensing is crucial for enabling robots to achieve human-level dexterity, as it provides the detailed contact feedback needed for fine-grained manipulation tasks. However, learning deep tactile policies for robot control directly in the real world is challenging: it is inefficient, requiring an extensive number of trials, and the complexities of physical contact pose safety risks. A promising alternative is to learn these policies in a simulated environment and subsequently transfer them to the real world. This approach raises a critical research question: how can simulated tactile sensing contribute to learning robot control policies that remain robust and reliable in real-world manipulation tasks?

This work aims to advance robotic manipulation by leveraging state-of-the-art tactile simulation to achieve dexterous, robust, and accurate manipulation in real-world environments. We first conducted a comprehensive real-world evaluation of several popular optical tactile sensors using tactile policies learned in the Tactile Gym simulation environment, finding that the TacTip sensor outperforms the others on robotic manipulation tasks. Building on this insight, we developed Bi-Touch, a dual-arm tactile robotic system integrating TacTips in both simulation and the real world. To evaluate Bi-Touch, we introduced a suite of bimanual manipulation tasks (bi-pushing, bi-reorienting, bi-lifting, and bi-gathering) with tailored reward functions, and used deep reinforcement learning to learn tactile policies. We enhanced these policies with a goal-update mechanism that yields robust behavior under unknown perturbations, and deployed them zero-shot on the real-world dual-arm tactile robot.

To further improve performance, we introduced the concept of tactile saliency, which lets the robot focus on relevant contact areas while disregarding distractions. Our framework, comprising the ConDepNet, TacSalNet, and TacNGen networks, learns tactile saliency in simulation and transfers it to the real world, significantly improving the control accuracy and robustness of multiple tactile control methods in a tactile servoing task. Finally, we integrated vision as a complementary modality to touch, proposing NeuralTouch, a coarse-to-fine manipulation framework that combines Neural Descriptor Fields with deep tactile reinforcement learning policies.
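The sim-to-real workflow described above (train a tactile policy with deep reinforcement learning in simulation, then deploy it zero-shot on the real robot) can be sketched as follows. This is a minimal illustration rather than the thesis implementation: the environment id, hyperparameters, and real-robot interface are hypothetical placeholders, assuming a Gym-style Tactile Gym task and the Stable-Baselines3 PPO implementation.

    # Minimal sketch: train a tactile policy in simulation, deploy it
    # zero-shot on real hardware. Env id and robot interface are hypothetical.
    import gym
    from stable_baselines3 import PPO

    # 1. Learn a tactile policy entirely in simulation
    #    (tactile images in, robot actions out).
    sim_env = gym.make("tactile_gym/bi_pushing-v0")  # hypothetical task id
    policy = PPO("CnnPolicy", sim_env, verbose=1)
    policy.learn(total_timesteps=1_000_000)
    policy.save("bi_pushing_tactile_policy")

    # 2. Zero-shot deployment: the same policy acts from real tactile
    #    images, with no fine-tuning on the physical system.
    def deploy(robot, policy, n_steps=500):
        # `robot` is a hypothetical interface exposing get_tactile_image()
        # (preprocessed to match the simulated observation space) and
        # apply_action() on the dual-arm platform.
        obs = robot.get_tactile_image()
        for _ in range(n_steps):
            action, _ = policy.predict(obs, deterministic=True)
            robot.apply_action(action)
            obs = robot.get_tactile_image()

The key design choice is that the real tactile observations are preprocessed to match the simulated observation space, so the policy trained in simulation can be reused unchanged.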

Overall, our findings demonstrate that accurate, robust, and dexterous tactile policies for robotic manipulation can be learned effectively in simulation and transferred to the real world, advancing our goal of using simulation to equip robots with the ability to interact physically with their environment.
Date of Award: 17 Jun 2025
Original language: English
Awarding Institution
  • University of Bristol
Supervisors: Nathan F Lepora & Efi Psomopoulou

Keywords

  • Tactile Robotics
  • Deep Reinforcement Learning
  • Sim-to-Real
  • Robotic Manipulation
  • Multimodal Manipulation
