Text2Touch: Tactile In-Hand Manipulation with LLM-Designed Reward Functions

Research output: Contribution to conference > Conference Paper

Abstract

Large language models (LLMs) are beginning to automate reward design for dexterous manipulation. However, no prior work has considered tactile sensing, which is known to be critical for human-like dexterity. We present Text2Touch, bringing LLM-crafted rewards to the challenging task of multi-axis in-hand object rotation with real-world vision-based tactile sensing in palm-up and palm-down configurations. Our prompt engineering strategy scales to over 70 environment variables, and sim-to-real distillation enables successful policy transfer to a tactile-enabled, fully actuated, four-fingered dexterous robot hand. Text2Touch significantly outperforms a carefully tuned human-engineered baseline, demonstrating superior rotation speed and stability while relying on reward functions that are an order of magnitude shorter and simpler. These results illustrate how LLM-designed rewards can significantly reduce the time from concept to deployable dexterous tactile skills, supporting more rapid and scalable multimodal robot learning.
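The abstract does not reproduce any of the LLM-generated reward code, but as a rough illustration, the sketch below shows the kind of compact rotation-plus-stability reward an LLM might produce for multi-axis in-hand rotation. It assumes a PyTorch-style vectorized simulation environment; all variable names, thresholds, and weights here are hypothetical and are not taken from the paper.

```python
import torch

# Hypothetical sketch of a compact LLM-style reward for in-hand object rotation.
# All names (object_angvel, rot_axis, contact_forces) and coefficients are
# illustrative assumptions, not the reward used in Text2Touch.
def compute_reward(object_angvel: torch.Tensor,  # (N, 3) object angular velocity
                   rot_axis: torch.Tensor,       # (N, 3) unit target rotation axis
                   contact_forces: torch.Tensor  # (N, F, 3) per-fingertip contact forces
                   ) -> torch.Tensor:
    # Reward rotation speed about the commanded axis, clipped so the policy
    # is not incentivized to fling the object.
    axis_angvel = (object_angvel * rot_axis).sum(dim=-1)
    rotation_reward = torch.clamp(axis_angvel, max=0.5)

    # Encourage stable grasps: small bonus per fingertip in tactile contact.
    in_contact = (contact_forces.norm(dim=-1) > 0.1).float()
    stability_reward = 0.1 * in_contact.sum(dim=-1)

    # Penalize off-axis angular velocity so the object spins cleanly
    # about the target axis rather than wobbling.
    off_axis = object_angvel - axis_angvel.unsqueeze(-1) * rot_axis
    wobble_penalty = -0.05 * off_axis.norm(dim=-1)

    return rotation_reward + stability_reward + wobble_penalty
```

A reward of this shape (a clipped task term plus a couple of small shaping terms) is consistent with the paper's claim that the LLM-designed rewards are an order of magnitude shorter than the hand-engineered baseline.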
Original language: English
Pages: 2847-2887
Number of pages: 41
Publication status: Published - 30 Sept 2025
Event: 9th Conference on Robot Learning - Seoul, Korea, Republic of
Duration: 27 Sept 2025 - 30 Sept 2025
https://www.corl.org/

Conference

Conference: 9th Conference on Robot Learning
Abbreviated title: CoRL
Country/Territory: Korea, Republic of
City: Seoul
Period: 27/09/25 - 30/09/25
Internet address: https://www.corl.org/

Research Groups and Themes

  • Interactive Artificial Intelligence CDT
