Trust is a critical issue in human–robot interaction, as it underpins the establishment of solid relationships. Theory of Mind (ToM) is the cognitive skill that allows us to understand what others think and believe. Several studies in HRI and psychology suggest that trust and ToM are interdependent concepts, since we trust another agent based on our representation of its actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we examine whether the perceived ToM abilities of a robotic agent influence human–robot trust over time in an iterative game scenario. To this end, participants played an Investment Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. During the game, participants were asked to choose a sum of money to invest in the robot; the amount invested served as the main measure of human–robot trust. Our experimental results show that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
|Title of host publication||HAI 2021 - Proceedings of the 9th International Conference on Human-Agent Interaction|
|Number of pages||8|
|Publication status||Published - 9 Nov 2021|
|Name||HAI 2021 - Proceedings of the 9th International Conference on Human-Agent Interaction|
This material is based upon work supported by the Air Force Office of Scientific Research, USAF under Award No. FA9550-19-1-7002. The work of Debora Zanatto was funded and delivered in partnership between the Thales Group and the University of Bristol, and with the support of the UK Engineering and Physical Sciences Research Council Grant Award EP/R004757/1 entitled ‘Thales-Bristol Partnership in Hybrid Autonomous Systems Engineering (T-B PHASE)’.
© 2021 ACM.