Investigating cooperation with robotic peers

Debora Zanatto, Massimiliano Patacchiola, Jeremy Goslin, Angelo Cangelosi

Research output: Contribution to journal › Article (Academic Journal)


Abstract

We explored how people establish cooperation with robotic peers by giving participants the choice of whether or not to cooperate with a robot that was more or less selfish and more or less interactive, in a more or less critical environment. We measured the participants' tendency to cooperate with the robot, as well as their perception of its anthropomorphism, trustworthiness, and credibility, through questionnaires. We found that cooperation in Human-Robot Interaction (HRI) follows the same rules as Human-Human Interaction (HHI): participants rewarded cooperation with cooperation and punished selfishness with selfishness. We also identified two robotic profiles capable of increasing cooperation, depending on the payoff. A mute, non-interactive robot was preferred under a high payoff, while participants preferred a more human-like robot under a low payoff. Taken together, these results suggest that genuine cooperation in HRI is possible but depends on the complexity of the task.
Original language: English
Article number: e0225028
Number of pages: 17
Journal: PLoS ONE
Volume: 14
Issue number: 11
DOI: 10.1371/journal.pone.0225028
Publication status: Published - 20 Nov 2019


Cite this

Zanatto, D., Patacchiola, M., Goslin, J., & Cangelosi, A. (2019). Investigating cooperation with robotic peers. PLoS ONE, 14(11), e0225028. https://doi.org/10.1371/journal.pone.0225028