
Investigating cooperation with robotic peers

Research output: Contribution to journal › Article

Original language: English
Article number: e0225028
Number of pages: 17
Journal: PLoS ONE
Volume: 14
Issue number: 11
DOI: 10.1371/journal.pone.0225028
Date Accepted/In press: 28 Oct 2019
Date Published (current): 20 Nov 2019

Abstract

We explored how people establish cooperation with robotic peers by giving participants the chance to choose whether or not to cooperate with a robot that was more or less selfish and more or less interactive, in a more or less critical environment. We measured the participants' tendency to cooperate with the robot, as well as their perception of its anthropomorphism, trustworthiness and credibility, through questionnaires. We found that cooperation in Human-Robot Interaction (HRI) follows the same rules as Human-Human Interaction (HHI): participants rewarded cooperation with cooperation and punished selfishness with selfishness. We also identified two robotic profiles capable of increasing cooperation, depending on the payoff: a mute, non-interactive robot was preferred when the payoff was high, while a more human-like robot was preferred when the payoff was low. Taken together, these results suggest that proper cooperation in HRI is possible but depends on the complexity of the task.

Documents

  • Full-text PDF (final published version)

    Rights statement: This is the final published version of the article (version of record). It first appeared online via the Public Library of Science at https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0225028. Please refer to any applicable terms of use of the publisher.

    Final published version, 600 KB, PDF document

    Licence: CC BY
