A visual reasoning-based approach for mutual-cognitive human-robot collaboration

Pai Zheng, Shufei Li, Liqiao Xia, Lihui Wang*, Aydin Nassehi

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

49 Citations (Scopus)
72 Downloads (Pure)

Abstract

Human-robot collaboration (HRC) allows seamless communication and collaboration between humans and robots to fulfil flexible manufacturing tasks in a shared workspace. Nevertheless, existing HRC systems lack an efficient integration of robotic and human cognitions. Empowered by advanced cognitive computing, this paper proposes a visual reasoning-based approach for mutual-cognitive HRC. Firstly, a domain-specific HRC knowledge graph is established. Next, the holistic manufacturing scene is perceived by visual sensors as a temporal graph. Then, a collaborative mode with similar instructions can be inferred by graph embedding. Lastly, mutual-cognitive decisions are immersed into the Augmented Reality execution loop for intuitive HRC support.
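To make the inference step concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a perceived scene graph could be matched against stored knowledge-graph cases by embedding each graph's (subject, relation, object) triples as a multi-hot vector and selecting the collaborative mode with the highest cosine similarity. All case names and triples below are illustrative assumptions.

```python
# Hypothetical sketch: inferring a collaborative mode by comparing a
# perceived scene graph against stored HRC knowledge-graph cases via a
# simple multi-hot graph embedding and cosine similarity.
from math import sqrt

def embed(triples, vocab):
    """Embed a set of (subject, relation, object) triples as a multi-hot vector."""
    return [1.0 if t in triples else 0.0 for t in vocab]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative knowledge-graph cases: collaborative mode -> scene triples.
cases = {
    "hand-over tool": {("human", "reaches_for", "tool"), ("robot", "holds", "tool")},
    "co-carry part":  {("human", "grasps", "part"), ("robot", "grasps", "part")},
}

def infer_mode(scene_triples):
    """Return the stored collaborative mode most similar to the perceived scene."""
    vocab = sorted(set().union(scene_triples, *cases.values()))
    scene_vec = embed(scene_triples, vocab)
    return max(cases, key=lambda m: cosine(embed(cases[m], vocab), scene_vec))

perceived = {("human", "reaches_for", "tool"), ("robot", "holds", "tool"),
             ("tool", "on", "workbench")}
print(infer_mode(perceived))  # -> hand-over tool
```

In the paper's pipeline, the matched mode's instructions would then be rendered through the Augmented Reality loop; this toy retrieval stands in for the learned graph-embedding model.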

Original language: English
Pages (from-to): 377-380
Number of pages: 4
Journal: CIRP Annals
Volume: 71
Issue number: 1
Early online date: 26 Apr 2022
DOIs
Publication status: Published - 12 Jul 2022

Bibliographical note

Publisher Copyright:
© 2022 The Author(s)

Keywords

  • Human-robot collaboration
  • Manufacturing system
  • Visual reasoning
