Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence

Miranda Mowbray*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

The suggestion has been made that future advanced artificial intelligence (AI) that passes certain consciousness-related criteria should be treated as having moral status, and that humans would therefore have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against granting moral status to AI on the grounds that human malware authors may design malware to fake consciousness. Indeed, the article warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears benign but is in fact malware might become the most common form of software treated as having moral status.
Original language: English
Pages (from-to): 517-528
Number of pages: 12
Journal: Cambridge Quarterly of Healthcare Ethics
Volume: 30
Issue number: 3
DOIs
Publication status: Published - 10 Jun 2021

Keywords

  • artificial intelligence (AI)
  • criteria for consciousness
  • robots
  • malware
  • code

