Humans Cannot Decipher Adversarial Images: Revisiting Zhou and Firestone

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

In recent years, deep convolutional neural networks (DCNNs) have shown extraordinary success in object recognition tasks. However, they can also be fooled by adversarial images (stimuli designed to fool networks) that do not appear to fool humans. This has been taken as evidence that these models work quite differently from the human visual system. However, Zhou and Firestone (2019) carried out a study in which they presented adversarial images that fool DCNNs to human participants and found that, in many cases, humans chose the same labels for these images as the DCNNs. They take these findings to support the claim that human and machine vision are more similar than commonly claimed. Here we report two experiments showing that the level of agreement between human and DCNN classification is driven by how the experimenter chooses the adversarial images and how they choose the labels offered to humans for classification. Depending on how these variables are chosen, humans can show a wide range of agreement with DCNNs, from well below to well above the level expected by chance. Overall, our results do not support the view of a large systematic overlap between human and computer vision.
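
To make concrete what agreement "above or below the level expected by chance" means in this kind of paradigm, the sketch below computes a simple agreement rate between human label choices and DCNN labels and compares it to the chance level implied by the number of response options. This is a minimal illustration, not the authors' actual analysis: the data, label names, and the two-alternative setup are hypothetical placeholders.

```python
import random

def agreement_rate(human_choices, dcnn_labels):
    """Fraction of trials on which the human picked the same label as the DCNN."""
    assert len(human_choices) == len(dcnn_labels)
    matches = sum(h == d for h, d in zip(human_choices, dcnn_labels))
    return matches / len(dcnn_labels)

def chance_level(num_options):
    """Expected agreement if humans chose uniformly at random among the offered labels."""
    return 1.0 / num_options

# Hypothetical example: 48 adversarial images, humans choose between the
# DCNN's label and one foil label on every trial (2 response options).
random.seed(0)
dcnn_labels = [f"class_{i % 8}" for i in range(48)]
human_choices = [d if random.random() < 0.6 else "foil" for d in dcnn_labels]

print(f"observed agreement: {agreement_rate(human_choices, dcnn_labels):.2f}")
print(f"chance level:       {chance_level(2):.2f}")
```

Whether the observed rate lands above or below the chance line depends, as the abstract notes, on which adversarial images are shown and which candidate labels are offered.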
Original language: English
Title of host publication: Proceedings of the 2019 Conference on Cognitive Computational Neuroscience
DOIs
Publication status: Published - 2019
