Abstract
In recent years, deep convolutional neural networks (DCNNs) have shown extraordinary success in object recognition tasks. However, they can also be fooled by adversarial images (stimuli designed to fool networks) that do not appear to fool humans. This has been taken as evidence that these models work quite differently from the human visual system. However, Zhou and Firestone (2019) carried out a study in which they presented adversarial images that fool DCNNs to humans and found that, in many cases, humans chose the same label for these images as the DCNNs. They take these findings to support the claim that human and machine vision are more similar than commonly claimed. Here we report two experiments showing that the level of agreement between human and DCNN classification is driven by how the experimenter chooses the adversarial images and the labels given to humans for classification. Depending on how one chooses these variables, humans can show a span of agreement levels with DCNNs, from well below to well above levels expected by chance. Overall, our results do not support a view of large systematic overlap between human and computer vision.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 2019 Conference on Cognitive Computational Neuroscience |
| Publication status | Published - 2019 |