Examining visual representations in mind and machine

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)


Deep learning has quickly become the dominant approach in machine learning, and its successes have led to increasing interest in modelling human cognition via deep learning models. Deep convolutional neural networks have been hailed as the best models of human visual processing on the basis of state-of-the-art performance on large-scale benchmarks as well as impressive scores on neural predictivity and representational similarity analysis measures. In this thesis I test claims about the similarity of visual representations between deep neural networks and humans. First, through a series of experiments, I show that humans do not intuitively understand how neural networks classify adversarial images (stimuli designed to fool neural networks), and that these types of stimuli do not provide insight into human vision, as has recently been claimed. Next, human and network inductive biases are explored by generating datasets that allow manipulation of both the type and the statistics of predictive features. Findings show that the human shape bias remains robust in novel learning environments, while networks (even ones pre-trained to develop a shape bias) adapt to the statistics of the new environment, learning to classify based on the most predictive feature. Additionally, when shape was as predictive of category membership as more local features, networks showed an inductive bias towards the more local features. Finally, a series of simulations demonstrates that high representational similarity analysis (RSA) scores can be achieved between systems that represent stimuli in qualitatively different ways. While high RSA scores will be a feature of models that truly capture human-like visual representations, they are not sufficient to claim that a model does so. Overall, the findings presented in this thesis highlight the importance of systematic experimental scrutiny at a time when engineering developments are outpacing scientific research. Such scrutiny is essential if deep learning models are to be properly evaluated as sources of insight into human cognition.
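As a toy illustration of the RSA point (not the thesis's own simulations), the sketch below computes an RSA score between two hypothetical systems whose individual units respond completely differently: system B's representation is a random rotation of system A's feature space. Because rotations preserve pairwise distances, the two representational dissimilarity matrices (RDMs) match almost perfectly, so the RSA score is near 1 even though no unit in B codes what any unit in A codes. All names here are illustrative.

```python
import numpy as np

def rdm(reps):
    """Representational dissimilarity matrix: pairwise Euclidean distances."""
    diffs = reps[:, None, :] - reps[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def rsa_score(rdm_a, rdm_b):
    """Pearson correlation of the RDMs' upper triangles (one common RSA variant)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
system_a = rng.normal(size=(20, 50))   # 20 stimuli, 50-dim code in system A

# System B: a random orthogonal rotation of A's feature space. Unit-level
# tuning is entirely different, but pairwise distances are preserved.
q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
system_b = system_a @ q

score = rsa_score(rdm(system_a), rdm(system_b))
print(round(score, 3))
```

The score comes out at essentially 1.0, illustrating why a high RSA score alone cannot establish that two systems represent stimuli in the same way.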
Date of Award: 21 Mar 2023
Original language: English
Awarding Institution:
  • University of Bristol
Supervisors: Jeffrey S Bowers (Supervisor) & Gaurav Malhotra (Supervisor)


Keywords:
  • deep learning
  • visual representations
  • shape bias
  • representational similarity analysis
  • adversarial images
  • convolutional neural networks
