Selectivity metrics provide misleading estimates of the selectivity of single units in neural networks

Ella M Gale, Ryan Blything, Nick D Martin, Jeffrey S Bowers, Anh Nguyen

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)


To understand the representations learned by neural networks (NNs), various methods of measuring unit selectivity have been developed. Here we undertake a comparison of four such measures on AlexNet: localist selectivity (Bowers et al., 2014); precision (Zhou et al., 2015); class-conditional mean activity selectivity (CCMAS; Morcos et al., 2018); and top-class selectivity. In contrast with previous work on recurrent neural networks (RNNs), we fail to find any 100% selective 'localist units' in AlexNet, and demonstrate that the precision and CCMAS measures are misleading and suggest a much higher level of selectivity than is warranted. We also generated activation maximization (AM) images that maximally activated individual units and found that under 5% of units in fc6 and conv5 produced interpretable images of objects, whereas fc8 produced over 50% interpretable images. Furthermore, the interpretable images in the hidden layers were not associated with highly selective units. We also consider why localist representations are learned in RNNs and not AlexNet.
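The CCMAS measure named above can be illustrated concretely. Following the description in Morcos et al. (2018), it compares a unit's mean activation for its most-activating class against its mean activation over all other classes; the sketch below is a minimal illustration under that reading, not the paper's own code, and the function and variable names are our own.

```python
import numpy as np

def ccmas_selectivity(activations, labels):
    """Class-conditional mean activity selectivity (CCMAS) for one unit.

    Computes the unit's mean activation per class, takes the class with
    the highest mean (mu_max) and the mean over the remaining classes
    (mu_rest), and returns (mu_max - mu_rest) / (mu_max + mu_rest).
    A value of 1.0 indicates the unit responds only to one class;
    0.0 indicates no class preference.
    """
    classes = np.unique(labels)
    class_means = np.array([activations[labels == c].mean() for c in classes])
    top = class_means.argmax()
    mu_max = class_means[top]
    mu_rest = np.delete(class_means, top).mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest)
```

For example, a unit that fires only for a single class scores 1.0, while a unit with identical mean activity for every class scores 0.0; the paper's point is that intermediate scores can still overstate how selective a unit really is.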
Original language: English
Title of host publication: Proceedings of the Cognitive Science Society
Editors: A.K. Goel, C.M. Seifert, C. Freksa
Number of pages: 6
Publication status: Published - 2019

Structured keywords

  • Cognitive Science
  • Language
