Abstract
To understand the representations learned by neural networks (NNs), various methods of measuring unit selectivity have been developed. Here we undertake a comparison of four such measures on AlexNet: localist selectivity (Bowers et al., 2014); precision (Zhou et al., 2015); class-conditional mean activity selectivity (CCMAS; Morcos et al., 2018); and top-class selectivity. In contrast with previous work on recurrent neural networks (RNNs), we fail to find any 100% selective 'localist units' in AlexNet, and demonstrate that the precision and CCMAS measures are misleading and suggest a much higher level of selectivity than is warranted. We also generated activation maximization (AM) images that maximally activated individual units and found that under 5% of units in fc6 and conv5 produced interpretable images of objects, whereas fc8 produced over 50% interpretable images. Furthermore, the interpretable images in the hidden layers were not associated with highly selective units. We also consider why localist representations are learned in RNNs and not AlexNet.
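As an illustration of two of the measures compared in the abstract, the sketch below computes CCMAS and localist selectivity for a single unit from a matrix of activations. This is not the authors' code: the array shapes, function names, and the small epsilon added for numerical safety are assumptions made for this example.

```python
# Minimal sketch (assumed interface, not the paper's implementation):
# `unit_acts` holds one unit's activation for each image, and `labels`
# holds the class index of each image.
import numpy as np


def ccmas_selectivity(unit_acts: np.ndarray, labels: np.ndarray) -> float:
    """CCMAS (Morcos et al., 2018): (mu_max - mu_notmax) / (mu_max + mu_notmax),
    where mu_max is the highest class-conditional mean activation and
    mu_notmax is the mean activation over images of all remaining classes."""
    classes = np.unique(labels)
    class_means = np.array([unit_acts[labels == c].mean() for c in classes])
    best = classes[class_means.argmax()]
    mu_max = class_means.max()
    mu_notmax = unit_acts[labels != best].mean()
    return (mu_max - mu_notmax) / (mu_max + mu_notmax + 1e-12)


def is_localist(unit_acts: np.ndarray, labels: np.ndarray) -> bool:
    """Localist selectivity (Bowers et al., 2014): a unit is 100% selective
    if every activation for its preferred class exceeds every activation
    for all other classes (no overlap between the two distributions)."""
    classes = np.unique(labels)
    class_means = np.array([unit_acts[labels == c].mean() for c in classes])
    best = classes[class_means.argmax()]
    return unit_acts[labels == best].min() > unit_acts[labels != best].max()


# Example usage on random data (1000 images, 10 classes): random activations
# should give a low CCMAS score and no localist selectivity.
rng = np.random.default_rng(0)
acts = rng.normal(size=1000)
labels = rng.integers(0, 10, size=1000)
print(ccmas_selectivity(acts, labels), is_localist(acts, labels))
```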
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Cognitive Science Society |
| Editors | A.K. Goel, C.M. Seifert, C. Freksa |
| Pages | 1808-1814 |
| Number of pages | 6 |
| Publication status | Published - 2019 |
Research Groups and Themes
- Cognitive Science
- Language