Abstract

It is difficult to interpret individual units in neural networks, which use distributed representations. One way to interpret a unit is to analyse its selectivity for an item, feature or class. Authors have drawn different conclusions about the role of units in neural networks based on their selectivity: it has been claimed that highly selective units are harmful, that they are unimportant, that they are important but only for a few classes, or even that highly selective units are not part of distributed representations at all and are in fact localist units.
I present a series of studies investigating how selectivity is measured, factors that may increase selectivity, and the relationship between selectivity and unit importance. I compare the selectivity of a wide variety of models, using several measures of selectivity. I experimentally vary factors that might lead to increased selectivity, namely the systematicity of input-output mappings and pressure to avoid the superposition catastrophe. I lesion units to infer their importance for particular classes.

I show that commonly used selectivity measures are unable to capture the full range of selectivity scores, showing either floor or ceiling effects. Reducing the systematicity of a class or dataset leads to reduced selectivity, while increasing it leads to higher selectivity. However, increased potential for the superposition catastrophe does not lead to an increased number of highly selective units. Finally, I find that although unit importance for a class is weakly associated with class-selectivity, the most important class is rarely the most selective, highlighting the risks of inferring a unit’s function from its selectivity.
These results highlight the need for better measures of selectivity, implicate systematicity as a driver of selectivity, and demonstrate that claims regarding unit function based on selectivity alone may be unreliable.
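To make the notion of class-selectivity concrete, the following is a minimal sketch of one common "max-versus-rest" style index: the difference between a unit's highest class-conditional mean activation and the mean of its activations over the remaining classes, normalised by their sum. The function name and exact definition here are illustrative assumptions, not the specific measures compared in the thesis.

```python
import numpy as np

def class_selectivity(activations, labels):
    """Illustrative 'max-versus-rest' class-selectivity index for one unit.

    activations: 1-D array, the unit's activation on each input
    labels: 1-D array of class labels, one per input
    Returns (index, most_selective_class), where the index is
    (mu_max - mu_rest) / (mu_max + mu_rest): 1.0 means the unit responds
    only to one class; values near 0 mean roughly uniform responses.
    """
    classes = np.unique(labels)
    # Class-conditional mean activation for each class.
    means = np.array([activations[labels == c].mean() for c in classes])
    top = int(np.argmax(means))
    mu_max = means[top]
    mu_rest = np.delete(means, top).mean()
    denom = mu_max + mu_rest
    index = 0.0 if denom == 0 else (mu_max - mu_rest) / denom
    return index, classes[top]
```

A unit that fires only for one class scores 1.0 on this index; note that such an index says nothing by itself about whether lesioning the unit would actually impair performance on that class, which is the dissociation the abstract describes.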
Date of Award: 24 Jun 2021
Supervisors: Jeffrey S Bowers (Supervisor) & Colin J Davis (Supervisor)
- Neural Network
- Distributed Representations
- Deep learning
- Local coding