The Parallel Distributed Processing (PDP) approach to cognitive modeling assumes that knowledge is distributed across multiple processing units. This view is typically justified by the computational advantages and biological plausibility of distributed representations. However, both of these assumptions have been challenged. First, there is growing evidence that some neurons respond to information in a highly selective manner. Second, localist representations have been shown to be better suited to certain computational tasks. In this paper, we continue this line of research by investigating whether localist representations are learned in tasks involving arbitrary input–output mappings. The results imply that the pressure to learn local codes in such tasks is weak, yet there are conditions under which feedforward PDP networks do learn localist representations. Our findings further challenge the assumption that PDP modeling always goes hand in hand with distributed representations, and they suggest directions for future research.
- cognitive science
- localist representations
- distributed representations
- neural networks
- arbitrary input–output mappings
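The kind of experiment summarized above can be sketched roughly as follows: a small feedforward network is trained on an arbitrary one-hot input–output mapping, and its hidden units are then probed for selectivity. Note that the architecture, training procedure, and selectivity measure below are illustrative assumptions for the sketch, not the paper's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_hidden = 8, 16
X = np.eye(n_items)                            # one-hot inputs
Y = np.eye(n_items)[rng.permutation(n_items)]  # arbitrary one-hot targets

# One hidden layer with sigmoid activations, trained by full-batch
# gradient descent (illustrative choice, not the paper's setup).
W1 = rng.normal(0.0, 0.5, (n_items, n_hidden))
W2 = rng.normal(0.0, 0.5, (n_hidden, n_items))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1)        # hidden activations, shape (n_items, n_hidden)
    O = sigmoid(H @ W2)        # output activations
    dO = O - Y                 # output delta for sigmoid + cross-entropy loss
    dH = (dO @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dO
    W1 -= lr * X.T @ dH

# Crude selectivity probe: a hidden unit looks "localist" to the extent
# that its activation for one preferred item far exceeds its activation
# for every other item.
H = sigmoid(X @ W1)
acts = H.T                                # (n_hidden, n_items)
top = acts.max(axis=1)                    # activation for preferred item
rest = np.sort(acts, axis=1)[:, -2]       # second-highest activation
selectivity = top - rest                  # values near 1 indicate selectivity

print("mapping learned:", bool(np.all(O.argmax(axis=1) == Y.argmax(axis=1))))
print("max selectivity:", float(selectivity.max()))
```

Because the inputs are one-hot, each item effectively selects its own row of `W1`, so the network can learn the arbitrary mapping easily; whether the hidden code that emerges is local or distributed is then an empirical question, which is the issue the paper investigates.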