Do arbitrary input-output mappings in parallel distributed processing networks require localist coding?

Ivan Vankov, Jeffrey Bowers

Research output: Contribution to journal › Article (Academic Journal) › peer-review

2 Citations (Scopus)
221 Downloads (Pure)

Abstract

The Parallel Distributed Processing (PDP) approach to cognitive modeling assumes that knowledge is distributed across multiple processing units. This view is typically justified on the basis of the computational advantages and biological plausibility of distributed representations. However, both of these assumptions have been challenged. First, there is growing evidence that some neurons respond to information in a highly selective manner. Second, it has been demonstrated that localist representations are better suited for certain computational tasks. In this paper, we continue this line of research by investigating whether localist representations are learned in tasks involving arbitrary input-output mappings. The results imply that the pressure to learn local codes in such tasks is weak, but there are still conditions under which feedforward PDP networks learn localist representations. Our findings further challenge the assumption that PDP modeling always goes hand in hand with distributed representations and provide directions for future research.
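The sketch below is not the authors' simulation; it is a minimal illustration, under assumed sizes and an assumed selectivity criterion, of the kind of setup the abstract describes: a feedforward network trained by backpropagation on an arbitrary (random) input-output pairing, followed by a probe of whether any hidden unit responds selectively to a single item (a localist-like code).

```python
# Minimal sketch (illustrative only, not the paper's model): train a small
# feedforward network on an arbitrary one-hot input -> one-hot output mapping,
# then check hidden units for highly selective (localist-like) responses.
# The layer sizes, learning rate, and the 0.5 selectivity margin are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_hidden = 10, 20

X = np.eye(n_items)                              # one input unit per item
Y = np.eye(n_items)[rng.permutation(n_items)]    # arbitrary (random) input-output pairing

W1 = rng.normal(0, 0.1, (n_items, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_items))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):                            # plain backprop with squared-error loss
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO
    W1 -= lr * X.T @ dH

# Selectivity probe: a hidden unit counts as localist-like here if its activation
# for its preferred item exceeds its activation for every other item by 0.5.
H = sigmoid(X @ W1)
sorted_acts = np.sort(H, axis=0)
margin = sorted_acts[-1] - sorted_acts[-2]
print("hidden units with a selective response:", int((margin > 0.5).sum()))
```

As the abstract suggests, under most such settings few hidden units pass a strict selectivity test, but the count can vary with network size, training regime, and the selectivity threshold chosen.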
Original language: English
Pages (from-to): 392-399
Number of pages: 8
Journal: Language, Cognition and Neuroscience
Volume: 32
Issue number: 3
Early online date: 2 Dec 2016
Publication status: Published - 2 Mar 2017

Structured keywords

  • Language
  • Cognitive Science

Keywords

  • Localist representations
  • distributed representations
  • neural networks
  • PDP
  • arbitrary input–output mapping

