Neural Networks Learn Highly Selective Representations in Order to Overcome the Superposition Catastrophe

Jeffrey S. Bowers*, Ivan I. Vankov, Markus F. Damian, Colin J. Davis

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › Peer-reviewed


Abstract

A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in an ambiguous blend pattern: the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and that the number of localist codes scales with the level of superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.
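The superposition catastrophe described in the abstract can be illustrated with a toy example. The sketch below is not taken from the paper's simulations; the word labels and four-unit binary codes are illustrative assumptions. It shows how superimposing two distributed patterns over shared units can produce a blend that is indistinguishable from the blend of a different word pair, whereas localist (one-unit-per-word) codes remain uniquely decodable when coactivated.

```python
# Minimal sketch of the superposition catastrophe with made-up 4-unit codes.
# The words and bit patterns are illustrative assumptions, not the paper's stimuli.
import numpy as np

# Hypothetical distributed codes: each word is a pattern over the same units.
distributed = {
    "cat": np.array([1, 1, 0, 0]),
    "dog": np.array([0, 0, 1, 1]),
    "sun": np.array([1, 0, 1, 0]),
    "sky": np.array([0, 1, 0, 1]),
}

# Coactivating two words superimposes their patterns (clipped to binary).
blend_a = np.clip(distributed["cat"] + distributed["dog"], 0, 1)
blend_b = np.clip(distributed["sun"] + distributed["sky"], 0, 1)
print(blend_a, blend_b)  # both print [1 1 1 1]: the blend is ambiguous

# Localist codes: one dedicated unit per word, so a superposition is just
# the set of active units and each coactive word pair stays decodable.
localist = {w: np.eye(4, dtype=int)[i] for i, w in enumerate(distributed)}
blend_loc = localist["cat"] + localist["dog"]
print(blend_loc)  # [1 1 0 0] -> exactly {cat, dog}, no ambiguity
```

Under these assumed codes, the distributed blends for {cat, dog} and {sun, sky} are identical, which is the ambiguity the paper argues selective (localist) codes avoid.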

Original language: English
Pages (from-to): 248-261
Number of pages: 14
Journal: Psychological Review
Volume: 121
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2014

Structured keywords

  • Language

Keywords

  • grandmother cells
  • localist representations
  • distributed representations
  • short-term memory
  • superposition catastrophe
