Superset versus substitution-letter priming: An evaluation of open-bigram models.

Stephen Lupker, YJ Zhang, Jason R Perry, Colin J Davis

Research output: Contribution to journal › Article (Academic Journal) › peer-review


In recent years, a number of models of orthographic coding have been proposed in which the orthographic code consists of a set of units representing bigrams (open-bigram models). Three masked priming experiments were undertaken in an attempt to evaluate this idea: a conventional masked priming experiment, a sandwich priming experiment (Lupker & Davis, 2009), and an experiment involving a masked prime same-different task (Norris & Kinoshita, 2008). Three prime types were used: first-letter superset primes (e.g., wjudge-JUDGE), last-letter superset primes (e.g., judgew-JUDGE), and standard substitution-letter primes (e.g., juwge-JUDGE). In none of the experiments was there any evidence that the superset primes were more effective primes, the prediction made by open-bigram models. In fact, in the second and third experiments, first-letter superset primes were significantly worse primes than the other two …
Original language: English
Pages (from-to): 138
Journal: Journal of Experimental Psychology: Human Perception and Performance
Issue number: 1
Publication status: Published - 2015

Structured keywords

  • Cognitive Science
  • Language
