A backwards glance at words: Using reversed-interior masked primes to test models of visual word identification

Colin J. Davis*, Stephen J. Lupker

*Corresponding author for this work

Research output: Contribution to journal, Article (Academic Journal), peer-review

4 Citations (Scopus)
281 Downloads (Pure)

Abstract

The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms to test between different models of orthographic coding/visual word recognition. Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, contrary to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, contrary to the predictions of open bigram models, which hold that there should be no orthographic similarity between these primes and their targets. Experiment 3, using a masked prime same-different task, produced similar results. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model.

Original language: English
Article number: e0189056
Journal: PLoS ONE
Volume: 12
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2017

Structured keywords

  • Cognitive Science
  • Language

