A framework for evaluating automatic indexing or classification in the context of retrieval

Koraljka Golub, Dagobert Soergel, George Buchanan, Douglas Tudhope, Marianne Lykke, Debra Hiom

    Research output: Contribution to journal › Article (Academic Journal) › peer-review

    19 Citations (Scopus)


    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single “gold standard” method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.
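One of the evaluation approaches the abstract names is comparing automatically assigned subject terms against a human-assigned "gold standard". A minimal sketch of that comparison is below; the scoring by set overlap (precision, recall, F1) is a standard technique, not the article's own framework, and all term sets in the example are invented for illustration.

```python
def indexing_scores(assigned, gold):
    """Score one document's automatically assigned terms against a
    gold-standard term set: returns (precision, recall, f1)."""
    assigned, gold = set(assigned), set(gold)
    if not assigned or not gold:
        return 0.0, 0.0, 0.0
    overlap = len(assigned & gold)          # terms both indexers agree on
    precision = overlap / len(assigned)     # share of assigned terms that are correct
    recall = overlap / len(gold)            # share of gold terms that were found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the automatic indexer assigned three terms,
# two of which match the human indexer's four gold-standard terms.
p, r, f = indexing_scores(
    {"automatic indexing", "metadata", "retrieval"},
    {"automatic indexing", "metadata", "classification", "evaluation"},
)
```

Note that such exact-match scoring is precisely what the abstract cautions about: disagreement on aboutness means a single gold standard can penalize defensible term choices, which is why the framework combines it with direct assessment and retrieval-based evaluation.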
    Original language: English
    Journal: Journal of the Association for Information Science and Technology
    Publication status: E-pub ahead of print - 22 Oct 2015


    Keywords
    • automatic classification
    • automatic indexing
    • machine aided indexing
