Evaluation of the Validity of Large-scale Examinations

Student thesis: Doctoral Thesis — Doctor of Education (EdD)

Abstract

In view of the inadequacy of the literature on the methodological aspects of assessment validation, this dissertation aims to devise a validation process for large-scale examinations. To illustrate the implementation and impact of the process, an evaluation of the validity of the 2015 Hong Kong Diploma of Secondary Education (HKDSE) Liberal Studies (LS) Examination was conducted. Based on Messick’s (1995) definition of assessment validity and Kane’s (2013, 2015) Argument-based Approach, the content and substantive validity of the 2015 LS Examination was evaluated by drawing quantitative and qualitative evidence from multiple sources (both primary and secondary): a live script study, nominal group discussions among examiners and a think-aloud study. Bloom’s Taxonomy (1956), the Revised Taxonomy (Anderson & Krathwohl, 2001), the New Taxonomy (Marzano et al., 2008) and Kuhn’s (2001, 2005) KPI model were deployed as the analytical framework to investigate the differentiation of candidates’ performance and the assessment of higher-order thinking skills by the LS Examination. The dissertation demonstrates how test developers can gather multi-faceted evidence for a large-scale examination and how they may be informed of aspects that require further improvement. The evaluation of the LS Examination showed that (i) the examination differentiated candidates across five Levels of Performance by the skill domains stipulated in the Level Descriptors, with the exceptions of Evaluation and the consideration of Cultures/Values between the lowest two levels of achievement (Levels 1 and 2); (ii) the differentiation of performance complies with cognitive models; and (iii) higher-order thinking skills, including higher-order information-handling skills, dispositions of relevant knowledge and concepts, argument formulation by integrating evidence, and meta-level skills, were demonstrated by candidates in the examination.
The validation process itself was evaluated in terms of its methodology, its transferability to the evaluation of examination processes, its implications for test development, and the limitations to be overcome in future evaluation studies of public examinations.
Date of Award: 29 Sep 2020
Original language: English
Awarding Institution
  • The University of Bristol
Supervisor: William J Browne (Supervisor)

Keywords

  • assessment
  • assessment validity
  • validity evaluation
  • large-scale examination
  • Liberal Studies
  • Hong Kong Diploma of Secondary Education
  • cognitive models
