Why ability point estimates can be pointless: a primer on using skill measures from large-scale assessments in secondary analyses
Open Access
- 19 January 2021
- journal article
- advances in methodology
- Published by Leibniz Institute for Psychology (ZPID) in Measurement Instruments for the Social Sciences
- Vol. 3 (1), 1-16
- https://doi.org/10.1186/s42409-020-00020-5
Abstract
Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how best to incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
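In practice, the plausible-values approach summarized in the abstract is used by running the secondary analysis once per set of plausible values and then pooling the results with Rubin's rules for multiple imputation. The following is a minimal sketch in Python with synthetic data; the "plausible values" here are crude stand-ins (true skill plus noise) rather than genuine posterior draws from an IRT model, so only the pooling mechanics, not the unbiasedness of real PVs, are illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic example: 5 plausible values (PVs) per respondent for a latent
# skill, and an outcome y that depends on the true (unobserved) skill.
n, M = 500, 5
true_skill = rng.normal(0, 1, n)
y = 0.5 * true_skill + rng.normal(0, 1, n)
# Stand-in PVs: true skill plus imputation noise (real PVs are drawn from
# the posterior of a measurement model conditional on background data).
pvs = true_skill[:, None] + rng.normal(0, 0.6, (n, M))

def ols_slope(x, y):
    """Return the slope and its sampling variance from a simple OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], cov[1, 1]

# Step 1: run the analysis once per plausible value.
estimates, variances = zip(*(ols_slope(pvs[:, m], y) for m in range(M)))
estimates, variances = np.array(estimates), np.array(variances)

# Step 2: pool with Rubin's rules.
pooled = estimates.mean()          # pooled point estimate
W = variances.mean()               # within-imputation variance
B = estimates.var(ddof=1)          # between-imputation variance
T = W + (1 + 1 / M) * B            # total variance of the pooled estimate
print(f"pooled slope = {pooled:.3f}, SE = {np.sqrt(T):.3f}")
```

The total variance T exceeds the average within-analysis variance W by a term proportional to the spread of the M estimates, which is how PV methodology propagates measurement uncertainty into the secondary analysis.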
Funding Information
- Deutsche Forschungsgemeinschaft (LE 4001/1-1)
- Bundesministerium für Bildung und Forschung (W143700A)