Abstract
The scoring of performance assessments involves human judgment, and students' proficiency in a given skill is reflected by the ratings. It is therefore critically important to ensure rating quality, especially rating accuracy. Cross-classified mixed-effects models are commonly used to examine data with a complex structure (e.g., student responses to items are cross-classified by both students and items). By incorporating item response models into the multilevel framework, the Cross-Classification Multilevel Item Response Theory (CCM-IRT) model provides individual estimates in addition to random-effect coefficients, and it can be extended by adding covariates. In this study, the application of CCM-IRT models is illustrated by examining rating accuracy within the context of writing assessments. In addition, a simulation study evaluates the parameter-estimation performance of CCM-IRT models when the rating designs are incomplete.
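For orientation, a minimal sketch of a cross-classified Rasch-type formulation is given below; the notation (θ_p for the student effect, b_i for the item effect, and the covariate term x′γ) is illustrative and is not necessarily the exact specification used in the study:

```latex
% Cross-classified Rasch-type model: both students (p) and items (i) random
\operatorname{logit} \Pr(y_{pi} = 1) = \theta_p + b_i,
\qquad \theta_p \sim N(0, \sigma^2_\theta),
\qquad b_i \sim N(0, \sigma^2_b)

% Covariate extension: fixed effects of student- or item-level predictors
\operatorname{logit} \Pr(y_{pi} = 1)
  = \mathbf{x}_{pi}'\boldsymbol{\gamma} + \theta_p + b_i
```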