Abstract

Topic models are statistical models used to analyze textual data. The objective of most topic models is to interpret the latent semantic space of a set of related texts. Within educational measurement, the use of topic models has increased in recent years as a method for analyzing constructed-response (CR) items. These approaches typically rely on two topic models: Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA). Both models have been shown to be useful in a variety of educational measurement applications; however, few studies investigate how they can be combined with item response models. This dissertation therefore focuses on how topic models can be used with item response models to improve the quality and interpretation of measures. This focus is addressed in three studies. The first study compares LSA and LDA in a simulation study, with the objective of understanding how the two models compare and, in turn, the appropriate uses of each. The second study uses LDA to analyze an unfolding scale; here, LDA helps define the latent unfolding scale and helps identify inaccuracies in raters' scoring of constructed-response items. The third study proposes a new scoring procedure for mixed-format assessments that combines information from topic models with an item response model to improve the accuracy of ability estimates.
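As a rough illustration of the LSA half of the comparison, the sketch below (a toy example of my own, not taken from the dissertation) applies a truncated singular value decomposition to a small term-document count matrix, which is the core mechanical step of LSA. The corpus, vocabulary, and number of latent dimensions are all assumed for illustration; a real application would vectorize actual constructed responses.

```python
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents).
# Documents 0-1 use physics vocabulary; documents 2-3 use biology vocabulary.
# In practice this matrix would come from vectorizing student responses.
X = np.array([
    [3, 1, 0, 0],  # "force"
    [1, 3, 0, 0],  # "mass"
    [0, 0, 2, 1],  # "cell"
    [0, 0, 1, 2],  # "gene"
], dtype=float)

# LSA: truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # assumed number of latent dimensions ("topics")

# Each document's coordinates in the k-dimensional latent semantic space.
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # shape: (n_documents, k)
```

In this low-rank space, the two physics documents end up close to each other and roughly orthogonal to the two biology documents, which is the kind of latent-space interpretation the abstract refers to. LDA would instead model each document as a mixture over topics with Dirichlet priors, trading LSA's linear-algebraic decomposition for a generative probabilistic model.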
