Abstract
Speededness effects tend to occur when tests have time limits (Lu & Sireci, 2007). Speededness is normally treated in psychometric models as a "nuisance" factor because it is not the intended focus of the test. When speededness occurs, therefore, its effects intrude on the construct being measured and can seriously degrade the validity of the test results. A number of approaches have been used to detect which examinees exhibit speededness effects. Speededness in constructed response (CR) items, however, has only recently been studied (Kim et al., 2016), although CR items are becoming increasingly prominent in standardized assessments as a means of getting students to produce a response rather than select a choice (Scalise, 2014). In this dissertation, we investigate test speededness in the context of CR items.

The first study examined a statistical model for detecting speededness effects in CR items using a two-class mixture graded response model (GRM; Samejima, 1969) for testlets. Traditional IRT models cannot detect speededness, because speededness effects violate the assumptions underlying such models. In this first study, therefore, we considered an alternative model for estimating person and item parameters when speededness effects are present. This approach uses a mixture IRT model (Rost, 1990) and operates, in part, to classify examinees into one of two latent groups, a speeded group and a nonspeeded group.

In the second study, the model from the first study was extended to treat both person and item parameters as random effects. In particular, we investigated the performance of a random item mixture GRM for testlets with item covariates. The random item model considers both persons and items to be randomly sampled from a population (De Boeck, 2008). Treating items as random enables inclusion of item covariates directly in the model, which allows simultaneous detection of speededness effects and examination of the relationship between speededness effects in CR items and the item covariates.

In the third study, we described another possible way to characterize latent group membership from a mixture IRT model. In general, a mixture IRT model does not readily provide a qualitative explanation of the latent dimension(s). We therefore investigated a statistical method for detecting latent themes, or topics, in the actual text that examinees produced in answering CR items. This method is latent Dirichlet allocation (LDA; Blei, Ng, & Jordan, 2003), which is used to detect latent topics in text corpora. We investigated whether LDA can provide useful information about the qualitative differences between the textual responses of speeded and nonspeeded examinees.
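For the first study, a minimal notational sketch of a two-class mixture GRM may help fix ideas. The notation below (latent proficiency \(\theta_i\), class label \(g\), mixing proportion \(\pi_g\), discrimination \(a_{jg}\), and category boundaries \(b_{jkg}\)) follows common mixture-GRM conventions and is an assumption about the parameterization, not a reproduction of the dissertation's own formulation:

\[
P_g(X_{ij} \ge k \mid \theta_i) = \frac{\exp\{a_{jg}(\theta_i - b_{jkg})\}}{1 + \exp\{a_{jg}(\theta_i - b_{jkg})\}},
\qquad
P(X_{ij} = k \mid \theta_i, g) = P_g(X_{ij} \ge k \mid \theta_i) - P_g(X_{ij} \ge k+1 \mid \theta_i),
\]

where examinee \(i\) belongs to class \(g \in \{\text{nonspeeded}, \text{speeded}\}\) with probability \(\pi_g\), so the item parameters are free to differ between the two classes. For the testlet structure, a person-by-testlet random effect (e.g., a term \(\gamma_{i\,d(j)}\) inside the logit) would typically be added to absorb local dependence among items in the same testlet; it is omitted above for brevity.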
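For the second study, treating items as random is what lets item covariates enter the model directly. A minimal sketch under one plausible specification, with an illustrative covariate vector \(\mathbf{x}_j\) and regression weights \(\boldsymbol{\beta}_{kg}\) (these symbols are assumptions, not the dissertation's exact model), places a normal distribution on the category boundaries and on proficiency within each class:

\[
b_{jkg} \sim N\!\left(\mathbf{x}_j^{\top}\boldsymbol{\beta}_{kg},\ \sigma_b^2\right),
\qquad
\theta_i \mid g \sim N\!\left(\mu_{\theta g},\ \sigma_{\theta g}^2\right).
\]

Because the item boundaries are random and regressed on \(\mathbf{x}_j\), class membership and the covariate effects \(\boldsymbol{\beta}_{kg}\) can be estimated in a single model, which is the sense in which speededness detection and the item-covariate relationship are examined simultaneously.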
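For the third study, LDA models each textual response as a mixture over latent topics. The generative process below is the standard formulation from Blei, Ng, and Jordan (2003), with \(K\) topics, per-response topic proportions \(\theta_d\), and topic-word distributions \(\beta_k\); note that these \(\theta_d\) and \(\beta_k\) are LDA quantities, unrelated to the IRT parameters above:

\[
\theta_d \sim \mathrm{Dirichlet}(\alpha),
\qquad
z_{dn} \mid \theta_d \sim \mathrm{Multinomial}(\theta_d),
\qquad
w_{dn} \mid z_{dn} \sim \mathrm{Multinomial}(\beta_{z_{dn}}),
\]

where \(d\) indexes a textual response and \(n\) indexes the words within it. One natural use, consistent with the aim described above, is to compare the estimated topic proportions \(\theta_d\) for responses from examinees classified as speeded versus nonspeeded as a way of examining qualitative differences between the two groups.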