Abstract

Computer-based assessments have become prevalent in education. These assessments are usually in mixed formats; that is, they contain different item formats such as multiple-choice (MC) and constructed-response (CR) items. Regardless of format, these items are developed to measure an examinee's skill or ability on the construct of interest, e.g., problem solving or critical thinking. Item response models are frequently used to estimate examinees' latent traits from their response score patterns. One concern with the scores examinees receive is that the scores alone may not convey sufficient information to help understand the targeted latent trait. For example, scores may not provide information about the specific thinking or reasoning examinees used in their responses. In this dissertation, we focus on extracting additional, construct-relevant information from examinees' responses. In this regard, we explore the information contained in examinees' sequential actions as recorded in the log file of a computer-based assessment. This information is referred to as response process data, or, more simply, process data. The process data examinees generate in responding to a computer-based assessment have been shown to be related to information in their responses to both MC and CR items.

This dissertation consists of two studies. The first study examines a novel exploratory methodology, reservoir computing, implemented with an optimization algorithm, for extracting process data from the log file of an administration of MC items. The method is studied for its use in extracting features of the response data, with an eye to helping interpret the latent information in the response processes associated with the measurement of the latent construct. The second study examines a natural language processing method, a probabilistic topic model, for extracting the latent features in the textual responses to CR items. In this second study, the utility of unsupervised and supervised topic models is studied for the analysis of textual responses, with an eye to extracting construct-relevant information from the process data that can be used to help interpret examinees' status on the latent construct. Together, the two studies are intended to provide a way to extract and study item response scores alongside item response process data, improving interpretations of examinees' latent proficiencies.
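The abstract does not specify an implementation for the first study's reservoir-computing approach. The sketch below illustrates the underlying echo-state-network idea under assumed conditions: logged actions are one-hot encoded and passed through a fixed random recurrent reservoir, whose final state serves as a feature vector for an examinee's action sequence. Every name, size, and parameter (N_ACTIONS, N_RESERVOIR, SPECTRAL_RADIUS, reservoir_features) is hypothetical, and the dissertation's optimization algorithm for tuning the reservoir is not reproduced.

import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5          # size of a toy action vocabulary from the log file
N_RESERVOIR = 50       # number of reservoir units
SPECTRAL_RADIUS = 0.9  # rescaling target that keeps the dynamics stable

# Fixed random input and recurrent weights; only a readout would be trained.
W_in = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_ACTIONS))
W = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_RESERVOIR))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_features(action_sequence):
    """Map a sequence of logged action indices to a fixed-length state vector."""
    x = np.zeros(N_RESERVOIR)
    for a in action_sequence:
        u = np.zeros(N_ACTIONS)
        u[a] = 1.0                     # one-hot encode the logged action
        x = np.tanh(W_in @ u + W @ x)  # reservoir state update
    return x

# Example: an examinee's action sequence becomes a feature vector that can
# be related to item responses.
print(reservoir_features([0, 2, 2, 4, 1])[:5])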
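For the second study, a minimal sketch of the unsupervised side might fit latent Dirichlet allocation (LDA) to CR text with scikit-learn, as below. The toy responses, parameter values, and variable names are all hypothetical, and the supervised topic model is not shown; per-response topic proportions are the kind of construct-relevant feature that could sit alongside item scores.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [  # toy CR responses standing in for real textual responses
    "I compared the two ratios and solved for the unknown value",
    "guessed the answer because the problem was confusing",
    "set up a proportion then checked the result against the graph",
    "the graph shows a linear trend so I extrapolated the value",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(responses)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-response topic proportions: candidate features for interpretation.
print(lda.transform(counts))

# Top words per topic, to interpret what each topic captures.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-5:][::-1]
    print(f"topic {k}:", [vocab[i] for i in top])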
