Abstract
The log-linear cognitive diagnosis model (LCDM) is composed of categorical latent attributes that represent specific constructs. It provides a method by which classification on a given attribute or trait can be statistically deduced from an individual's observed response pattern. Item bias, or differential item functioning (DIF), occurs when an item on an assessment produces disparate results for individuals possessing the same level of a particular trait or ability. In this dissertation, the LCDM is applied to two datasets: a simulated dataset of 12 items measuring 3 attributes, and an academic assessment dataset of 3 items, each measuring a single attribute. Using MPlus for analysis, an omnibus test for measurement invariance is implemented. The free baseline estimation approach is applied to the item and structural models, and this baseline model is then compared to other, more constrained models. Model fit, item parameter estimates, structural model estimates, and the impact of DIF or non-invariance are assessed. Results of the empirical study indicated that the estimation tool, MPlus, was challenged by the complexity of the simulated data. For this reason, estimation errors were common, and the omnibus approach was not entirely effective in identifying DIF with these data. Overall, the results showed that a greater number of DIF items produces differences, or instability, in the structural model. The omnibus testing method was most successful when applied to the simpler data structure of the academic assessment. These data were found to have invariant items but to lack invariance in the structural model.