Abstract
A great deal of controversy surrounds the question of whether valid inferences can be made from scores obtained from accommodated test administrations for students with disabilities. This study was designed to examine the latent structure of the newly revised SAT Reasoning Test (2005) across two groups: examinees without disabilities tested under standard time conditions and examinees with disabilities tested with extended time, in order to determine whether the test measures the same construct for both groups. The impact of the recent changes in item type, test length, and response format on the test scores of students with disabilities is not clear. An assessment of measurement invariance was conducted to determine the extent to which test scores across the two groups of examinees are comparable. Data from the initial administration of the new SAT Reasoning Test (administered March 17, 2005) were used for the analyses, with a sample of 4,952 examinees. First, confirmatory factor analysis was used to assess the fit of a single-factor structure model for the Critical Reading, Math, and Writing sections in each of the two groups. Next, a study of factorial invariance examined whether a common factor model for the Critical Reading, Math, and Writing sections holds across the two groups at increasingly restrictive levels of constraint. Invariance across the two groups was supported for factor loadings, thresholds, and factor variances. Thus, there was no evidence to suggest that scores on the Critical Reading, Math, and Writing sections of the SAT Reasoning Test have different interpretations under an extended time administration as opposed to the standard time administration.
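The "increasingly restrictive levels of constraint" mentioned above are typically compared with a chi-square difference (likelihood-ratio) test between nested multi-group models: if imposing equality constraints (e.g., equal factor loadings across groups) does not significantly worsen fit, invariance at that level is supported. The sketch below illustrates the arithmetic of that comparison; the fit statistics are hypothetical placeholders, not values from this study.

```python
from scipy.stats import chi2


def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_free, df_free):
    """Chi-square difference test for nested CFA models.

    The restricted model adds invariance constraints (e.g., equal
    loadings across groups) to the freely estimated model. A
    non-significant difference supports the constraints.
    """
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value


# Hypothetical fit statistics for illustration only:
# configural (free) model vs. metric (equal-loadings) model.
d_chisq, d_df, p = chi_square_difference(
    chisq_restricted=152.4, df_restricted=100,
    chisq_free=140.0, df_free=90,
)
print(f"delta chi-square = {d_chisq:.1f}, delta df = {d_df}, p = {p:.3f}")
```

With these placeholder numbers the p-value exceeds conventional significance levels, so the equality constraints would be retained, which is the pattern of result the study reports for loadings, thresholds, and factor variances.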