Journal article
Educational and Psychological Measurement, 2022
APA
Weese, J. D., Turner, R., Liang, X., Ames, A. J., & Crawford, B. L. (2022). Implementing a Standardized Effect Size in the POLYSIBTEST Procedure. Educational and Psychological Measurement.
Chicago/Turabian
Weese, James D., R. Turner, Xinya Liang, Allison J. Ames, and Brandon L. Crawford. “Implementing a Standardized Effect Size in the POLYSIBTEST Procedure.” Educational and Psychological Measurement (2022).
MLA
Weese, James D., et al. “Implementing a Standardized Effect Size in the POLYSIBTEST Procedure.” Educational and Psychological Measurement, 2022.
BibTeX
@article{weese2022a,
  title   = {Implementing a Standardized Effect Size in the POLYSIBTEST Procedure},
  author  = {Weese, James D. and Turner, R. and Liang, Xinya and Ames, Allison J. and Crawford, Brandon L.},
  journal = {Educational and Psychological Measurement},
  year    = {2022}
}
A study was conducted to implement a standardized effect size and corresponding classification guidelines for polytomous data with the POLYSIBTEST procedure and to compare those guidelines with prior recommendations. Two simulation studies were included. The first identifies new unstandardized test heuristics for classifying moderate and large differential item functioning (DIF) in polytomous response data with three to seven response options. These heuristics are provided for researchers studying polytomous data with previously published POLYSIBTEST software. The second simulation study provides a single pair of standardized effect size heuristics that can be employed with items having any number of response options, and it compares true-positive and false-positive rates for the standardized effect size proposed by Weese with one proposed by Zwick et al. and with two unstandardized classification procedures (Gierl; Golia). All four procedures kept false-positive rates generally below the level of significance at both moderate and large DIF levels. However, Weese's standardized effect size was unaffected by sample size and yielded slightly higher true-positive rates than the Zwick et al. and Golia recommendations, while flagging substantially fewer items that might be characterized as having negligible DIF than Gierl's suggested criterion. The proposed effect size is easier for practitioners to use and interpret because it can be applied to items with any number of response options and is read as a difference in standard deviation units.
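As a rough illustration of the kind of statistic the abstract describes (a minimal sketch, not the authors' implementation: the stratified matching, the proportion weights, and the pooled item-score standard deviation used as the standardizer are all assumptions here), a SIBTEST-style uniform DIF index can be standardized roughly as follows:

import numpy as np

def standardized_dif_effect(item_scores, total_scores, group):
    """Sketch of a SIBTEST-style standardized DIF effect size.

    item_scores  : polytomous responses to the studied item (e.g., 0..4)
    total_scores : matching criterion (e.g., rest score on the other items)
    group        : 0 = reference group, 1 = focal group

    Returns a weighted mean group difference (a beta-like index),
    divided by the pooled standard deviation of the item scores, so the
    result is read as a difference in standard deviation units.
    """
    item_scores = np.asarray(item_scores, dtype=float)
    total_scores = np.asarray(total_scores)
    group = np.asarray(group)

    beta = 0.0
    n_total = len(item_scores)
    for k in np.unique(total_scores):           # match examinees on criterion strata
        in_k = total_scores == k
        ref = item_scores[in_k & (group == 0)]
        foc = item_scores[in_k & (group == 1)]
        if len(ref) == 0 or len(foc) == 0:      # stratum must contain both groups
            continue
        weight = in_k.sum() / n_total           # proportion of examinees in stratum
        beta += weight * (ref.mean() - foc.mean())

    pooled_sd = item_scores.std(ddof=1)         # assumed standardizer
    return beta / pooled_sd

Because the weighted mean difference is divided by a standard deviation of the item scores, the returned value does not depend on the number of response options, which is the interpretability property the abstract highlights.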