Showing all 12 results
Peer reviewed
Tony Albano; Brian F. French; Thao Thu Vo – Applied Measurement in Education, 2024
Recent research has demonstrated an intersectional approach to the study of differential item functioning (DIF). This approach expands DIF to account for the interactions between what have traditionally been treated as separate grouping variables. In this paper, we compare traditional and intersectional DIF analyses using data from a state testing…
Descriptors: Test Items, Item Analysis, Data Use, Standardized Tests
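As a concrete sketch of one common DIF procedure, the Mantel-Haenszel common odds ratio can be computed with the focal group defined by the intersection of two demographic variables rather than either one alone. The excerpt does not name the study's specific method, and every count below is invented for illustration:

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across total-score strata.

    Each stratum is (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    """
    num = sum(rc * fi / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
    den = sum(ri * fc / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
    return num / den

# Hypothetical counts for one item; the focal group is the intersection of
# two grouping variables (e.g. in group A *and* group B), the reference group
# everyone else, matched on three total-score strata.
strata = [
    (40, 10, 20, 20),  # low scorers
    (60, 15, 30, 15),  # middle scorers
    (80, 5, 45, 10),   # high scorers
]
alpha = mh_odds_ratio(strata)
# alpha substantially above 1 suggests the item favors the reference group
# at matched ability
```

Running the traditional analysis on each grouping variable separately and comparing it with the intersectional run is then a matter of redefining which rows count as focal.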
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
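One way to calibrate an unpiloted item on the fly, assuming the VLE already tracks examinee ability from previously calibrated items, is a maximum-likelihood estimate of the item's difficulty under a 2PL model. This is a generic sketch with simulated data, not the authors' procedure:

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL IRT probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_b(thetas, responses, a=1.0):
    """Grid-search MLE of item difficulty b with discrimination held fixed,
    assuming examinee abilities are already known -- as when a VLE tracks
    ability from previously calibrated items."""
    def loglik(b):
        return sum(math.log(p) if x else math.log(1.0 - p)
                   for t, x in zip(thetas, responses)
                   for p in [p_2pl(t, a, b)])
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=loglik)

# Simulate 2,000 responses to an item whose true difficulty is 0.5.
random.seed(0)
thetas = [random.gauss(0, 1) for _ in range(2000)]
responses = [random.random() < p_2pl(t, 1.0, 0.5) for t in thetas]
b_hat = estimate_b(thetas, responses)
```

With fewer responses, or with error in the tracked abilities, the bias the abstract warns about shows up directly in `b_hat`.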
Peer reviewed
Wyse, Adam E.; Albano, Anthony D. – Applied Measurement in Education, 2015
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Testing Programs
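A minimal sketch of the item-selection step a CAT engine performs: choose the unused item, whether general or modified, with maximum Fisher information at the current ability estimate. The pool values are hypothetical:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next(theta, pool, administered):
    """Pick the unused item with maximum information at the current ability."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *pool[i]))

# Hypothetical (a, b) pairs for a mixed pool of general and modified items.
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.2), (1.0, 1.5)]
next_item = select_next(0.3, pool, administered={0})
# the high-discrimination item located near theta (a=1.5, b=0.2) wins
```

Mixing item types in one pool only works if both types are calibrated on a common scale for all groups, which is the assumption the article finds questionable.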
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
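A small simulation illustrates the kind of bias the article examines: if fatigue makes an item effectively harder late in a form, the proportion correct for the same item differs by position, and a calibration that ignores position averages over the two. The 0.4-logit drift is an invented value:

```python
import math
import random

def p_rasch(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate the same item administered early vs. late in a form, with a
# hypothetical fatigue drift of 0.4 logits added to its difficulty when late.
random.seed(1)
b_true, drift = 0.0, 0.4
thetas = [random.gauss(0, 1) for _ in range(5000)]
p_early = sum(random.random() < p_rasch(t, b_true) for t in thetas) / len(thetas)
p_late = sum(random.random() < p_rasch(t, b_true + drift) for t in thetas) / len(thetas)
# Pooling the two positions would split the difference and bias the
# difficulty estimate for examinees at either position.
```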
Peer reviewed
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
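The design effect at issue can be quantified with Kish's formula DEFF = 1 + (m − 1)ρ for clusters of size m and intraclass correlation ρ; dividing the nominal sample size by DEFF gives the effective sample size. The classroom size and ICC below are hypothetical:

```python
def design_effect(cluster_size, icc):
    """Kish design effect for cluster sampling: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def effective_n(n, cluster_size, icc):
    """Nominal sample size deflated by the design effect."""
    return n / design_effect(cluster_size, icc)

# A hypothetical state assessment sampling whole classrooms of 25 students
# with an intraclass correlation of 0.2:
deff = design_effect(25, 0.2)        # 5.8
n_eff = effective_n(2900, 25, 0.2)   # 500 effective examinees, not 2,900
# Treating the 2,900 as a simple random sample understates standard errors
# by a factor of sqrt(5.8), roughly 2.4.
```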
Peer reviewed
Guo, Hongwen; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2011
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Descriptors: Testing Programs, Measurement, Item Analysis, Error of Measurement
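The estimator in question is ordinary Nadaraya-Watson kernel regression of item correctness on the score regressor. The sketch below shows the estimator itself; as the abstract notes, it is the use of observed, error-contaminated scores as the regressor that introduces the bias. Data and bandwidth are illustrative:

```python
import math

def kernel_irc(scores, responses, x, h=2.0):
    """Nadaraya-Watson estimate of P(correct | score = x), Gaussian kernel."""
    weights = [math.exp(-0.5 * ((s - x) / h) ** 2) for s in scores]
    return sum(w * r for w, r in zip(weights, responses)) / sum(weights)

# Hypothetical observed total scores and 0/1 responses to one item.
scores = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
responses = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
p_low = kernel_irc(scores, responses, 6)
p_high = kernel_irc(scores, responses, 13)
# the estimated curve rises with score, as an item response curve should
```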
Peer reviewed
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
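One standard screen for the local item dependence mentioned here is Yen's Q3, the correlation between two items' IRT residuals (observed response minus model-predicted probability); the article itself fits multidimensional IRT models, which this sketch does not reproduce. The residuals below are invented:

```python
def pearson(x, y):
    """Plain Pearson correlation (avoids Python 3.10+ statistics.correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Yen's Q3 for a pair of items: residuals that move together signal local
# dependence, e.g. items drawn from the same content domain.
resid_a = [0.30, -0.40, 0.20, -0.10, 0.35, -0.25]
resid_b = [0.25, -0.35, 0.15, -0.20, 0.30, -0.30]
q3 = pearson(resid_a, resid_b)
# q3 near 0 is expected under local independence; values well above ~0.2
# flag pairs that may form a separate dimension
```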
Peer reviewed
Jacobsen, Jared; Ackermann, Richard; Eguez, Jane; Ganguli, Debalina; Rickard, Patricia; Taylor, Linda – Journal of Applied Testing Technology, 2011
A computer adaptive test (CAT) is a delivery methodology that serves the larger goals of the assessment system in which it is embedded. A thorough analysis of the assessment system for which a CAT is being designed is critical to ensure that the delivery platform is appropriate and addresses all relevant complexities. As such, a CAT engine must be…
Descriptors: Delivery Systems, Testing Programs, Computer Assisted Testing, Foreign Countries
Chen, Hanwei; Cui, Zhongmin; Zhu, Rongchun; Gao, Xiaohong – ACT, Inc., 2010
The most critical feature of a common-item nonequivalent groups equating design is that the average score difference between the new and old groups can be accurately decomposed into a group ability difference and a form difficulty difference. Two widely used observed-score linear equating methods, the Tucker and the Levine observed-score methods,…
Descriptors: Equated Scores, Groups, Ability Grouping, Difficulty Level
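Both the Tucker and Levine methods are linear equatings of the form e(x) = μ_Y + (σ_Y/σ_X)(x − μ_X); they differ in how they use the common items to estimate synthetic-population moments from the two nonequivalent groups, which is exactly where the group-ability vs. form-difficulty decomposition enters. The moments below are hypothetical stand-ins for that step:

```python
def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Linear equating: map a Form X score to the Form Y scale by matching
    standardized scores, e(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X)."""
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical synthetic-population moments; in Tucker and Levine equating
# these are estimated via the common items taken by both nonequivalent groups.
e30 = linear_equate(30, mu_x=28.0, sd_x=6.0, mu_y=31.0, sd_y=6.6)
# a Form X raw score of 30 maps to about 33.2 on the Form Y scale
```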
Peer reviewed
Speece, Deborah L.; Schatschneider, Christopher; Silverman, Rebecca; Case, Lisa Pericola; Cooper, David H.; Jacobs, Dawn M. – Elementary School Journal, 2011
Models of Response to Intervention (RTI) include parameters of assessment and instruction. This study focuses on assessment with the purpose of developing a screening battery that validly and efficiently identifies first-grade children at risk for reading problems. In an RTI model, these children would be candidates for early intervention. We…
Descriptors: Reading Difficulties, Early Intervention, Grade 1, Response to Intervention
Peer reviewed
Sheehan, Kathleen; Mislevy, Robert J. – Journal of Educational Measurement, 1990
The 63 items on skills in acquiring and using information from written documents contained in the Survey of Young Adult Literacy in the 1985 National Assessment of Educational Progress are analyzed. The analyses are based on a qualitative cognitive model and an item-response theory model. (TJH)
Descriptors: Adult Literacy, Cognitive Processes, Diagnostic Tests, Elementary Secondary Education
Klein, Thomas W. – 1991
Steps involved in the item analysis and scaling of the 1990 edition of Forms A and B of the Nevada High School Proficiency Examinations (NHSPEs) are described. Pilot tests of Forms A and B of the 47-item reading and 45-item mathematics tests were each administered to random samples of more than 600 eleventh-grade students. A computer program was…
Descriptors: Achievement Tests, Cutting Scores, Grade 11, High School Students
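Item analysis of the kind described here typically reduces, for each item, to a difficulty index (proportion correct) and a point-biserial discrimination index against the total score. The responses below are invented; the actual NHSPE computer program is not described in this excerpt:

```python
def item_difficulty(item):
    """Classical difficulty index: proportion answering the item correctly."""
    return sum(item) / len(item)

def point_biserial(item, total):
    """Point-biserial discrimination: correlation between a 0/1 item score
    and the total test score, r = (M1 - M0) / s_t * sqrt(p * q)."""
    n = len(item)
    p = sum(item) / n
    mean_t = sum(total) / n
    sd_t = (sum((t - mean_t) ** 2 for t in total) / n) ** 0.5
    mean_1 = sum(t for x, t in zip(item, total) if x) / sum(item)
    mean_0 = sum(t for x, t in zip(item, total) if not x) / (n - sum(item))
    return (mean_1 - mean_0) / sd_t * (p * (1.0 - p)) ** 0.5

# Invented responses for one item across ten examinees.
item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
total = [40, 38, 25, 35, 22, 30, 37, 20, 33, 36]
p = item_difficulty(item)
r_pb = point_biserial(item, total)
# moderate difficulty with a strongly positive point-biserial marks an
# item that discriminates well between high and low scorers
```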