Showing all 7 results
Peer reviewed
Confrey, Jere; Toutkoushian, Emily; Shah, Meetal – Applied Measurement in Education, 2019
Fully articulating validation arguments in the context of classroom assessment requires connecting evidence from multiple sources and addressing multiple types of validity in a coherent chain of reasoning. This type of validation argument is particularly complex for assessments that function in close proximity to instruction, address the fine…
Descriptors: Test Validity, Item Response Theory, Middle School Students, Mathematics Instruction
Peer reviewed
Wise, Steven L.; Kingsbury, G. Gage – Applied Measurement in Education, 2022
In achievement testing we assume that students will demonstrate their maximum performance as they encounter test items. Sometimes, however, student performance can decline during a test event, which implies that the test score does not represent maximum performance. This study describes a method for identifying significant performance decline and…
Descriptors: Achievement Tests, Performance, Classification, Guessing (Tests)
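The abstract above refers to identifying a significant decline in performance during a test event. As a purely illustrative sketch (not the authors' method), one simple check compares accuracy on the first and second halves of a dichotomously scored response string; the helper name and the example data below are assumptions for illustration only.

    # Illustrative decline check (not the authors' method): a two-proportion
    # z-statistic comparing accuracy on the first vs. second half of a
    # dichotomously scored response string.
    from math import sqrt

    def decline_z(responses):
        """responses: list of 0/1 item scores in administration order."""
        half = len(responses) // 2
        first, second = responses[:half], responses[half:]
        p1 = sum(first) / len(first)
        p2 = sum(second) / len(second)
        p = sum(responses) / len(responses)
        se = sqrt(p * (1 - p) * (1 / len(first) + 1 / len(second)))
        return (p1 - p2) / se if se > 0 else 0.0

    # Strong early performance followed by a run of incorrect (possibly rapid-guess) responses.
    print(decline_z([1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]))  # ~2.7

A large positive value would flag a response string whose total score may understate the examinee's maximum performance.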
Peer reviewed
Anderson, Daniel; Kahn, Joshua D.; Tindal, Gerald – Applied Measurement in Education, 2017
Unidimensionality and local independence are two common assumptions of item response theory. The former implies that all items measure a common latent trait, while the latter implies that responses are independent, conditional on respondents' location on the latent trait. Yet, few tests are truly unidimensional. Unmodeled dimensions may result in…
Descriptors: Robustness (Statistics), Item Response Theory, Mathematics Tests, Grade 6
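The two assumptions named in the abstract have standard formal statements (generic IRT notation, not specific to this article): unidimensionality means the latent trait theta is a single scalar, and local independence means item responses are conditionally independent given theta:

    P(X_1 = x_1, \ldots, X_J = x_J \mid \theta) = \prod_{j=1}^{J} P(X_j = x_j \mid \theta)

Unmodeled secondary dimensions show up as violations of this product form, that is, residual dependence among items after conditioning on theta.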
Peer reviewed
Michaelides, Michalis P. – Applied Measurement in Education, 2019
The Student Background survey administered along with achievement tests in studies of the International Association for the Evaluation of Educational Achievement includes scales of student motivation, competence, and attitudes toward mathematics and science. The scales consist of positively and negatively keyed items. The current research…
Descriptors: International Assessment, Achievement Tests, Mathematics Achievement, Mathematics Tests
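For context on keying (standard practice, not a detail taken from this study): negatively keyed Likert items are typically reverse-scored before a scale score is formed, so that higher values consistently indicate more of the construct. A minimal sketch, with the 4-point scale and the item keys assumed for illustration:

    # Reverse-score negatively keyed Likert items before summing a scale.
    # The 4-point scale and the '+'/'-' keys below are illustrative assumptions.
    def scale_score(responses, keys, min_point=1, max_point=4):
        """responses: raw Likert responses; keys: '+' or '-' per item."""
        recoded = [r if k == '+' else (max_point + min_point - r)
                   for r, k in zip(responses, keys)]
        return sum(recoded)

    print(scale_score([4, 1, 3], keys=['+', '-', '+']))  # 4 + 4 + 3 = 11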
Peer reviewed
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
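A generic cross-classified random effects model of the kind referenced above can be written with separate, non-nested random effects (notation assumed here for illustration, not taken from the article):

    y_{i(jk)} = \beta_0 + u_j + v_k + e_{i(jk)}, \quad u_j \sim N(0, \sigma_u^2), \quad v_k \sim N(0, \sigma_v^2)

Here u_j might be a teacher effect and v_k a rater effect; the classification is crossed rather than nested because any rater can score work associated with any teacher. The multiple-membership extension additionally lets a lower-level unit belong to more than one higher-level unit, with weights.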
Peer reviewed
Wyse, Adam E.; Albano, Anthony D. – Applied Measurement in Education, 2015
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Testing Programs
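As background on the mechanics the abstract takes for granted (generic CAT logic, not the testing program's actual algorithm): adaptive tests commonly administer, at each step, the unused item with maximum Fisher information at the current ability estimate. A minimal sketch under a 2PL model, with the pooled general/modified item parameters assumed for illustration:

    # Minimal CAT item-selection sketch (2PL): choose the unused item with
    # maximum Fisher information at the current theta estimate. The item
    # pool below, mixing general and modified items, is illustrative only.
    from math import exp

    def p_correct(theta, a, b):
        return 1.0 / (1.0 + exp(-a * (theta - b)))

    def item_information(theta, a, b):
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def select_next(theta, pool, administered):
        candidates = [(i, item_information(theta, a, b))
                      for i, (a, b) in enumerate(pool) if i not in administered]
        return max(candidates, key=lambda c: c[1])[0]

    pool = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]  # (discrimination a, difficulty b)
    print(select_next(theta=0.3, pool=pool, administered={0}))  # -> 2

Whether such a procedure behaves comparably when general and modified items are calibrated and administered together is the kind of assumption the study examines.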
Peer reviewed
Van Nijlen, Daniel; Janssen, Rianne – Applied Measurement in Education, 2011
The distinction between quantitative and qualitative differences in mastery is essential when monitoring student progress and is crucial for instructional interventions to deal with learning difficulties. Mixture item response theory (IRT) models can provide a convenient way to make the distinction between quantitative and qualitative differences…
Descriptors: Spelling, Indo European Languages, Vowels, Verbal Tests
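A generic mixture Rasch model of the kind referenced above (standard notation, not specific to the article) lets item difficulty vary by latent class:

    P(X_{ij} = 1) = \sum_{g=1}^{G} \pi_g \, \frac{\exp(\theta_{ig} - b_{jg})}{1 + \exp(\theta_{ig} - b_{jg})}

Within a class, differences in \theta_{ig} are quantitative (more or less of the same mastery); across classes, different difficulty profiles b_{jg} capture qualitative differences, such as groups of students for whom different items are relatively difficult.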