| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 15 |
| Since 2022 (last 5 years) | 63 |
| Since 2017 (last 10 years) | 162 |
| Since 2007 (last 20 years) | 321 |
| Author | Records |
| --- | --- |
| Hambleton, Ronald K. | 15 |
| Wang, Wen-Chung | 9 |
| Livingston, Samuel A. | 6 |
| Sijtsma, Klaas | 6 |
| Wainer, Howard | 6 |
| Weiss, David J. | 6 |
| Wilcox, Rand R. | 6 |
| Cheng, Ying | 5 |
| Gessaroli, Marc E. | 5 |
| Lee, Won-Chan | 5 |
| Lewis, Charles | 5 |
| Location | Records |
| --- | --- |
| Turkey | 8 |
| Australia | 7 |
| Canada | 7 |
| China | 5 |
| Netherlands | 5 |
| Japan | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| Germany | 3 |
| Michigan | 3 |
| Singapore | 3 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| Americans with Disabilities… | 1 |
| Equal Access | 1 |
| Job Training Partnership Act… | 1 |
| Race to the Top | 1 |
| Rehabilitation Act 1973… | 1 |
Northwest Evaluation Association, 2017
Some schools use results from the MAP® Growth™ interim assessments from Northwest Evaluation Association® (NWEA®) in a number of high-stakes ways: as a component of their teacher evaluation systems, to determine whether a student advances to the next grade, or as an indicator of student readiness for certain programs or…
Descriptors: Outcome Measures, Academic Achievement, High Stakes Tests, Test Results
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Lee, HyeSun; Geisinger, Kurt F. – Educational and Psychological Measurement, 2016
The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…
Descriptors: Test Bias, Measurement, Accuracy, Statistical Analysis
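The abstract above pairs Mantel-Haenszel DIF detection with purification of the matching criterion. As a rough illustration of that two-stage idea for dichotomous items (not the paper's actual procedure or its booklet-matching variants), here is a minimal Python sketch; the flagging cutoff, function names, and simulated data are all invented for the example.

```python
import numpy as np

def mh_odds_ratio(y, match, group):
    """Mantel-Haenszel common odds ratio for one dichotomous item.
    y: 0/1 responses; match: matching score; group: 1 = reference, 0 = focal."""
    num = den = 0.0
    for s in np.unique(match):
        k = match == s
        a = np.sum((y == 1) & (group == 1) & k)   # reference, correct
        b = np.sum((y == 0) & (group == 1) & k)   # reference, incorrect
        c = np.sum((y == 1) & (group == 0) & k)   # focal, correct
        d = np.sum((y == 0) & (group == 0) & k)   # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else np.nan

def purified_mh_flags(X, group, log_or_cut=np.log(1.5), max_iter=10):
    """Purification: flag DIF items, rebuild the matching score without
    them, and repeat until the flagged set stabilizes. Cutoff is arbitrary."""
    flagged = np.zeros(X.shape[1], dtype=bool)
    for _ in range(max_iter):
        match = X[:, ~flagged].sum(axis=1)        # purified total score
        ors = np.array([mh_odds_ratio(X[:, j], match, group)
                        for j in range(X.shape[1])])
        new = np.abs(np.log(ors)) > log_or_cut
        if np.array_equal(new, flagged):
            break
        flagged = new
    return flagged

# Invented demo: 10 Rasch-like items, item 0 made harder for the focal group.
rng = np.random.default_rng(0)
theta = rng.normal(size=2000)
group = rng.integers(0, 2, size=2000)
b = np.linspace(-1, 1, 10)
b_shift = b.copy()
b_shift[0] += 0.8
p = 1 / (1 + np.exp(-(theta[:, None] - np.where(group[:, None] == 1, b, b_shift))))
X = (rng.random((2000, 10)) < p).astype(int)
print(purified_mh_flags(X, group))
```

Purification matters because DIF items contaminate the total score used for matching; re-matching on a score that excludes flagged items and re-testing reduces that contamination.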
Cao, Mengyang; Tay, Louis; Liu, Yaowu – Educational and Psychological Measurement, 2017
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Descriptors: Monte Carlo Methods, Test Items, Test Bias, Error of Measurement
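Both Wald variants named in this abstract reduce to the same quadratic form: for a vector d of between-group differences in an item's parameter estimates with covariance C, W = d'C⁻¹d is referred to a chi-square with len(d) degrees of freedom. A minimal sketch, with illustrative numbers rather than estimates from any real calibration:

```python
import numpy as np
from scipy.stats import chi2

def wald_stat(d, cov):
    """Wald statistic W = d' C^{-1} d; under no DIF, W ~ chi2(len(d))."""
    d = np.asarray(d, dtype=float)
    W = float(d @ np.linalg.solve(np.asarray(cov, dtype=float), d))
    return W, chi2.sf(W, df=len(d))

# Made-up between-group differences in one 2PL item's (a, b) estimates
# and an invented covariance matrix, purely to show the computation.
W, p = wald_stat([0.30, -0.25], [[0.020, 0.004], [0.004, 0.015]])
print(f"W = {W:.2f}, p = {p:.3f}")
```

The iterative part of the approach is procedural: Wald-2 screens all items to assemble a DIF-free anchor set, then Wald-1 re-tests the remaining items against that anchor, repeating until the flagged set stops changing.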
Steinkamp, Susan Christa – ProQuest LLC, 2017
The use and interpretation of test scores that rely on accurate estimation of ability via an IRT model depend on the assumption that the IRT model fits the data. Examinees who do not put forth full effort in answering test questions, have prior knowledge of test content, or do not approach a test with the intent of answering…
Descriptors: Test Items, Item Response Theory, Scores, Test Wiseness
Galikyan, Irena; Madyarov, Irshat; Gasparyan, Rubina – ETS Research Report Series, 2019
The broad range of English language teaching and learning contexts present in the world today necessitates high-quality assessment instruments that can provide reliable and meaningful information about learners' English proficiency levels to relevant stakeholders. The TOEFL Junior® tests were recently introduced by Educational Testing…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Student Attitudes
Senel, Selma; Kutlu, Ömer – European Journal of Special Needs Education, 2018
This paper examines the listening comprehension skills of visually impaired students (VIS) using computerised adaptive testing (CAT) and reader-assisted paper-pencil testing (raPPT), along with student views about both. An explanatory mixed-methods design was used in this study. The sample comprises 51 VIS in 7th and 8th grades; 9 of these students were…
Descriptors: Computer Assisted Testing, Adaptive Testing, Visual Impairments, Student Attitudes
McGonagle, Katherine A.; Brown, Charles; Schoeni, Robert F. – Field Methods, 2015
Recording interviews is a key feature of quality control protocols for most survey organizations. We examine the effects on interview length and data quality of a new protocol adopted by a national panel study. The protocol recorded a randomly chosen one-third of all interviews digitally, although all respondents were asked for permission to…
Descriptors: National Surveys, Interviews, Quality Control, Test Length
Zaporozhets, Olga; Fox, Christine M.; Beltyukova, Svetlana A.; Laux, John M.; Piazza, Nick J.; Salyers, Kathleen – Measurement and Evaluation in Counseling and Development, 2015
The aim of this study was to develop a linear measure of change using University of Rhode Island Change Assessment items that represented Prochaska and DiClemente's theory. The resulting Toledo Measure of Change is short, is easy to use, and provides reliable scores for identifying individuals' stage of change and progression within that stage.
Descriptors: Item Response Theory, Change, Measures (Individuals), Test Construction
Hsu, Chia-Ling; Wang, Wen-Chung – Journal of Educational Measurement, 2015
Cognitive diagnosis models provide profile information about a set of latent binary attributes, whereas item response models yield a summary report on a latent continuous trait. To utilize the advantages of both models, higher order cognitive diagnosis models were developed in which information about both latent binary attributes and latent…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Cognitive Measurement
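The snippet does not say which cognitive diagnosis model the authors combine with the higher-order trait, so the sketch below generates data from one common member of the family, a higher-order DINA, purely to make the two-layer structure concrete: a continuous trait theta governs mastery of binary attributes, and the attribute profile governs item responses. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

K, J = 3, 5                            # attributes, items (illustrative sizes)
lam1 = np.array([1.2, 0.9, 1.5])       # attribute slopes on theta (invented)
lam0 = np.array([-0.5, 0.0, 0.4])      # attribute intercepts (invented)
Q = rng.integers(0, 2, size=(J, K))    # Q-matrix: attributes each item requires
Q[Q.sum(axis=1) == 0, 0] = 1           # ensure every item requires something
slip, guess = 0.1, 0.2

theta = rng.normal()                                   # higher-order trait
alpha = rng.random(K) < sigmoid(lam1 * theta + lam0)   # binary attribute profile
eta = np.all(alpha >= Q, axis=1)                       # all required attributes?
p_correct = np.where(eta, 1 - slip, guess)             # DINA response probabilities
print(theta, alpha.astype(int), p_correct)
```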
Lee, Yi-Hsuan; Zhang, Jinming – International Journal of Testing, 2017
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Descriptors: Test Bias, Test Reliability, Performance, Scores
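The reliability definition used here is the classical ratio of true-score variance to observed-score variance. A tiny simulation, with arbitrary variances, shows the ratio numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(0.0, 1.0, 100_000)   # true scores: variance 1 (invented)
error = rng.normal(0.0, 0.5, 100_000)  # measurement error: variance 0.25
observed = true + error

# reliability = var(T) / var(X) = 1 / (1 + 0.25) = 0.8 in expectation
print(f"reliability: {true.var() / observed.var():.3f}")
```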
James, Syretta R.; Liu, Shihching Jessica; Maina, Nyambura; Wade, Julie; Wang, Helen; Wilson, Heather; Wolanin, Natalie – Montgomery County Public Schools, 2021
The impact of the COVID-19 pandemic continues to overwhelm the functioning and outcomes of educational systems throughout the nation. The public education system is under particular scrutiny given that students, families, and educators are under considerable stress to maintain academic progress. Since the beginning of the crisis, school systems…
Descriptors: Achievement Tests, COVID-19, Pandemics, Public Schools
Andersson, Björn – Journal of Educational Measurement, 2016
In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response…
Descriptors: Equated Scores, Item Response Theory, Error of Measurement, Tests
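The core of equipercentile equating is the identity e_Y(x) = G⁻¹(F(x)): a form-X score maps to the form-Y score with the same percentile rank. The paper works with model-implied (polytomous IRT) score distributions; the sketch below uses raw empirical distributions and invented data just to show the mechanics.

```python
import numpy as np

def equipercentile(x_scores, y_scores, x):
    """e_Y(x) = G^{-1}(F(x)): take x's percentile rank on form X,
    then return the form-Y score at that same rank."""
    p = np.mean(np.asarray(x_scores) <= x)
    return np.quantile(np.asarray(y_scores), p)

rng = np.random.default_rng(1)
form_x = rng.binomial(40, 0.60, 5000)  # invented score distributions
form_y = rng.binomial(40, 0.55, 5000)  # for two 40-item forms
print(equipercentile(form_x, form_y, 25))
```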
Jacob, Brian A. – Center on Children and Families at Brookings, 2016
Contrary to popular belief, modern cognitive assessments--including the new Common Core tests--produce test scores based on sophisticated statistical models rather than the simple percent of items a student answers correctly. While there are good reasons for this, it means that reported test scores depend on many decisions made by test designers,…
Descriptors: Scores, Common Core State Standards, Test Length, Test Content
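The point that model-based scores are not percent correct is easy to demonstrate: under a 2PL IRT model, two response patterns with the same number right can yield different ability estimates, because the items differ in difficulty and discrimination. A grid-search sketch with invented item parameters:

```python
import numpy as np

GRID = np.linspace(-4, 4, 401)

def ml_theta_2pl(responses, a, b, grid=GRID):
    """Maximum-likelihood ability under a 2PL:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))). Grid search for clarity."""
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

a = np.array([0.8, 1.2, 2.0])   # discriminations (invented)
b = np.array([-1.0, 0.0, 1.0])  # difficulties (invented)
for pattern in ([1, 1, 0], [0, 1, 1]):   # same percent correct, different items
    print(pattern, round(float(ml_theta_2pl(np.array(pattern), a, b)), 2))
```

Both examinees answer two of three items correctly, yet their estimates differ, which is exactly why reported scores depend on the designers' modeling decisions.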
