Publication Date
In 2025 | 0
Since 2024 | 2
Since 2021 (last 5 years) | 10
Since 2016 (last 10 years) | 24
Since 2006 (last 20 years) | 55
Author
Ahmed, S. | 2
Baxter, G. P. | 2
Ketterlin-Geller, Leanne R. | 2
Liu, Kimy | 2
Salvucci, S. | 2
Sikali, E. | 2
Sloan, M. | 2
Waits, T. | 2
Abbott, Marilyn L. | 1
Acquaye, Rosemary | 1
Anderson, Carolyn J. | 1
Education Level
Elementary Secondary Education | 14
Elementary Education | 10
Grade 4 | 8
Higher Education | 8
Grade 6 | 7
Grade 8 | 7
Middle Schools | 7
Grade 5 | 6
Grade 7 | 5
Intermediate Grades | 5
Junior High Schools | 5
Location
Australia | 2
California | 2
Massachusetts | 2
Puerto Rico | 2
China | 1
Czech Republic | 1
Georgia | 1
Idaho | 1
Italy | 1
Japan | 1
Malaysia | 1
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 2
National Defense Education Act | 1
Gyamfi, Abraham; Acquaye, Rosemary – Acta Educationis Generalis, 2023
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows students' ability to be estimated from any set of items. IRT also allows the difficulty and discrimination level of each item on the test to be estimated. In the framework of IRT, item characteristics are…
Descriptors: Item Response Theory, Models, Test Items, Difficulty Level
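Since the abstract above turns on estimating item difficulty and discrimination, here is a minimal sketch of the two-parameter logistic (2PL) response function that links those parameters to the probability of a correct answer; the parameter values and function name are illustrative, not taken from the cited study.

```python
# Minimal 2PL IRT sketch: probability that a student of ability theta answers an
# item correctly, given the item's discrimination (a) and difficulty (b).
# All values are illustrative.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

if __name__ == "__main__":
    # An item of moderate difficulty (b = 0.5) and good discrimination (a = 1.2).
    for theta in (-2.0, 0.0, 0.5, 2.0):
        print(f"theta={theta:+.1f}  P(correct)={p_correct(theta, a=1.2, b=0.5):.3f}")
```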
Meike Akveld; George Kinnear – International Journal of Mathematical Education in Science and Technology, 2024
Many universities use diagnostic tests to assess incoming students' preparedness for mathematics courses. Diagnostic test results can help students to identify topics where they need more practice and give lecturers a summary of strengths and weaknesses in their class. We demonstrate a process that can be used to make improvements to a mathematics…
Descriptors: Mathematics Tests, Diagnostic Tests, Test Items, Item Analysis
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
This article discusses visual techniques for identifying, on the one hand, test items that are optimal candidates for the final compilation and, on the other, items whose inclusion would lower the quality of the compilation. Some classic visual tools are discussed, first, in a practical manner in diagnosing the logical,…
Descriptors: Test Items, Item Analysis, Item Response Theory, Cutting Scores
Haladyna, Thomas M.; Rodriguez, Michael C. – Educational Assessment, 2021
Full-information item analysis provides item developers and reviewers with comprehensive empirical evidence of item quality, including option response frequencies, the point-biserial index (PBI) for distractors, mean scores of respondents selecting each option, and option trace lines. The multi-serial index (MSI) is introduced as a more informative…
Descriptors: Test Items, Item Analysis, Reading Tests, Mathematics Tests
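As a companion to the point-biserial index (PBI) mentioned in the abstract, here is a minimal sketch of the PBI for a single distractor: the correlation between a 0/1 indicator for choosing that option and the total test score. The data and function name are illustrative only.

```python
# Point-biserial sketch: Pearson correlation between a 0/1 option indicator and
# total test scores. A well-behaved distractor should show a negative value,
# since respondents who pick it tend to have lower totals. Toy data only.
import statistics

def point_biserial(chose_option: list[int], total_scores: list[float]) -> float:
    n = len(chose_option)
    mean_x = sum(chose_option) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(chose_option, total_scores)) / (n - 1)
    return cov / (statistics.stdev(chose_option) * statistics.stdev(total_scores))

picked_distractor = [1, 0, 0, 1, 0, 1, 0, 0]
totals = [12, 25, 22, 10, 27, 14, 24, 21]
print(round(point_biserial(picked_distractor, totals), 3))
```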
Wan, Siyu; Keller, Lisa A. – Practical Assessment, Research & Evaluation, 2023
Statistical process control (SPC) charts have been widely used in the field of educational measurement. The cumulative sum (CUSUM) chart is an established SPC method for detecting aberrant responses in educational assessments, and many studies have investigated its performance in different test settings. This paper describes the CUSUM…
Descriptors: Visual Aids, Educational Assessment, Evaluation Methods, Item Response Theory
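To make the CUSUM idea concrete, below is a minimal sketch of an upper (one-sided) CUSUM accumulated over person-fit residuals, assuming such residuals are already available from a fitted IRT model. The slack value, threshold, and data are illustrative, not the settings studied in the paper.

```python
# Upper CUSUM sketch: accumulate positive drift in residuals (observed minus
# model-expected correctness) above a slack value; a sustained run of positive
# residuals pushes the statistic over a decision threshold and flags the pattern.

def cusum(residuals: list[float], slack: float = 0.5) -> list[float]:
    c_plus = 0.0
    path = []
    for r in residuals:
        c_plus = max(0.0, c_plus + r - slack)
        path.append(c_plus)
    return path

residuals = [0.1, -0.2, 0.8, 0.9, 1.1, 0.7, -0.1, 1.0]  # toy values
path = cusum(residuals)
threshold = 2.0
print([round(c, 2) for c in path])
print("flagged:", any(c > threshold for c in path))
```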
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
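The maximum-clique framing mentioned in the abstract can be illustrated in miniature: candidate forms are nodes, two forms are compatible when they share no items, and a set of mutually compatible forms is a clique. The greedy sketch below only illustrates the compatibility idea on toy data; the cited work uses far more scalable assembly algorithms, and all names here are hypothetical.

```python
# Toy illustration of the clique view of uniform test assembly: grow a set of
# mutually item-disjoint forms greedily (a stand-in for a real clique algorithm).
candidate_forms = {
    "F1": {1, 2, 3},
    "F2": {4, 5, 6},
    "F3": {3, 7, 8},   # overlaps F1 on item 3, so it cannot join F1's clique
    "F4": {7, 9, 10},
}

def compatible(a: set, b: set) -> bool:
    """Two forms are compatible when they have no items in common."""
    return not (a & b)

selected = []
for name, items in candidate_forms.items():
    if all(compatible(items, candidate_forms[chosen]) for chosen in selected):
        selected.append(name)

print(selected)  # ['F1', 'F2', 'F4']
```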
Kyeng Gea Lee; Mark J. Lee; Soo Jung Lee – International Journal of Technology in Education and Science, 2024
Online assessment is an essential part of online education and, if conducted properly, has been found to effectively gauge student learning. Generally, text-based questions have been the cornerstone of online assessment. Recently, however, the emergence of generative artificial intelligence has added a significant challenge to the integrity of…
Descriptors: Artificial Intelligence, Computer Software, Biology, Science Instruction
Zhao, Xin; Coxe, Stefany; Sibley, Margaret H.; Zulauf-McCurdy, Courtney; Pettit, Jeremy W. – Prevention Science, 2023
There has been increasing interest in applying integrative data analysis (IDA) to analyze data across multiple studies to increase sample size and statistical power. Measures of a construct are frequently not consistent across studies. This article provides a tutorial on the complex decisions that occur when conducting harmonization of measures…
Descriptors: Data Analysis, Sample Size, Decision Making, Test Items
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
Tijmstra, Jesper; Bolsinova, Maria; Liaw, Yuan-Ling; Rutkowski, Leslie; Rutkowski, David – Journal of Educational Measurement, 2020
Although the root-mean-squared deviation (RMSD) is a popular statistical measure for evaluating country-specific item-level misfit (i.e., differential item functioning [DIF]) in international large-scale assessments, this paper shows that its sensitivity for detecting misfit may depend strongly on the proficiency distribution of the considered…
Descriptors: Test Items, Goodness of Fit, Probability, Accuracy
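The abstract's point, that RMSD is weighted by a group's proficiency distribution, can be seen in a minimal sketch of the statistic: the root of the density-weighted mean squared gap between observed and model-implied proportions correct over an ability grid. The grid, weights, and proportions below are illustrative.

```python
# RMSD item-fit sketch: density-weighted root-mean-squared gap between observed
# proportions correct and the model-implied item characteristic curve.
import math

def rmsd(density, observed_p, model_p):
    """RMSD = sqrt( sum_k w_k * (obs_k - model_k)^2 ), weights normalized to 1."""
    total = sum(density)
    return math.sqrt(
        sum((w / total) * (o - m) ** 2
            for w, o, m in zip(density, observed_p, model_p))
    )

# Ability grid theta = -2, -1, 0, 1, 2 (for orientation only)
density    = [0.10, 0.20, 0.40, 0.20, 0.10]  # group's proficiency distribution
model_p    = [0.15, 0.30, 0.50, 0.70, 0.85]  # model-implied proportions correct
observed_p = [0.10, 0.25, 0.55, 0.80, 0.90]  # observed proportions correct
print(round(rmsd(density, observed_p, model_p), 3))
```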
Achieve, Inc., 2019
Assessment is a key lever for educational improvement. Assessments can be used to monitor, signal, and influence science teaching and learning -- provided that they are of high quality, reflect the rigor and intent of academic standards, and elicit meaningful student performances. Since the release of "A Framework for K-12 Science…
Descriptors: Difficulty Level, Evaluation Criteria, Cognitive Processes, Test Items
Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U. – Educational Measurement: Issues and Practice, 2016
The main points of Sijtsma and Green and Yang in Educational Measurement: Issues and Practice (34, 4) are that reliability, internal consistency, and unidimensionality are distinct and that Cronbach's alpha may be problematic. Neither of these assertions is at odds with Davenport, Davison, Liou, and Love in the same issue. However, many authors…
Descriptors: Educational Assessment, Reliability, Validity, Test Construction
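Because the exchange concerns what Cronbach's alpha does and does not measure, a minimal sketch of alpha computed from an items-by-respondents score matrix may help: it is a function of item variances and total-score variance only, so it is not by itself a test of unidimensionality. The toy data and function name are illustrative.

```python
# Cronbach's alpha sketch: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
import statistics

def cronbach_alpha(scores: list[list[float]]) -> float:
    """scores[i][j] = score of respondent j on item i."""
    k = len(scores)
    item_vars = [statistics.variance(item) for item in scores]
    totals = [sum(item[j] for item in scores) for j in range(len(scores[0]))]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

items = [  # 4 items x 6 respondents, scored 0/1
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 1],
]
print(round(cronbach_alpha(items), 3))
```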
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. The computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
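To illustrate the CAT algorithm the abstract refers to, here is a minimal sketch of one adaptive step under a 2PL model: choose the unadministered item with the largest Fisher information at the current provisional ability estimate. The item bank, parameters, and ability value are illustrative only.

```python
# One CAT step sketch: maximum-information item selection under a 2PL model.
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta: float, a: float, b: float) -> float:
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

item_bank = {"item1": (1.0, -1.0), "item2": (1.5, 0.2), "item3": (0.8, 1.5)}  # (a, b)
administered = {"item1"}
theta_hat = 0.3  # current provisional ability estimate

next_item = max(
    (name for name in item_bank if name not in administered),
    key=lambda name: fisher_info(theta_hat, *item_bank[name]),
)
print(next_item)  # item2 is most informative near theta = 0.3
```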
Spurgeon, Shawn L. – Measurement and Evaluation in Counseling and Development, 2017
Construct irrelevance (CI) and construct underrepresentation (CU) are two major threats to validity, yet they are rarely discussed within the counseling literature. This article provides information about the relevance of these threats to internal validity. An illustrative case example is provided to assist counselors in understanding these…
Descriptors: Construct Validity, Evaluation Criteria, Evaluation Methods, Evaluation Problems