Publication Date

| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 13 |
| Since 2017 (last 10 years) | 45 |
| Since 2007 (last 20 years) | 112 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Correlation | 149 |
| Difficulty Level | 149 |
| Test Items | 149 |
| Item Response Theory | 47 |
| Item Analysis | 43 |
| Foreign Countries | 40 |
| Comparative Analysis | 38 |
| Test Reliability | 28 |
| Scores | 25 |
| Test Construction | 25 |
| Statistical Analysis | 23 |
Author

| Author | Count |
| --- | --- |
| Dorans, Neil J. | 4 |
| Holland, Paul | 3 |
| Sinharay, Sandip | 3 |
| Attali, Yigal | 2 |
| DeMars, Christine E. | 2 |
| Domingue, Benjamin W. | 2 |
| Gilbert, Joshua B. | 2 |
| Joshi, Mridul | 2 |
| Kobrin, Jennifer L. | 2 |
| Livingston, Samuel A. | 2 |
| Miratrix, Luke W. | 2 |
Audience

| Audience | Count |
| --- | --- |
| Researchers | 6 |
Location

| Location | Count |
| --- | --- |
| Indonesia | 4 |
| Turkey | 4 |
| Germany | 3 |
| Australia | 2 |
| Canada | 2 |
| South Korea | 2 |
| Belgium | 1 |
| Cyprus | 1 |
| Czech Republic | 1 |
| District of Columbia | 1 |
| Finland | 1 |
Changiz Mohiyeddini – Anatomical Sciences Education, 2025
This article presents a step-by-step guide to using R and SPSS to bootstrap exam questions. Bootstrapping, a versatile nonparametric analytical technique, can help to improve the psychometric qualities of exam questions in the process of quality assurance. Bootstrapping is particularly useful in disciplines such as medical education, where student…
Descriptors: Test Items, Sampling, Statistical Inference, Nonparametric Statistics
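The article's step-by-step guide is not reproduced in the abstract; as a minimal sketch of the bootstrapping idea it describes, the following Python snippet estimates a percentile confidence interval for one item's difficulty (proportion correct) by resampling student responses. The function name, interval method, and example data are illustrative assumptions, not the article's code.

```python
import random

def bootstrap_difficulty_ci(responses, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for item difficulty (proportion correct).

    responses: list of 0/1 scores on one exam item.
    Resamples with replacement n_boot times and returns the
    (alpha/2, 1 - alpha/2) percentiles of the resampled proportions.
    """
    rng = random.Random(seed)
    n = len(responses)
    stats = sorted(
        sum(rng.choice(responses) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical item: 30 students, 21 answered correctly
item = [1] * 21 + [0] * 9
lo, hi = bootstrap_difficulty_ci(item)
```

A wide interval here flags an item whose observed difficulty is too noisy to act on, which is the quality-assurance use the abstract points to.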
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
Pearson product-moment correlation coefficient between item g and test score X, known as item-test or item-total correlation ("Rit"), and item-rest correlation ("Rir") are two of the most used classical estimators for item discrimination power (IDP). Both "Rit" and "Rir" underestimate IDP caused by the…
Descriptors: Correlation, Test Items, Scores, Difficulty Level
Anatri Desstya; Ika Candra Sayekti; Muhammad Abduh; Sukartono – Journal of Turkish Science Education, 2025
This study aimed to develop a standardised instrument for diagnosing science misconceptions in primary school children. Following a developmental research approach using the 4-D model (Define, Design, Develop, Disseminate), 100 four-tier multiple choice items were constructed. Content validity was established through expert evaluation by six…
Descriptors: Test Construction, Science Tests, Science Instruction, Diagnostic Tests
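The study's exact scoring rubric is not given in the abstract; as a rough illustration of how four-tier items (answer, answer confidence, reason, reason confidence) are commonly classified, here is one conventional decision rule. The labels, the 1-6 confidence scale, and the threshold are assumptions for illustration, not the instrument's published rubric:

```python
def classify_four_tier(answer_ok, answer_conf, reason_ok, reason_conf,
                       threshold=4):
    """One common four-tier diagnostic convention (a sketch only).

    Confidence ratings are assumed to be on a 1-6 scale;
    'confident' means both ratings are >= threshold.
    """
    confident = answer_conf >= threshold and reason_conf >= threshold
    if answer_ok and reason_ok:
        return "scientific knowledge" if confident else "lucky guess"
    if not answer_ok and not reason_ok and confident:
        return "misconception"
    # Mixed or low-confidence responses are grouped here for simplicity
    return "lack of knowledge"
```

The key diagnostic signal is the "misconception" cell: a wrong answer held with high confidence, which a plain multiple-choice score cannot distinguish from a guess.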
Ferrari-Bridgers, Franca – International Journal of Listening, 2023
While many tools exist to assess student content knowledge, there are few that assess whether students display the critical listening skills necessary to interpret the quality of a speaker's message at the college level. The following research provides preliminary evidence for the internal consistency and factor structure of a tool, the…
Descriptors: Factor Structure, Test Validity, Community College Students, Test Reliability
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for the adaptive tests with medium difficulty (probability of solution is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
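The paper's closed formula is not reproduced in the abstract, but the reason medium difficulty (p = 1/2) is the focal case is standard: under a Rasch model, an item's Fisher information is p(1 - p), which peaks exactly when the solution probability is 1/2. A minimal sketch of that background fact (the function names are illustrative):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p(1 - p), maximal at p = 1/2."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

# Information peaks when item difficulty matches ability (p = 1/2)
matched = item_information(theta=0.0, b=0.0)
off_target = item_information(theta=0.0, b=2.0)  # harder item, less info
```

This is why adaptive tests built from medium-difficulty items pin down ability parameters most precisely per item administered.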
Yoo Jeong Jang – ProQuest LLC, 2022
Despite the increasing demand for diagnostic information, observed subscores have been often reported to lack adequate psychometric qualities such as reliability, distinctiveness, and validity. Therefore, several statistical techniques based on CTT and IRT frameworks have been proposed to improve the quality of subscores. More recently, DCM has…
Descriptors: Classification, Accuracy, Item Response Theory, Correlation
Flint, Kaitlyn; Spaulding, Tammie J. – Language, Speech, and Hearing Services in Schools, 2021
Purpose: The readability and comprehensibility of Learner's Permit Knowledge Test practice questions and the relationship with test failure rates across states and the District of Columbia were examined. Method: Failure rates were obtained from department representatives. Practice test questions were extracted from drivers' manuals and department…
Descriptors: Correlation, Readability Formulas, Reading Comprehension, Difficulty Level
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Annenberg Institute for School Reform at Brown University, 2024
Analyzing heterogeneous treatment effects (HTE) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and pre-intervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Arikan, Serkan; Aybek, Eren Can – Educational Measurement: Issues and Practice, 2022
Many scholars compared various item discrimination indices in real or simulated data. Item discrimination indices, such as item-total correlation, item-rest correlation, and IRT item discrimination parameter, provide information about individual differences among all participants. However, there are tests that aim to select a very limited number…
Descriptors: Monte Carlo Methods, Item Analysis, Correlation, Individual Differences
Slepkov, A. D.; Van Bussel, M. L.; Fitze, K. M.; Burr, W. S. – SAGE Open, 2021
There is a broad literature in multiple-choice test development, both in terms of item-writing guidelines, and psychometric functionality as a measurement tool. However, most of the published literature concerns multiple-choice testing in the context of expert-designed high-stakes standardized assessments, with little attention being paid to the…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Multiple Choice Tests
Hartono, Wahyu; Hadi, Samsul; Rosnawati, Raden; Retnawati, Heri – Pegem Journal of Education and Instruction, 2023
Researchers design diagnostic assessments to measure students' knowledge structures and processing skills to provide information about their cognitive attributes. The purpose of this study is to determine the instrument's validity and score reliability, as well as to investigate the use of classical test theory to identify item characteristics. The…
Descriptors: Diagnostic Tests, Test Validity, Item Response Theory, Content Validity
Akhtar, Hanif – International Association for Development of the Information Society, 2022
When examinees perceive a test as low stakes, it is logical to assume that some of them will not put out their maximum effort. This condition makes the validity of the test results more complicated. Although many studies have investigated motivational fluctuation across tests during a testing session, only a small number of studies have…
Descriptors: Intelligence Tests, Student Motivation, Test Validity, Student Attitudes