Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 13 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Difficulty Level | 13 |
| Item Response Theory | 13 |
| Test Items | 10 |
| Foreign Countries | 6 |
| Psychometrics | 5 |
| Test Reliability | 5 |
| Item Analysis | 3 |
| Models | 3 |
| Multiple Choice Tests | 3 |
| Accuracy | 2 |
| Artificial Intelligence | 2 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 13 |
| Journal Articles | 12 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 4 |
| Secondary Education | 3 |
| High Schools | 2 |
| Early Childhood Education | 1 |
| Elementary Education | 1 |
| Grade 2 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Primary Education | 1 |
Yun-Kyung Kim; Li Cai – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2025
This paper introduces an application of cross-classified item response theory (IRT) modeling to an assessment utilizing the embedded standard setting (ESS) method (Lewis & Cook). The cross-classified IRT model is used to treat both item and person effects as random, where the item effects are regressed on the target performance levels (target…
Descriptors: Standard Setting (Scoring), Item Response Theory, Test Items, Difficulty Level
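As a rough sketch of the modeling idea named in this abstract (illustrative only, not the authors' exact specification), a Rasch-type cross-classified model with random person and item effects, where item difficulty is regressed on the item's target performance level, could be written as:

```latex
% Sketch of a Rasch-type cross-classified IRT model (illustrative, not the
% authors' exact specification). Person ability and item difficulty are both
% random; difficulty is regressed on the item's target performance level.
\begin{align}
  P(X_{pi} = 1 \mid \theta_p, b_i) &= \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)} \\
  \theta_p &\sim \mathcal{N}(0, \sigma_\theta^2) && \text{(random person effect)} \\
  b_i &= \gamma_0 + \gamma_1\,\mathrm{level}_i + u_i,
       \quad u_i \sim \mathcal{N}(0, \sigma_b^2) && \text{(random item effect)}
\end{align}
```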
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to assess the accuracy of estimating multiple-choice test item parameters under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
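To make this kind of accuracy indicator concrete, here is a minimal Python sketch (simulated data; the crude estimator and all names are illustrative, not the study's method) that computes the mean absolute difference between estimated and true item difficulties:

```python
# Illustrative sketch (not the study's procedure): simulate Rasch responses,
# recover item difficulties with a crude logit estimator, and compute the
# accuracy indicator as the mean absolute difference between estimated and
# true parameter values.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 30
theta = rng.normal(0.0, 1.0, size=n_persons)     # true abilities
b_true = rng.uniform(-2.0, 2.0, size=n_items)    # true difficulties

# Rasch response probabilities and simulated 0/1 responses.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# Crude difficulty estimate: inverse logit of the item proportion correct.
# (Real IRT software uses marginal maximum likelihood; this is only a sketch.)
prop_correct = responses.mean(axis=0)
b_hat = -np.log(prop_correct / (1.0 - prop_correct))

mean_abs_error = np.abs(b_hat - b_true).mean()
print(f"mean |estimated - true| difficulty: {mean_abs_error:.3f}")
```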
Christina Glasauer; Martin K. Yeh; Lois Anne DeLong; Yu Yan; Yanyan Zhuang – Computer Science Education, 2025
Background and Context: Feedback on one's progress is essential to new programming language learners, particularly in out-of-classroom settings. Although many study materials offer assessment mechanisms, most neither examine the accuracy of the feedback they deliver nor provide evidence of its validity. Objective: We investigate the potential use of a…
Descriptors: Novices, Computer Science Education, Programming, Accuracy
Neda Kianinezhad; Mohsen Kianinezhad – Language Education & Assessment, 2025
This study presents a comparative analysis of classical reliability measures, including Cronbach's alpha, test-retest, and parallel forms reliability, alongside modern psychometric methods such as the Rasch model and Mokken scaling, to evaluate the reliability of C-tests in language proficiency assessment. Utilizing data from 150 participants…
Descriptors: Psychometrics, Test Reliability, Language Proficiency, Language Tests
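For context, the first of the classical measures named here, Cronbach's alpha, is simple to compute; a minimal NumPy sketch (simulated data, illustrative variable names):

```python
# Minimal sketch of Cronbach's alpha for a persons-by-items score matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = persons, columns = items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Example: 150 simulated respondents on a 20-item test.
rng = np.random.default_rng(1)
ability = rng.normal(size=(150, 1))
items = (ability + rng.normal(scale=1.0, size=(150, 20)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(items):.3f}")
```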
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potential damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
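Effort-moderated scoring generally treats a response whose response time falls below an item threshold as a rapid guess and excludes it from scoring; a simplified sketch, with the 10%-of-median threshold rule chosen here only for illustration:

```python
# Simplified sketch of effort-moderated (EM) scoring: responses faster than a
# per-item response-time threshold are flagged as rapid guesses and excluded,
# and the score is computed from the remaining (effortful) responses only.
# The 10%-of-median threshold is one common convention, used for illustration.
import numpy as np

def effort_moderated_scores(correct: np.ndarray, rt: np.ndarray) -> np.ndarray:
    """correct, rt: persons-by-items arrays of 0/1 scores and response times."""
    thresholds = 0.10 * np.median(rt, axis=0)    # per-item RT threshold
    effortful = rt >= thresholds                  # True = solution behavior
    n_effortful = effortful.sum(axis=1)
    # Proportion correct over effortful responses only (NaN if none remain).
    with np.errstate(invalid="ignore"):
        return np.where(n_effortful > 0,
                        (correct * effortful).sum(axis=1) / n_effortful,
                        np.nan)

rng = np.random.default_rng(2)
correct = rng.integers(0, 2, size=(5, 40))
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(5, 40))
rt[0, :10] = 0.5  # inject rapid responses for person 0 to show the flagging
print(effort_moderated_scores(correct, rt))
```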
Y. Yokhebed; Rexy Maulana Dwi Karmadi; Luvia Ranggi Nastiti – Journal of Biological Education Indonesia (Jurnal Pendidikan Biologi Indonesia), 2025
Although self-assessment in critical thinking is thought to help students recognise their strengths and weaknesses, the reliability and validity of the assessment tool are still questionable, so a more objective evaluation is needed. The objective of this investigation is to assess the self-assessment tools in evaluating students' critical thinking…
Descriptors: Self Evaluation (Individuals), Critical Thinking, Science and Society, Test Validity
Arandha May Rachmawati; Agus Widyantoro – English Language Teaching Educational Journal, 2025
This study aims to evaluate the quality of English reading comprehension test instruments used in informal learning, especially as English literacy tests. Using a quantitative approach, the analysis was carried out with the Rasch model in the Quest program on 30 multiple-choice questions administered to 30 grade IX students from informal educational…
Descriptors: Item Response Theory, Reading Tests, Reading Comprehension, English (Second Language)
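For context, the dichotomous Rasch model fitted by programs such as Quest gives the probability of a correct response as a function of person ability and item difficulty:

```latex
% The dichotomous Rasch model: probability that person n answers item i
% correctly, given ability \theta_n and item difficulty \delta_i.
P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```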
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality. The quality of AI-generated MCIs is comparable to that of items written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Rodrigo Moreta-Herrera; Xavier Oriol-Granado; Mònica González; Jose A. Rodas – Infant and Child Development, 2025
This study evaluates the Children's Worlds Psychological Well-Being Scale (CW-PSWBS) within a diverse international cohort of children aged 10 and 12, utilising Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies. Through a detailed psychometric analysis, this research assesses the CW-PSWBS's structural integrity, focusing on…
Descriptors: Well Being, Rating Scales, Children, Item Response Theory
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across two modes of test administration (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
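The standard practice this abstract describes, interacting treatment status with a pretest covariate, can be sketched with ordinary least squares (simulated data; not the authors' model):

```python
# Sketch of a standard heterogeneous-treatment-effect (HTE) analysis: regress
# the outcome on treatment, pretest, and their interaction; a nonzero
# interaction coefficient indicates the treatment effect varies with pretest.
import numpy as np

rng = np.random.default_rng(3)
n = 500
pretest = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
# True effect grows with pretest: 0.5 + 0.3 * pretest.
outcome = (0.2 + 0.8 * pretest + (0.5 + 0.3 * pretest) * treat
           + rng.normal(scale=1.0, size=n))

X = np.column_stack([np.ones(n), treat, pretest, treat * pretest])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
labels = ["intercept", "treatment", "pretest", "treatment x pretest"]
for name, b in zip(labels, beta):
    print(f"{name:>22}: {b:+.3f}")
```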
Ika Zenita Ratnaningsih; Unika Prihatsanti; Anggun Resdasari Prasetyo; Bambang Sumintono – Journal of Applied Research in Higher Education, 2025
Purpose: The present study aimed to validate the Indonesian-language version of the psychological capital questionnaire (PCQ), specifically within the context of higher education, by utilising Rasch analysis to evaluate reliability and validity aspects such as item-fit statistics, rating scale functioning, and differential item functioning of the…
Descriptors: Foreign Countries, Indonesian Languages, Test Validity, Psychological Characteristics
Nisaudzakiah Utami; Agus Setiawan; Ida Hamidah; Thomas Koehler – Educational Process: International Journal, 2025
Background/purpose: This study investigated the effectiveness of Problem-Based Worksheets (PBWs) in improving conceptual understanding of logic gates. PBWs were designed with key characteristics that integrate authentic problem-solving, active student engagement, and conceptual scaffolding to support critical thinking processes. A total of 32…
Descriptors: Problem Based Learning, Worksheets, Item Response Theory, Models
