Publication Date
In 2025 | 53
Since 2024 | 111
Since 2021 (last 5 years) | 297
Since 2016 (last 10 years) | 562
Since 2006 (last 20 years) | 865
Descriptor
Test Items | 1407
Test Validity | 1407
Test Construction | 703
Test Reliability | 681
Foreign Countries | 396
Item Analysis | 303
Difficulty Level | 246
Psychometrics | 231
Item Response Theory | 194
Scores | 181
Factor Analysis | 172
Author
Schoen, Robert C. | 8
Stansfield, Charles W. | 7
Baghaei, Purya | 5
Hambleton, Ronald K. | 5
LaVenia, Mark | 5
Roid, Gale | 5
Wainer, Howard | 5
Bejar, Isaac I. | 4
Bennett, Randy Elliot | 4
Benson, Jeri | 4
Filby, Nikola N. | 4
Audience
Practitioners | 43
Researchers | 38
Teachers | 28
Administrators | 14
Students | 5
Support Staff | 3
Community | 2
Parents | 2
Counselors | 1
Policymakers | 1
Location
Turkey | 60
Indonesia | 28
Canada | 22
Iran | 22
Australia | 21
California | 19
Germany | 19
China | 17
Florida | 14
United Kingdom | 14
Japan | 12
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
Anne Traynor; Sara C. Christopherson – Applied Measurement in Education, 2024
Combining methods from earlier content validity and more contemporary content alignment studies may allow a more complete evaluation of the meaning of test scores than if either set of methods is used on its own. This article distinguishes item relevance indices in the content validity literature from test representativeness indices in the…
Descriptors: Test Validity, Test Items, Achievement Tests, Test Construction
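As a concrete illustration of the distinction drawn in the entry above (the example is illustrative, not taken from the article), an item relevance index such as the item-level content validity index (I-CVI) summarizes expert judgments about a single item, while a simple representativeness check compares the distribution of items across content domains with the test blueprint. A minimal sketch with hypothetical ratings and blueprint weights:

```python
# Illustrative only; the article surveys several indices, not necessarily these.

# Item relevance: item-level content validity index (I-CVI) = proportion of
# expert judges rating the item relevant (3 or 4 on a 1-4 relevance scale).
ratings = [4, 3, 4, 2, 4]                      # hypothetical ratings from five judges
i_cvi = sum(r >= 3 for r in ratings) / len(ratings)

# Representativeness: compare the observed share of items per content domain
# with the blueprint's intended weights.
blueprint = {"algebra": 0.40, "geometry": 0.35, "statistics": 0.25}   # hypothetical
item_counts = {"algebra": 18, "geometry": 12, "statistics": 10}       # hypothetical
total_items = sum(item_counts.values())

print(f"I-CVI = {i_cvi:.2f}")                  # 0.80
for domain, weight in blueprint.items():
    observed = item_counts[domain] / total_items
    print(f"{domain}: intended {weight:.2f}, observed {observed:.2f}")
```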
Camilla M. McMahon; Maryellen Brunson McClain; Savannah Wells; Sophia Thompson; Jeffrey D. Shahidullah – Journal of Autism and Developmental Disorders, 2025
Purpose: The goal of the current study was to conduct a substantive validity review of four autism knowledge assessments with prior psychometric support (Gillespie-Lynch in J Autism and Dev Disord 45(8):2553-2566, 2015; Harrison in J Autism and Dev Disord 47(10):3281-3295, 2017; McClain in J Autism and Dev Disord 50(3):998-1006, 2020; McMahon…
Descriptors: Measures (Individuals), Psychometrics, Test Items, Accuracy
Christopher J. Anthony; Stephen N. Elliott – School Mental Health, 2025
Stress is a complex construct that is related to resilience and general health starting in childhood. Despite its importance for student health and well-being, there are few measures of stress designed for school-based applications. In this study, we developed and initially validated a Stress Indicators Scale using five samples of teachers,…
Descriptors: Test Construction, Stress Variables, Test Validity, Test Items
Anna Planas-Lladó; Xavier Úcar – American Journal of Evaluation, 2024
Empowerment is a concept that has been used increasingly in recent years. However, little research has been undertaken into how empowerment can be evaluated, particularly in the case of young people. The aim of this article is to present an inventory of dimensions and indicators of youth empowerment. The article describes the various phases in…
Descriptors: Youth, Empowerment, Test Construction, Test Validity
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses information from the numbers of correct and wrong answers, which becomes the basis for calculating the expected response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
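The snippet above does not reproduce the exact statistic, so the sketch below shows only the general idea under one plausible reading: the expected count for each distracter is the item's total number of wrong answers divided evenly among its distracters, and the usual chi-square goodness-of-fit form is applied to the observed selection counts. The assumption about expected values is an illustrative choice, not the paper's formula.

```python
# Hypothetical sketch of a chi-square check on distracter response frequencies.
# Assumption (not the paper's exact formula): expected counts split the total
# number of wrong answers evenly across the item's distracters.

def distracter_chi_square(distracter_counts):
    """distracter_counts: observed selection counts for each wrong option."""
    total_wrong = sum(distracter_counts)
    k = len(distracter_counts)
    expected = total_wrong / k  # even split under the null of equally attractive distracters
    chi_sq = sum((obs - expected) ** 2 / expected for obs in distracter_counts)
    return chi_sq, k - 1        # statistic and degrees of freedom

# Example: a 4-option item answered by 100 examinees, 40 correct and 60 wrong.
stat, df = distracter_chi_square([30, 20, 10])
print(f"chi-square = {stat:.2f} on {df} df")
```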
Ali Orhan; Inan Tekin; Sedat Sen – International Journal of Assessment Tools in Education, 2025
This study aimed to translate and adapt the Computational Thinking Multidimensional Test (CTMT), developed by Kang et al. (2023), into Turkish and to investigate its psychometric qualities with Turkish university students. Following the translation procedures of the CTMT with 12 multiple-choice questions developed based on real-life…
Descriptors: Cognitive Tests, Thinking Skills, Computation, Test Validity
Tia M. Fechter; Heeyeon Yoon – Language Testing, 2024
This study evaluated the efficacy of two proposed methods in an operational standard-setting study conducted for a high-stakes language proficiency test of the U.S. government. The goal was to seek low-cost modifications to the existing Yes/No Angoff method to increase the validity and reliability of the recommended cut scores using a convergent…
Descriptors: Standard Setting, Language Proficiency, Language Tests, Evaluation Methods
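For context on the baseline method named in the entry above: in a conventional Yes/No Angoff procedure, each panelist judges item by item whether a minimally competent examinee would answer correctly; a panelist's implied cut score is the number of "yes" judgments, and the panel's recommended cut is usually the mean across panelists. The sketch below shows only that baseline computation, not the study's proposed low-cost modifications.

```python
# Minimal sketch of the baseline Yes/No Angoff cut-score computation.
# Each inner list holds one panelist's yes/no (1/0) judgments across items.
judgments = [
    [1, 1, 0, 1, 0, 1, 1, 0],  # panelist A
    [1, 0, 0, 1, 1, 1, 1, 0],  # panelist B
    [1, 1, 1, 1, 0, 1, 0, 0],  # panelist C
]

panelist_cuts = [sum(row) for row in judgments]      # items each panelist judged answerable
panel_cut = sum(panelist_cuts) / len(panelist_cuts)  # recommended raw cut score

print(panelist_cuts, panel_cut)  # [5, 5, 5] 5.0
```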
Sam von Gillern; Chad Rose; Amy Hutchison – British Journal of Educational Technology, 2024
As teachers are purveyors of digital citizenship and their perspectives influence classroom practice, it is important to understand teachers' views on digital citizenship. This study establishes the Teachers' Perceptions of Digital Citizenship Scale (T-PODS) as a survey instrument for scholars to investigate educators' views on digital citizenship…
Descriptors: Citizenship, Digital Literacy, Teacher Attitudes, Test Items
Ntumi, Simon; Agbenyo, Sheilla; Bulala, Tapela – Shanlax International Journal of Education, 2023
There is no need or point in testing the knowledge, attributes, traits, behaviours, or abilities of an individual if the information obtained from the test is inaccurate. However, by and large, it seems the estimation of psychometric properties of test items in classrooms has been completely ignored or is otherwise dying slowly in most testing environments. In…
Descriptors: Psychometrics, Accuracy, Test Validity, Factor Analysis
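The "psychometric properties of test items" mentioned in the entry above typically include, at minimum, item difficulty (proportion correct) and item discrimination. The sketch below computes both from a small scored response matrix using a simple upper/lower-group discrimination index; it is a generic classical test theory illustration, not the authors' procedure.

```python
# Generic classical item analysis sketch (not the authors' procedure):
# difficulty = proportion correct; discrimination = upper-group minus
# lower-group proportion correct, using top and bottom thirds by total score.

scores = [  # rows = examinees, columns = items (1 = correct, 0 = wrong)
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]

n_items = len(scores[0])
difficulty = [sum(row[j] for row in scores) / len(scores) for j in range(n_items)]

ranked = sorted(scores, key=sum, reverse=True)
third = max(1, len(scores) // 3)
upper, lower = ranked[:third], ranked[-third:]
discrimination = [
    sum(r[j] for r in upper) / len(upper) - sum(r[j] for r in lower) / len(lower)
    for j in range(n_items)
]

print("difficulty:", difficulty)
print("discrimination:", discrimination)
```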
Collin Shepley; Amanda Leigh Duncan; Anthony P. Setari – Journal of Early Intervention, 2025
For teachers providing Part B services under the Individuals with Disabilities Education Act, the provision of progress monitoring within publicly funded early childhood classrooms is legally required, supported by empirical research, and recommended by early childhood professional organizations. Despite the widespread recognition of progress…
Descriptors: Progress Monitoring, Measures (Individuals), Test Construction, Test Validity
Atakan Yalcin; Cennet Sanli; Adnan Pinar – Journal of Theoretical Educational Science, 2025
This study aimed to develop a test to measure university students' spatial thinking skills. The research was conducted using a survey design, with a sample of 260 undergraduate students from geography teaching and geography departments. GIS software was used to incorporate maps and satellite images, enhancing the spatial representation in the…
Descriptors: Spatial Ability, Thinking Skills, Geography, Undergraduate Students
Sukru Murat Cebeci; Selcuk Acar – Journal of Creative Behavior, 2025
This study presents the Cebeci Test of Creativity (CTC), a novel computerized assessment tool designed to address the limitations of traditional open-ended paper-and-pencil creativity tests, particularly the challenges associated with their administration and manual scoring. In this…
Descriptors: Creativity, Creativity Tests, Test Construction, Test Validity
Rodrigo Moreta-Herrera; Jacqueline Regatto-Bonifaz; Víctor Viteri-Miranda; María Gorety Rodríguez-Vieira; Giancarlo Magro-Lazo; Jose A. Rodas; Sergio Dominguez-Lara – Journal of Psychoeducational Assessment, 2025
Objective: Analyze the validity evidence for scores on the Academic Procrastination Scale (APS), its measurement equivalence based on nationality, the reliability of its scores, and its validity in relation to other variables in university students from Ecuador, Venezuela, and Peru. Method: This paper involves a quantitative, descriptive,…
Descriptors: Measures (Individuals), Time Management, College Students, Foreign Countries
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
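Because the snippet above only names the approach, the sketch below is a heavily simplified, hypothetical illustration of generating an MCQ with reasoning-based distractor explanations and then running an answer-consistency check. The `call_llm` parameter is a placeholder for whatever model client is available, and the prompt wording and JSON contract are assumptions for illustration, not the authors' pipeline.

```python
import json

def build_prompt(passage: str, n_distractors: int = 3) -> str:
    # Ask for an answer, distractors, and an explanation of each distractor's
    # plausibility, returned as JSON so consistency can be checked automatically.
    return (
        "Write one multiple-choice question about the passage below.\n"
        f"Provide the correct answer, {n_distractors} distractors, and a short "
        "explanation of why each distractor is plausible but wrong.\n"
        'Return JSON: {"question": ..., "answer": ..., '
        '"distractors": [{"text": ..., "explanation": ...}]}\n\n'
        f"Passage:\n{passage}"
    )

def is_consistent(item: dict) -> bool:
    # Basic answer-consistency check: the keyed answer must not also appear
    # among the generated distractors.
    distractor_texts = {d["text"].strip().lower() for d in item["distractors"]}
    return item["answer"].strip().lower() not in distractor_texts

def generate_mcq(passage: str, call_llm) -> dict:
    # call_llm is a placeholder: any function mapping a prompt string to a
    # JSON string in the format requested by build_prompt.
    item = json.loads(call_llm(build_prompt(passage)))
    if not is_consistent(item):
        raise ValueError("Generated item failed the answer-consistency check.")
    return item
```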