Showing 1 to 15 of 500 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when administered in different positions on a test. Most previous research has investigated IPE under linear testing; research on IPE under adaptive testing is lacking. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
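As background, one common way the IPE literature parameterizes a position effect (offered here as context, not necessarily the model used in this article) is as a drift in item difficulty with the item's position k on the form:

$$P(X_{ij} = 1 \mid \theta_i) = \frac{\exp\big(\theta_i - b_j - \delta\,(k - 1)\big)}{1 + \exp\big(\theta_i - b_j - \delta\,(k - 1)\big)}$$

Here $\theta_i$ is ability, $b_j$ the item's baseline difficulty, and $\delta$ the per-position shift; $\delta > 0$ makes an item effectively harder when administered later (e.g., through fatigue), and a nonzero $\delta$ contradicts the assumption that item parameters are invariant to where the item appears.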
Peer reviewed
Kylie Gorney; Sandip Sinharay – Educational and Psychological Measurement, 2025
Test-takers, policymakers, teachers, and institutions are increasingly demanding that testing programs provide more detailed feedback regarding test performance. As a result, there has been a growing interest in the reporting of subscores that potentially provide such detailed feedback. Haberman developed a method based on classical test theory…
Descriptors: Scores, Test Theory, Test Items, Testing
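Haberman's criterion, as it is usually summarized in the subscore literature (a textbook gloss, not a result from this article), compares proportional reductions in mean squared error (PRMSE) for predicting the true subscore $\tau_s$ from the observed subscore $s$ versus from the total score $x$:

$$\mathrm{PRMSE}_s = \rho^2(s, \tau_s), \qquad \mathrm{PRMSE}_x = \rho^2(x, \tau_s)$$

The first quantity equals the subscore's reliability; the subscore is deemed to have added value, and thus to be worth reporting, only when $\mathrm{PRMSE}_s > \mathrm{PRMSE}_x$.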
Peer reviewed
Xi Wang; Catherine Welch – Journal of Educational Measurement, 2025
This study extends prior research on adaptive testing by examining the performance of item calibration methods in the context of multidimensional multistage tests with within-item multidimensionality. Building on the adaptive module-level approach, where test-takers proceed through customized modules based on their initial performance, this…
Descriptors: Test Items, Adaptive Testing, Testing, Computer Simulation
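For readers unfamiliar with the term, "within-item multidimensionality" means a single item loads on more than one latent dimension. In the compensatory multidimensional 2PL commonly used in such simulation studies (stated here as background, not as this study's exact model), item j's response probability is

$$P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\big({-}(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)\big)}$$

and an item is within-item multidimensional when its discrimination vector $\mathbf{a}_j$ has two or more nonzero entries, so calibration must recover the loadings on all dimensions jointly.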
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. The major updates change the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, manually creating questions is time-consuming and challenging for teachers, so there is notable demand for Automatic Question Generation (AQG) systems. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Rayne Bozeman; Robyn K. Mallett; Linas Mitchell; R. Scott Tindale – Active Learning in Higher Education, 2024
Two-phase testing assesses individual performance (phase 1) and then allows collaborative learning within small groups (phase 2). While groups typically outperform individuals, less is known about the social decision schemes that influence member collaboration. In a classroom setting, we compared individual and group performance on a standard test…
Descriptors: Testing, Group Testing, Cooperative Learning, Learning Experience
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Peer reviewed
Jila Niknejad; Margaret Bayer – International Journal of Mathematical Education in Science and Technology, 2025
In Spring 2020, redesigning online assessments to preserve integrity became a priority for many educators. Many of us found methods to proctor examinations using Zoom and proctoring software, but such examinations pose their own issues. To reduce technical difficulties and cost, many Zoom-proctored examination sessions were shortened;…
Descriptors: Mathematics Instruction, Mathematics Tests, Computer Assisted Testing, Computer Software
Robert J. Marzano; Bridget Cahill; Jeni Gotto; Brian J. Kosena; Michael Lynch; Lucy Pearson – Solution Tree, 2025
In "Test-Specific Thinking," the authors provide recommended practices, methods, and means for educators to implement structural schemas into teaching, helping students better prepare for tests and formulate stronger responses to certain question frames. Armed with a better understanding of how tests are designed, teachers will increase…
Descriptors: English Instruction, Language Arts, Mathematics Tests, Test Construction
Peer reviewed
Elkhatat, Ahmed M. – International Journal for Educational Integrity, 2022
Examinations form part of the assessment processes that underpin benchmarking of individual educational progress, and must therefore meet standards of credibility, reliability, and transparency to promote learning outcomes and ensure academic integrity. A randomly selected question examination (RSQE) is considered to be an…
Descriptors: Integrity, Monte Carlo Methods, Credibility, Reliability
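As a rough illustration of how a randomly selected question examination can be assembled, the sketch below draws a per-student random form from a question bank, stratified by topic so that every form follows the same blueprint. The bank, topics, and counts are hypothetical, not taken from the article.

```python
# Minimal sketch of random test-form assembly: each student receives a
# random draw from a question bank, stratified by topic so all forms
# follow the same blueprint. Bank contents and counts are hypothetical.
import random

bank = {
    "kinematics": [f"kin_q{i}" for i in range(1, 21)],
    "dynamics":   [f"dyn_q{i}" for i in range(1, 21)],
    "energy":     [f"ene_q{i}" for i in range(1, 21)],
}
blueprint = {"kinematics": 4, "dynamics": 4, "energy": 2}  # items per topic

def assemble_form(seed: int) -> list[str]:
    """Draw one randomized form; seeding per student keeps it reproducible."""
    rng = random.Random(seed)
    form = []
    for topic, n in blueprint.items():
        form.extend(rng.sample(bank[topic], n))
    rng.shuffle(form)  # randomize presentation order as well
    return form

print(assemble_form(seed=42))
```

Seeding the generator per student makes each generated form reproducible for later review or dispute resolution.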
Peer reviewed
Ozsoy, Seyma Nur; Kilmen, Sevilay – International Journal of Assessment Tools in Education, 2023
In this study, kernel test equating methods were compared under the NEAT (nonequivalent groups with anchor test) and NEC (nonequivalent groups with covariates) designs. In the NEAT design, kernel post-stratification and chain equating methods were compared using optimal and large bandwidths. In the NEC design, gender and/or computer/tablet use was treated as a covariate, and kernel test equating methods were…
Descriptors: Equated Scores, Testing, Test Items, Statistical Analysis
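For context, kernel equating continuizes the discrete score distributions before equipercentile equating. In the standard Gaussian-kernel formulation (von Davier, Holland, and Thayer), test X with score points $x_j$, probabilities $r_j$, mean $\mu_X$, variance $\sigma_X^2$, and bandwidth $h_X$ is continuized as

$$F_{h_X}(x) = \sum_j r_j\, \Phi\!\left(\frac{x - a_X x_j - (1 - a_X)\mu_X}{a_X h_X}\right), \qquad a_X^2 = \frac{\sigma_X^2}{\sigma_X^2 + h_X^2}$$

and the equating function is $e_Y(x) = F_{h_Y}^{-1}(F_{h_X}(x))$. The bandwidth is exactly what this study varies: an "optimal" $h_X$ tracks the discrete distribution closely, while a very large $h_X$ drives the method toward linear equating.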
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessment tests that are sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate for addressing fairness and equity concerns. Building on research in culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
Peer reviewed
Bilal Ghanem; Alona Fyshe – International Educational Data Mining Society, 2024
Multiple choice questions (MCQs) are a common way to assess reading comprehension. Every MCQ needs a set of distractor answers that are incorrect, but plausible enough to test student knowledge. However, good distractors are hard to create. Distractor generation (DG) models have been proposed, and their performance is typically evaluated using…
Descriptors: Multiple Choice Tests, Reading Comprehension, Test Items, Testing
Peer reviewed
Benjamin A. Motz; Anna L. Chinni; Audrey G. Barriball; Danielle S. McNamara – Grantee Submission, 2025
When learning with self-testing alone, will a learner make inferences between the tested items? This study examines whether self-testing's benefits extend beyond isolated facts to support broader connections between the facts. Comparing self-testing to self-explanation (a strategy known to facilitate inferential learning), we find that while…
Descriptors: Inferences, Testing, Test Items, Self Evaluation (Individuals)
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses information from the numbers of correct and wrong answers, which forms the basis for calculating the expected response frequency for each distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
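The expected-value logic this abstract describes can be illustrated with a standard chi-square goodness-of-fit test. Below is a minimal sketch under the simplifying assumption that wrong answers are expected to spread evenly over the k distracters; the item counts are invented, and this is not necessarily the authors' exact statistic.

```python
# Chi-square test for equal response frequencies across the distracters
# of one multiple-choice item. Expected counts split the wrong answers
# evenly over the distracters (simplifying assumption); the observed
# counts below are invented for illustration.
from scipy.stats import chisquare

counts = {"A": 58, "B": 19, "C": 8, "D": 15}  # option -> response count
key = "A"                                     # correct answer

observed = [n for opt, n in counts.items() if opt != key]
expected = [sum(observed) / len(observed)] * len(observed)

stat, p = chisquare(f_obs=observed, f_exp=expected)  # df = k - 1
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```

A significant result suggests the distracters are not functioning equally, e.g., one option attracts a disproportionate share of the wrong answers.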