Publication Date
In 2025: 0
Since 2024: 13
Since 2021 (last 5 years): 37
Since 2016 (last 10 years): 62
Since 2006 (last 20 years): 122
Showing 1 to 15 of 122 results
Peer reviewed | Direct link
Li Zhao; Junjie Peng; Shiqi Ke; Kang Lee – Educational Psychology Review, 2024
Unproctored and teacher-proctored exams have been widely used to prevent cheating at many universities worldwide. However, no empirical studies have directly compared their effectiveness in promoting academic integrity in actual exams. To address this significant gap, in four preregistered field studies, we examined the effectiveness of…
Descriptors: Supervision, Tests, Testing, Integrity
Peer reviewed | PDF on ERIC: Download full text
Jing Miao; Yi Cao; Michael E. Walker – ETS Research Report Series, 2024
Studies of test score comparability have been conducted at different stages in the history of testing to ensure that test results carry the same meaning regardless of test conditions. The expansion of at-home testing via remote proctoring sparked another round of interest. This study uses data from three licensure tests to assess potential mode…
Descriptors: Testing, Test Format, Computer Assisted Testing, Home Study
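Note: the Miao, Cao, and Walker report above concerns score comparability between test-center and remotely proctored at-home administrations. As a hedged illustration only (the report's actual analyses are not reproduced here), one common way to summarize such a mode effect is a standardized mean difference between the two delivery conditions; the function name, data, and pooled-SD formula below are assumptions for the sketch.

```python
# Hedged sketch: quantifying a test-center vs. at-home mode effect as a
# standardized mean difference with a pooled standard deviation. The array
# names and simulated scores are illustrative assumptions, not the
# report's data or method.
import numpy as np

def mode_effect_size(center_scores, home_scores):
    """Standardized mean difference (at-home minus test-center) using a pooled SD."""
    center = np.asarray(center_scores, dtype=float)
    home = np.asarray(home_scores, dtype=float)
    n_c, n_h = len(center), len(home)
    pooled_var = ((n_c - 1) * center.var(ddof=1) + (n_h - 1) * home.var(ddof=1)) / (n_c + n_h - 2)
    return (home.mean() - center.mean()) / np.sqrt(pooled_var)

# Example with simulated scores for one hypothetical licensure test:
rng = np.random.default_rng(0)
d = mode_effect_size(rng.normal(500, 100, 2000), rng.normal(498, 102, 1500))
print(round(d, 3))
```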
Peer reviewed | PDF on ERIC: Download full text
Brian Rempel; Elizabeth McGinitie; Maria Dirks – Canadian Journal for the Scholarship of Teaching and Learning, 2023
Two-stage testing is a form of collaborative assessment that creates an active learning environment during test taking. In two-stage testing, students first complete an exam individually, and then complete a subset of the same questions as part of a learning team with the ultimate exam score being a weighted average of the individual and team…
Descriptors: College Freshmen, Student Attitudes, Cooperative Learning, Testing
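Note: the two-stage testing entry above describes a final exam score computed as a weighted average of the individual and team attempts. A minimal sketch of that scoring rule follows; the 85/15 weighting is an illustrative assumption, not the weighting reported in the study.

```python
# Hedged sketch of the two-stage scoring rule described above: the final
# exam score is a weighted average of the individual attempt and the team
# attempt. The default 85/15 weights are illustrative only.
def two_stage_score(individual, team, individual_weight=0.85):
    """Weighted average of individual and team exam scores (both on a 0-100 scale)."""
    return individual_weight * individual + (1 - individual_weight) * team

print(two_stage_score(72.0, 90.0))  # 74.7
```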
Peer reviewed | Direct link
Yavuz Akbulut – European Journal of Education, 2024
The testing effect refers to the gains in learning and retention that result from taking practice tests before the final test. Understanding the conditions under which practice tests improve learning is crucial, so four experiments were conducted with a total of 438 undergraduate students in Turkey. In the first study, students who took graded…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Testing
Peer reviewed | PDF on ERIC: Download full text
Semih Asiret; Seçil Ömür Sünbül – International Journal of Psychology and Educational Studies, 2023
This study aimed to examine the effect of missing data of different patterns and sizes on test equating methods under the NEAT design across different factors. To this end, factors such as sample size, average difficulty level difference between the test forms, difference between the ability distributions,…
Descriptors: Research Problems, Data, Test Items, Equated Scores
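Note: the Asiret and Ömür Sünbül entry above describes a simulation that crosses several factors (sample size, form difficulty difference, ability-distribution difference, and missing-data pattern and size) under the NEAT design. The sketch below only enumerates such a fully crossed condition grid; the factor levels are illustrative guesses, and none of the equating methods themselves are implemented.

```python
# Hedged sketch: enumerating a fully crossed simulation design like the one
# described above. Factor names and levels are illustrative assumptions,
# not those used in the study.
from itertools import product

factors = {
    "sample_size": [500, 1000, 3000],
    "difficulty_diff": [0.0, 0.5, 1.0],     # mean difficulty gap between forms
    "ability_mean_diff": [0.0, 0.25, 0.5],  # ability difference between NEAT groups
    "missing_pattern": ["MCAR", "MNAR"],
    "missing_rate": [0.05, 0.10, 0.20],
}

conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(conditions), "simulation conditions")  # 3 * 3 * 3 * 2 * 3 = 162
```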
Peer reviewed | Direct link
Susan K. Johnsen – Gifted Child Today, 2024
The author provides a checklist for educators who are selecting technically adequate tests for identifying and referring students for gifted education services and programs. The checklist includes questions related to how the test was normed and to reliability and validity studies, as well as questions related to types of scores, administration, and…
Descriptors: Test Selection, Academically Gifted, Gifted Education, Test Validity
Peer reviewed | Direct link
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Peer reviewed | Direct link
Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Peer reviewed | Direct link
Yang, Chunliang; Li, Jiaojiao; Zhao, Wenbo; Luo, Liang; Shanks, David R. – Educational Psychology Review, 2023
Practice testing is a powerful tool to consolidate long-term retention of studied information, facilitate subsequent learning of new information, and foster knowledge transfer. However, practitioners frequently express the concern that tests are anxiety-inducing and that their employment in the classroom should be minimized. The current review…
Descriptors: Tests, Test Format, Testing, Test Wiseness
Peer reviewed | PDF on ERIC: Download full text
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
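Note: the Laukaityte and Wiberg entry above evaluates equating methods by their bias and standard error of equating (SEE). As a hedged sketch, these two criteria are commonly computed per score point from replicated equatings against a criterion equating function: bias as the mean deviation and SEE as the standard deviation across replications. The inputs below are simulated placeholders, and the study's kernel and circle-arc equating functions are not implemented here.

```python
# Hedged sketch of the two evaluation criteria named above: equating bias
# and the standard error of equating (SEE), computed at each score point
# from replicated equatings against a criterion equating function.
import numpy as np

def bias_and_see(estimated_equatings, criterion):
    """estimated_equatings: (replications, score points); criterion: (score points,)."""
    est = np.asarray(estimated_equatings, dtype=float)
    crit = np.asarray(criterion, dtype=float)
    bias = est.mean(axis=0) - crit   # mean error at each score point
    see = est.std(axis=0, ddof=1)    # SD across replications at each score point
    return bias, see

# Example with 100 fake replications over 41 score points:
rng = np.random.default_rng(1)
crit = np.linspace(0, 40, 41)
reps = crit + rng.normal(0.1, 0.5, size=(100, 41))
bias, see = bias_and_see(reps, crit)
print(bias.mean().round(2), see.mean().round(2))
```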
Peer reviewed | PDF on ERIC: Download full text
Blair Lehman; Jesse R. Sparks; Jonathan Steinberg – ETS Research Report Series, 2024
Over the last 20 years, many methods have been proposed to use process data (e.g., response time) to detect changes in engagement during the test-taking process. However, many of these methods were developed and evaluated in highly similar testing contexts: 30 or more single-select multiple-choice items presented in a linear, fixed sequence in…
Descriptors: National Competency Tests, Secondary School Mathematics, Secondary School Students, Mathematics Tests
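Note: the Lehman, Sparks, and Steinberg entry above concerns process-data methods (e.g., response time) for detecting changes in test-taking engagement. One widely used family of such methods flags rapid-guessing responses whose times fall below an item-level threshold. The sketch below uses a fixed fraction of each item's median response time; the 10% fraction and the data layout are assumptions rather than the report's method.

```python
# Hedged sketch: flagging likely rapid guesses with an item-level response
# time threshold set at a fixed fraction of each item's median time. The
# threshold fraction and simulated data are illustrative assumptions.
import numpy as np

def flag_rapid_guesses(response_times, threshold_fraction=0.10):
    """response_times: (examinees, items). Returns a boolean disengagement-flag matrix."""
    rt = np.asarray(response_times, dtype=float)
    thresholds = threshold_fraction * np.median(rt, axis=0)  # one threshold per item
    return rt < thresholds

rng = np.random.default_rng(2)
rt = rng.lognormal(mean=3.5, sigma=0.6, size=(200, 30))  # response times in seconds
flags = flag_rapid_guesses(rt)
print(f"{flags.mean():.1%} of item responses flagged as rapid guesses")
```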
Peer reviewed | Direct link
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed | PDF on ERIC: Download full text
Samsa, Gregory – Journal of Curriculum and Teaching, 2021
Objective: Our master's program in biostatistics requires a qualifying examination (QE). A curriculum review led us to question whether to replace a closed-book format with an open-book one. Our goal was to improve the QE. Methods: This is a case study and commentary, where we describe the evolution of the QE, both in its goals and its content.…
Descriptors: Testing, Cooperative Learning, Evaluation Methods, Test Format
Peer reviewed | Direct link
Choi, Heeseon; Lee, Hee Seung – Educational Psychology Review, 2020
Recent studies suggest that testing on prior material enhances subsequent learning of new material. Although this forward testing effect has received extensive empirical support, it is not yet clear how testing facilitates subsequent learning. One possible explanation suggests that interim testing informs learners about the format of an upcoming…
Descriptors: Testing, Test Format, Test Wiseness, Learning Strategies
Peer reviewed | Direct link
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (i.e., multiple-choice questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing