Showing all 11 results
Peer reviewed
Bolat, Yusuf Islam; Tas, Nurullah – Education and Information Technologies, 2023
The purpose of this meta-analysis was to examine the effects of Gamified-Assessment Tools (GAT) used in formal educational settings on student academic achievement. We used PRISMA systematic procedures to screen the articles across Web of Science, ERIC, Scopus, Pubmed, and PsycArticles databases. We identified 23 independent results from 17…
Descriptors: Gamification, Student Evaluation, Academic Achievement, Computer Assisted Testing
Peer reviewed
Rios, Joseph A.; Deng, Jiayi – Large-scale Assessments in Education, 2021
Background: In testing contexts that are predominately concerned with power, rapid guessing (RG) has the potential to undermine the validity of inferences made from educational assessments, as such responses are unreflective of the knowledge, skills, and abilities assessed. Given this concern, practitioners/researchers have utilized a multitude of…
Descriptors: Test Wiseness, Guessing (Tests), Reaction Time, Computer Assisted Testing
Yun, Jiyeo – English Teaching, 2023
Studies on automatic scoring systems in writing assessments have also evaluated the relationship between human and machine scores for the reliability of automated essay scoring systems. This study investigated the magnitudes of indices for inter-rater agreement and discrepancy, especially regarding human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
Peer reviewed
Mertens, Ute; Finn, Bridgid; Lindner, Marlit Annalena – Journal of Educational Psychology, 2022
Feedback is one of the most important factors for successful learning. Contemporary computer-based learning and testing environments allow the implementation of automated feedback in a simple and efficient manner. Previous meta-analyses suggest that different types of feedback are not equally effective. This heterogeneity might depend on learner…
Descriptors: Computer Assisted Testing, Feedback (Response), Electronic Learning, Network Analysis
Peer reviewed
Nalbantoglu Yilmaz, Funda – Eurasian Journal of Educational Research, 2021
Purpose: With improvements in computer technology and the growing use of computer-administered testing, and given the advantages of computer-based test administration, it becomes necessary to compare the psychometric characteristics of paper-and-pencil and computer-based tests, as well as students' success on them. In computer-based tests,…
Descriptors: Computer Assisted Testing, Test Format, Paper (Material), Computer Literacy
Peer reviewed
Chang, Frederic Tao-Yi; Li, Mao-Neng Fred – International Journal of Educational Technology, 2019
The IF-AT (Immediate Feedback Assessment Technique, also called answer-until-correct testing) is a revised form of traditional multiple-choice testing. It was created to provide instant feedback during testing and to tap into learners' partial knowledge. The purposes of this meta-analytic study were to compare the effectiveness of the IF-AT with the traditional multiple-choice test; to assess…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Feedback (Response), Answer Sheets
Peer reviewed
Avery, Nick; Marsden, Emma – Studies in Second Language Acquisition, 2019
Despite extensive theoretical and empirical research, we do not have estimates of the magnitude of sensitivity to grammatical information during L2 online processing. This is largely due to reliance on null hypothesis significance testing (Plonsky, 2015). The current meta-analysis draws on data from one elicitation technique, self-paced reading,…
Descriptors: Meta Analysis, Second Language Learning, Morphology (Languages), Syntax
Peer reviewed
Landry, Oriane; Al-Taie, Shems – Journal of Autism and Developmental Disorders, 2016
We conducted a meta-analysis of 31 studies, spanning 30 years, utilizing the WCST in participants with autism. We calculated Cohen's d effect sizes for four measures of performance: sets completed, perseveration, failure-to-maintain-set, and non-perseverative errors. The average weighted effect size ranged from 0.30 to 0.74 for each measure, all…
Descriptors: Executive Function, Cognitive Tests, Cognitive Ability, Abstract Reasoning
Peer reviewed
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study) the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
Peer reviewed
Topping, K. J.; Samuels, J.; Paul, T. – School Effectiveness and School Improvement, 2007
This study elaborates on the "what works?" question by exploring the effects of variability in program implementation quality on achievement. In particular, it investigated the effects of computerized assessment of reading on achievement, analyzing data on 51,000 students in Grades 1-12 who read over 3 million books. When minimum implementation…
Descriptors: Program Implementation, Achievement Gains, Reading Achievement, Independent Reading
Bergstrom, Betty A. – 1992
This paper reports on existing studies and uses meta-analysis to compare and synthesize the results of 20 studies from 8 research reports comparing the ability measure equivalence of computer adaptive tests (CAT) and conventional paper-and-pencil tests. Using the research synthesis techniques developed by Hedges and Olkin (1985), it is possible to…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing