Showing all 13 results
Peer reviewed
Colwell, Nicole Makas – Journal of Education and Training Studies, 2013
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Descriptors: Test Anxiety, Computer Assisted Testing, Evaluation Methods, Standardized Tests
Peer reviewed
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study), the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
Peer reviewed
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2007
This study conducted a meta-analysis of computer-based and paper-and-pencil administration mode effects on K-12 student mathematics tests. Both initial and final results based on fixed- and random-effects models are presented. The results based on the final selected studies with homogeneous effect sizes show that the administration mode had no…
Descriptors: Meta Analysis, Mathematics Tests, Elementary Secondary Education, Effect Size
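The fixed- and random-effects pooling mentioned in this abstract is standard meta-analytic machinery; a minimal sketch (illustrative only, not the authors' code, assuming the common DerSimonian-Laird estimator for between-study variance):

```python
def pool_effect_sizes(effects, variances):
    """Pool per-study effect sizes under fixed- and random-effects models.

    effects:   list of per-study standardized mean differences
    variances: list of their sampling variances
    Returns (fixed_effect, random_effect, Q), where Q is the
    homogeneity statistic used to screen for heterogeneous studies.
    """
    # Fixed-effect model: inverse-variance weights
    w = [1.0 / v for v in variances]
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)

    # Cochran's Q homogeneity statistic
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))

    # DerSimonian-Laird between-study variance estimate (tau^2)
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects model: add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    random_eff = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    return fixed, random_eff, q
```

With homogeneous studies Q is small, tau² estimates zero, and the two models give the same pooled estimate, which is why such syntheses screen for homogeneous effect sizes before reporting final results.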
Peer reviewed
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis
Kim, Jong-Pil – 1999
This study was conducted to investigate the equivalence of scores from paper-and-pencil (P&P) tests and computerized tests (CTs) through meta-analysis of primary studies using both kinds of tests. For this synthesis, 51 primary studies were selected, resulting in 226 effect sizes. The first synthesis was a typical meta-analysis that treated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Effect Size, Meta Analysis
Peer reviewed
Breland, Hunter; Lee, Yong-Won – Applied Measurement in Education, 2007
The objective of the present investigation was to examine the comparability of writing prompts for different gender groups in the context of the computer-based Test of English as a Foreign Language™ (TOEFL®-CBT). A total of 87 prompts administered from July 1998 through March 2000 were analyzed. An extended version of logistic regression for…
Descriptors: Learning Theories, Writing Evaluation, Writing Tests, Second Language Learning
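The "extended version of logistic regression" named in this abstract is a common differential-item-functioning (DIF) screening model; a hypothetical sketch of its core (the coefficient names and data layout here are illustrative assumptions, not the authors' specification):

```python
import math

def dif_logit(ability, group, b0, b_abil, b_group, b_inter):
    """Log-odds of success on a prompt as a function of ability,
    group membership (0/1), and their interaction.
    b_group != 0  -> uniform DIF; b_inter != 0 -> non-uniform DIF."""
    z = b0 + b_abil * ability + b_group * group + b_inter * ability * group
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(data, b0, b_abil, b_group, b_inter):
    """Sum of Bernoulli log-likelihoods over (ability, group, score) rows."""
    ll = 0.0
    for ability, group, y in data:
        p = dif_logit(ability, group, b0, b_abil, b_group, b_inter)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll
```

DIF is then flagged by a likelihood-ratio test comparing this full model against a compact model that holds b_group = b_inter = 0.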
Peer reviewed
Pomplun, Mark; Custer, Michael – Applied Measurement in Education, 2005
In this study, we investigated possible context effects when students chose to defer items and answer those items later during a computerized test. In 4 primary school reading tests, 126 items were studied. Logistic regression analyses identified 4 items across 4 grade levels as statistically significant. However, follow-up analyses indicated that…
Descriptors: Psychometrics, Reading Tests, Effect Size, Test Items
Peer reviewed
Pomplun, Mark; Ritchie, Timothy – Journal of Educational Computing Research, 2004
This study investigated the statistical and practical significance of context effects for items randomized within testlets for administration during a series of computerized non-adaptive tests. One hundred and twenty-five items from four primary school reading tests were studied. Logistic regression analyses identified from one to four items for…
Descriptors: Psychometrics, Context Effect, Effect Size, Primary Education
Breland, Hunter; Lee, Yong-Won; Najarian, Michelle; Muraki, Eiji – Educational Testing Service, 2004
This investigation of the comparability of writing assessment prompts was conducted in two phases. In an exploratory Phase I, 47 writing prompts administered in the computer-based Test of English as a Foreign Language™ (TOEFL® CBT) from July through December 1998 were examined. Logistic regression procedures were used to estimate prompt…
Descriptors: Writing Evaluation, Quality Control, Gender Differences, Writing Tests
Peer reviewed
Chen, Shu-Ying; Ankenman, Robert D. – Journal of Educational Measurement, 2004
The purpose of this study was to compare the effects of four item selection rules--(1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN)--with respect to the precision of trait estimation and the…
Descriptors: Test Length, Adaptive Testing, Computer Assisted Testing, Test Selection
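The Fisher information rule (F) compared in this study selects, at each step of the adaptive test, the item most informative at the current ability estimate; a minimal sketch under a two-parameter logistic (2PL) model (the item-bank layout is an illustrative assumption):

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response (a: discrimination, b: difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Maximum-information rule: pick the unused item with the most
    information at the current ability estimate theta_hat."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *item_bank[i]))
```

Under the 2PL model, information peaks where difficulty matches ability (theta = b), so this rule keeps matching item difficulty to the examinee's provisional estimate, in contrast to the completely randomized rule (RN).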
Bergstrom, Betty A. – 1992
This paper reports on existing studies and uses meta-analysis to compare and synthesize the results of 20 studies from 8 research reports comparing the ability-measure equivalence of computer-adaptive tests (CAT) and conventional paper-and-pencil tests. Using the research synthesis techniques developed by Hedges and Olkin (1985), it is possible to…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Randolph, Justus J.; Virnes, Marjo; Jormanainen, Ilkka; Eronen, Pasi J. – Educational Technology & Society, 2006
Although computer-assisted interview tools have much potential, little empirical evidence on the quality and quantity of data generated by these tools has been collected. In this study we compared the effects of using Virre, a computer-assisted self-interview tool, with the effects of using other data collection methods, such as written responding…
Descriptors: Computer Science Education, Effect Size, Data Collection, Computer Assisted Testing
Thompson, Bruce; Melancon, Janet G. – 1990
Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…
Descriptors: Comparative Analysis, Computer Assisted Testing, Correlation, Effect Size
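The point that parametric tests are correlational can be made concrete with a small effect-size conversion sketch (illustrative only, not drawn from the paper):

```python
import math

def t_to_r(t, df):
    """Convert a t statistic to the correlation-based effect size r;
    r**2 is the proportion of variance explained."""
    return math.sqrt(t * t / (t * t + df))

def cohens_d(mean1, mean2, sd_pooled):
    """Standardized mean difference (Cohen's d)."""
    return (mean1 - mean2) / sd_pooled
```

Because r depends on the degrees of freedom as well as t, the same t value carries a different effect size in different sample sizes, which is one reason significance testing alone is limited in informing scientific conclusions.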