Showing 31 to 45 of 57 results
Peer reviewed
Lee, Kathryn S.; Osborne, Randall E.; Hayes, Keith A.; Simoes, Richard A. – Journal of Educational Computing Research, 2008
Minimal research has been conducted contrasting the effectiveness of various testing accommodations for college students diagnosed with ADHD. The current assumption is that these students are best served by extending the time they have to take a test. It is the supposition of these investigators that paced item presentation may be a more…
Descriptors: College Students, Testing Accommodations, Student Attitudes, Computer Assisted Testing
Peer reviewed
Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Heffernan, Neil T. – Journal of Educational Computing Research, 2010
ASSISTments is a web-based math tutor designed to address the need for timely student assessment while simultaneously providing instruction, thereby avoiding lost instruction time that typically occurs during assessment. This article presents a quasi-experiment that evaluates whether ASSISTments use has an effect on improving middle school…
Descriptors: Feedback (Response), Middle School Students, Formative Evaluation, Grade 7
Peer reviewed
Lunz, Mary E.; Bergstrom, Betty – Journal of Educational Computing Research, 1995
Describes a study that was conducted to track the effect of candidate response patterns on a computerized adaptive test. The effect of altering responses on estimated candidate ability, test tailoring, and test precision across segments of adaptive tests and groups of candidates is examined. (Author/LRW)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Response Style (Tests)
Peer reviewed
Graham, Charles R.; Tripp, Tonya; Wentworth, Nancy – Journal of Educational Computing Research, 2009
This study explores the efforts at Brigham Young University to improve preservice candidates' technology integration using the Teacher Work Sample (TWS) as an assessment tool. Baseline data analyzed from 95 TWSs indicated that students were predominantly using technology for productivity and information presentation purposes even though…
Descriptors: Field Instruction, Work Sample Tests, Technology Integration, Educational Technology
Peer reviewed
Powers, Donald E. – Journal of Educational Computing Research, 2001
Tests the hypothesis that the introduction of computer-adaptive testing may help to alleviate test anxiety and diminish the relationship between test anxiety and test performance. Compares a sample of Graduate Record Examinations (GRE) General Test takers who took the computer-adaptive version of the test with another sample who took the…
Descriptors: Comparative Analysis, Computer Assisted Testing, Nonprint Media, Performance
Peer reviewed
Mason, B. Jean; Patry, Marc; Bernstein, Daniel J. – Journal of Educational Computing Research, 2001
Discussion of adapting traditional paper and pencil tests to electronic formats focuses on a study of undergraduates that examined the equivalence between computer-based and traditional tests when the computer testing provided opportunities comparable to paper testing conditions. Results showed no difference between scores from the two test types.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Intermode Differences
Peer reviewed
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Peer reviewed
Smith, Brooke; Caputi, Peter – Journal of Educational Computing Research, 2004
Test equivalence can be evaluated in terms of four aspects: psychometric, behavioral, experiential, and individual differences (i.e., relativity of equivalence) (Honaker, 1988). This study examined the psychometric properties of the Attitude Towards Computerized Assessment Scale (ATCAS) designed to assess one of these criteria, namely,…
Descriptors: Measures (Individuals), Psychometrics, Testing, Factor Analysis
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Jacobs, Ronald L.; And Others – Journal of Educational Computing Research, 1985
This study adapted the Hidden Figures Test for use on PLATO and determined the reliability of the computerized version compared to the paper-and-pencil version. Results indicate the test was successfully adapted with some modifications, and it was judged reliable, although it may be measuring additional constructs. (MBR)
Descriptors: Computer Assisted Testing, Educational Research, Field Dependence Independence, Higher Education
Peer reviewed
Ward, Thomas J., Jr.; And Others – Journal of Educational Computing Research, 1989
Discussion of computer-assisted testing focuses on a study of college students that investigated whether a computerized test which incorporated traditional test taking interfaces had any effect on students' performance, anxiety level, or attitudes toward the computer. Results indicate no difference in performance but a significant difference in…
Descriptors: Academic Achievement, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Pomplun, Mark; Custer, Michael – Journal of Educational Computing Research, 2005
This study investigated the equivalence of scores from computerized and paper-and-pencil formats of a series of K-3 reading screening tests. Concerns about score equivalence on the computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary school students, and teacher versus computer…
Descriptors: Screening Tests, Reading Tests, Family Income, Factor Analysis
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of the verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer-anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Peer reviewed
Applegate, Brooks – Journal of Educational Computing Research, 1993
Describes a study conducted to explore how kindergarten and second-grade students structure and solve geometric analogy problems in a computer-based test, and to compare the results to a paper-and-pencil test format. Use of the Test of Analogical Reasoning in Children is described. (18 references) (LRW)
Descriptors: Academically Gifted, Comparative Analysis, Computer Assisted Testing, Geometric Concepts