Showing 376 to 390 of 514 results
Kelly, P. Adam – 2001
The purpose of this research was to establish, within the constraints of the methods presented, whether the computer is capable of scoring essays in much the same way that human experts rate essays. The investigation attempted to establish what was actually going on within the computer and within the mind of the rater and to describe the degree to…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Essays, Higher Education
Page, Ellis B.; Poggio, John P.; Keith, Timothy Z. – 1997
Most human gradings of essays are holistic, or "overall." Therefore, Project Essay Grade (PEG), an attempt to develop computerized grading of essays, has concentrated most of its research on overall grading. It has successfully simulated human judges. However, since computer grading is less expensive than human grading, PEG has also…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Essays, Evaluators
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles; An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Peer reviewed
Page, Ellis Batten – Journal of Experimental Education, 1994
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
Peer reviewed
Rupp, Andre A. – International Journal of Testing, 2003
Item response theory (IRT) has become one of the most popular scoring frameworks for measurement data. IRT models are used frequently in computerized adaptive testing, cognitively diagnostic assessment, and test equating. This article reviews two of the most popular software packages for IRT model estimation, BILOG-MG (Zimowski, Muraki, Mislevy, &…
Descriptors: Test Items, Adaptive Testing, Item Response Theory, Computer Software
Peer reviewed
Williamson, David M.; Bauer, Malcolm; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.; DeMark, Sarah F. – International Journal of Testing, 2004
In computer-based interactive environments meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a computer system must additionally be able to evoke and interpret observable evidence…
Descriptors: Computer Assisted Testing, Psychometrics, Task Analysis, Performance Based Assessment
Peer reviewed
Segall, Daniel O. – Journal of Educational and Behavioral Statistics, 2004
A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…
Descriptors: Scoring, Item Analysis, Item Response Theory, Adaptive Testing
Peer reviewed
Attali, Yigal – ETS Research Report Series, 2007
Because there is no commonly accepted view of what makes for good writing, automated essay scoring (AES) ideally should be able to accommodate different theoretical positions, certainly at the level of state standards but also perhaps among teachers at the classroom level. This paper presents a practical approach and an interactive computer…
Descriptors: Computer Assisted Testing, Automation, Essay Tests, Scoring
Stocking, Martha L. – 1994
Modern applications of computerized adaptive testing (CAT) are typically grounded in item response theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches may be more demanding for test takers, test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Equated Scores
Rippey, Robert M.; And Others – 1983
This paper describes the implementation of a computer-based approach to scoring open-ended problem lists constructed to evaluate student and practitioner clinical judgment from real or simulated records. Based on 62 previously administered and scored problem lists, the program was written in BASIC for a Heathkit H11A computer (equivalent to DEC…
Descriptors: Branching, Case Records, Clinical Diagnosis, Computer Assisted Testing
Peer reviewed
Lacy, Ed; Marshall, Barbara – Journal of Physical Education, Recreation & Dance, 1984
FITNESSGRAM, a computerized program for scoring and reporting students' physical fitness status based on specific tests, has been a resounding success in the Tulsa, Oklahoma, public schools. The program provides feedback to students and permits evaluation of physical fitness efforts on a school-by-school basis. (PP)
Descriptors: Computer Assisted Testing, Educational Diagnosis, Feedback, Intermediate Grades
Peer reviewed
Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P. – Journal of Educational Measurement, 1997
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
Descriptors: Algorithms, Automation, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Ferguson, Carl L., Jr.; Fuchs, Lynn S. – Journal of Special Education Technology, 1991
Comparison of special education teacher (n=18) and computer-scored curriculum-based measurements (CBM) of spelling found that computer scoring accuracy was significantly higher and more stable. Additionally, high correlations were found between the CBM spelling scores and a standardized test of spelling achievement. (DB)
Descriptors: Academic Achievement, Computer Assisted Testing, Disabilities, Elementary Education
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
Gipps, Caroline V. – Studies in Higher Education, 2005
This paper reviews the role of ICT-based assessment in the light of the growing use of virtual learning environments in universities. Issues of validity, efficiency, type of response, and scoring are addressed. A major area of research is the automated scoring of text. Claims for automated formative assessment are queried, since the feedback of…
Descriptors: Scoring, Evaluation Methods, Feedback, Formative Evaluation