Showing 466 to 480 of 514 results
Kingsbury, G. Gage; Weiss, David J. – 1981
Conventional mastery tests designed to make optimal mastery classifications were compared with fixed-length and variable-length adaptive mastery tests. Comparisons between the testing procedures were made across five content areas in an introductory biology course from tests administered to volunteers. The criterion was the student's standing in…
Descriptors: Achievement Tests, Adaptive Testing, Biology, Comparative Analysis
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
Peer reviewed
Braun, Henry I.; And Others – Journal of Educational Measurement, 1990
The accuracy with which expert systems (ESs) score a new non-multiple-choice free-response test item was investigated, using 734 high school students who were administered an advanced-placement computer science examination. ESs produced scores for 82 percent to 95 percent of the responses and displayed high agreement with a human reader on the…
Descriptors: Advanced Placement, Computer Assisted Testing, Computer Science, Constructed Response
Peer reviewed
PDF available on ERIC
Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin – ETS Research Report Series, 2007
The increasing availability and performance of computer-based testing has prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring speaking prompts from the LanguEdge field test of 2002. We first…
Descriptors: Role, Computer Assisted Testing, Language Proficiency, Oral Language
Braswell, James S.; Jackson, Carol A. – 1995
A new free-response item type for mathematics tests is described. The item type, referred to as the Student-Produced Response (SPR), was first introduced into the Preliminary Scholastic Aptitude Test/National Merit Scholarship Qualifying Test in 1993 and into the Scholastic Aptitude Test in 1994. Students solve a problem and record the answer by…
Descriptors: Computer Assisted Testing, Educational Assessment, Guessing (Tests), Mathematics Tests
Hinga, Sophia W.; Chen, Linlin Irene – 1998
With the assistance of learning technology consultants in the Technology Teaching and Learning Center (TTLC) at the University of Houston-Downtown (Texas), professors have shifted their paradigms and are taking the leap to use more high-risk World Wide Web technologies in their courses. One that has become a hallmark is delivering exams via the…
Descriptors: Authoring Aids (Programming), Computer Assisted Testing, Computer Managed Instruction, Computer Security
Hasselbring, Ted S.; And Others – 1989
This monograph provides an overview of computer-based assessment and error analysis in the instruction of elementary students with complex medical, learning, and/or behavioral problems. Information on generating and scoring tests using the microcomputer is offered, as are ideas for using computers in the analysis of mathematical strategies and…
Descriptors: Behavior Problems, Computer Assisted Testing, Computer Managed Instruction, Diagnostic Teaching
Cramer, Stephen E. – 1990
A standard-setting procedure was developed for the Georgia Teacher Certification Testing Program as tests in 30 teaching fields were revised. A list of important characteristics of a standard-setting procedure was derived, drawing on the work of R. A. Berk (1986). The best method was found to be a highly formalized judgmental, empirical Angoff…
Descriptors: Computer Assisted Testing, Cutting Scores, Data Collection, Elementary Secondary Education
Solano-Flores, Guillermo; Raymond, Bruce; Schneider, Steven A. – 1997
The need for effective ways of monitoring the quality of scoring of portfolios resulted in the development of a software package that provides scoring leaders with updated information on their assessors' scoring quality. Assessors with computers enter data as they score, and this information is analyzed and reported to scoring leaders. The…
Descriptors: Art Teachers, Computer Assisted Testing, Computer Software, Computer Software Evaluation
Peer reviewed
Smalley, Alan – Language Learning Journal, 1996
Analyzes the computer program, "Question Mark," produced in England and designed to be a testing tool for large second-language classes. Using this tool, it is possible to create and edit up to 500 questions using any one of 8 different question types. The program also can provide a running score for students using it. Notes that student…
Descriptors: Computer Assisted Testing, Computer Software, Dutch, Editing
Peer reviewed
Anderson, Paul S. – International Journal of Educology, 1988
Seven formats of educational testing were compared according to student preferences/perceptions of how well each test method evaluates learning. Formats compared include true/false, multiple-choice, matching, multi-digit testing (MDT), fill-in-the-blank, short answer, and essay. Subjects were 1,440 university students. Results indicate that tests…
Descriptors: Achievement Tests, College Students, Comparative Analysis, Computer Assisted Testing
Peer reviewed
PDF available on ERIC
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Peer reviewed
PDF available on ERIC
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andrew R.; Jenkins, Frank – National Center for Education Statistics, 2007
The Problem Solving in Technology-Rich Environments (TRE) study was designed to demonstrate and explore innovative use of computers for developing, administering, scoring, and analyzing the results of National Assessment of Educational Progress (NAEP) assessments. Two scenarios (Search and Simulation) were created for measuring problem solving…
Descriptors: Computer Assisted Testing, National Competency Tests, Problem Solving, Simulation
Liu, Xiufeng – 1994
Problems of validity and reliability of concept mapping are addressed by using item-response theory (IRT) models for scoring. In this study, the overall structure of students' concept maps is defined by the number of links, the number of hierarchies, the number of cross-links, and the number of examples. The study was conducted with 92 students…
Descriptors: Alternative Assessment, Computer Assisted Testing, Concept Mapping, Correlation
Peer reviewed
PDF available on ERIC
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays