Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 3 |
Descriptor
Comparative Analysis | 19 |
Test Format | 19 |
Testing Problems | 19 |
Computer Assisted Testing | 7 |
Multiple Choice Tests | 6 |
Test Reliability | 5 |
Difficulty Level | 4 |
Evaluation Methods | 4 |
Grading | 4 |
Higher Education | 4 |
Scoring | 4 |
Source
Computers & Education | 2 |
Educational Measurement: Issues and Practice | 2 |
Assessment | 1 |
Journal of Educational Measurement | 1 |
Language Testing | 1 |
Performance and Instruction | 1 |
Publication Type
Reports - Research | 9 |
Speeches/Meeting Papers | 9 |
Journal Articles | 8 |
Reports - Evaluative | 6 |
Books | 1 |
Collected Works - Proceedings | 1 |
Guides - Non-Classroom | 1 |
Information Analyses | 1 |
Opinion Papers | 1 |
Education Level
Higher Education | 1 |
Postsecondary Education | 1 |
Audience
Practitioners | 1 |
Teachers | 1 |
Location
Netherlands | 1 |
Sweden | 1 |
United Kingdom (England) | 1 |
United Kingdom (Northern Ireland) | 1 |
United Kingdom (Wales) | 1 |
Assessments and Surveys
ACTFL Oral Proficiency Interview | 1 |
Learning Style Inventory | 1 |
Wechsler Adult Intelligence Scale | 1 |
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) Oral Proficiency Interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examinations based on constructed-response questions (CRQs). Although MCQs have an advantage in objectivity of grading and in speed of producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis

Abraham, Elizabeth; Axelrod, Bradley N.; Paolo, Anthony M. – Assessment, 1997
Seven different short forms of the Wechsler Adult Intelligence Scale, Revised (WAIS-R) were compared in a mixed clinical sample of 306 adults to evaluate short-form predictions of the full-scale IQ. The seven-test short form of C. Ward (1990) was the best predictor across ranges of intellectual functioning, supporting its use when time is limited.
Descriptors: Adults, Comparative Analysis, Intelligence Quotient, Intelligence Tests
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine if an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level

Downing, Steven M. – Educational Measurement: Issues and Practice, 1992
Research on true-false (TF), multiple-choice, and alternate-choice (AC) tests is reviewed, with discussion of the strengths, weaknesses, and usefulness of each in classroom and large-scale testing. Recommendations are made for improving the use of AC items to overcome some of the problems associated with TF items. (SLD)
Descriptors: Comparative Analysis, Educational Research, Multiple Choice Tests, Objective Tests

Wainer, Howard; And Others – Journal of Educational Measurement, 1994
The comparability of scores on test forms that are constructed through examinee item choice is examined in an item response theory framework. The approach is illustrated with data from the College Board's Advanced Placement Test in Chemistry taken by over 18,000 examinees. (SLD)
Descriptors: Advanced Placement, Chemistry, Comparative Analysis, Constructed Response
Chiarelott, Leigh; Davidman, Leonard – 1983
This paper summarizes two related sets of data. First, the findings of an inquiry titled "The Appropriateness of Forced Choice True/False Response Formats for Selected Learning Style Inventories" are reported. This inquiry compared data from two forms of the Dunn/Price Learning Style Inventory. The first form was the real Dunn/Price…
Descriptors: Cognitive Measurement, Cognitive Style, Comparative Analysis, Curriculum Development

Green, Bert F. – Educational Measurement: Issues and Practice, 1995
If annual performance assessments are to yield results that can be compared from year to year, many technical problems must be addressed. It is essential that tests to be equated measure the same construct. Methods of equating performance assessment scores, ways of equating system assessments, and standard setting are discussed. (SLD)
Descriptors: Comparative Analysis, Educational Assessment, Educational Change, Equated Scores
Kemerer, Richard; Wahlstrom, Merlin – Performance and Instruction, 1985
Compares the features, learning outcomes tested, reliability, viability, and cost effectiveness of essay tests with those of interpretive tests used in training programs. A case study showing how an essay test was converted to an interpretive test and pilot tested is included to illustrate the advantages of interpretive testing. (MBR)
Descriptors: Case Studies, Comparative Analysis, Cost Effectiveness, Essay Tests
Russell, Michael – 2000
This paper addresses problems with performance on open-ended (i.e., not multiple-choice) test questions administered on paper to students accustomed to writing with computers. The first section summarizes the results of three studies that examined differences between computer and paper-and-pencil test results. The second section identifies three…
Descriptors: Achievement Tests, Comparative Analysis, Computer Assisted Testing, Elementary Secondary Education
Pommerich, Mary; Burden, Timothy – 2000
A small-scale study was conducted to compare test-taking strategies, problem-solving strategies, and general impressions about the test across computer and paper-and-pencil administration modes. Thirty-six examinees (high school students) participated in the study. Each examinee took a test in one of the content areas of English, Mathematics,…
Descriptors: Adaptive Testing, Attitudes, Comparative Analysis, Computer Assisted Testing
Shaha, Steven H. – 1982
Traditionally, matching test formats have been avoided in favor of multiple-choice items for several reasons, including item analysis properties and chance performance characteristics. In the light of research on test format and anxiety, this study postulates that, if a matching test could assess knowledge for a given topic as effectively as an…
Descriptors: Comparative Analysis, Multiple Choice Tests, Objective Tests, Response Style (Tests)
Oosterhof, Albert C.; Coats, Pamela K. – 1981
Instructors who develop classroom examinations that require students to provide a numerical response to a mathematical problem are often very concerned about the appropriateness of the multiple-choice format. The present study augments previous research relevant to this concern by comparing the difficulty and reliability of multiple-choice and…
Descriptors: Comparative Analysis, Difficulty Level, Grading, Higher Education
Lance, Charles E.; Moomaw, Michael E. – 1983
Direct assessments of the accuracy with which raters can use a rating instrument are presented. This study demonstrated how surplus behavioral incidents scaled during the development of Behaviorally Anchored Rating Scales (BARS) can be used effectively in the evaluation of the newly developed scales. Construction of scenarios of hypothetical…
Descriptors: Behavior Rating Scales, Comparative Analysis, Error of Measurement, Evaluation Criteria