| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 5 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 11 |
| Multiple Choice Tests | 11 |
| Testing Problems | 11 |
| Scoring | 6 |
| Evaluation Methods | 4 |
| Test Format | 4 |
| Test Items | 4 |
| Comparative Analysis | 3 |
| College Students | 2 |
| Comparative Testing | 2 |
| Educational Technology | 2 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 7 |
| Reports - Research | 7 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 2 |
| Non-Print Media | 1 |
| Education Level | Count |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
| Location | Count |
| --- | --- |
| South Africa | 1 |
| United Kingdom | 1 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Advanced Placement… | 1 |
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Schifter, Catherine C.; Carey, Martha – International Association for Development of the Information Society, 2014
The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high-stakes testing required by the law. We argue that one-size-fits-all assessments disadvantage students in the USA who are English Language Learners, as well as students with limited economic resources, special needs, and not reading on…
Descriptors: Standardized Tests, Models, Evaluation Methods, Educational Legislation
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examinations based on constructed-response questions (CRQs). Although MCQs have the advantage of objectivity in the grading process and speed in the production of results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990 (peer reviewed)
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Russell, Michael; Haney, Walt – Education Policy Analysis Archives, 1997 (peer reviewed)
The effect that mode of administration, computer versus paper and pencil, had on the performance of 120 middle school students on multiple choice and written test questions was studied. Results show that, for students accustomed to writing on computers, responses written on the computer were more successful. Implications for testing are discussed.…
Descriptors: Computer Assisted Testing, Essay Tests, Middle School Students, Middle Schools
Russell, Michael; Haney, Walt – 1996
The results of a small research project that studied the effect of computer administration on student performance on writing or essay tests are presented. The introduction of computer-administered tests has raised concern about the equivalence of scores generated by computer versus paper-and-pencil test versions. For this study a sample of…
Descriptors: Computer Assisted Testing, Essay Tests, High School Students, High Schools
Newsom, Robert S.; And Others – Evaluation Quarterly, 1978
For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Occupational Tests, Personnel Evaluation
Anderson, Paul S.; And Others – Illinois School Research and Development, 1985 (peer reviewed)
Concludes that the Multi-Digit Test stimulates better retention than multiple choice tests while offering the advantage of computerized scoring and analysis. (FL)
Descriptors: Comparative Analysis, Computer Assisted Testing, Educational Research, Higher Education
