Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 7 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 13 |
| Guessing (Tests) | 13 |
| Multiple Choice Tests | 13 |
| Test Items | 7 |
| Scoring | 6 |
| Adaptive Testing | 4 |
| Test Format | 4 |
| Comparative Analysis | 3 |
| Difficulty Level | 3 |
| Foreign Countries | 3 |
| Latent Trait Theory | 3 |
Author
| Author | Records |
| --- | --- |
| Wise, Steven L. | 2 |
| Anderson, Paul S. | 1 |
| Bin Usop, Hasbee | 1 |
| Bramley, Tom | 1 |
| Braswell, James S. | 1 |
| Choppin, Bruce | 1 |
| Choppin, Bruce H. | 1 |
| Crisp, Victoria | 1 |
| Dupray, Laurence M. | 1 |
| Harper, R. | 1 |
| Hong, Kian Sam | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Journal Articles | 8 |
| Reports - Research | 7 |
| Reports - Descriptive | 3 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 3 |
Education Level
| Education Level | Records |
| --- | --- |
| Secondary Education | 2 |
| Elementary Secondary Education | 1 |
| Postsecondary Education | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 1 |
Location
| Location | Records |
| --- | --- |
| Malaysia | 2 |
| United Kingdom | 1 |
Laws, Policies, & Programs
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Measures of Academic Progress | 1 |
| Preliminary Scholastic… | 1 |
| SAT (College Admission Test) | 1 |
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior observed for three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
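Wise's response-time studies rest on a simple computation: a response is flagged as a rapid guess when its time falls below a threshold, and an examinee's or item's rapid-guessing rate is the fraction of flagged responses. A minimal sketch, assuming a fixed 3-second threshold (illustrative only; the literature uses several threshold-setting methods):

```python
# Minimal sketch: flagging rapid guesses from item response times.
# The fixed 3-second threshold and the data layout are illustrative
# assumptions, not taken from Wise's studies.

def rapid_guess_rate(response_times_s, threshold_s=3.0):
    """Fraction of responses faster than the rapid-guess threshold."""
    flags = [t < threshold_s for t in response_times_s]
    return sum(flags) / len(flags)

times = [14.2, 2.1, 38.5, 1.4, 22.0]  # seconds spent on each item
print(f"rapid-guess rate: {rapid_guess_rate(times):.0%}")  # 40%
```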
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has recently been developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
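The abstract names the 1PL-AG but does not reproduce its form. As a rough illustration of the idea of ability-based guessing, the sketch below replaces the constant guessing parameter of a 3PL-style model with a logistic function of ability; the parameter names and exact parameterization are assumptions, not necessarily those of the published model.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative only: the guessing probability grows with ability theta
# instead of being a fixed constant. lam and gamma are hypothetical
# names for the guessing slope and intercept.
def p_correct(theta, b, lam, gamma):
    know = logistic(theta - b)             # 1PL "knows the answer" term
    guess = logistic(lam * theta + gamma)  # ability-dependent guessing term
    return know + (1.0 - know) * guess

print(p_correct(theta=0.5, b=0.0, lam=0.4, gamma=-1.0))  # ~0.74
```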
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests that is heavily criticized for encouraging students to guess and failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
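The NR rule itself is trivial to state in code: one point for the keyed option, zero otherwise. The elimination-style variant below is only one common way to credit partial knowledge; the truncated abstract does not say which alternative the paper investigates.

```python
# Number Right (NR) scoring as the abstract describes it, plus one
# common partial-knowledge alternative (elimination scoring). The
# elimination rule shown is a standard one, not necessarily the
# method studied in the paper.

def nr_score(responses, key):
    return sum(1 for r, k in zip(responses, key) if r == k)

def elimination_score(eliminated, key, n_options=4):
    # +1 per distractor correctly eliminated; full penalty if the
    # keyed option itself is eliminated.
    score = 0
    for elim, k in zip(eliminated, key):
        score += -(n_options - 1) if k in elim else len(elim)
    return score

key = ["B", "D", "A"]
print(nr_score(["B", "C", "A"], key))                      # 2
print(elimination_score([{"A", "C"}, {"D"}, set()], key))  # 2 - 3 + 0 = -1
```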
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare multiple-choice questions (MCQs) as an examination method with examinations based on constructed-response questions (CRQs). Although MCQs have an advantage in grading objectivity and speed of producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee – Online Submission, 2009
Assessment is central to any educational process. The Number Right (NR) scoring method is the conventional method for scoring multiple-choice items: students pick one option as the answer, one point is awarded for the correct response, and zero for any other response. However, it has been heavily criticized for guessing and failure…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Adaptive Testing, Scoring
Harper, R. – Journal of Computer Assisted Learning, 2003
Discusses multiple choice questions and presents a statistical approach to post-test correction for guessing that can be used in spreadsheets to automate the correction and generate a grade. Topics include the relationship between the learning objectives and multiple-choice assessments; and guessing correction by negative marking. (LRW)
Descriptors: Behavioral Objectives, Computer Assisted Testing, Grades (Scholastic), Guessing (Tests)
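The correction the abstract describes is typically the classical formula-scoring rule: with k options per item, each wrong answer is treated as one of k−1 equally likely blind guesses, so the score expected from guessing is subtracted. A sketch, assuming this standard rule (the abstract confirms negative marking, not the exact formula):

```python
# Classical correction for guessing: corrected = R - W/(k-1),
# where R = number right, W = number wrong, k = options per item;
# omitted items score zero. Assumed here as the rule the abstract
# alludes to.

def corrected_score(right, wrong, k=4):
    return right - wrong / (k - 1)

print(corrected_score(right=30, wrong=8, k=4))  # 30 - 8/3 = 27.33...
```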
Choppin, Bruce H. – 1983
In the answer-until-correct mode of multiple-choice testing, respondents are directed to continue choosing among the alternatives to each item until they find the correct response. There is no consensus as to how to convert the resulting pattern of responses into a measure because of two conflicting models of item response behavior. The first…
Descriptors: Computer Assisted Testing, Difficulty Level, Guessing (Tests), Knowledge Level
Choppin, Bruce – 1982
The answer-until-correct procedure has made comparatively little impact on the field of educational testing due to the absence of a sound theoretical base for turning the response data into measures. Three new latent trait models are described. They differ in their complexity, though each is designed to yield a single parameter to measure student…
Descriptors: Academic Achievement, Computer Assisted Testing, Computer Programs, Educational Testing
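Neither Choppin abstract gives the scoring rule under dispute, but the raw material of answer-until-correct testing is easy to illustrate: credit an item according to how many attempts the examinee needed. The sketch below shows only that raw scoring, not one of Choppin's latent trait models.

```python
# Illustrative answer-until-correct scoring: more credit for fewer
# attempts. With 4 options, finding the key on the 1st/2nd/3rd/4th
# attempt earns 3/2/1/0 points. Not one of Choppin's models.

def auc_item_score(attempts, n_options=4):
    """attempts = choices made until the keyed option was found (1..n)."""
    return n_options - attempts

print([auc_item_score(a) for a in (1, 2, 4)])  # [3, 2, 0]
```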
Nicewander, W. Alan; And Others – 1980
Two interactive, computer-assisted testing methods for multiple-choice items were compared with each other and with conventional multiple-choice tests. The interactive testing methods compared were tailored testing and the respond-until-correct (RUC) item response method. In tailored testing, examinee ability is successively estimated…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Guessing (Tests)
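Tailored testing as the abstract outlines it is a loop: estimate ability, administer the most informative remaining item, update the estimate from the response. A minimal sketch, assuming a Rasch (1PL) model, closest-difficulty item selection, and a crude fixed-step ability update in place of full maximum-likelihood estimation (the estimator used in the 1980 paper is not given by the abstract):

```python
# Minimal tailored-testing loop under assumed Rasch-model logic:
# pick the unused item whose difficulty is closest to the current
# ability estimate, then nudge the estimate up or down.

def next_item(theta, bank, used):
    # Rasch item information peaks where difficulty is nearest theta.
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

def update(theta, correct, step=0.5):
    # Crude fixed-step update standing in for full MLE.
    return theta + step if correct else theta - step

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]   # item difficulties
theta, used = 0.0, set()
for correct in (True, True, False):  # simulated examinee responses
    i = next_item(theta, bank, used)
    used.add(i)
    theta = update(theta, correct)
print(f"final ability estimate: {theta:+.1f}")  # +0.5
```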
Braswell, James S.; Jackson, Carol A. – 1995
A new free-response item type for mathematics tests is described. The item type, referred to as the Student-Produced Response (SPR), was first introduced into the Preliminary Scholastic Aptitude Test/National Merit Scholarship Qualifying Test in 1993 and into the Scholastic Aptitude Test in 1994. Students solve a problem and record the answer by…
Descriptors: Computer Assisted Testing, Educational Assessment, Guessing (Tests), Mathematics Tests
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing

