Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Algorithms | 10 |
| Guessing (Tests) | 10 |
| Test Items | 6 |
| Item Response Theory | 5 |
| Ability | 4 |
| Multiple Choice Tests | 4 |
| Adaptive Testing | 3 |
| Computer Assisted Testing | 3 |
| Item Analysis | 3 |
| Accuracy | 2 |
| Aptitude Tests | 2 |
Author
| Berger, Martijn P. F. | 1 |
| Bliss, Leonard B. | 1 |
| Bock, R. Darrell | 1 |
| Bulut, Okan | 1 |
| Chevalier, Shirley A. | 1 |
| Choppin, Bruce | 1 |
| Chun Wang | 1 |
| Gorgun, Guher | 1 |
| Jing Lu | 1 |
| Jiwei Zhang | 1 |
| Kurz, Terri Barber | 1 |
Publication Type
| Reports - Research | 6 |
| Journal Articles | 3 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 3 |
| Reports - Descriptive | 1 |
Education Level
| Secondary Education | 1 |
Location
| Virgin Islands | 1 |
Assessments and Surveys
| Iowa Tests of Basic Skills | 1 |
| Program for International Student Assessment | 1 |
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is influenced not only by their ability level but also by their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
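The Gorgun and Bulut entry turns on detecting disengaged responding before it contaminates item selection. As a point of reference, here is a minimal sketch of one common flagging approach, a per-item response-time threshold; the 10%-of-median cutoff and all data are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def flag_rapid_guesses(response_times, thresholds):
    """Flag responses faster than an item-specific time threshold.

    response_times : (n_persons, n_items) array of seconds
    thresholds     : (n_items,) per-item cutoffs, e.g. 10% of the
                     median item time (a "normative threshold" idea)
    Returns a boolean array; True marks a likely rapid guess.
    """
    return response_times < thresholds  # broadcasts across persons

# Illustrative data: 5 examinees x 4 items, one planted rapid guess.
rng = np.random.default_rng(0)
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(5, 4))  # ~20 s typical
rt[2, 1] = 1.2                                        # a rapid guess
thresholds = 0.10 * np.median(rt, axis=0)             # 10% of median
flags = flag_rapid_guesses(rt, thresholds)
print(flags)
print("engagement rate per person:", 1 - flags.mean(axis=1))
```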
A Sequential Bayesian Changepoint Detection Procedure for Aberrant Behaviors in Computerized Testing
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
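The excerpt names a sequential Bayesian procedure; a faithful sequential version is beyond a sketch, but the batch posterior below conveys the core idea. The assumed accuracies under solution behavior (p0) and guessing (p1) and the uniform prior are my assumptions, not the authors' specification:

```python
import numpy as np

def changepoint_posterior(x, p0=0.8, p1=0.25):
    """Posterior over the changepoint location in a scored response string.

    x  : 0/1 item scores in administration order
    p0 : assumed accuracy under solution behavior
    p1 : assumed accuracy under random guessing (e.g., 1/k for k options)

    tau = t means items 1..t were answered under solution behavior and
    the rest by guessing; tau = len(x) means no change. Uniform prior.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ll0 = x * np.log(p0) + (1 - x) * np.log(1 - p0)  # per-item log-lik, engaged
    ll1 = x * np.log(p1) + (1 - x) * np.log(1 - p1)  # per-item log-lik, guessing
    c0 = np.concatenate([[0.0], np.cumsum(ll0)])     # prefix sums: O(n) scoring
    c1 = np.concatenate([[0.0], np.cumsum(ll1)])
    loglik = np.array([c0[t] + (c1[n] - c1[t]) for t in range(n + 1)])
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

# A 20-item test where the examinee disengages after item 12.
x = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
post = changepoint_posterior(x)
print("most probable changepoint:", post.argmax())  # 12
print("P(no change):", round(post[-1], 4))
```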
Kurz, Terri Barber – 1999
Multiple-choice tests are generally scored using a conventional number right scoring method. While this method is easy to use, it has several weaknesses. These weaknesses include decreased validity due to guessing and failure to credit partial knowledge. In an attempt to address these weaknesses, psychometricians have developed various scoring…
Descriptors: Algorithms, Guessing (Tests), Item Response Theory, Multiple Choice Tests
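Of the scoring methods Kurz surveys, the classical correction for guessing is easy to state concretely: score R − W/(k − 1), where W counts wrong answers, omits are ignored, and k is the number of options. A small sketch with illustrative data:

```python
def number_right(responses, key):
    """Conventional number-right score: count matches with the key."""
    return sum(r == k for r, k in zip(responses, key) if r is not None)

def formula_score(responses, key, n_options):
    """Classical correction for guessing: R - W / (k - 1).

    Omits (None) are neither rewarded nor penalized, so blind guessing
    gains nothing in expectation.
    """
    answered = [(r, k) for r, k in zip(responses, key) if r is not None]
    right = sum(r == k for r, k in answered)
    wrong = len(answered) - right
    return right - wrong / (n_options - 1)

key = ["A", "C", "B", "D", "A"]
answers = ["A", "C", "D", None, "A"]            # three right, one wrong, one omit
print(number_right(answers, key))               # 3
print(formula_score(answers, key, n_options=4)) # 3 - 1/3 = 2.67
```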
Chevalier, Shirley A. – 1998
In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…
Descriptors: Ability, Algorithms, Aptitude Tests, Cognitive Tests
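One family of alternative scoring models of the kind Chevalier discusses credits partial knowledge directly. A minimal sketch of a Coombs-style elimination rule; the exact penalty scheme below is one well-known variant, assumed here rather than taken from the paper:

```python
def elimination_score(eliminated, keyed, n_options):
    """Coombs-style elimination scoring: the examinee crosses out every
    option believed wrong. +1 per distractor correctly eliminated;
    a penalty of -(k - 1) if the keyed answer is eliminated.
    """
    if keyed in eliminated:
        return -(n_options - 1)
    return len(eliminated)

# Four-option item keyed "C": full knowledge, partial knowledge,
# and misinformation earn 3, 1, and -3 respectively.
print(elimination_score({"A", "B", "D"}, "C", 4))  # 3
print(elimination_score({"A"}, "C", 4))            # 1
print(elimination_score({"C"}, "C", 4))            # -3
```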
Bliss, Leonard B. – 1981
The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…
Descriptors: Algorithms, Guessing (Tests), Intermediate Grades, Multiple Choice Tests
Bock, R. Darrell; And Others – Applied Psychological Measurement, 1988 (peer reviewed)
A method of item factor analysis is described, which is based on Thurstone's multiple-factor model and implemented by marginal maximum likelihood estimation and the EM algorithm. Also assessed are the statistical significance of successive factors added to the model, provisions for guessing and omitted items, and Bayes constraints. (TJH)
Descriptors: Algorithms, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
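Bock et al. estimate a multiple-factor item model by marginal maximum likelihood with the EM algorithm. A full item factor analysis is too long to sketch, but the one-factor special case (a 2PL) shows the same E-step/M-step structure; the rectangular quadrature, Newton M-step, and all tuning values are my assumptions, not the authors' implementation:

```python
import numpy as np

def em_2pl(X, n_quad=21, n_iter=50):
    """Marginal maximum likelihood EM for a one-factor (2PL) item model.
    X is an (n_persons, n_items) 0/1 matrix; returns slopes a and
    intercepts d with P(correct) = sigmoid(a * theta + d).
    """
    X = np.asarray(X, dtype=float)
    n, J = X.shape
    theta = np.linspace(-4, 4, n_quad)          # fixed quadrature nodes
    prior = np.exp(-0.5 * theta**2)
    prior /= prior.sum()                        # discretized N(0, 1)
    a, d = np.ones(J), np.zeros(J)
    for _ in range(n_iter):
        # E-step: each person's posterior weight at each node.
        P = 1.0 / (1.0 + np.exp(-(np.outer(theta, a) + d)))     # (n_quad, J)
        logL = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T      # (n, n_quad)
        W = np.exp(logL - logL.max(axis=1, keepdims=True)) * prior
        W /= W.sum(axis=1, keepdims=True)
        nk = W.sum(axis=0)        # expected persons at each node
        rk = W.T @ X              # expected corrects at each node, per item
        # M-step: weighted logistic regression per item (Newton steps).
        for j in range(J):
            for _ in range(5):
                p = 1.0 / (1.0 + np.exp(-(a[j] * theta + d[j])))
                grad = np.array([np.sum((rk[:, j] - nk * p) * theta),
                                 np.sum(rk[:, j] - nk * p)])
                w = nk * p * (1 - p)
                info = np.array([[np.sum(w * theta**2), np.sum(w * theta)],
                                 [np.sum(w * theta), np.sum(w)]])
                step = np.linalg.solve(info, grad)
                a[j] += step[0]
                d[j] += step[1]
    return a, d

# Simulate 1000 persons x 8 items, then recover the slopes roughly.
rng = np.random.default_rng(1)
true_a = rng.uniform(0.8, 2.0, 8)
true_d = rng.normal(0.0, 1.0, 8)
th = rng.normal(size=(1000, 1))
X = (rng.random((1000, 8)) < 1 / (1 + np.exp(-(th * true_a + true_d)))).astype(float)
a_hat, d_hat = em_2pl(X)
print(np.round(np.c_[true_a, a_hat], 2))
```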
Mislevy, Robert J.; Verhelst, Norman – Psychometrika, 1990 (peer reviewed)
A model is presented for item responses when different subjects use different strategies, but only responses--not choice of strategy--can be observed. Substantive theory is used to differentiate the likelihoods of response vectors under a fixed set of strategies, and response probabilities are modeled via item parameters for each strategy. (TJH)
Descriptors: Algorithms, Guessing (Tests), Item Response Theory, Mathematical Models
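The Mislevy and Verhelst model mixes a fixed set of strategies with strategy-specific response probabilities, observing only the responses. Given item parameters, the posterior probability that an examinee used each strategy follows from Bayes' rule; a minimal sketch with assumed probabilities:

```python
import numpy as np

def strategy_posterior(x, pi, P):
    """Posterior probability of each latent strategy given only the
    response vector.

    x  : (J,) 0/1 responses
    pi : (G,) prior strategy proportions
    P  : (G, J) success probability of each item under each strategy
    """
    x = np.asarray(x, dtype=float)
    loglik = x @ np.log(P).T + (1 - x) @ np.log(1 - P).T  # one value per strategy
    w = pi * np.exp(loglik - loglik.max())
    return w / w.sum()

# Two assumed strategies on a four-item set: solution behavior vs. guessing.
pi = np.array([0.7, 0.3])
P = np.array([[0.90, 0.80, 0.70, 0.60],     # solution behavior
              [0.25, 0.25, 0.25, 0.25]])    # random guessing
print(strategy_posterior([1, 1, 1, 0], pi, P))  # weighted toward solution
print(strategy_posterior([0, 0, 1, 0], pi, P))  # weighted toward guessing
```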
Choppin, Bruce – 1982
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Descriptors: Ability, Algorithms, Guessing (Tests), Item Analysis
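The standard way to build a guessing parameter into a latent trait model is a nonzero lower asymptote, as in the three-parameter logistic model; whether Choppin's proposed parameterization coincides with the 3PL is not shown in the excerpt, so this sketch uses the textbook form:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic IRF: the lower asymptote c absorbs
    guessing, so even very low-ability examinees succeed with
    probability at least c."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(np.round(p_3pl(theta, a=1.2, b=0.0, c=0.25), 3))
# The curve flattens near 0.25 at low theta: the signature of
# guessing on a four-option item.
```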
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
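Veerkamp and Berger's point can be reproduced numerically from the 3PL item information function, I(θ) = a²(Q/P)[(P − c)/(1 − c)]²: once the guessing parameter is positive and the item is off target, the information-maximizing discrimination is finite rather than as large as possible. A sketch (grid and parameter values illustrative):

```python
import numpy as np

def info_3pl(theta, a, b, c):
    """Fisher information of the 3PL at ability theta:
    I = a^2 * (Q / P) * ((P - c) / (1 - c))^2, with Q = 1 - P."""
    P = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    Q = 1 - P
    return a**2 * (Q / P) * ((P - c) / (1 - c))**2

# Scan the discrimination parameter for several person-item distances.
a_grid = np.linspace(0.1, 4.0, 400)
for dist in (0.0, 1.0, 2.0):
    I = info_3pl(theta=0.0, a=a_grid, b=dist, c=0.25)
    print(f"distance {dist}: information-maximizing a = {a_grid[I.argmax()]:.2f}")
# On-target items reward the largest a, but as the distance grows the
# optimum moves to a finite, smaller value.
```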
Vale, C. David; Weiss, David J. – 1977
Twenty multiple-choice vocabulary items and 20 free-response vocabulary items were administered to 660 college students. The free-response items consisted of the stem words of the multiple-choice items. Testees were asked to respond to the free-response items with synonyms. A computer algorithm was developed to transform the numerous…
Descriptors: Ability, Adaptive Testing, Algorithms, Aptitude Tests

