| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 3 |
| Descriptor | Count |
| --- | --- |
| Bayesian Statistics | 7 |
| Models | 7 |
| Multiple Choice Tests | 7 |
| Test Items | 4 |
| Item Response Theory | 3 |
| Test Construction | 3 |
| Foreign Countries | 2 |
| Grading | 2 |
| Guessing (Tests) | 2 |
| Information Technology | 2 |
| Natural Language Processing | 2 |
| Source | Count |
| --- | --- |
| Psychometrika | 2 |
| Alberta Journal of… | 1 |
| Applied Measurement in… | 1 |
| IGI Global | 1 |
| Journal of Applied Testing… | 1 |
| Author | Count |
| --- | --- |
| Abu-Ghazalah, Rashid M. | 1 |
| Azevedo, Ana, Ed. | 1 |
| Azevedo, José, Ed. | 1 |
| Bradlow, Eric T. | 1 |
| Dubins, David N. | 1 |
| Mead, Alan D. | 1 |
| Poon, Gregory M. K. | 1 |
| Revuelta, Javier | 1 |
| Wainer, Howard | 1 |
| Wang, Jianjun | 1 |
| Wang, Xiaohui | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Evaluative | 3 |
| Reports - Research | 2 |
| Books | 1 |
| Collected Works - General | 1 |
| Reports - Descriptive | 1 |
| Speeches/Meeting Papers | 1 |
| Education Level | Count |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Audience | Count |
| --- | --- |
| Administrators | 1 |
| Researchers | 1 |
| Students | 1 |
| Teachers | 1 |
Mead, Alan D.; Zhou, Chenxuan – Journal of Applied Testing Technology, 2022
This study fit a Naïve Bayesian classifier to the words of exam items to predict the items' Bloom's taxonomy level. We addressed five research questions, showing that reasonably good prediction of Bloom's level was possible but that accuracy varied across levels. In our study, performance for Level 2 was poor (Level 2 items were misclassified…
Descriptors: Artificial Intelligence, Prediction, Taxonomy, Natural Language Processing
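The approach described above — predicting an item's Bloom's level from its words — can be sketched as a multinomial Naïve Bayes classifier with Laplace smoothing. The training items and labels below are hypothetical toy data, not the study's corpus, and the implementation is a minimal illustration rather than the authors' actual pipeline:

```python
from collections import Counter, defaultdict
import math

def train_nb(examples):
    """Train multinomial Naive Bayes: count labels and per-label words.
    examples: list of (word_list, label) pairs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in examples:
        label_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return label_counts, word_counts, vocab

def predict_nb(model, words):
    """Return the label maximizing log prior + smoothed log likelihoods."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical item stems tagged with Bloom levels (illustrative only).
items = [
    (["define", "term"], "remember"),
    (["list", "name", "parts"], "remember"),
    (["explain", "why"], "understand"),
    (["summarize", "passage"], "understand"),
    (["apply", "formula", "compute"], "apply"),
    (["calculate", "using", "method"], "apply"),
]
model = train_nb(items)
print(predict_nb(model, ["define", "the", "term"]))  # prints "remember"
```

Verb cues ("define", "calculate") carry most of the signal here, which mirrors why stem wording alone can predict Bloom's level reasonably well at some levels and poorly at others.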
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
Azevedo, Ana, Ed.; Azevedo, José, Ed. – IGI Global, 2019
E-assessments of students profoundly influence their motivation and play a key role in the educational process. Adapting assessment techniques to current technological advancements allows for effective pedagogical practices, learning processes, and student engagement. The "Handbook of Research on E-Assessment in Higher Education"…
Descriptors: Higher Education, Computer Assisted Testing, Multiple Choice Tests, Guides
Bradlow, Eric T.; Wainer, Howard; Wang, Xiaohui – Psychometrika, 1999 (peer reviewed)
Proposes a parametric approach that modifies standard Item Response Theory models to account explicitly for the nesting of items within the same testlets, and that can be applied to multiple-choice sections comprising a mixture of independent items and testlets. (Author/SLD)
Descriptors: Bayesian Statistics, Item Response Theory, Models, Multiple Choice Tests
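The nesting idea above can be illustrated with a logistic sketch: a 2PL item response function augmented by a person-by-testlet effect, so that items sharing a testlet move together. This is an illustration of the modeling idea, not the authors' exact (Bayesian) formulation, and all parameter values are hypothetical:

```python
import math

def p_correct(theta, a, b, gamma=0.0):
    """Probability of a correct response under a 2PL curve with a
    testlet effect gamma (person-by-testlet interaction).
    For an independent item, gamma is 0 and this is plain 2PL."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b - gamma)))

# One examinee (theta = 0.5) on two items from the same testlet:
# a shared gamma shifts both probabilities together, inducing the
# within-testlet dependence that independent-item models ignore.
print(p_correct(0.5, a=1.2, b=0.0, gamma=-0.3))  # testlet-adjusted
print(p_correct(0.5, a=1.2, b=0.0))              # treated as independent
```

Because gamma is shared across a testlet's items for a given examinee, responses within a testlet are correlated even after conditioning on ability, which is the dependence the model is built to capture.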
Wang, Jianjun – 1995
Effects of blind guessing on the success of passing true-false and multiple-choice tests are investigated under a stochastic binomial model. Critical values of guessing are thresholds which signify when the effect of guessing is negligible. By checking a table of critical values assembled in this paper, one can make a decision with 95% confidence…
Descriptors: Bayesian Statistics, Grading, Guessing (Tests), Models
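The binomial logic above can be sketched directly: under blind guessing the number of correct answers follows a Binomial(n, p) distribution, and a critical value is the smallest score that pure guessing reaches with probability at most 5%. The test lengths below are illustrative, not taken from the paper's table:

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of scoring at least
    k of n items by blind guessing with per-item success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_value(n, p, alpha=0.05):
    """Smallest score k that blind guessing attains with probability
    at most alpha; above it, guessing is negligible at that level."""
    for k in range(n + 1):
        if tail_prob(n, k, p) <= alpha:
            return k
    return n + 1

# 20 true-false items (p = 0.5) vs. 20 four-option MC items (p = 0.25):
print(critical_value(20, 0.5))   # 15
print(critical_value(20, 0.25))  # 9
```

The lower per-item guessing rate of four-option items pushes the critical score down: a guesser clears 9 of 20 less than 5% of the time, whereas on true-false items the comparable threshold is 15.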
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
van Barneveld, Christina – Alberta Journal of Educational Research, 2003
The purpose of this study was to examine the potential effect of false assumptions regarding the motivation of examinees on item calibration and test construction. A simulation study was conducted using data generated by means of several models of examinee item response behaviors (the three-parameter logistic model alone and in combination with…
Descriptors: Simulation, Motivation, Computation, Test Construction
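The simulation design above rests on generating item responses from the three-parameter logistic (3PL) model, whose guessing parameter c floors the success probability. A minimal sketch with hypothetical item parameters (the study's actual generating conditions are not reproduced here):

```python
import math
import random

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: guessing floor c plus a
    (1 - c)-scaled 2PL curve with discrimination a, difficulty b."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def simulate_responses(thetas, items, rng):
    """Generate a 0/1 response matrix (examinees x items) under the 3PL."""
    return [[int(rng.random() < p_3pl(t, a, b, c)) for a, b, c in items]
            for t in thetas]

rng = random.Random(42)
# Hypothetical item bank: (discrimination a, difficulty b, guessing c).
items = [(1.0, -0.5, 0.2), (1.5, 0.0, 0.25), (0.8, 1.0, 0.2)]
thetas = [rng.gauss(0, 1) for _ in range(100)]  # abilities ~ N(0, 1)
data = simulate_responses(thetas, items, rng)
print(len(data), len(data[0]))  # prints "100 3"
```

Calibrating items on data generated this way, with or without added response-behavior components (e.g. unmotivated responding), is the kind of comparison the study describes.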
