| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 5 |
| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 5 |
| Probability | 5 |
| Science Tests | 5 |
| Student Evaluation | 3 |
| Biology | 2 |
| Comparative Analysis | 2 |
| Correlation | 2 |
| Evaluation Methods | 2 |
| Item Response Theory | 2 |
| Knowledge Level | 2 |
| Physics | 2 |
| Source | Count |
| --- | --- |
| Grantee Submission | 1 |
| Interactive Learning… | 1 |
| International Journal of… | 1 |
| Journal of Research in… | 1 |
| Physical Review Physics… | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 5 |
| Journal Articles | 4 |
| Speeches/Meeting Papers | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Education | 1 |
| High Schools | 1 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
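For context, Bock's nominal response model (NRM) referenced in this abstract gives the probability that an examinee with latent trait $\theta$ selects response category $k$ of an item $i$ having $m_i$ categories. This is the standard textbook form; the notation here is mine, not taken from the paper:

```latex
P(X_i = k \mid \theta) \;=\;
  \frac{\exp\!\left(a_{ik}\,\theta + c_{ik}\right)}
       {\sum_{j=1}^{m_i} \exp\!\left(a_{ij}\,\theta + c_{ij}\right)}
```

Each category carries two estimated parameters, a slope $a_{ik}$ and an intercept $c_{ik}$; rankings built from the slopes alone therefore use only half of the parameters the model estimates, which appears to be the gap the abstract refers to.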
Briggs, Derek C.; Circi, Ruhan – International Journal of Testing, 2017
Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…
Descriptors: Artificial Intelligence, Classification, Student Evaluation, Tests
Kalinowski, Steven T.; Willoughby, Shannon – Journal of Research in Science Teaching, 2019
We present a multiple-choice test, the Montana State University Formal Reasoning Test (FORT), to assess college students' scientific reasoning ability. The test defines scientific reasoning to be equivalent to formal operational reasoning. It contains 20 questions divided evenly among five types of problems: control of variables, hypothesis…
Descriptors: Science Tests, Test Construction, Science Instruction, Introductory Courses
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2017
The paper describes a biology tutoring system with adaptive question selection. Questions were selected for presentation to the student based on their utilities, which were estimated from the chance that the student's competence would increase if the questions were asked. Competence was represented by the probability of mastery of a set of biology…
Descriptors: Biology, Science Instruction, Intelligent Tutoring Systems, Probability
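The selection rule this abstract describes (ask the question whose expected gain in mastery probability is largest) can be sketched in a few lines. This is a minimal illustration with hypothetical parameter names and a simple Bayesian update, not the actual model from Zhang and VanLehn's system:

```python
def utility(p_mastery: float, slip: float = 0.1, guess: float = 0.2) -> float:
    """Expected increase in P(mastery) from asking one question on a skill,
    averaged over the student answering correctly or incorrectly.
    `slip` and `guess` are illustrative noise parameters, not from the paper."""
    # Marginal probability of a correct answer given current mastery estimate.
    p_correct = p_mastery * (1 - slip) + (1 - p_mastery) * guess
    p_wrong = 1 - p_correct
    # Bayesian posterior mastery after each possible outcome.
    post_correct = p_mastery * (1 - slip) / p_correct
    post_wrong = p_mastery * slip / p_wrong
    # Only count outcomes that raise the mastery estimate.
    return (p_correct * max(post_correct - p_mastery, 0.0)
            + p_wrong * max(post_wrong - p_mastery, 0.0))


def select_question(mastery_by_skill: dict) -> str:
    """Pick the skill whose question has the highest expected utility."""
    return max(mastery_by_skill, key=lambda s: utility(mastery_by_skill[s]))
```

A skill the student has almost certainly mastered yields little expected gain, so the rule naturally favors skills of intermediate uncertainty, e.g. `select_question({"photosynthesis": 0.5, "mitosis": 0.95})` picks `"photosynthesis"`.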

