Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 5 |
Author
| Chuang, Chi-ching | 2 |
| Fujiki, Mayo | 2 |
| Herman, Keith | 2 |
| Reinke, Wendy | 2 |
| Rohrer, David | 2 |
| Wang, Ze | 2 |
| Custer, Michael | 1 |
| De Ayala, R. J. | 1 |
| Monroe, Scott | 1 |
| Ramsay, James O. | 1 |
| Sharairi, Sid | 1 |
Publication Type
| Journal Articles | 4 |
| Reports - Research | 4 |
| Reports - Evaluative | 2 |
| Speeches/Meeting Papers | 2 |
Education Level
| Elementary Education | 2 |
| Higher Education | 1 |
| Postsecondary Education | 1 |
Location
| Sweden | 1 |
| United States | 1 |
Assessments and Surveys
| National Assessment of… | 1 |
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures, such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
Ramsay, James O.; Wiberg, Marie – Journal of Educational and Behavioral Statistics, 2017
This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and…
Descriptors: Scoring, Test Theory, Computation, Maximum Likelihood Statistics
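The comparison between classical sum scores and modern-test-theory ability estimates can be sketched on a toy response pattern; the 2PL parameters and responses below are invented for illustration, and the maximum likelihood estimate uses a crude grid search rather than the article's actual estimation method.

```python
import numpy as np

# Hypothetical 2PL parameters for five binary items
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
x = np.array([1, 1, 1, 0, 0])    # one examinee's responses

sum_score = int(x.sum())         # classical test theory score

def log_lik(theta):
    """Log-likelihood of the response pattern under the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Crude ML ability estimate via grid search (a production scorer would
# use Newton-Raphson on the score function instead)
grid = np.linspace(-4.0, 4.0, 801)
theta_ml = grid[np.argmax([log_lik(t) for t in grid])]
```

Unlike the sum score, the ML estimate weights items by their discrimination, which is the source of the efficiency gains the abstract quantifies.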
Custer, Michael; Sharairi, Sid; Swift, David – Online Submission, 2012
This paper used the Rasch model and Joint Maximum Likelihood Estimation to study different scoring options for omitted and not-reached items. Three scoring treatments were studied. The first method treated omitted and not-reached items as "ignorable/blank". The second treatment scored omits as incorrect ("0") and left not-reached items blank…
Descriptors: Scoring, Test Items, Item Response Theory, Maximum Likelihood Statistics
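The first two scoring treatments the abstract describes (the third is truncated in the record) amount to recoding the raw responses before estimation. A minimal sketch, assuming `np.nan` is the convention for "blank", i.e. excluded from the likelihood:

```python
import numpy as np

# Hypothetical raw responses: "1"/"0" = answered, "O" = omitted mid-test,
# "NR" = not reached before time ran out
raw = ["1", "0", "O", "1", "NR", "NR"]

def recode(raw, treatment):
    """Recode responses for estimation; np.nan marks items treated as
    blank (excluded from the likelihood) -- an assumed convention here."""
    out = []
    for r in raw:
        if r in ("0", "1"):
            out.append(float(r))
        elif treatment == "ignore":
            # treatment 1: omits and not-reached both ignorable/blank
            out.append(np.nan)
        elif treatment == "omit_incorrect":
            # treatment 2: omits scored 0, not-reached left blank
            out.append(0.0 if r == "O" else np.nan)
    return np.array(out)

ignored = recode(raw, "ignore")
penalized = recode(raw, "omit_incorrect")
```

The recoded vectors would then be fed to the Rasch estimation routine, which skips the `nan` entries.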
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy – Journal of Experimental Education, 2015
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Descriptors: Scoring, Check Lists, Comparative Analysis, Differences
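Two of the listed methods, (a) sum/average scores and (b) latent factor scores with continuous indicators, can be contrasted on simulated Likert data. The item count, loadings, and regression-method (Thurstone) factor scoring below are illustrative assumptions, not the article's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 7                    # hypothetical: one subscale of 7 Likert items
eta = rng.normal(size=n)         # latent trait
lam = np.full(k, 0.7)            # assumed equal factor loadings
noise = rng.normal(scale=0.5, size=(n, k))
items = np.clip(np.round(3.0 + lam * eta[:, None] + noise), 1, 5)

# (a) average score of the items
avg_score = items.mean(axis=1)

# (b) regression-method factor score, treating items as continuous:
#     f = lambda' Sigma^{-1} (x - mu)
mu = items.mean(axis=0)
Sigma = np.cov(items, rowvar=False)
w = np.linalg.solve(Sigma, lam)
factor_score = (items - mu) @ w
```

With equal loadings the two methods rank examinees almost identically; differences between them emerge when loadings vary across items, which is where the methods' statistical assumptions start to matter.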
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy – Grantee Submission, 2015
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Descriptors: Scoring, Check Lists, Differences, Comparative Analysis
De Ayala, R. J.; And Others – 1990
Computerized adaptive testing procedures (CATPs) based on the graded response method (GRM) of F. Samejima (1969) and the partial credit model (PCM) of G. Masters (1982) were developed and compared. Both programs used maximum likelihood estimation of ability, and item selection was conducted on the basis of information. Two simulated data sets, one…
Descriptors: Ability Identification, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
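The information-based item selection described above can be sketched for a dichotomous 2PL item bank; the study itself used the polytomous GRM and PCM, and the bank, examinee, and test length below are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=30)    # hypothetical item bank
b = rng.uniform(-2.0, 2.0, size=30)
true_theta = 1.0                      # simulated examinee ability

def item_info(theta, i):
    """2PL item information a_i^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))
    return a[i] ** 2 * p * (1.0 - p)

grid = np.linspace(-4.0, 4.0, 401)
responses = {}                        # item index -> simulated 0/1 response
theta_hat = 0.0
for _ in range(10):
    # administer the unused item with maximum information at theta_hat
    avail = [i for i in range(30) if i not in responses]
    j = max(avail, key=lambda i: item_info(theta_hat, i))
    p_true = 1.0 / (1.0 + np.exp(-a[j] * (true_theta - b[j])))
    responses[j] = int(rng.random() < p_true)
    # ML ability update over administered items (the bounded grid keeps
    # the estimate finite even for all-correct/all-incorrect patterns)
    items = np.array(list(responses))
    x = np.array([responses[i] for i in items])
    def ll(t):
        p = 1.0 / (1.0 + np.exp(-a[items] * (t - b[items])))
        return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    theta_hat = grid[np.argmax([ll(t) for t in grid])]
```

Each iteration interleaves the two steps the abstract names: maximum likelihood estimation of ability, then item selection on the basis of information.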