Publication Date
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 8 |
| Since 2017 (last 10 years) | 12 |
| Since 2007 (last 20 years) | 15 |
Descriptor
| Error Patterns | 31 |
| Item Response Theory | 9 |
| Computer Assisted Testing | 7 |
| Models | 7 |
| Test Items | 7 |
| Measurement Techniques | 6 |
| Simulation | 6 |
| Evaluation Methods | 5 |
| Accuracy | 4 |
| Computation | 4 |
| Goodness of Fit | 4 |
Source
| Journal of Educational Measurement | 31 |
Author
| Tatsuoka, Kikumi K. | 4 |
| Cho, Sun-Joo | 2 |
| Goodwin, Amanda | 2 |
| Naveiras, Matthew | 2 |
| Bejar, Isaac I. | 1 |
| Biancarosa, Gina | 1 |
| Birenbaum, Menucha | 1 |
| Carlson, Sarah E. | 1 |
| Choe, Edison M. | 1 |
| Clauser, Brian E. | 1 |
| Mechaber, Alex J. | 1 |
Publication Type
| Journal Articles | 29 |
| Reports - Research | 22 |
| Reports - Evaluative | 5 |
| Information Analyses | 1 |
Education Level
| Elementary Education | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Audience
| Researchers | 1 |
Location
| Georgia | 1 |
Peer reviewed: Quereshi, M. Y. – Journal of Educational Measurement, 1971
The study investigated the degree to which errors of scaling and selection depress the linear relationship and whether the reduction in the magnitude of r differs with the type of error. Results indicated that various scaling errors caused considerable discrepancy in the measurement of underlying relations, but the effect of non-normality was…
Descriptors: Correlation, Error Patterns, Factor Analysis, Scaling
Peer reviewed: Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
Peer reviewed: Clauser, Brian E.; Clyman, Stephen G.; Swanson, David B. – Journal of Educational Measurement, 1999
Two studies focused on aspects of the rating process in performance assessment. The first, which involved 15 raters and about 400 medical students, made the "committee" facet of raters working in groups explicit, and the second, which involved about 200 medical students and four raters, made the "rating-occasion" facet…
Descriptors: Error Patterns, Evaluation Methods, Evaluators, Higher Education
Peer reviewed: Muller, Douglas; And Others – Journal of Educational Measurement, 1972
The purpose of this study was to examine the effect of using separate, machine-scorable answer sheets on the number of marking errors made by third-, fourth-, and sixth-grade students. (Authors)
Descriptors: Answer Keys, Elementary School Students, Error Patterns, Measurement Instruments
Peer reviewed: Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques
Peer reviewed: Hamilton, Lawrence C. – Journal of Educational Measurement, 1981
Errors in self-reports of three academic performance measures are analyzed. Empirical errors are shown to depart radically from both no-error and random-error assumptions. Self-reports by females depart farther from the no-error and random-error models for all three performance measures. (Author/BW)
Descriptors: Academic Achievement, Error Patterns, Grade Point Average, Models
Peer reviewed: Birenbaum, Menucha; Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1987
The present study examined the effect of three modes of feedback on the seriousness of error types committed on a post-test. The effect of feedback mode on post-test errors was found to be differential and dependent upon the seriousness of errors committed on the pre-test. (Author/LMO)
Descriptors: Computer Assisted Testing, Error Patterns, Feedback, Junior High Schools
Peer reviewed: Harnisch, Delwyn L.; Linn, Robert L. – Journal of Educational Measurement, 1981
Different indices can be used to measure the degree to which an individual's pattern of responses on an achievement test is unusual or inconsistent with the norm. The relationships among eight of these indices are investigated for a math and reading test given to approximately 2,100 fourth-grade students. (Author/BW)
Descriptors: Comparative Analysis, Correlation, Error Patterns, Grade 4
Peer reviewed: Bejar, Isaac I. – Journal of Educational Measurement, 1984
Approaches proposed for educational diagnostic assessment are reviewed and identified as deficit assessment and error analysis. The development of diagnostic instruments may require a reexamination of existing psychometric models and development of alternative ones. The psychometric and content demands of diagnostic assessment all but require test…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Criterion Referenced Tests, Diagnostic Tests
Peer reviewed: Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1987
This study examined whether the item response curves from a two-parameter model reflected characteristics of the mathematics items, each of which required unique cognitive tasks. A computer program performed error analysis of test performance. Cognitive subtasks appeared to influence the slopes and difficulties of item response curves. (GDC)
Descriptors: Cognitive Processes, Computer Assisted Testing, Error Patterns, Item Analysis
Peer reviewed: Engelhard, George, Jr. – Journal of Educational Measurement, 1994
Rater errors (rater severity, halo effect, central tendency, and restriction of range) are described, and criteria are presented for evaluating rating quality based on a many-faceted Rasch (FACETS) model. Ratings of 264 compositions from the Eighth Grade Writing Test in Georgia by 15 raters illustrate the discussion. (SLD)
Descriptors: Criteria, Educational Assessment, Elementary Education, Elementary School Students
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1986
Advantages and disadvantages of joint maximum likelihood, marginal maximum likelihood, and Bayesian methods of parameter estimation in item response theory are discussed and compared. (Author)
Descriptors: Bayesian Statistics, Error Patterns, Estimation (Mathematics), Higher Education
Peer reviewed: Webb, Noreen M.; And Others – Journal of Educational Measurement, 1986
The consistency of student response patterns on a test of language arts was examined in a set of studies as a first step toward designing a computer adaptive test to diagnose errors. (Author/LMO)
Descriptors: Adaptive Testing, Computer Assisted Testing, Diagnostic Tests, Error Patterns
Peer reviewed: Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1983
A newly introduced approach, rule space, can represent large numbers of erroneous rules of arithmetic operations quantitatively and can predict the likelihood of each erroneous rule. The new model challenges the credibility of the traditional right-or-wrong scoring procedure. (Author/PN)
Descriptors: Addition, Algorithms, Arithmetic, Diagnostic Tests
Peer reviewed: Miller, M. David – Journal of Educational Measurement, 1986
An index of student patterns of item response, when aggregated to the class level, was shown to identify classes that have a poor match between test content and instructional coverage. The mean caution index for a class can best be interpreted knowing the within-class standard deviation of the index. (Author/LMO)
Descriptors: Classes (Groups of Students), Elementary Education, Error Patterns, Goodness of Fit


