| Descriptor | Count |
| --- | --- |
| Problem Solving | 5 |
| Test Reliability | 5 |
| Achievement Tests | 3 |
| Algorithms | 3 |
| Junior High Schools | 3 |
| Factor Structure | 2 |
| Measurement Techniques | 2 |
| Multidimensional Scaling | 2 |
| Scoring | 2 |
| Test Construction | 2 |
| Cognitive Processes | 1 |
| Source | Count |
| --- | --- |
| Journal of Educational Measurement | 5 |
| Author | Count |
| --- | --- |
| Birenbaum, Menucha | 2 |
| Tatsuoka, Kikumi K. | 2 |
| Evans, Glen T. | 1 |
| Forsyth, Robert A. | 1 |
| Spratt, Kevin F. | 1 |
| Stevenson, John C. | 1 |
| Tatsuoka, Kikumi | 1 |
| Tatsuoka, Maurice M. | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 5 |
| Location | Count |
| --- | --- |
| Australia | 1 |
Peer reviewed: Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques
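The abstract does not give the ICI formula, so the sketch below is only a toy illustration of the underlying idea: checking whether an examinee's response pattern on one item set recurs on a parallel set. The function name `toy_consistency_index` and the data are hypothetical; this is not Tatsuoka and Tatsuoka's actual statistic.

```python
import numpy as np

def toy_consistency_index(responses_a, responses_b):
    """Toy stand-in for an individual consistency index: the correlation
    between one examinee's 0/1 response patterns on two parallel item sets.
    Values near 1 suggest the same (possibly erroneous) rule was applied
    consistently; low values suggest aberrant responding."""
    a = np.asarray(responses_a, dtype=float)
    b = np.asarray(responses_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical examinee whose (wrong) rule reproduces the same pattern twice,
# versus one whose second pattern bears no relation to the first.
print(toy_consistency_index([1, 0, 0, 1, 0], [1, 0, 0, 1, 0]))  # 1.0
print(toy_consistency_index([1, 0, 0, 1, 0], [0, 1, 1, 0, 1]))  # -1.0
```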
Peer reviewed: Birenbaum, Menucha; Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1983
The outcomes of two scoring methods for free-response tests (one based on an error analysis, the other a conventional method), compared in terms of reliability and dimensionality, indicate that the conventional method is inferior in both respects. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Data, Junior High Schools
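The contrast between error-analysis scoring and conventional right/wrong scoring can be made concrete with a small, hypothetical example: conventional scoring keeps only correctness, while error-analysis scoring also records which known erroneous rule (if any) a wrong answer matches. The item, bug catalogue, and labels below are illustrative and are not the coding scheme used in the study.

```python
def score_conventional(answer, key):
    """Conventional scoring: 1 if the response matches the key, else 0."""
    return int(answer == key)

def score_with_error_analysis(answer, key, known_bugs):
    """Error-analysis scoring: record correctness plus the erroneous rule
    (if any) whose predicted answer the response matches."""
    if answer == key:
        return (1, "correct")
    for rule_name, predicted_answer in known_bugs.items():
        if answer == predicted_answer:
            return (0, rule_name)
    return (0, "unclassified error")

# Hypothetical free-response item: 52 - 17 (key = 35). The classic
# "smaller-from-larger" subtraction bug predicts 45.
bugs = {"smaller-from-larger": 45}
print(score_conventional(45, 35))               # 0
print(score_with_error_analysis(45, 35, bugs))  # (0, 'smaller-from-larger')
```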
Peer reviewed: Forsyth, Robert A.; Spratt, Kevin F. – Journal of Educational Measurement, 1980
The effects of two item formats on item difficulty and item discrimination indices for mathematics problem-solving multiple-choice tests were investigated. One format required identifying the proper "set-up" for the item; the other required solving the item completely. (Author/JKS)
Descriptors: Difficulty Level, Junior High Schools, Multiple Choice Tests, Problem Solving
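Item difficulty and discrimination are standard classical test theory indices; a minimal sketch of the usual computations (difficulty as the proportion answering correctly, discrimination as the point-biserial correlation between item score and total score) is shown below on made-up 0/1 data. It is a generic illustration, not the study's analysis.

```python
import numpy as np

def item_discrimination(item_scores, total_scores):
    """Point-biserial discrimination: correlation between the 0/1 scores on
    one item and the examinees' total test scores."""
    return float(np.corrcoef(item_scores, total_scores)[0, 1])

# Made-up response matrix: rows = examinees, columns = items (1 = correct).
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
])
totals = X.sum(axis=1)
for j in range(X.shape[1]):
    p = X[:, j].mean()                        # difficulty: proportion correct
    r = item_discrimination(X[:, j], totals)  # discrimination
    print(f"item {j}: difficulty = {p:.2f}, discrimination = {r:.2f}")
```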
Peer reviewed: Stevenson, John C.; Evans, Glen T. – Journal of Educational Measurement, 1994
Cognitive holding power is defined as a characteristic of the learning setting that presses students into different kinds of cognitive activity. Development of an instrument to measure cognitive holding power and studies of the instrument's reliability with over 1,500 Australian technical college students are reported. (SLD)
Descriptors: Cognitive Processes, College Students, Factor Structure, Foreign Countries
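The abstract does not say which reliability coefficient was used; a common internal-consistency choice for this kind of instrument is Cronbach's alpha, sketched below on made-up rating data. This is a generic illustration, not the authors' reported analysis.

```python
import numpy as np

def cronbach_alpha(score_matrix):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    X = np.asarray(score_matrix, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)
    total_variance = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up 5-point ratings from six respondents on four items.
ratings = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```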
Peer reviewed: Birenbaum, Menucha; Tatsuoka, Kikumi – Journal of Educational Measurement, 1982
Empirical results from two studies--a simulation study and an experimental one--indicated that, in achievement data of the problem-solving type where a specific subject matter area is being tested, the greater the variety of the algorithms used, the higher the dimensionality of the test data. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Data Analysis, Factor Structure
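Dimensionality here refers to how many factors are needed to account for the inter-item relationships. One common way to gauge it (assumed here for illustration, not necessarily the authors' procedure) is to inspect the eigenvalues of the inter-item correlation matrix, as in the simulated example below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 examinees on 8 items that all tap a single ability (i.e. one
# solution algorithm), so roughly one dominant dimension should appear.
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=(1, 8))
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
responses = (rng.uniform(size=(200, 8)) < p_correct).astype(float)

# Eigenvalues of the inter-item correlation matrix: a single large eigenvalue
# followed by a sharp drop suggests essentially one dimension; a greater
# variety of solution algorithms would tend to spread variance over more
# eigenvalues, i.e. higher dimensionality.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigenvalues, 2))
```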


