| Descriptor | Count |
| --- | --- |
| Criterion Referenced Tests | 20 |
| Mathematical Models | 20 |
| Test Items | 20 |
| Test Construction | 13 |
| Item Analysis | 11 |
| Latent Trait Theory | 11 |
| Test Reliability | 10 |
| Test Theory | 9 |
| Difficulty Level | 8 |
| Cutting Scores | 5 |
| Higher Education | 5 |
| Author | Count |
| --- | --- |
| Hambleton, Ronald K. | 4 |
| van der Linden, Wim J. | 2 |
| Bernknopf, Stan | 1 |
| Cook, Linda L. | 1 |
| Davis, John N. | 1 |
| De Gruijter, Dato N. M. | 1 |
| Divgi, D. R. | 1 |
| Forster, Fred | 1 |
| Haladyna, Tom | 1 |
| Izard, J. F. | 1 |
| Karr, Chad | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 15 |
| Speeches/Meeting Papers | 7 |
| Journal Articles | 3 |
| Reports - Evaluative | 2 |
| Collected Works - General | 1 |
| Guides - General | 1 |
| Opinion Papers | 1 |
| Reports - General | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 3 |
| Assessments and Surveys | Count |
| --- | --- |
| Stanford Achievement Tests | 1 |
Mellenbergh, Gideon J.; van der Linden, Wim J. – Evaluation in Education: International Progress, 1982
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Descriptors: Criterion Referenced Tests, Educational Testing, Item Analysis, Latent Trait Theory
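As a rough illustration of the first of the three selection methods above, the classical item statistics are the proportion-correct difficulty (p-value) and the item-test correlation. The sketch below shows how these are typically computed; the response matrix and variable names are invented for illustration and are not taken from the paper.

```python
# Illustrative only: classical item difficulty and item-test correlation.
import numpy as np

responses = np.array([  # rows = examinees, columns = items (1 = correct)
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

total_scores = responses.sum(axis=1)      # each examinee's total test score
difficulty = responses.mean(axis=0)       # classical p-value per item

# item-test (point-biserial) correlation for each item
item_test_r = np.array([
    np.corrcoef(responses[:, j], total_scores)[0, 1]
    for j in range(responses.shape[1])
])

print(difficulty, item_test_r)
```

Under the classical approach, items with moderate difficulty and high item-test correlation would be favored for selection.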
Divgi, D. R. – 1978
One aim of criterion-referenced testing is to classify an examinee without reference to a norm group; therefore, any statements about the dependability of such classification ought to be group-independent also. A population-independent index is proposed in terms of the probability of incorrect classification near the cutoff true score. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Difficulty Level, Error of Measurement
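One common way to formalize misclassification near a cutoff is through a binomial error model; the expression below is a sketch under that assumption and is not necessarily the index Divgi proposes. For an examinee with true score $\pi$ on an $n$-item test, observed cut score $c$, and cutoff true score $\pi_0$:

$$
P(\text{misclassification} \mid \pi) =
\begin{cases}
\Pr(X \ge c \mid \pi) = \displaystyle\sum_{x=c}^{n} \binom{n}{x}\pi^{x}(1-\pi)^{n-x}, & \pi < \pi_0 \\[2ex]
\Pr(X < c \mid \pi) = \displaystyle\sum_{x=0}^{c-1} \binom{n}{x}\pi^{x}(1-\pi)^{n-x}, & \pi \ge \pi_0
\end{cases}
$$

This probability depends only on the examinee's true score, not on the distribution of true scores in any norm group, which is the sense in which such an index can be population independent.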
Hambleton, Ronald K.; De Gruijter, Dato N. M. – Journal of Educational Measurement, 1983 (peer reviewed)
Addressing the shortcomings of classical item statistics for selecting criterion-referenced test items, this paper describes an optimal item selection procedure utilizing item response theory (IRT) and offers examples in which random selection and optimal item selection methods are compared. Theoretical advantages of optimal selection based upon…
Descriptors: Criterion Referenced Tests, Cutting Scores, Item Banks, Latent Trait Theory
Wilcox, Rand R. – 1981
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Descriptors: Achievement Tests, Criterion Referenced Tests, Guessing (Tests), Mathematical Models
Secolsky, Charles – Journal of Educational Measurement, 1983 (peer reviewed)
A model is presented that uses examinee judgments to detect ambiguous or misinterpreted items on teacher-made criterion-referenced tests. A computational example and guidelines for constructing domain categories and interpreting the indices are presented. (Author/PN)
Descriptors: Criterion Referenced Tests, Higher Education, Item Analysis, Mathematical Models
van der Linden, Wim J. – 1982
A latent trait method is presented to investigate the possibility that Angoff or Nedelsky judges specify inconsistent probabilities in standard setting techniques for objectives-based instructional programs. It is suggested that judges frequently specify a low probability of success for an easy item but a large probability for a hard item. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Error of Measurement, Interrater Reliability
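A minimal sketch of the kind of consistency check described above, assuming a Rasch model: a judge's specified success probabilities should decrease as item difficulty increases, and each specified probability implies a borderline ability level. The item difficulties, probabilities, and function names below are invented for illustration, not drawn from the paper.

```python
# Illustrative only: checking Angoff-style judgments against Rasch difficulties.
import math

item_difficulty = [-1.2, -0.4, 0.3, 1.5]      # Rasch b-parameters (easy -> hard)
judge_probability = [0.60, 0.70, 0.80, 0.55]  # probabilities specified by one judge

def implied_theta(p, b):
    """Ability implied by a specified success probability under the Rasch model."""
    return b + math.log(p / (1.0 - p))

thetas = [implied_theta(p, b) for p, b in zip(judge_probability, item_difficulty)]

# A consistent judge implies roughly the same borderline ability for every item;
# a large spread, or probabilities that rise with difficulty, signals inconsistency.
print(thetas)
```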
Cook, Linda L.; Hambleton, Ronald K. – 1978
Latent trait models may offer considerable potential for the improvement of educational measurement practices, but until recently, they have received only limited attention from measurement specialists. This paper provides a brief introduction to latent trait models, and provides test practitioners with a non-technical introduction to the…
Descriptors: Career Development, Criterion Referenced Tests, Difficulty Level, Item Analysis
Quellmalz, Edys S.; Shaha, Steven – 1982
The potential of a cognitive model task analysis scheme (CMS) that specifies features of test problems shown by research to affect performance is explored. CMS describes the general skill area and the generic task or problem type. It elaborates features of the problem situation and required responses found by research to influence performance.…
Descriptors: Academic Achievement, Cognitive Measurement, Criterion Referenced Tests, Elementary Secondary Education
Phillips, Gary W. – 1982
This paper presents an introduction to the use of latent trait models for the estimation of domain scores. It was shown that these models provided an advantage over classical test theory and binomial error models in that unbiased estimates of true domain scores could be obtained even when items were not randomly selected from a universe of items.…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Estimation (Mathematics), Goodness of Fit
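As a sketch of the underlying idea (the paper's exact estimator may differ), a latent trait estimate of an examinee's true domain score averages the item characteristic curves over all $N$ items in the domain, rather than only over the items that happened to appear on the test form:

$$
\hat{\pi}(\theta) = \frac{1}{N}\sum_{i=1}^{N} P_i(\theta)
$$

where $P_i(\theta)$ is the item characteristic curve for item $i$ and $\theta$ is the examinee's estimated ability. Because the $P_i$ are calibrated on a common scale, the estimate does not require the administered items to be a random sample from the domain.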
Thomas, Gregory P. – 1986
This paper argues that no single measurement strategy serves all purposes and that applying methods and techniques which allow a variety of data elements to be retrieved and juxtaposed may be an investment in the future. Item response theory, the Rasch model, and latent trait theory are all approaches to a single conceptual topic. An abbreviated look…
Descriptors: Achievement Tests, Adaptive Testing, Criterion Referenced Tests, Data Collection
Forster, Fred; Karr, Chad – 1987
This study investigates two fixed parameter models for calibrating test items for an item bank under local school district control. Mathematics, reading, and language tests administered to students (Grades 7-9) provided the criterion information. Model I used calibrations from the Northwest Evaluation Association Item Bank to calibrate new items…
Descriptors: Criterion Referenced Tests, Item Analysis, Item Banks, Junior High Schools
Silva, Sharron J. – 1985
Test item selection techniques based on traditional item analysis methods were compared to techniques based on item response theory. The consistency of mastery classifications in criterion referenced reading tests was examined. Pretest and posttest data were available for 945 first and second grade students and for 1796 fourth to sixth grade…
Descriptors: Analysis of Variance, Comparative Testing, Criterion Referenced Tests, Elementary Education
Bernknopf, Stan; And Others – 1979
The effectiveness of a model for determining a minimal cut-off score for criterion-referenced tests was examined. The model, based upon techniques presented originally by Nedelsky and by Angoff, was first used in conjunction with a multiple choice test developed for use in certifying school counselors in Georgia. A "knowledge estimation panel" was…
Descriptors: Counselor Certification, Court Litigation, Criterion Referenced Tests, Cutting Scores
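For context, the unmodified Angoff procedure that such models build on sets the cut score by summing, across items, each item's judge-averaged estimate of the probability that a minimally competent examinee answers it correctly; the modified model examined in the paper may aggregate differently.

$$
\hat{c} = \sum_{i=1}^{n}\bar{p}_i, \qquad \bar{p}_i = \frac{1}{J}\sum_{j=1}^{J} p_{ij}
$$

where $p_{ij}$ is judge $j$'s probability estimate for item $i$, $J$ is the number of judges, and $n$ is the number of items.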
Davis, John N.; Moriel, James, Jr. – 1980
In order to compose equivalent certification tests in the basic skills in reading, mathematics calculations and applications, and writing for high school graduation requirements, the San Jose Unified School District, California, used the Rasch computer program to arrive at the scaled difficulty level of test items. The procedure for achieving a…
Descriptors: Basic Skills, Criterion Referenced Tests, Cutting Scores, Difficulty Level
Nassif, Paula M.; And Others – 1979
A procedure which employs a method of item substitution based on item difficulty is recommended for developing parallel criterion referenced test forms. This procedure is currently being used in the Florida functional literacy testing program and the Georgia teacher certification testing program. Reasons for developing parallel test forms involve…
Descriptors: Criterion Referenced Tests, Difficulty Level, Equated Scores, Functional Literacy
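A minimal sketch of difficulty-based substitution of the kind described above: each operational item is replaced by the unused bank item whose classical difficulty is closest. The item pool, identifiers, and tolerance-free matching rule are assumptions made for illustration, not details from the programs cited.

```python
# Illustrative only: matching replacement items to targets by classical difficulty.
form_a_difficulties = [0.35, 0.50, 0.72, 0.81]   # p-values of items to be replaced
bank = {"item_101": 0.34, "item_205": 0.52, "item_318": 0.70, "item_412": 0.83}

def closest_match(target, pool, used):
    """Pick the unused bank item whose difficulty is nearest the target p-value."""
    candidates = {k: v for k, v in pool.items() if k not in used}
    return min(candidates, key=lambda k: abs(candidates[k] - target))

used = set()
parallel_form = []
for p in form_a_difficulties:
    item = closest_match(p, bank, used)
    used.add(item)
    parallel_form.append(item)

print(parallel_form)
```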

