Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Milewski, Glenn B.; Patelis, Thanos – 2001
The 1999 Advanced Placement[R] (AP[R]) Psychology Examination contains items drawn from 13 factors related to the study of psychology. This factor structure had not been explored previously. This study evaluates the fit of confirmatory factor analysis (CFA) models to the examination items. Since examination items were dichotomous and…
Descriptors: Advanced Placement, Factor Structure, Goodness of Fit, High School Students
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying students as a master/nonmaster or continuing testing and administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
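A sequential mastery rule of the kind this abstract describes can be sketched with Wald's binomial sequential probability ratio test. This is an illustrative simplification, not the paper's multidimensional IRT framework, and the proficiency and error-rate parameters below are assumed values:

```python
import math

def sprt_mastery(responses, p_master=0.8, p_nonmaster=0.6,
                 alpha=0.05, beta=0.05):
    """After each item, decide 'master', 'nonmaster', or 'continue'.

    Binomial SPRT sketch: p_master / p_nonmaster are hypothetical
    probabilities of a correct answer under each hypothesis; alpha and
    beta are the tolerated classification error rates.
    """
    upper = math.log((1 - beta) / alpha)   # classify as master at or above
    lower = math.log(beta / (1 - alpha))   # classify as nonmaster at or below
    llr = 0.0
    for x in responses:  # x is 1 (correct) or 0 (incorrect)
        p1 = p_master if x else 1 - p_master
        p0 = p_nonmaster if x else 1 - p_nonmaster
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "nonmaster"
    return "continue"  # administer another item or testlet
```

With these parameters, roughly eleven consecutive correct answers trigger a mastery decision, while a short run of errors ends testing with a non-mastery decision; otherwise testing continues, which is the defining feature of sequential mastery testing.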
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles; An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Louisiana State Department of Education, 2004
This document is part of a series of materials meant to promote understanding of the knowledge and skills students must have, and the kinds of work they must produce, to be successful on the LEAP 21. LEAP 21 is an integral part of the Louisiana school and district accountability system passed by the state legislature and signed into law in 1997. The…
Descriptors: Grade 8, Test Items, Social Studies, Scores
Peer reviewed: Wilcox, Rand R. – Educational and Psychological Measurement, 1982
When determining criterion-referenced test length, problems of guessing are shown to be more serious than expected. A new method of scoring is presented that corrects for guessing without assuming that guessing is random. Empirical investigations of the procedure are examined. Test length can be substantially reduced. (Author/CM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Scoring
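For context, the classical correction for guessing that such work builds on is the formula score R − W/(k − 1), under which a purely random guesser's expected score is zero. A minimal sketch of that baseline (Wilcox's own procedure, which does not assume random guessing, is not reproduced here):

```python
def corrected_score(num_right, num_wrong, num_choices):
    """Classical formula score R - W/(k - 1) for k-option items.

    Omitted items count neither way. Under purely random guessing,
    wrong answers outnumber lucky hits (k - 1) to 1 in expectation,
    so the expected corrected score is zero.
    """
    return num_right - num_wrong / (num_choices - 1)

# 40-item, 4-option test: 25 right, 15 wrong
print(corrected_score(25, 15, 4))   # 20.0
# a blind guesser's expected tallies on the same test: 10 right, 30 wrong
print(corrected_score(10, 30, 4))   # 0.0
```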
Peer reviewed: Owens, Robert E.; And Others – Language, Speech, and Hearing Services in Schools, 1983
The test item content of 17 language assessment tools widely used in school or preschool settings is displayed in tabular form to assist speech-language pathologists. By noting the categories that are marked, the clinician can compare the breadth of the tests. (SEW)
Descriptors: Diagnostic Tests, Grammar, Language Handicaps, Language Tests
Mellenbergh, Gideon J.; van der Linden, Wim J. – Evaluation in Education: International Progress, 1982
Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)
Descriptors: Criterion Referenced Tests, Educational Testing, Item Analysis, Latent Trait Theory
Peer reviewed: Gross, Leon J. – Evaluation and the Health Professions, 1982
Despite the 50 percent probability of a correctly guessed response, a multiple true-false examination should provide sufficient score variability for adequate discrimination without formula scoring. This scoring system directs examinees to respond to each item, with their scores based simply on the number of correct responses. (Author/CM)
Descriptors: Achievement Tests, Guessing (Tests), Health Education, Higher Education
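The score variability the abstract appeals to can be illustrated with a simple binomial model of blind guessing on true-false items, which is an assumption made here for illustration rather than the article's own analysis:

```python
import math

def guessing_score_stats(n_items, p=0.5):
    """Mean and standard deviation of the number-correct score when
    every response is an independent coin flip with success rate p
    (binomial model)."""
    mean = n_items * p
    sd = math.sqrt(n_items * p * (1 - p))
    return mean, sd

# On a 100-item multiple true-false exam, blind guessing centers
# scores at 50 with SD 5, leaving ample room above chance for
# discrimination under simple number-correct scoring.
print(guessing_score_stats(100))   # (50.0, 5.0)
```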
Peer reviewed: Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double versus single multiple-choice questions yielded significant differences in item difficulty, item discrimination, and internal reliability, but not in concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed: Meredith, Gerald M. – Perceptual and Motor Skills, 1982
The School of Architecture faculty posed the methodological problem of constructing a scale of 10 items or fewer to reliably evaluate instruction at different levels of technical and artistic instruction. Among the first 10 ordered items were: "The instructor did a good job" and "The course was worthwhile." (CM)
Descriptors: Architectural Education, Factor Analysis, Higher Education, Student Evaluation of Teacher Performance
Signer, Barbara – Computing Teacher, 1982
Describes computer program designed to diagnose student arithmetic achievement in following categories: number concepts, addition, subtraction, multiplication, and division. Capabilities of the program are discussed, including immediate diagnosis, tailored testing, test security (unique tests generated), generative responses (nonmultiple choice),…
Descriptors: Computer Assisted Testing, Computer Programs, Diagnostic Tests, Elementary Secondary Education
Peer reviewed: Kolstad, Rosemarie; And Others – Journal of Dental Education, 1982
Nonrestricted-answer multiple-choice test items are recommended as a way of including more facts and fewer incorrect answers in test items; they also do not cue successful guessing the way restricted multiple-choice items can. Examination construction, scoring, and reliability are discussed. (MSE)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed: Mentzer, Thomas L. – Educational and Psychological Measurement, 1982
Evidence of biases in the correct answers in multiple-choice test item files was found, including an "all of the above" bias, in which that answer was correct more than 25 percent of the time, and a bias toward the longest answer being correct too frequently. Seven bias types were studied. (Author/CM)
Descriptors: Educational Testing, Higher Education, Multiple Choice Tests, Psychology
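An audit of the kind this study describes can be sketched as a scan over an item file, tallying how often "all of the above" is the keyed answer and where correct answers fall by position. The dictionary layout below is hypothetical, chosen only for illustration:

```python
from collections import Counter

def answer_key_bias(items):
    """Scan an item bank for two of the bias types described.

    items: list of dicts with 'options' (list of answer strings) and
    'correct' (index of the keyed answer) -- a hypothetical layout.
    Returns the fraction of "all of the above" items where that option
    is keyed correct, plus a Counter of keyed-answer positions.
    """
    aota_total = aota_correct = 0
    positions = Counter()
    for item in items:
        positions[item["correct"]] += 1
        opts = [opt.lower() for opt in item["options"]]
        if "all of the above" in opts:
            aota_total += 1
            if opts[item["correct"]] == "all of the above":
                aota_correct += 1
    rate = aota_correct / aota_total if aota_total else 0.0
    return rate, positions
```

For a four-option item with one "all of the above" choice, an unbiased key would make that option correct about 25 percent of the time; a rate well above that is the bias the study reports.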
Peer reviewed: Popham, W. James – Reading Horizons, 1982
Details the steps followed in the development of the Basic Skills Word List. (FL)
Descriptors: Elementary Education, Readability, Reading Tests, Test Construction
Peer reviewed: Roger, Derek; And Others – Educational Review, 1981
Reports construction of a questionnaire for measuring secondary students' attitudes toward learning French. Suggests that the instrument may be easily adapted to other languages. An appendix provides a list of scaled items on the questionnaire. (SJL)
Descriptors: Attitude Measures, Factor Analysis, French, Questionnaires


