Publication Date

| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 9 |
| Since 2022 (last 5 years) | 50 |
| Since 2017 (last 10 years) | 103 |
| Since 2007 (last 20 years) | 160 |
Author

| Author | Records |
| --- | --- |
| Plake, Barbara S. | 7 |
| Huntley, Renee M. | 5 |
| Tollefson, Nona | 4 |
| Wainer, Howard | 4 |
| Baghaei, Purya | 3 |
| Bennett, Randy Elliot | 3 |
| Halpin, Glennelle | 3 |
| Katz, Irvin R. | 3 |
| Lunz, Mary E. | 3 |
| Allen, Nancy L. | 2 |
| Anderson, Paul S. | 2 |
Audience

| Audience | Records |
| --- | --- |
| Researchers | 8 |
| Policymakers | 1 |
| Practitioners | 1 |
| Teachers | 1 |
Location

| Location | Records |
| --- | --- |
| Germany | 8 |
| Turkey | 8 |
| Australia | 5 |
| China | 4 |
| Indonesia | 4 |
| Iran | 4 |
| United Kingdom (England) | 4 |
| Canada | 3 |
| Japan | 3 |
| Netherlands | 3 |
| Taiwan | 3 |
Laws, Policies, & Programs
| Pell Grant Program | 1 |
Bowman, Robert W., Jr.; Frary, Robert B. – 1983
College teachers often use norm-referenced classroom tests that are too easy to distinguish adequately among levels of student achievement, yet they are reluctant to adopt more difficult tests. We explored the basis for current practices concerning test difficulty through informal interviews and questionnaires completed by faculty members and…
Descriptors: Achievement Tests, Difficulty Level, Higher Education, Measurement Techniques
Roid, Gale H.; Wendler, Cathy L. W. – 1983
The development of the emerging technology of item writing was motivated in part by the desire to reduce potential subjectivity and bias between different item writers who attempt to construct parallel achievement tests. The present study contrasts four test forms constructed by the combined efforts of six item writers using four methods of item…
Descriptors: Achievement Tests, Difficulty Level, Intermediate Grades, Item Analysis
Oosterhof, Albert C.; Coats, Pamela K. – 1981
Instructors who develop classroom examinations that require students to provide a numerical response to a mathematical problem are often very concerned about the appropriateness of the multiple-choice format. The present study augments previous research relevant to this concern by comparing the difficulty and reliability of multiple-choice and…
Descriptors: Comparative Analysis, Difficulty Level, Grading, Higher Education
Roid, Gale; And Others – 1980
Using informal, objectives-based, or linguistic methods, three elementary school teachers and three experienced item writers developed criterion-referenced pretests-posttests to accompany a prose passage. Item difficulties were tabulated on the responses of 364 elementary students. The informal-subjective method, used by many achievement test…
Descriptors: Criterion Referenced Tests, Difficulty Level, Elementary Education, Elementary School Teachers
Burton, Nancy W.; And Others – 1976
Assessment exercises (items) in three different formats--multiple-choice with an "I don't know" (IDK) option, multiple-choice without the IDK, and open-ended--were placed at the beginning, middle and end of 45-minute assessment packages (instruments). A balanced incomplete blocks analysis of variance was computed to determine the biasing…
Descriptors: Age Differences, Difficulty Level, Educational Assessment, Guessing (Tests)
Rachor, Robert E.; Gray, George T. – 1996
Two frequently cited guidelines for writing multiple choice test item stems are: (1) the stem can be written in either a question or statement-to-be-completed format; and (2) only positively worded stems should be used. These guidelines were evaluated in a survey of the test item banks of 13 nationally administered examinations in the physician…
Descriptors: Allied Health Personnel, Difficulty Level, High Achievement, Item Banks
Bethscheider, Janine K. – 1992
Standard and experimental forms of the Johnson O'Connor Research Foundation's Analytical Reasoning test were administered to 1,496 clients of the Foundation (persons seeking information about aptitude for educational and career decisions). The objectives were to develop a new form of the test and to better understand what makes some items more…
Descriptors: Adults, Aptitude Tests, Career Choice, Comparative Testing
Dowd, Steven B. – 1992
An alternative to multiple-choice (MC) testing is suggested as it pertains to the field of radiologic technology education. General principles for writing MC questions are given and contrasted with a new type of MC question, the alternate-choice (AC) question, in which the answer choices are embedded in the question in a short form that resembles…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Higher Education
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Smith, Robert L.; Carlson, Alfred B. – 1995
The feasibility of constructing test forms with practically equivalent cut scores using judges' estimates of item difficulty as target "statistical" specifications was investigated. Test forms with equivalent judgmental cut scores (based on judgments of item difficulty) were assembled using items from six operational forms of the…
Descriptors: Cutting Scores, Decision Making, Difficulty Level, Equated Scores
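The abstract above describes assembling test forms so that judges' item-difficulty estimates match a target statistical specification. A minimal sketch of one such assembly rule, with an invented item pool and a simple greedy heuristic (an illustration only, not the authors' actual procedure):

```python
# Hypothetical pool: judges' difficulty estimates (expected proportion
# correct, 0..1). All item IDs and values are invented for illustration.
pool = {"A1": 0.35, "A2": 0.48, "A3": 0.55, "A4": 0.62,
        "B1": 0.40, "B2": 0.51, "B3": 0.67, "B4": 0.73}

def assemble_form(pool, n_items, target_mean):
    """Greedily pick items so the form's mean judged difficulty
    stays as close as possible to the target specification."""
    chosen, remaining = [], dict(pool)
    for _ in range(n_items):
        # Take the item that keeps the running mean nearest the target.
        best = min(remaining, key=lambda k:
                   abs((sum(pool[c] for c in chosen) + pool[k])
                       / (len(chosen) + 1) - target_mean))
        chosen.append(best)
        del remaining[best]
    return chosen

form = assemble_form(pool, n_items=4, target_mean=0.55)
```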
Linacre, John M. – 1987
This paper describes a computer program in Microsoft BASIC which selects and administers test items from a small item bank. The level of the difficulty of the item selected depends on the test taker's previous response. This adaptive system is based on the Rasch model. The Rasch model uses a unit of measurement based on the logarithm of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Individual Testing
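A minimal sketch of the adaptive loop this abstract describes, assuming a toy item bank with invented values (the original program was written in Microsoft BASIC; this Python version uses a simplified step-size ability update rather than a full re-estimation):

```python
import math

# Toy item bank; difficulties are in logits (log-odds units) and invented.
ITEM_BANK = [
    {"id": "Q1", "difficulty": -1.5},
    {"id": "Q2", "difficulty": -0.5},
    {"id": "Q3", "difficulty": 0.0},
    {"id": "Q4", "difficulty": 0.7},
    {"id": "Q5", "difficulty": 1.4},
]

def rasch_probability(ability, difficulty):
    """Rasch model: P(correct) = exp(b - d) / (1 + exp(b - d)),
    where ability b and difficulty d are both in logits."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def next_item(ability, administered):
    """Select the unused item whose difficulty is closest to the current
    ability estimate; for that item P(correct) is near 0.5, which is where
    a response is most informative."""
    remaining = [it for it in ITEM_BANK if it["id"] not in administered]
    if not remaining:
        return None
    return min(remaining, key=lambda it: abs(it["difficulty"] - ability))

# One simulated exchange: start at ability 0.0, answer correctly, move up.
ability, administered = 0.0, set()
item = next_item(ability, administered)  # picks Q3 (difficulty 0.0)
administered.add(item["id"])
ability += 0.5  # crude step-up after a correct response (a real program
                # would re-estimate ability by maximum likelihood)
```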
Bolden, Bernadine J.; Stoddard, Ann – 1980
This study examined the effects of two ways of phrasing questions, in relation to three styles of expository writing, on the test performance of elementary school children. Multiple-choice questions were developed for sets of passages that were written using three different syntactic structures and that had different levels of difficulty. The…
Descriptors: Difficulty Level, Elementary Education, Kernel Sentences, Multiple Choice Tests
Kumar, V. K.; And Others – Contemporary Educational Psychology, 1979 (peer reviewed)
Ninth-graders read a passage for a test to be taken the next day, anticipating a recall test, a multiple-choice test, and a retention test. Half received either a recall or a recognition test regardless of prior instructions. Subjects did better on the recognition tests in all conditions. (Author/RD)
Descriptors: Difficulty Level, Educational Testing, Expectation, Junior High Schools
Test-Retest Analyses of the Test of English as a Foreign Language. TOEFL Research Report 45.
Henning, Grant – 1993
This study provides information about the total and component scores of the Test of English as a Foreign Language (TOEFL). First, the study provides comparative global and component estimates of test-retest, alternate-form, and internal-consistency reliability, controlling for sources of measurement error inherent in the examinees and the testing…
Descriptors: Difficulty Level, English (Second Language), Error of Measurement, Estimation (Mathematics)
Katz, Irvin R.; Friedman, Debra E.; Bennett, Randy Elliot; Berger, Aliza E. – College Entrance Examination Board, 1996
This study investigated the strategies subjects adopted to solve stem-equivalent SAT-Mathematics (SAT-M) word problems in constructed-response (CR) and multiple-choice (MC) formats. Parallel test forms of CR and MC items were administered to subjects representing a range of mathematical abilities. Format-related differences in difficulty were more…
Descriptors: Multiple Choice Tests, College Entrance Examinations, Problem Solving, Cognitive Style


