Publication Date
| In 2026 | 0 |
| Since 2025 | 11 |
| Since 2022 (last 5 years) | 66 |
| Since 2017 (last 10 years) | 144 |
| Since 2007 (last 20 years) | 255 |
Descriptor
| Difficulty Level | 492 |
| Item Analysis | 492 |
| Test Items | 377 |
| Test Construction | 153 |
| Foreign Countries | 118 |
| Multiple Choice Tests | 103 |
| Test Validity | 95 |
| Item Response Theory | 91 |
| Test Reliability | 89 |
| Comparative Analysis | 80 |
| Statistical Analysis | 79 |
Author
| Reckase, Mark D. | 6 |
| Lord, Frederic M. | 5 |
| Roid, Gale | 4 |
| Bratfisch, Oswald | 3 |
| Cahen, Leonard S. | 3 |
| Dorans, Neil J. | 3 |
| Dunne, Tim | 3 |
| Facon, Bruno | 3 |
| Hambleton, Ronald K. | 3 |
| Huck, Schuyler W. | 3 |
| Kostin, Irene | 3 |
Audience
| Researchers | 34 |
| Practitioners | 4 |
| Teachers | 2 |
Location
| Indonesia | 8 |
| Nigeria | 8 |
| Turkey | 8 |
| Germany | 7 |
| Taiwan | 7 |
| South Africa | 6 |
| United States | 6 |
| Canada | 5 |
| India | 5 |
| China | 4 |
| Florida | 4 |
Laws, Policies, & Programs
| Education Consolidation… | 1 |
| Elementary and Secondary… | 1 |
| No Child Left Behind Act 2001 | 1 |
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
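Compensatory multidimensional partial credit models of the kind this abstract describes generally take a form like the following (the notation here is a generic generalized-partial-credit sketch for illustration, not necessarily Yao and Schwarz's exact M-2PPC parameterization):

```latex
P(X_j = k \mid \boldsymbol{\theta})
  = \frac{\exp\!\Big(\sum_{v=0}^{k}\big(\mathbf{a}_j^{\top}\boldsymbol{\theta} - b_{jv}\big)\Big)}
         {\sum_{m=0}^{K_j}\exp\!\Big(\sum_{v=0}^{m}\big(\mathbf{a}_j^{\top}\boldsymbol{\theta} - b_{jv}\big)\Big)},
\qquad
\mathbf{a}_j^{\top}\boldsymbol{\theta} = \sum_{d} a_{jd}\,\theta_d,\quad b_{j0} \equiv 0,
```

where $k$ indexes the score categories of constructed-response item $j$. The model is "compensatory" because a high ability on one dimension can offset a low ability on another through the weighted sum $\mathbf{a}_j^{\top}\boldsymbol{\theta}$.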
Bennett, Randy Elliot; And Others – 1988
This study developed, applied, and evaluated a theory-based method of detecting the underlying causes of differential difficulty. The method was applied to two subgroups taking the Scholastic Aptitude Test-Mathematics (SAT-M), 261 visually impaired students taking Braille forms of the test and 1,985 black students at 3 test administrations. It…
Descriptors: Black Students, Braille, Cluster Analysis, Difficulty Level
Bethscheider, Janine K. – 1992
Standard and experimental forms of the Johnson O'Connor Research Foundation's Analytical Reasoning test were administered to 1,496 clients of the Foundation (persons seeking information about aptitude for educational and career decisions). The objectives were to develop a new form of the test and to better understand what makes some items more…
Descriptors: Adults, Aptitude Tests, Career Choice, Comparative Testing
Sinharay, Sandip; Johnson, Matthew – ETS Research Report Series, 2005
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate/produce items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996; Bejar, 2002). They have the potential to produce large number of high-quality items at reduced cost. This paper introduces…
Descriptors: Item Analysis, Test Items, Scoring, Psychometrics
Gonzalez-Tamayo, Eulogio – 1987
The agreement between the Educational Testing Service (ETS) and the Golden Rule Insurance Company of Illinois is interpreted as setting the general principles on which items must be selected for inclusion in a licensure test. These principles place a limit on the difficulty level of any item, and they also limit the size of the difference in…
Descriptors: Analysis of Variance, Content Validity, Difficulty Level, Item Analysis
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Simpson, Deborah E.; Cohen, Elsa B. – 1985
This paper reports a multi-method approach for examining the cognitive level of multiple-choice items used in a medical pathology course at a large midwestern medical school. Analysis of standard item analysis data and think-aloud reports from a sample of students completing a 66-item examination were used to test assumptions related to the…
Descriptors: Abstract Reasoning, Cognitive Objectives, Difficulty Level, Graduate Medical Education
Percy, Virginia R.; Smith, Richard M. – 1982
Replicating a 1978 study of methods of setting cut-off scores in standardized tests for student placement, the Rasch model was used to estimate ability levels and item difficulty for use in determining the probability of course success. Community college students were administered the New Jersey College Basic Skills Placement Tests and items were…
Descriptors: Achievement Tests, Basic Skills, Computer Oriented Programs, Difficulty Level
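Under the Rasch model used in the study above, the probability of success depends only on the gap between a person's ability and an item's difficulty, both on the same logit scale, and ability can be estimated from a response pattern by maximum likelihood. A minimal sketch (illustrative only; the New Jersey placement-test data and the study's estimation software are not reproduced here):

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response for ability theta, difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=20):
    """Newton-Raphson maximum-likelihood ability estimate.
    responses: 0/1 scores; difficulties: known Rasch item difficulties.
    Assumes a mixed response pattern (all-correct or all-wrong has no finite MLE)."""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))   # score function
        hess = -sum(p * (1.0 - p) for p in ps)             # observed information (negated)
        theta -= grad / hess
    return theta
```

Once thetas and difficulties are calibrated, `rasch_p(theta, b_cutoff_item)` gives exactly the kind of probability-of-success figure a cut-score study needs.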
Robertson, David W.; And Others – 1977
A comparative study of item analysis was conducted on the basis of race to determine whether alternative test construction or processing might increase the proportion of black enlisted personnel among those passing various military technical knowledge examinations. The study used data from six specialists at four grade levels and investigated item…
Descriptors: Difficulty Level, Enlisted Personnel, Item Analysis, Occupational Tests
Wearne, Diana Catherine – 1976
A test of problem solving behavior which provides information about the mastery of the prerequisites of the problems has been developed for fourth grade children. Each problem solving question is preceded by two other questions which assess the child's understanding of the information contained in the actual question, and an application question…
Descriptors: Difficulty Level, Elementary Education, Elementary School Students, Grade 4
Ironson, Gail H. – 1978
Four statistical methods for identifying biased test items were used with data from two ethnic groups (1,691 black and 1,794 white high school seniors). The data were responses to 150 items in five subtests including two traditional tests (reading and mathematics) and three nontraditional tests (picture number test of associative memory, letter…
Descriptors: Aptitude Tests, Comparative Analysis, Culture Fair Tests, Difficulty Level
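One classic screening method for biased items (Angoff's delta plot; the snippet does not say whether it was among the four methods Ironson compared) converts each group's proportion-correct into a normal-deviate "delta" and flags items that fall far from the major axis of the two groups' delta scatter. A self-contained sketch:

```python
import math
from statistics import NormalDist, mean

def delta(p):
    """Angoff delta metric: 13 - 4 * z(p); harder items get larger deltas."""
    return 13.0 - 4.0 * NormalDist().inv_cdf(p)

def delta_plot_distances(p_ref, p_focal):
    """Perpendicular distance of each item from the major (principal) axis of the
    delta-vs-delta scatter; large |distance| suggests the item behaves differently
    across groups. p_ref, p_focal: per-item proportions correct in each group."""
    x = [delta(p) for p in p_ref]
    y = [delta(p) for p in p_focal]
    mx, my = mean(x), mean(y)
    sxx = mean([(u - mx) ** 2 for u in x])
    syy = mean([(v - my) ** 2 for v in y])
    sxy = mean([(u - mx) * (v - my) for u, v in zip(x, y)])
    # slope/intercept of the major axis (standard delta-plot formula)
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return [(b * u - v + a) / math.sqrt(b * b + 1.0) for u, v in zip(x, y)]
```

A common rule of thumb flags items with |distance| above roughly 1.5 delta units for review.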
Hambleton, Ronald K.; And Others – 1987
The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…
Descriptors: Comparative Analysis, Content Validity, Cutting Scores, Difficulty Level
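"Optimal" IRT item selection in the sense this abstract uses it typically means picking the items with the greatest Fisher information at the cut score. A minimal sketch under the two-parameter logistic model (the study's actual models, pool, and cut score are not reproduced; the parameter values below are made up):

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def optimal_selection(pool, theta_cut, n_items):
    """pool: list of (a, b) pairs. Return the n_items most informative at the cut score."""
    return sorted(pool, key=lambda ab: info_2pl(theta_cut, *ab), reverse=True)[:n_items]
```

Information peaks where difficulty matches the cut score and grows with the square of discrimination, which is why optimal selection concentrates highly discriminating items near the passing standard.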
Linacre, John M. – 1987
This paper describes a computer program in Microsoft BASIC which selects and administers test items from a small item bank. The difficulty of each item selected depends on the test taker's previous response. This adaptive system is based on the Rasch model, which uses a unit of measurement based on the logarithm of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Individual Testing
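The selection logic Linacre describes can be sketched in a few lines (this is not the original BASIC program, just an illustrative Python equivalent with a made-up fixed-step update): each item administered is the unused one whose Rasch difficulty, in logits, lies closest to the current ability estimate, which moves up after a correct answer and down after an incorrect one.

```python
def next_item(bank, used, theta):
    """Pick the unused item whose Rasch difficulty (logits) is closest to theta."""
    candidates = [i for i in range(len(bank)) if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta))

def administer(bank, answer, n_items=5, step=0.7):
    """Simple step-rule adaptive test: theta moves by `step` logits per response.
    bank: item difficulties; answer(i) returns True if item i is answered correctly."""
    theta, used = 0.0, set()
    for _ in range(n_items):
        i = next_item(bank, used, theta)
        used.add(i)
        theta += step if answer(i) else -step
    return theta, used
```

Operational adaptive systems replace the fixed step with a maximum-likelihood or Bayesian update, but the select-nearest-difficulty loop is the same.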
Tanner, David E. – 1986
A multiple choice achievement test was constructed in which both cognitive level and degree of abstractness were controlled. Subjects were 75 students from a major university in the Southwest. A group of 13 judges, also university students, classified the concepts for degree of abstractness. Results indicated that both cognitive level and degree…
Descriptors: Abstract Reasoning, Achievement Tests, Analysis of Variance, Cognitive Processes
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
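The kind of Monte Carlo data generation described above, illustrated here with the three-parameter logistic model rather than Samejima's normal ogive (item parameter values are invented for the example):

```python
import random
from math import exp

def p_3pl(theta, a, b, c):
    """3PL probability of a correct answer: slope a, difficulty b, guessing floor c."""
    return c + (1.0 - c) / (1.0 + exp(-a * (theta - b)))

def simulate(items, n_examinees, rng=None):
    """items: list of (a, b, c) triples. Returns an n_examinees x len(items) 0/1 matrix
    of simulated responses, with abilities sampled from N(0, 1)."""
    rng = rng or random.Random(0)
    data = []
    for _ in range(n_examinees):
        theta = rng.gauss(0.0, 1.0)
        data.append([1 if rng.random() < p_3pl(theta, *it) else 0 for it in items])
    return data
```

Feeding such simulated matrices back into an estimation program (Logist, in the study) is the standard way to check how well item parameters are recovered under a misspecified model.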