Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Difficulty Level | 20 |
| Mathematical Models | 20 |
| Statistical Analysis | 20 |
| Test Items | 17 |
| Item Analysis | 11 |
| Latent Trait Theory | 8 |
| Test Construction | 5 |
| Test Reliability | 5 |
| Estimation (Mathematics) | 4 |
| Factor Analysis | 4 |
| Goodness of Fit | 4 |
Source
| Applied Psychological… | 1 |
| International Journal of… | 1 |
| Journal of Educational… | 1 |
| Online Submission | 1 |
| Psychometrika | 1 |
Publication Type
| Reports - Research | 15 |
| Speeches/Meeting Papers | 6 |
| Journal Articles | 4 |
| Reports - Evaluative | 2 |
| Guides - General | 1 |
| Guides - Non-Classroom | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - General | 1 |
Audience
| Researchers | 4 |
Assessments and Surveys
| Program for International… | 1 |
| Stanford Binet Intelligence… | 1 |
Dorozhkin, Evgenij M.; Chelyshkova, Marina B.; Malygin, Alexey A.; Toymentseva, Irina A.; Anopchenko, Tatiana Y. – International Journal of Environmental and Science Education, 2016
The relevance of the investigated problem is determined by the need to improve evaluation procedures in education and student assessment in the context of widening education, the development of new modes of study (such as blended learning, e-learning, and massive open online courses), the necessity of immediate feedback, and reliable and valid…
Descriptors: Student Evaluation, Evaluation Methods, Item Response Theory, Mathematical Models
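The item response theory models recurring throughout these abstracts reduce, in the simplest (Rasch, one-parameter logistic) case, to a logistic function of the difference between person ability and item difficulty. A minimal sketch of that model, illustrative rather than any listed author's implementation:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability that a person with ability theta answers an item
    of difficulty b correctly under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee whose ability equals the item's difficulty has an even chance.
print(rasch_prob(0.0, 0.0))   # 0.5
```

The same function appears, reparameterized, in the two- and three-parameter models mentioned in later entries.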
Lorié, William A. – Online Submission, 2013
A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…
Descriptors: Numeracy, Mathematical Concepts, Mathematical Logic, Difficulty Level
Peer reviewed
Harris, Chester W.; Pearlman, Andrea Pastorok – Journal of Educational Statistics, 1978
A theory and a procedure are presented for estimating a domain parameter and item parameters for test items in a homogeneous domain, such that the combined domain and item parameters account for observed proportions right for each item in a test. (CTM)
Descriptors: Achievement Tests, Difficulty Level, Item Analysis, Mathematical Models
Engelen, Ron J. H.; And Others – 1988
Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling examinees from an ability distribution is made. For the…
Descriptors: Ability, Difficulty Level, Foreign Countries, Latent Trait Theory
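The quantity Engelen and others investigate, Fisher information for the Rasch item difficulty parameter, takes the simple form P(1 - P) for a single response. A hedged sketch of that quantity under the standard Rasch model (function names are mine, not the paper's notation):

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta: float, b: float) -> float:
    """Fisher information about the difficulty parameter b (equivalently,
    about ability theta) from one response: I = P * (1 - P)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

# Information peaks at 0.25 where the examinee's ability matches
# the item's difficulty, and falls off in both directions.
print(item_information(0.0, 0.0))   # 0.25
```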
Peer reviewed
Yen, Wendy M. – Psychometrika, 1985
An approximate relationship is devised between the unidimensional model used in data analysis and a multidimensional model hypothesized to be generating the item responses. Scale shrinkage is successfully predicted for several sets of simulated data. (Author/LMO)
Descriptors: Difficulty Level, Hypothesis Testing, Item Analysis, Latent Trait Theory
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 1979
The restrictions on item difficulties that must be met when binomial models are applied to domain-referenced testing are examined. Both a deterministic and a stochastic conception of item responses are discussed with respect to difficulty and Guttman-type items. (Author/BH)
Descriptors: Difficulty Level, Item Sampling, Latent Trait Theory, Mathematical Models
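The binomial models van der Linden examines treat an examinee's number-correct score on n items sampled from a domain as Binomial(n, p), where p is the examinee's domain score. A small illustrative sketch under that assumption (names are mine):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for a number-correct score X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def prob_pass(n: int, p: float, cutoff: int) -> float:
    """Probability of scoring at or above a mastery cutoff on an
    n-item test, given domain score p."""
    return sum(binomial_pmf(k, n, p) for k in range(cutoff, n + 1))
```

The model assumes items of equal difficulty for a given examinee, which is exactly the restriction on item difficulties the abstract says must be examined.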
Fisk, Yvette Hester – 1991
The reasons for recent endeavors to evaluate item bias are discussed, and item bias is defined. Some of the literature regarding the most promising methods of detecting item bias is reviewed. Three classes of methods for detecting item bias are discussed using concrete examples and illustrations. These methods are: (1) latent trait; (2)…
Descriptors: Chi Square, Comparative Analysis, Difficulty Level, Item Bias
Spray, Judith A.; Welch, Catherine J. – 1986
The purpose of this study was to examine the effect that large within-examinee item difficulty variability had on estimates of the proportion of consistent classification of examinees into mastery categories over two test administrations. The classification consistency estimate was based on a single test administration from an estimation procedure…
Descriptors: Adults, Difficulty Level, Estimation (Mathematics), Mathematical Models
Choppin, Bruce – 1982
A strategy for overcoming problems with the Rasch model's inability to handle missing data involves a pairwise algorithm which manipulates the data matrix to separate out the information needed for the estimation of item difficulty parameters in a test. The method of estimation compares two or three items at a time, separating out the ability…
Descriptors: Difficulty Level, Estimation (Mathematics), Goodness of Fit, Item Analysis
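Choppin's pairwise idea can be illustrated with a toy example: for two items, count examinees who got one right and the other wrong; under the Rasch model the log of the ratio of those counts estimates the difficulty gap, and missing responses simply drop out of the counts. A hedged sketch of that comparison (not Choppin's actual algorithm, which chains many such pairs):

```python
import math

def pairwise_difficulty_gap(responses, i, j):
    """Estimate b_j - b_i from a 0/1 response matrix via pairwise
    counts: log(n_ij / n_ji), where n_ij is the number of examinees
    answering item i right and item j wrong. Missing responses
    (None) for either item are skipped, so incomplete data is usable."""
    n_ij = n_ji = 0
    for row in responses:
        a, b = row[i], row[j]
        if a is None or b is None:
            continue
        if a == 1 and b == 0:
            n_ij += 1
        elif a == 0 and b == 1:
            n_ji += 1
    return math.log(n_ij / n_ji)

data = [
    [1, 0], [1, 0], [1, 0], [1, 1],
    [0, 1], [0, 0], [1, None],
]
# Three examinees got item 0 right and item 1 wrong, one the reverse,
# so item 1 is estimated to be harder: gap = log(3/1) > 0.
```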
Kreines, David C.; Mead, Ronald J. – 1979
An explanation is given of what is meant by "sample-free" item calibration and by "item-free" person measurement as these terms are applied to the one-parameter logistic test theory model of Georg Rasch. When the difficulty of an item is calibrated separately for two different samples the results may differ; but, according to the…
Descriptors: Difficulty Level, Equated Scores, Goodness of Fit, Item Analysis
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Ackerman, Phillip L.; And Others – 1982
Statistical methods employed to test individual differences in dual-task performance and the existence of a general time-sharing ability are reviewed and critiqued. Specifically, both the types of data being collected and the types of data analysis procedures have been inadequate for the critical evaluation of a hypothetical…
Descriptors: Attention Control, Cognitive Processes, Difficulty Level, Factor Analysis
Lord, Frederic M. – 1971
Some stochastic approximation procedures are considered in relation to the problem of choosing a sequence of test questions to accurately estimate a given examinee's standing on a psychological dimension. Illustrations are given evaluating certain procedures in a specific context. (Author/CK)
Descriptors: Academic Ability, Adaptive Testing, Computer Programs, Difficulty Level
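The stochastic approximation procedures Lord considers are in the Robbins-Monro family: administer an item near the current ability estimate, then move the estimate up after a right answer and down after a wrong one, with a shrinking step size. A simulation sketch in that spirit, illustrative only and not Lord's exact scheme:

```python
import math
import random

def rasch_prob(theta: float, b: float) -> float:
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def robbins_monro_estimate(true_theta: float, n_items: int = 400,
                           step: float = 2.0, seed: int = 0) -> float:
    """Adaptive ability estimation by stochastic approximation:
    each item is targeted at the current estimate, and the estimate
    moves by (step / k) * (u - 0.5) after scored response u."""
    rng = random.Random(seed)
    theta = 0.0
    for k in range(1, n_items + 1):
        b = theta                      # next item pitched at current estimate
        u = 1 if rng.random() < rasch_prob(true_theta, b) else 0
        theta += (step / k) * (u - 0.5)
    return theta
```

Because the expected update is zero only when the estimate equals the examinee's true ability, the sequence drifts toward the true value as the steps shrink.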
A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.
Benson, Jeri; Wilson, Michael – 1979
Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…
Descriptors: Comparative Analysis, Difficulty Level, Efficiency, Error of Measurement
Merz, William R.; Grossen, Neal E. – 1978
Six approaches to assessing test item bias were examined: transformed item difficulty, point biserial correlations, chi-square, factor analysis, one parameter item characteristic curve, and three parameter item characteristic curve. Data sets for analysis were generated by a Monte Carlo technique based on the three parameter model; thus, four…
Descriptors: Difficulty Level, Evaluation Methods, Factor Analysis, Item Analysis
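The first of the six approaches Merz and Grossen examine, transformed item difficulty, conventionally maps each group's proportion correct onto the normal-deviate "delta" scale before comparing groups. A hedged sketch of that transform (the standard ETS-style scaling; not necessarily the exact variant in the paper):

```python
from statistics import NormalDist

def delta_value(p_correct: float) -> float:
    """Transformed item difficulty: delta = 13 + 4 * z, where z is
    the standard normal deviate cutting off the proportion correct.
    Harder items (lower p) get larger deltas."""
    z = NormalDist().inv_cdf(1.0 - p_correct)
    return 13.0 + 4.0 * z

# In a delta plot, each item's delta in one group is plotted against
# its delta in the other; items far from the common trend line are
# flagged as potentially biased.
print(delta_value(0.5))   # 13.0
```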