Publication Date
| Period | Results |
|---|---|
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Results |
|---|---|
| Difficulty Level | 102 |
| Mathematical Models | 102 |
| Test Items | 102 |
| Item Analysis | 49 |
| Latent Trait Theory | 48 |
| Test Construction | 25 |
| Goodness of Fit | 21 |
| Estimation (Mathematics) | 19 |
| Statistical Analysis | 17 |
| Test Reliability | 16 |
| Item Response Theory | 15 |
Audience
| Audience | Results |
|---|---|
| Researchers | 25 |
Braun, Thorsten; Stierle, Rolf; Fischer, Matthias; Gross, Joachim – Chemical Engineering Education, 2023
Contributing to a competency model for engineering thermodynamics, we investigate the empirical competency structure of our exams in an attempt to answer the question: Do we test the competencies we want to convey to our students? We demonstrate that thermodynamic modeling and mathematical solution emerge as significant dimensions of thermodynamic…
Descriptors: Thermodynamics, Consciousness Raising, Engineering Education, Test Format
Ong, Yoke Mooi; Williams, Julian; Lamprianou, Iasonas – International Journal of Research & Method in Education, 2013
Researchers interested in exploring substantive group differences are increasingly attending to bundles of items (or testlets): the aim is to understand how gender differences, for instance, are explained by differential performances on different types or bundles of items, hence differential bundle functioning (DBF). Some previous work has…
Descriptors: Mathematics Tests, Gender Differences, Mathematics Instruction, Mathematical Models
Rahman, Nazia – ProQuest LLC, 2013
Samejima hypothesized that non-monotonically increasing item response functions (IRFs) of ability might occur for multiple-choice items (referred to here as "Samejima items") if low ability test takers with some, though incomplete, knowledge or skill are drawn to a particularly attractive distractor, while very low ability test takers…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Probability
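The hypothesized mechanism is easy to reproduce in simulation. Below is a minimal, self-contained sketch (all parameter values are my own illustrative assumptions, not Rahman's): examinees with partial knowledge are lured to one attractive distractor, so the probability of a correct response dips below the chance level before rising, yielding a non-monotonic IRF.

```python
import numpy as np

# Minimal sketch of a "Samejima item" (parameters are illustrative
# assumptions, not taken from the dissertation): a 4-option item where
# partial knowledge lures examinees to one attractive distractor.

def p_correct(theta):
    p_know = 1 / (1 + np.exp(-1.5 * (theta - 0.5)))  # full knowledge -> correct
    p_lure = 1 / (1 + np.exp(-1.5 * (theta + 1.0)))  # partial knowledge -> distractor
    # Without full knowledge: lured examinees answer wrong; the rest guess
    # at random among the 4 options (chance of a correct guess = 0.25).
    return p_know + (1 - p_know) * (1 - p_lure) * 0.25

for theta in np.linspace(-3, 3, 7):
    print(f"theta={theta:+.1f}  P(correct)={p_correct(theta):.3f}")
```

The printed curve falls below the 0.25 chance level near theta = -1 before rising toward 1, which is the non-monotonic shape the abstract describes.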
Lorié, William A. – Online Submission, 2013
A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…
Descriptors: Numeracy, Mathematical Concepts, Mathematical Logic, Difficulty Level
Yildirim, Huseyin H.; Yildirim, Selda – Hacettepe University Journal of Education, 2011
Multivariate matching in Differential Item Functioning (DIF) analyses may contribute to understanding the sources of DIF. In this context, detecting appropriate additional matching variables is a crucial issue. This article argues that the variables which are correlated with communalities in item difficulties can be used as an additional…
Descriptors: Test Bias, Multivariate Analysis, Probability, Regression (Statistics)
Butter, Rene; De Boeck, Paul – Psychometrika, 1998 (peer reviewed)
An item response theory model based on the Rasch model is proposed for composite tasks, those decomposed into subtasks of different kinds. The model, which is illustrated with an application to spelling tasks, constrains the difficulties of the composite tasks to be linear combinations of the difficulties of the subtask items. (SLD)
Descriptors: Difficulty Level, Item Response Theory, Mathematical Models, Spelling
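The constraint described is in the spirit of the linear logistic test model; a sketch of the structure in my own notation (beta_i is the composite-task difficulty, beta*_k the subtask difficulties, and q_ik weights given by the task decomposition):

```latex
P(X_i = 1 \mid \theta) \;=\; \frac{\exp(\theta - \beta_i)}{1 + \exp(\theta - \beta_i)},
\qquad
\beta_i \;=\; \sum_{k} q_{ik}\, \beta^{*}_{k}
```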
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
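A quick way to see what a violation of local independence does to an empirical item characteristic curve is a simulation in the spirit described; the dependency mechanism and all numbers below are my assumptions, not the authors' model:

```python
import numpy as np

# Sketch of one dependency mechanism (an assumption for illustration):
# a correct response to item 1 adds a boost to the log-odds of item 2,
# violating local independence.
rng = np.random.default_rng(1)
theta = rng.normal(size=200_000)

def p(x):  # Rasch-type logistic
    return 1 / (1 + np.exp(-x))

b1 = b2 = 0.0
dep = 1.5  # dependency boost on the logit scale
u1 = rng.random(theta.size) < p(theta - b1)
u2 = rng.random(theta.size) < p(theta - b2 + dep * u1)

# Empirical ICC of item 2 in ability bins vs. the local-independence curve
for lo in (-2.0, -1.0, 0.0, 1.0):
    m = (theta >= lo) & (theta < lo + 1)
    mid = lo + 0.5
    print(f"theta~{mid:+.1f}: empirical={u2[m].mean():.3f}  independent={p(mid - b2):.3f}")
```

The empirical curve sits above the independence curve everywhere, and by a varying amount, so the dependent item's apparent difficulty and discrimination are both distorted.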
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
Groome, Mary Lynn; Groome, William R. – 1979
Angoff's method for identifying possible biased test items was applied to four computer-generated hypothetical tests, two of which contained no biased items and two of which contained a few biased items. The tests were generated to match specifications of a latent trait model. Angoff's method compared item difficulty estimates for two different…
Descriptors: Difficulty Level, Identification, Item Analysis, Mathematical Models
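For readers unfamiliar with the method, the delta-plot idea is straightforward to sketch in code. The data, the 1.5 flagging threshold, and other details below are hypothetical illustrations, not the study's values: convert each group's proportion correct to the ETS delta scale, fit the major axis of the scatter, and flag items far from that line as possible DIF.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical proportion-correct values for a reference and a focal group
p_ref = np.array([0.85, 0.70, 0.60, 0.55, 0.40, 0.90])
p_foc = np.array([0.80, 0.65, 0.35, 0.50, 0.35, 0.88])

d_ref = 13 + 4 * norm.ppf(1 - p_ref)   # traditional delta transform
d_foc = 13 + 4 * norm.ppf(1 - p_foc)

# Major (principal) axis of the delta scatter
cov = np.cov(d_ref, d_foc)
eigvals, eigvecs = np.linalg.eigh(cov)
vx, vy = eigvecs[:, -1]                # direction of largest variance
slope = vy / vx
intercept = d_foc.mean() - slope * d_ref.mean()

# Perpendicular distance of each item from the major axis;
# large distances suggest the item behaves differently across groups
dist = (slope * d_ref - d_foc + intercept) / np.hypot(slope, 1.0)
for i, d in enumerate(dist):
    print(f"item {i}: distance = {d:+.2f}" + ("  <- flag" if abs(d) > 1.5 else ""))
```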
Harris, Chester W.; Pearlman, Andrea Pastorok – Journal of Educational Statistics, 1978 (peer reviewed)
A theory and a procedure are presented for estimating a domain parameter and item parameters for test items in a homogeneous domain, such that the combined domain and item parameters account for observed proportions right for each item in a test. (CTM)
Descriptors: Achievement Tests, Difficulty Level, Item Analysis, Mathematical Models
Reckase, Mark D.; McKinley, Robert L. – Applied Psychological Measurement, 1991 (peer reviewed)
The concept of item discrimination is generalized to the case in which more than one ability is required to determine the correct response to an item, using the conceptual framework of item response theory and the definition of multidimensional item difficulty previously developed by M. Reckase (1985). (SLD)
Descriptors: Ability, Definitions, Difficulty Level, Equations (Mathematics)
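For reference, in Reckase's compensatory framework the multidimensional analogues of discrimination and difficulty are usually written as follows (notation mine; consult the paper for the exact definitions):

```latex
P(X_i = 1 \mid \boldsymbol{\theta})
  = \frac{\exp\!\bigl(\mathbf{a}_i^{\top}\boldsymbol{\theta} + d_i\bigr)}
         {1 + \exp\!\bigl(\mathbf{a}_i^{\top}\boldsymbol{\theta} + d_i\bigr)},
\qquad
\mathrm{MDISC}_i = \sqrt{\sum_{k} a_{ik}^{2}},
\qquad
\mathrm{MDIFF}_i = \frac{-d_i}{\mathrm{MDISC}_i}
```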
Linacre, John Michael – 1991
A rating scale can be expressed as a chain of dichotomous items. The relationship between the dichotomies depends on the manner in which the rating scale is presented to the test taker. Three models for ordered scales are discussed. In the success model, which represents growth, the lowest or easiest category is presented first. If the test taker…
Descriptors: Difficulty Level, Equations (Mathematics), Mathematical Models, Rating Scales
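One way to write the "success" (growth) decomposition sketched above is as a chain of conditional dichotomies, each cleared in turn before the next is attempted (a sketch in my notation; Linacre's formulation may differ):

```latex
P(X \geq k \mid X \geq k-1,\ \theta)
  \;=\; \frac{\exp(\theta - \delta_k)}{1 + \exp(\theta - \delta_k)},
\qquad k = 1, \dots, m
```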
Berger, Martijn P. F. – 1989
The problem of obtaining designs that result in the most precise parameter estimates is encountered in at least two situations where item response theory (IRT) models are used. In so-called two-stage testing procedures, certain designs that match difficulty levels of the test items with the ability of the examinees may be located. Such designs…
Descriptors: Difficulty Level, Efficiency, Equations (Mathematics), Heuristics
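The rationale for matching item difficulty to examinee ability is visible in the Rasch item information function, which peaks where the item is neither too easy nor too hard:

```latex
I_i(\theta) = P_i(\theta)\bigl(1 - P_i(\theta)\bigr),
\qquad
P_i(\theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}
```

Information is maximized at theta = b_i, where P_i = 1/2 and I_i = 1/4, so a design that places items near the examinees' abilities yields the most precise estimates.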
Holland, Paul W.; Thayer, Dorothy T. – 1985
An alternative definition of the delta scale of item difficulty used at Educational Testing Service has been developed. The traditional delta scale uses an inverse normal transformation based on normal ogive models developed years ago. However, typical uses of item deltas make no use of this fact. It is simply one way to make the probability…
Descriptors: Difficulty Level, Error Patterns, Estimation (Mathematics), Item Analysis
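For context, the traditional ETS delta transform maps an item's proportion correct p through the inverse normal CDF onto a scale with mean 13 and standard deviation 4:

```latex
\Delta \;=\; 13 + 4\,\Phi^{-1}(1 - p)
```

Harder items (smaller p) thus receive larger delta values.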
Engelen, Ron J. H.; And Others – 1988
Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling examinees from an ability distribution is made. For the…
Descriptors: Ability, Difficulty Level, Foreign Countries, Latent Trait Theory
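In the unconditional (joint) formulation, with the abilities theta_v treated as known, the Fisher information about item i's difficulty takes a familiar form (a sketch in my notation, not the paper's derivation):

```latex
I(\beta_i) \;=\; \sum_{v} P_{vi}\,(1 - P_{vi}),
\qquad
P_{vi} \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}
```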
