Showing all 8 results
Peer reviewed
Full text available on ERIC (PDF)
Bejar, Isaac I.; Deane, Paul D.; Flor, Michael; Chen, Jing – ETS Research Report Series, 2017
The report is the first systematic evaluation of the sentence equivalence item type introduced by the GRE® revised General Test. We adopt a validity framework to guide our investigation based on Kane's approach to validation, whereby a hierarchy of inferences that should be documented to support score meaning and interpretation is…
Descriptors: College Entrance Examinations, Graduate Study, Generalization, Inferences
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Bennett, Randy Elliot; And Others – 1991
This exploratory study applied two new cognitively sensitive measurement models to constructed-response quantitative data. The models, intended to produce qualitative characteristics of examinee performance, were fitted to algebra word problem solutions produced by 285 examinees taking the Graduate Record Examinations (GRE) General Test. The two…
Descriptors: Algebra, College Entrance Examinations, College Students, Constructed Response
Mislevy, Robert J.; Almond, Russell G. – 1997
This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Testing, Higher Education
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1991
Convergent validity of expert-systems scores for four complex constructed-response mathematical formats was assessed for 249 examinees from the Graduate Record Examinations (GRE) General Test in June 1989. The hypothesized five-factor model fit the data well, but an alternative with two dimensions (GRE-quantitative and constructed-response)…
Descriptors: College Entrance Examinations, Constructed Response, Educational Assessment, Expert Systems
Peer reviewed
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
Peer reviewed
Full text available on ERIC (PDF)
Graf, Edith Aurora; Peterson, Stephen; Steffen, Manfred; Lawless, René – ETS Research Report Series, 2005
We describe the item modeling development and evaluation process as applied to a quantitative assessment with high-stakes outcomes. In addition to expediting the item-creation process, a model-based approach may reduce pretesting costs if the difficulty and discrimination of model-generated items can be predicted to a predefined level of…
Descriptors: Psychometrics, Accuracy, Item Analysis, High Stakes Tests