Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 3
Descriptor
Difficulty Level: 7
Item Analysis: 7
Test Items: 7
College Entrance Examinations: 6
Graduate Study: 5
Models: 3
Psychometrics: 3
Statistical Analysis: 3
Test Construction: 3
Accuracy: 2
Correlation: 2
Author
Attali, Yigal: 1
Chalifour, Clark L.: 1
Embretson, Susan E.: 1
Futagi, Yoko: 1
Gorin, Joanna S.: 1
Graf, Edith Aurora: 1
Johnson, Matthew: 1
Kingston, Neal M.: 1
Kostin, Irene: 1
Lawless, René: 1
Peterson, Stephen: 1
Publication Type
Journal Articles: 6
Reports - Research: 6
Reports - Descriptive: 1
Speeches/Meeting Papers: 1
Education Level
Higher Education: 4
Postsecondary Education: 4
Audience
Researchers: 1
Assessments and Surveys
Graduate Record Examinations: 7
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko – ETS Research Report Series, 2007
This paper explores alternative approaches for facilitating efficient, evidence-centered item development for a new type of verbal reasoning item developed for use on the GRE® General Test. Results obtained in two separate studies are reported. The first study documented the development and validation of a fully automated approach for locating the…
Descriptors: College Entrance Examinations, Graduate Study, Test Items, Item Analysis

Chalifour, Clark L.; Powers, Donald E. – Journal of Educational Measurement, 1989
Content characteristics of 1,400 Graduate Record Examination (GRE) analytical reasoning items were coded for item difficulty and discrimination. The results provide content characteristics for consideration in extending specifications for analytical reasoning items and a better understanding of the construct validity of these items. (TJH)
Descriptors: College Entrance Examinations, Construct Validity, Content Analysis, Difficulty Level
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
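
The Gorin and Embretson abstract above ties item difficulty to specific combinations of coded features. One conventional way to formalize that link is a linear decomposition of difficulty over the features, as in the linear logistic test model (LLTM); the sketch below uses standard notation and is offered for orientation only, not as the model estimated in the article.

    % LLTM-style decomposition (conventional notation, assumed for illustration):
    % the difficulty b_i of item i is a weighted sum of its coded features q_{ik},
    % where eta_k is the difficulty contribution of feature k.
    b_i = \sum_{k=1}^{K} q_{ik}\,\eta_k
    % Substituted into the Rasch model, the probability of a correct response
    % from an examinee with ability theta_s is
    P(X_{is} = 1 \mid \theta_s) = \frac{\exp\bigl(\theta_s - \sum_{k} q_{ik}\eta_k\bigr)}{1 + \exp\bigl(\theta_s - \sum_{k} q_{ik}\eta_k\bigr)}
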
Sinharay, Sandip; Johnson, Matthew – ETS Research Report Series, 2005
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate/produce items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996; Bejar, 2002). They have the potential to produce a large number of high-quality items at reduced cost. This paper introduces…
Descriptors: Item Analysis, Test Items, Scoring, Psychometrics
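
To make the item-model idea in the Sinharay and Johnson abstract concrete, the minimal Python sketch below instantiates isomorphic items from a single template. The template, variable domains, and function names are invented for illustration and are not taken from the report.

    import random

    # Hypothetical item model: a stem template, domains for its variables, and an
    # answer rule. Instances drawn from it are meant to be isomorphic to one another.
    ITEM_MODEL = {
        "stem": "A train travels {d} miles in {t} hours. What is its average speed, in miles per hour?",
        "variables": {"d": [120, 180, 240, 300, 360], "t": [2, 3, 4, 5, 6]},
        "answer": lambda d, t: d / t,
    }

    def generate_item(model, rng=random):
        """Sample one value per variable, then fill in the stem and compute the key."""
        values = {name: rng.choice(domain) for name, domain in model["variables"].items()}
        return {"stem": model["stem"].format(**values), "key": model["answer"](**values)}

    if __name__ == "__main__":
        for _ in range(3):
            print(generate_item(ITEM_MODEL))

In practice, an operational item model would also constrain surface wording and distractors so that generated instances remain statistically exchangeable.
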
Graf, Edith Aurora; Peterson, Stephen; Steffen, Manfred; Lawless, René – ETS Research Report Series, 2005
We describe the item modeling development and evaluation process as applied to a quantitative assessment with high-stakes outcomes. In addition to expediting the item-creation process, a model-based approach may reduce pretesting costs, if the difficulty and discrimination of model-generated items may be predicted to a predefined level of…
Descriptors: Psychometrics, Accuracy, Item Analysis, High Stakes Tests
Kingston, Neal M. – 1985
Birnbaum's three-parameter logistic item response model was used to study the guessing behavior of low-ability examinees on the Graduate Record Examinations (GRE) General Test, Verbal Measure. GRE scoring procedures had recently changed from a scoring formula that corrected for guessing to number-right scoring. The three-parameter theory was used…
Descriptors: Academic Aptitude, Analysis of Variance, College Entrance Examinations, Difficulty Level
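
The Kingston abstract names two standard pieces of machinery; for reference they are sketched below in conventional notation, which is not reproduced from the report itself.

    % Birnbaum's three-parameter logistic (3PL) model: the probability that an
    % examinee with ability theta answers item i correctly, given discrimination
    % a_i, difficulty b_i, and lower asymptote ("pseudo-guessing") c_i.
    P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}
    % Formula scoring with a correction for guessing on k-option items
    % (R = number right, W = number wrong), versus plain number-right scoring.
    S_{\text{formula}} = R - \frac{W}{k - 1}, \qquad S_{\text{number-right}} = R

Under number-right scoring, random guessing raises the expected scores of low-ability examinees on hard items, which is the behavior the 3PL lower asymptote is designed to absorb.
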