Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Models | 6 |
| Test Items | 6 |
| Item Response Theory | 5 |
| Standard Setting (Scoring) | 4 |
| Cutting Scores | 3 |
| Difficulty Level | 3 |
| Achievement Tests | 2 |
| Advanced Placement Programs | 2 |
| Science Tests | 2 |
| Standard Setting | 2 |
| Validity | 2 |
Source
| College Board | 1 |
| Educational and Psychological… | 1 |
| Grantee Submission | 1 |
| Higher Education Studies | 1 |
| ProQuest LLC | 1 |
Publication Type
| Journal Articles | 3 |
| Reports - Research | 3 |
| Dissertations/Theses -… | 1 |
| Guides - Classroom - Teacher | 1 |
| Non-Print Media | 1 |
| Reference Materials - General | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Higher Education | 2 |
| Postsecondary Education | 2 |
| High Schools | 1 |
| Secondary Education | 1 |
Audience
| Practitioners | 1 |
Assessments and Surveys
| Advanced Placement… | 2 |
Torres Irribarra, David; Diakow, Ronli; Freund, Rebecca; Wilson, Mark – Grantee Submission, 2015
This paper presents the Latent Class Level-PCM as a method for identifying and interpreting latent classes of respondents according to empirically estimated performance levels. The model, which combines elements from latent class models and reparameterized partial credit models for polytomous data, can simultaneously (a) identify empirical…
Descriptors: Item Response Theory, Test Items, Statistical Analysis, Models
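For context, the partial credit model that the Level-PCM reparameterizes is usually written in the following standard form for a polytomous item i with categories k = 0, …, m_i (a textbook formulation, not an equation taken from the paper):

$$P(X_{ni} = k \mid \theta_n) = \frac{\exp\sum_{j=0}^{k}(\theta_n - \delta_{ij})}{\sum_{h=0}^{m_i}\exp\sum_{h'=0}^{h}(\theta_n - \delta_{ih'})},$$

where $\theta_n$ is the latent trait of respondent n, $\delta_{ij}$ is the j-th step difficulty of item i, and the sum for j = 0 is defined as zero. The latent class component described in the abstract then groups respondents into empirically estimated performance levels.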
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of Pass/Fail decisions over Borderline grades (i.e., grades which do not clearly distinguish between the competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
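The snippet does not reproduce the OBM estimator itself; as a generic point of reference only, a Rasch-type relation between examinee ability $\theta_n$ and item difficulty $b_i$ gives the probability of a correct response as

$$P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)},$$

and a borderline result can then be resolved by comparing the estimated ability to a cut score expressed on the same scale. This is an illustrative standard formulation, not the Objective Borderline Method's own procedure.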
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary – College Board, 2012
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
Descriptors: Advanced Placement Programs, Achievement Tests, Item Response Theory, Models
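One common way to write a many-facet Rasch model for judge-mediated ratings, consistent with the description above but not necessarily the exact specification estimated in this study, is

$$\log\frac{P_{nijk}}{P_{nij(k-1)}} = \beta_n - \delta_i - \lambda_j - \tau_k,$$

where $\beta_n$ is the location of the object being rated, $\delta_i$ the item difficulty, $\lambda_j$ the severity of panelist j, and $\tau_k$ the threshold between rating categories k−1 and k. In practice, severity estimates and fit statistics for the panelist facet are what allow the quality of standard-setting judgments to be evaluated.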
Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A. – Educational and Psychological Measurement, 2013
The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…
Descriptors: Item Response Theory, Models, Standard Setting (Scoring), Science Tests
Kroopnick, Marc Howard – ProQuest LLC, 2010
When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…
Descriptors: Simulation, Scaling, Standard Setting, Item Response Theory
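For reference, the unidimensionality assumption mentioned in the abstract means that every item response is modeled through a single latent trait $\theta$; for example, the standard three-parameter logistic IRT model used in many large-scale programs (a textbook formulation, not specific to this dissertation) is

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]},$$

with $a_i$, $b_i$, and $c_i$ the discrimination, difficulty, and pseudo-guessing parameters of item i. Vertical scaling places the $\theta$ scales of adjacent grade levels onto a common metric, which is why departures from unidimensionality across the grade-level battery are a concern.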
Hambleton, Ronald K.; Eignor, Daniel R. – 1979
This instructional training package introduces practitioners to methods for developing, validating, using, and reporting criterion-referenced tests. It provides a comprehensive presentation of criterion-referenced testing technology. The package emphasizes the most recent substantive and technological advances in the field that are both important…
Descriptors: Criterion Referenced Tests, Cutting Scores, Evaluation Methods, Mastery Tests

