Showing all 6 results
Peer reviewed
Torres Irribarra, David; Diakow, Ronli; Freund, Rebecca; Wilson, Mark – Grantee Submission, 2015
This paper presents the Latent Class Level-PCM as a method for identifying and interpreting latent classes of respondents according to empirically estimated performance levels. The model, which combines elements from latent class models and reparameterized partial credit models for polytomous data, can simultaneously (a) identify empirical…
Descriptors: Item Response Theory, Test Items, Statistical Analysis, Models
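The snippet names a model combining latent classes with a reparameterized partial credit model (PCM). As a minimal sketch of the PCM ingredient only, the following computes category probabilities for a polytomous item and weights them across two hypothetical latent classes; the class proportions, class abilities, and step difficulties are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.

    theta  : scalar respondent ability
    deltas : step difficulties delta_1..delta_m
    Returns m+1 probabilities for categories 0..m.
    """
    # Cumulative sums of (theta - delta_j); category 0 contributes 0.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    exps = np.exp(steps - steps.max())  # subtract max for numerical stability
    return exps / exps.sum()

# Hypothetical two-class mixture: each class c has its own ability level.
# Marginal category probabilities are the pi-weighted class-conditional PCM probabilities.
pi = np.array([0.6, 0.4])         # assumed class proportions
theta_c = np.array([-1.0, 1.0])   # assumed class-level abilities
marginal = sum(p * pcm_probs(t, [-0.5, 0.5]) for p, t in zip(pi, theta_c))
```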
Peer reviewed
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of Pass/Fail decisions over borderline grades (i.e., grades that do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
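The OBM's actual procedure is not reproduced in this snippet. Purely as a generic illustration of the kind of decision it supports, the sketch below turns an ability estimate and a standard's difficulty into a Pass/Fail call via a Rasch-style probability; the 0.5 cutoff and all values are assumptions for illustration, not the OBM rule.

```python
import math

def pass_probability(theta, difficulty):
    """Rasch-style probability that an examinee of ability theta
    clears a standard of the given difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical borderline examinee: resolve to Pass when the estimated
# probability of clearing the standard exceeds 0.5 (illustrative rule only).
decision = "Pass" if pass_probability(0.1, 0.0) > 0.5 else "Fail"
```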
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary – College Board, 2012
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
Descriptors: Advanced Placement Programs, Achievement Tests, Item Response Theory, Models
Peer reviewed
Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A. – Educational and Psychological Measurement, 2013
The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…
Descriptors: Item Response Theory, Models, Standard Setting (Scoring), Science Tests
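Both of the preceding entries apply the many-facet Rasch (MFR) model to standard-setting ratings. A common rating-scale formulation models the adjacent-category log-odds as ability minus an item facet, a rater-severity facet, and a category threshold. The sketch below implements that formulation; the choice of facets and all parameter values are assumptions for illustration, not the studies' estimates.

```python
import numpy as np

def mfr_category_probs(theta, item_diff, rater_severity, taus):
    """Rating-scale many-facet Rasch model:
    ln(P_k / P_{k-1}) = theta - item_diff - rater_severity - tau_k."""
    eta = theta - item_diff - rater_severity
    steps = np.concatenate(([0.0], np.cumsum(eta - np.asarray(taus))))
    exps = np.exp(steps - steps.max())  # stabilise before normalising
    return exps / exps.sum()

# Hypothetical panelist rating an item on a 0-3 scale.
probs = mfr_category_probs(theta=0.5, item_diff=0.2,
                           rater_severity=-0.3, taus=[-1.0, 0.0, 1.0])
```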
Kroopnick, Marc Howard – ProQuest LLC, 2010
When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…
Descriptors: Simulation, Scaling, Standard Setting, Item Response Theory
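The unidimensionality assumption discussed in this entry means one latent trait drives all responses. A minimal simulation sketch of that assumption, with sample sizes and distributions chosen arbitrarily rather than taken from the dissertation, is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dichotomous responses under a single latent trait (the
# unidimensionality assumption): one theta per person, one difficulty per item.
n_persons, n_items = 500, 40
theta = rng.normal(0, 1, n_persons)          # single latent trait per person
b = rng.normal(0, 1, n_items)                # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b)))  # Rasch response probabilities
responses = rng.binomial(1, p)               # simulated 0/1 response matrix
```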
Hambleton, Ronald K.; Eignor, Daniel R. – 1979
This instructional training package introduces practitioners to methods for developing, validating, using, and reporting criterion-referenced tests. It provides a comprehensive presentation of criterion-referenced testing technology. The package emphasizes the most recent substantive and technological advances in the field that are both important…
Descriptors: Criterion Referenced Tests, Cutting Scores, Evaluation Methods, Mastery Tests