Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen – College Board, 2012
This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.
Descriptors: Advanced Placement Programs, Science Tests, Achievement Tests, Classification
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary – College Board, 2012
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
Descriptors: Advanced Placement Programs, Achievement Tests, Item Response Theory, Models
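As a hedged illustration only (not taken from the study), the dichotomous form of the Many-Facet Rasch Model expresses the probability of a positive rating as a logistic function of examinee ability, item difficulty, and rater severity; the extra "facet" for the rater is what lets the model quantify judge quality in standard setting:

```python
import math

def mfr_probability(theta: float, delta: float, severity: float) -> float:
    """Probability of a positive rating under a dichotomous
    Many-Facet Rasch Model: logit P = theta - delta - severity,
    where theta is examinee ability, delta is item difficulty,
    and severity is the rater (judge) facet. Illustrative sketch;
    parameter values here are hypothetical."""
    logit = theta - delta - severity
    return 1.0 / (1.0 + math.exp(-logit))

# Holding person and item fixed, a severe rater (severity = 1.0)
# yields a lower probability than a lenient one (severity = -1.0):
p_severe = mfr_probability(theta=0.5, delta=0.0, severity=1.0)
p_lenient = mfr_probability(theta=0.5, delta=0.0, severity=-1.0)
```

In operational work these parameters are estimated jointly from rating data (e.g. with specialized MFR software) rather than set by hand; the sketch only shows the functional form the model assumes.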
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
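The distinction the abstract draws between nominal and effective weights can be sketched as follows. A standard definition of a component's effective weight is its share of the composite's variance, w_i * cov(X_i, C) / var(C); correlated components therefore contribute more than their nominal weights suggest. This is a minimal illustration with simulated data, not the authors' analysis:

```python
import numpy as np

def effective_weights(scores: np.ndarray, nominal_weights) -> np.ndarray:
    """Effective (variance-contribution) weights of components in a
    weighted composite C = sum_i w_i * X_i, computed as
    w_i * cov(X_i, C) / var(C). By construction they sum to 1."""
    w = np.asarray(nominal_weights, dtype=float)
    composite = scores @ w
    var_c = composite.var(ddof=1)
    cov = np.array([np.cov(scores[:, i], composite, ddof=1)[0, 1]
                    for i in range(scores.shape[1])])
    return w * cov / var_c

# Hypothetical two-section test: a multiple-choice section and a
# free-response section correlated with it.
rng = np.random.default_rng(0)
mc = rng.normal(size=500)
fr = 0.6 * mc + rng.normal(size=500)
scores = np.column_stack([mc, fr])
ew = effective_weights(scores, [0.5, 0.5])
```

Because the free-response section has larger variance here, its effective weight exceeds its nominal 0.5 even though the nominal weights are equal; that gap is the kind of effect the presentation examines.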


