Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 6 |
Descriptor
| Test Items | 6 |
| Models | 4 |
| Test Construction | 4 |
| Advanced Placement Programs | 3 |
| Difficulty Level | 3 |
| Test Format | 3 |
| Achievement Tests | 2 |
| College Entrance Examinations | 2 |
| College Freshmen | 2 |
| Correlation | 2 |
| Grade Point Average | 2 |
Source
| College Board | 6 |
Author
| Hendrickson, Amy | 2 |
| Huff, Kristen | 2 |
| Kaliski, Pamela | 2 |
| Engelhard, George, Jr. | 1 |
| France, Megan | 1 |
| Kim, Rachel | 1 |
| Kobrin, Jennifer L. | 1 |
| Luecht, Ric | 1 |
| Melican, Gerald | 1 |
| Morgan, Deanna | 1 |
| Patterson, Brian | 1 |
Publication Type
| Non-Print Media | 6 |
| Reference Materials - General | 6 |
Education Level
| High Schools | 2 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Secondary Education | 2 |
Assessments and Surveys
| Advanced Placement… | 2 |
| SAT (College Admission Test) | 2 |
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary – College Board, 2012
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
Descriptors: Advanced Placement Programs, Achievement Tests, Item Response Theory, Models
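
The abstract above refers to the Many-Facet Rasch (MFR) Model. As a rough illustrative sketch — not the study's implementation, and with entirely hypothetical parameter values — a rating-scale form of the model computes score-category probabilities from examinee ability, item difficulty, rater (or panelist) severity, and category thresholds:

```python
import math

def mfr_category_probs(theta, item_difficulty, rater_severity, thresholds):
    """Category probabilities under a many-facet rating scale model.

    Cumulative logit psi_k is the sum over the first k thresholds tau of
    (theta - item_difficulty - rater_severity - tau), with psi_0 = 0.
    All parameter values used in the example below are hypothetical.
    """
    psi = [0.0]
    for tau in thresholds:
        psi.append(psi[-1] + (theta - item_difficulty - rater_severity - tau))
    exps = [math.exp(p) for p in psi]
    total = sum(exps)
    return [e / total for e in exps]

# An able examinee (theta = 1.0) scored by a somewhat severe rater (0.5)
# on an average item (0.0) with three thresholds (four score categories):
probs = mfr_category_probs(1.0, 0.0, 0.5, [-1.0, 0.0, 1.0])
```

Rater severity enters the linear predictor exactly like item difficulty, which is what lets the model flag unusually harsh or lenient judges in a standard-setting panel.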
Kaliski, Pamela; France, Megan; Huff, Kristen; Thurber, Allison – College Board, 2011
Developing a cognitive model of task performance is an important and often overlooked phase in assessment design; failing to establish such a model can threaten the validity of the inferences made from the scores produced by an assessment (e.g., Leighton, 2004). Conducting think aloud interviews (TAIs), where students think aloud while completing…
Descriptors: World History, Advanced Placement Programs, Achievement Tests, Protocol Analysis

Hendrickson, Amy; Huff, Kristen; Luecht, Ric – College Board, 2009
[Slides] presented at the Annual Meeting of National Council on Measurement in Education (NCME) in San Diego, CA in April 2009. This presentation describes how the vehicles for gathering student evidence--task models and test specifications--are developed.
Descriptors: Test Items, Test Construction, Evidence, Achievement

Kobrin, Jennifer L.; Kim, Rachel; Sackett, Paul – College Board, 2011
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice versus constructed response), cognitive complexity, and content of these assessments (achievement versus aptitude) at the forefront of the discussion. This study addressed these questions by…
Descriptors: College Entrance Examinations, Mathematics Tests, Test Items, Predictive Validity

Wiley, Andrew – College Board, 2009
Presented at the national conference of the American Educational Research Association (AERA) in 2009. This presentation discussed the development and implementation of the new SAT writing section.
Descriptors: Aptitude Tests, Writing Tests, Test Construction, Test Format

Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the Annual Meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
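
The distinction between nominal and effective weights in the last abstract can be illustrated with a small sketch (the covariance values are invented, not taken from the presentation): a component's effective weight is the share of composite variance it actually contributes, which depends on its variance and covariances, not just its nominal weight.

```python
def effective_weights(nominal_weights, cov):
    """Effective weights of components in a weighted composite score.

    The effective weight of component i is the proportion of composite
    variance it contributes: w_i * cov(X_i, composite) / var(composite).
    The covariance matrix in the example below is illustrative only.
    """
    n = len(nominal_weights)
    # cov(X_i, composite) = sum_j w_j * cov_ij
    cov_with_comp = [sum(nominal_weights[j] * cov[i][j] for j in range(n))
                     for i in range(n)]
    var_comp = sum(nominal_weights[i] * cov_with_comp[i] for i in range(n))
    return [nominal_weights[i] * cov_with_comp[i] / var_comp for i in range(n)]

# Two test sections with equal nominal weights but unequal variances:
cov = [[4.0, 1.0],
       [1.0, 1.0]]
ew = effective_weights([0.5, 0.5], cov)
```

Here the two sections receive equal nominal weights, yet the higher-variance section dominates the composite, carrying roughly 71% of its variance — the kind of divergence between intended and effective weighting the presentation examines.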


