Showing 1 to 15 of 26 results
Peer reviewed
Direct link
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage adaptive tests (MSTs) is difficult, particularly when the test spans several grade levels and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
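The item-mapping step this abstract refers to can be illustrated with a small sketch. The code below is not the authors' procedure: it assumes a Rasch model, hypothetical item difficulties, and one common mapping convention, namely finding the theta whose test characteristic curve (TCC) value equals the sum of the panel's mean Angoff ratings.

```python
import numpy as np

def rasch_tcc(theta, b):
    """Expected number-correct score at ability theta under a Rasch model."""
    return np.sum(1.0 / (1.0 + np.exp(-(theta - b))))

def angoff_theta_cut(mean_ratings, b, lo=-6.0, hi=6.0, tol=1e-6):
    """Find the theta where the TCC equals the Angoff-implied expected
    score (the sum of the panel's mean item ratings), by bisection."""
    target = float(np.sum(mean_ratings))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rasch_tcc(mid, b) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical inputs: 10 Rasch item difficulties and mean panel ratings.
rng = np.random.default_rng(0)
b = rng.normal(0.0, 1.0, size=10)
mean_ratings = rng.uniform(0.4, 0.9, size=10)
print(f"theta cut = {angoff_theta_cut(mean_ratings, b):.3f}")
```

Because the TCC is monotone in theta, the bisection always converges; the MST-specific complication the article addresses (panelists rating items drawn from different panels and grade levels) is deliberately ignored in this sketch.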
Peer reviewed
Direct link
Wyse, Adam E.; Babcock, Ben – Journal of Educational Measurement, 2019
One common phenomenon in Angoff standard setting is that panelists regress their ratings in toward the middle of the probability scale. This study describes two indices based on taking ratios of standard deviations that can be utilized with a scatterplot of item ratings versus expected probabilities of success to identify whether ratings are…
Descriptors: Item Analysis, Standard Setting, Probability, Feedback (Response)
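As a rough illustration of an index in this spirit (not necessarily the two indices proposed in the article), one can compare the spread of the panel's ratings to the spread of the expected probabilities of success; the data below are invented.

```python
import numpy as np

def sd_ratio(ratings, expected_p):
    """Ratio of the spread of panel ratings to the spread of expected
    probabilities of success. Values well below 1 suggest the ratings
    are regressed toward the middle of the probability scale."""
    return np.std(ratings, ddof=1) / np.std(expected_p, ddof=1)

# Hypothetical example: ratings compressed relative to expected probabilities.
expected_p = np.array([0.15, 0.30, 0.45, 0.60, 0.75, 0.90])
ratings    = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.70])
print(f"SD ratio = {sd_ratio(ratings, expected_p):.2f}")  # well below 1 here
```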
Peer reviewed
PDF on ERIC Download full text
Rümeysa Kaya; Bayram Çetin – International Journal of Assessment Tools in Education, 2025
In this study, the cut-off scores obtained from the Angoff, Angoff Y/N, Nedelsky, and Ebel standard-setting methods were compared, in several respects, with a T score of 50 and with the current cut-off score. Data were collected from 448 students who took the Module B1+ English Exit Exam IV and from 14 experts. It was seen that while the Nedelsky method gave the lowest…
Descriptors: Standard Setting, Cutting Scores, Exit Examinations, Academic Achievement
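For readers unfamiliar with why these methods yield different cut-offs, here is a hedged sketch with invented panel data, showing the standard Angoff computation (sum of mean ratings) next to Nedelsky's (one over the number of answer options the borderline examinee cannot eliminate). The study's actual data and score scaling are not reproduced here.

```python
import numpy as np

# Hypothetical panel data for a 5-item, 4-option MC test and 3 judges.
# Angoff: each judge rates P(minimally competent examinee answers correctly).
angoff = np.array([
    [0.6, 0.7, 0.5, 0.8, 0.4],
    [0.5, 0.7, 0.6, 0.9, 0.5],
    [0.7, 0.6, 0.5, 0.8, 0.4],
])  # judges x items

# Nedelsky: each judge records how many distractors the borderline
# examinee can eliminate; the implied probability of success on an item
# is 1 / (number of options remaining).
eliminated = np.array([
    [2, 1, 2, 3, 0],
    [1, 2, 2, 3, 1],
    [2, 2, 1, 3, 0],
])  # judges x items
n_options = 4
nedelsky = 1.0 / (n_options - eliminated)

print(f"Angoff cut score:   {angoff.mean(axis=0).sum():.2f} / 5")
print(f"Nedelsky cut score: {nedelsky.mean(axis=0).sum():.2f} / 5")
```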
Peer reviewed
PDF on ERIC Download full text
Sivakorn Tangsakul; Kornwipa Poonpon – rEFLections, 2024
Given the significant global influence of the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) on English language education, this study deals with aligning a university's academic reading tests to the CEFR. It aimed to validate the test construct of the academic reading tests in relation to the…
Descriptors: Alignment (Education), Reading Tests, Second Language Learning, Language Proficiency
Peer reviewed
Direct link
Peabody, Michael R.; Wind, Stefanie A. – Journal of Educational Measurement, 2019
Setting performance standards is a judgmental process involving human opinions and values as well as technical and empirical considerations. Although all cut score decisions are by nature somewhat arbitrary, they should not be capricious. Judges selected for standard-setting panels should have the proper qualifications to make the judgments asked…
Descriptors: Standard Setting, Decision Making, Performance Based Assessment, Evaluators
Moyer, Eric L.; Galindo, Jennifer – National Assessment Governing Board, 2023
The National Assessment Governing Board (the Board) contracted with Pearson to design and implement a review of the achievement level descriptions (ALDs) for National Assessment of Educational Progress (NAEP) Grade 8 assessments in Science, U.S. History, and Civics. This document describes the procedural and technical aspects and outcomes of the…
Descriptors: National Competency Tests, Student Evaluation, Grade 8, Academic Achievement
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
Peer reviewed
PDF on ERIC Download full text
Bichi, Ado Abdu; Talib, Rohaya; Embong, Rahimah; Mohamed, Hasnah Binti; Ismail, Mohd Sani; Ibrahim, Abdallah – Eurasian Journal of Educational Research, 2019
Purpose: The university placement test is an important admission policy priority in Nigeria because it serves as a university-based selection criterion for placing students into undergraduate programs. Although attention has recently shifted to calls to develop standard content and to standardize the test, attention has…
Descriptors: Standard Setting, Economics Education, Student Placement, Cutting Scores
Peer reviewed
Direct link
Wyse, Adam E. – Applied Measurement in Education, 2018
This article discusses regression effects that are commonly observed in Angoff ratings where panelists tend to think that hard items are easier than they are and easy items are more difficult than they are in comparison to estimated item difficulties. Analyses of data from two credentialing exams illustrate these regression effects and the…
Descriptors: Regression (Statistics), Test Items, Difficulty Level, Licensing Examinations (Professions)
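The regression effect this abstract describes can be checked with an ordinary least-squares slope: if panelists' ratings track the estimated probabilities of success one-for-one, the slope is near 1, while a flatter slope means hard items are over-rated and easy items under-rated. The sketch below uses invented numbers, not the credentialing-exam data from the article.

```python
import numpy as np

def rating_slope(expected_p, ratings):
    """Slope and intercept of panel ratings regressed on expected
    probabilities of success; a slope below 1 signals the regression
    effect (hard items rated too easy, easy items too hard)."""
    slope, intercept = np.polyfit(expected_p, ratings, deg=1)
    return slope, intercept

# Hypothetical data: ratings compressed toward the middle of the scale.
expected_p = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.95])
ratings    = np.array([0.45, 0.50, 0.55, 0.60, 0.65, 0.75])
slope, intercept = rating_slope(expected_p, ratings)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")  # slope < 1
```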
Moyer, Eric L.; Galindo, Jennifer – National Assessment Governing Board, 2022
The National Assessment Governing Board has a legislatively mandated responsibility to develop National Assessment of Educational Progress (NAEP) achievement levels. The Board Policy Statement on Developing Student Achievement Levels for the National Assessment of Educational Progress provides policy definitions of "NAEP Basic,"…
Descriptors: Reading Achievement, Mathematics Achievement, Reading Tests, Mathematics Tests
Peer reviewed
Direct link
Clauser, Jerome C.; Hambleton, Ronald K.; Baldwin, Peter – Educational and Psychological Measurement, 2017
The Angoff standard setting method relies on content experts to review exam items and make judgments about the performance of the minimally proficient examinee. Unfortunately, at times content experts may have gaps in their understanding of specific exam content. These gaps are particularly likely to occur when the content domain is broad and/or…
Descriptors: Scores, Item Analysis, Classification, Decision Making
Peer reviewed
PDF on ERIC Download full text
Wudthayagorn, Jirada – LEARN Journal: Language Education and Acquisition Research Network, 2018
The purpose of this study was to map the Chulalongkorn University Test of English Proficiency, or the CU-TEP, to the Common European Framework of Reference (CEFR) by employing a standard setting methodology. Thirteen experts judged 120 items of the CU-TEP using the Yes/No Angoff technique. The experts decided whether or not a borderline student at…
Descriptors: Guidelines, Rating Scales, English (Second Language), Language Tests
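A minimal sketch of the Yes/No Angoff arithmetic, with hypothetical judgments rather than the 13 experts' actual data: each judge's implied cut is the count of items answered "yes," and the panel cut score is the mean of those counts across judges.

```python
import numpy as np

# Hypothetical Yes/No Angoff judgments: 1 = "yes, a borderline examinee
# would answer this item correctly," 0 = "no". Rows are judges, columns items.
votes = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 1, 1],
    [1, 1, 0, 1, 0, 0],
])

# Each judge's implied cut is their count of "yes" items; the panel cut
# score is the mean across judges.
per_judge = votes.sum(axis=1)
print(f"per-judge cuts: {per_judge}, panel cut = {per_judge.mean():.2f} / 6")
```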
Peer reviewed
Direct link
Margolis, Melissa J.; Mee, Janet; Clauser, Brian E.; Winward, Marcia; Clauser, Jerome C. – Educational Measurement: Issues and Practice, 2016
Evidence to support the credibility of standard setting procedures is a critical part of the validity argument for decisions made based on tests that are used for classification. One area in which there has been limited empirical study is the impact of standard setting judge selection on the resulting cut score. One important issue related to…
Descriptors: Academic Standards, Standard Setting (Scoring), Cutting Scores, Credibility
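One simple way to probe the judge-selection issue raised here, as a generic resampling device rather than the analysis in the article, is to bootstrap over judges and watch how much the Angoff cut score moves; all numbers below are invented.

```python
import numpy as np

def bootstrap_cut_scores(ratings, n_boot=2000, seed=0):
    """Resample judges with replacement and recompute the Angoff cut
    score each time, to gauge how sensitive the cut is to panel
    composition."""
    rng = np.random.default_rng(seed)
    n_judges = ratings.shape[0]
    cuts = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n_judges, size=n_judges)
        cuts[i] = ratings[idx].mean(axis=0).sum()
    return cuts

# Hypothetical Angoff ratings: judges x items.
ratings = np.array([
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.6, 0.6, 0.9],
    [0.7, 0.8, 0.4, 0.7],
    [0.6, 0.7, 0.5, 0.8],
])
cuts = bootstrap_cut_scores(ratings)
print(f"cut = {ratings.mean(axis=0).sum():.2f}, "
      f"bootstrap SD = {cuts.std(ddof=1):.2f}")
```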
Peer reviewed
PDF on ERIC Download full text
Bichi, Ado Abdu; Hafiz, Hadiza; Bello, Samira Abdullahi – International Journal of Evaluation and Research in Education, 2016
High-stakes testing is used to provide results that have important consequences. Validity is the cornerstone upon which all measurement systems are built. This study applied Item Response Theory principles to analyse the Northwest University Kano Post-UTME Economics test items. The fifty (50) developed economics test items were…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Statistical Analysis
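As a hedged illustration of basic item analysis of this kind (a crude logit index, not a full IRT calibration such as the one reported in the study), the sketch below computes classical p-values and their logit transform from a simulated response matrix.

```python
import numpy as np

# Hypothetical 0/1 response matrix: 200 examinees x 8 items, simulated
# so that items vary in proportion correct.
rng = np.random.default_rng(1)
responses = (rng.uniform(size=(200, 8))
             < rng.uniform(0.3, 0.9, size=8)).astype(int)

p_values = responses.mean(axis=0)             # classical difficulty (p-value)
logit_b = np.log((1 - p_values) / p_values)   # crude logit difficulty index

for i, (p, b) in enumerate(zip(p_values, logit_b), start=1):
    print(f"item {i}: p = {p:.2f}, logit difficulty = {b:+.2f}")
```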
DiBartolomeo, Matthew – ProQuest LLC, 2010
Multiple factors have influenced testing agencies to more carefully consider the manner and frequency in which pretest item data are collected and analyzed. One potentially promising approach is the use of judges' estimates of item difficulty. Accurate estimates of item difficulty may be used to reduce pretest sample sizes, supplement insufficient…
Descriptors: Test Items, Group Discussion, Athletics, Pretests Posttests
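A quick way to evaluate such judged difficulty estimates (illustrative only, not the dissertation's method) is to compare them with empirical proportions correct via correlation and RMSE; the values below are invented.

```python
import numpy as np

# Hypothetical judged vs. empirical item difficulties (proportion correct).
judged    = np.array([0.55, 0.62, 0.48, 0.80, 0.35, 0.70])
empirical = np.array([0.50, 0.65, 0.40, 0.85, 0.30, 0.75])

r = np.corrcoef(judged, empirical)[0, 1]
rmse = np.sqrt(np.mean((judged - empirical) ** 2))
print(f"correlation = {r:.2f}, RMSE = {rmse:.3f}")
```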