Showing 1 to 15 of 36 results
Peer reviewed
Russell, Michael; Szendey, Olivia; Kaplan, Larry – Educational Assessment, 2021
Differential Item Function (DIF) analysis is commonly employed to examine potential bias produced by a test item. Since its introduction DIF analyses have focused on potential bias related to broad categories of oppression, including gender, racial stratification, economic class, and ableness. More recently, efforts to examine the effects of…
Descriptors: Test Bias, Achievement Tests, Individual Characteristics, Disadvantaged
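The Mantel-Haenszel procedure is one widely used DIF method (the abstract above does not specify which procedure the authors applied, so this is an illustrative sketch only). Examinees are stratified by total score, and a common odds ratio compares reference- and focal-group success on the studied item across strata; the counts below are made up:

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio for one item.

    Each stratum is a tuple of counts:
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    Returns (alpha, delta): alpha near 1.0 means no DIF;
    delta is alpha mapped onto the ETS delta scale.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n  # agreement terms
        den += b * c / n  # disagreement terms
    alpha = num / den
    delta = -2.35 * math.log(alpha)
    return alpha, delta

# Hypothetical counts for three score strata
strata = [(40, 10, 30, 20), (35, 5, 28, 12), (45, 5, 36, 14)]
alpha, delta = mantel_haenszel_dif(strata)
```

Here alpha > 1 (and delta < 0) would flag the item as relatively easier for the reference group, the kind of signal a DIF screen looks for.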
He, Wei – NWEA, 2022
To ensure that student academic growth in a subject area is accurately captured, it is imperative that the underlying scale remains stable over time. As item parameter stability constitutes one of the factors that affects scale stability, NWEA® periodically conducts studies to check for the stability of the item parameter estimates for MAP®…
Descriptors: Achievement Tests, Test Items, Test Reliability, Academic Achievement
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
Peer reviewed
Cohen, Dale J.; Ballman, Alesha; Rijmen, Frank; Cohen, Jon – Applied Measurement in Education, 2020
Computer-based, pop-up glossaries are perhaps the most promising accommodation aimed at mitigating the influence of linguistic structure and cultural bias on the performance of English Learner (EL) students on statewide assessments. To date, there is no established procedure for identifying the words that require a glossary for EL students that is…
Descriptors: Glossaries, Testing Accommodations, English Language Learners, Computer Assisted Testing
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
Peer reviewed
Steedle, Jeffrey T.; Morrison, Kristin M. – Educational Assessment, 2019
Assessment items are commonly field tested prior to operational use to observe statistical item properties such as difficulty. Item parameter estimates from field testing may be used to assign scores via pre-equating or computer adaptive designs. This study examined differences between item difficulty estimates based on field test and operational…
Descriptors: Field Tests, Test Items, Statistics, Difficulty Level
Peer reviewed
Lopez, Alexis A.; Guzman-Orth, Danielle; Zapata-Rivera, Diego; Forsyth, Carolyn M.; Luce, Christine – ETS Research Report Series, 2021
Substantial progress has been made toward applying technology enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are conversation-based systems that use conversations among computer-animated agents and a test taker. We expanded the design and capability of prior…
Descriptors: Accuracy, English Language Learners, Language Proficiency, Language Tests
Peer reviewed
Lopez, Alexis A.; Tolentino, Florencia – ETS Research Report Series, 2020
In this study we investigated how English learners (ELs) interacted with "®" summative English language arts (ELA) and mathematics items, the embedded online tools, and accessibility features. We focused on how EL students navigated the assessment items; how they selected or constructed their responses; how they interacted with the…
Descriptors: English Language Learners, Student Evaluation, Language Arts, Summative Evaluation
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Peer reviewed
Ajeigbe, Taiwo Oluwafemi; Afolabi, Eyitayo Rufus Ifedayo – World Journal of Education, 2017
This study assessed unidimensionality and occurrence of Differential Item Functioning (DIF) in Mathematics and English Language items of Osun State Qualifying Examination. The study made use of secondary data. The results showed that OSQ Mathematics (-0.094 ≤ r ≤ 0.236) and English Language items (-0.095 ≤ r ≤ 0.228) were unidimensional. Also,…
Descriptors: Foreign Countries, Test Bias, Secondary School Students, Statistical Analysis
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
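Item exposure rate is simply the proportion of examinees who see a given item. A minimal simulation sketch (random selection stands in for MAP Growth's actual adaptive algorithm, which is not described here; all names and numbers are hypothetical):

```python
import random

def simulate_exposure(pool_size, test_length, n_examinees, seed=0):
    """Return per-item exposure rates from a simulated administration.

    Exposure rate for an item = administrations / examinees.
    Selection here is uniform-random without replacement per examinee,
    a stand-in for a real adaptive item-selection rule.
    """
    rng = random.Random(seed)
    counts = [0] * pool_size
    for _ in range(n_examinees):
        for item in rng.sample(range(pool_size), test_length):
            counts[item] += 1
    return [c / n_examinees for c in counts]

rates = simulate_exposure(pool_size=200, test_length=40, n_examinees=1000)
```

In a study like the one above, one would compare the distribution of `rates` (e.g., maximum exposure, share of never-used items) across pool restrictions.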
Peer reviewed
Latifi, Syed; Bulut, Okan; Gierl, Mark; Christie, Thomas; Jeeva, Shehzad – SAGE Open, 2016
The purpose of this study is to evaluate two methodological perspectives of test fairness using national Secondary School Certificate (SSC) examinations. The SSC is a suite of multi-subject national qualification tests at Grade 10 level in South Asian countries, such as Bangladesh, India, and Pakistan. Because it is a high-stakes test, the fairness…
Descriptors: Foreign Countries, National Competency Tests, Language Tests, Mathematics Tests
Ferrara, Steve; Steedle, Jeffrey; Kinsman, Amy – Partnership for Assessment of Readiness for College and Careers, 2015
We report results from the following three analyses of PARCC [Partnership for Assessment of Readiness for College and Careers] cognitive complexity measures, based on 2014 field test item and task development and field test data. We conducted classification and regression tree analyses using 2014 PARCC field test data to do the following: (1)…
Descriptors: Cognitive Processes, Difficulty Level, Test Items, Mathematics Tests
Peer reviewed
Combrinck, Celeste; Scherman, Vanessa; Maree, David – Perspectives in Education, 2016
This study describes how criterion-referenced feedback was produced from English language, mathematics and natural sciences monitoring assessments. The assessments were designed for grades 8 to 11 to give an overall indication of curriculum-standards attained in a given subject over the course of a year (N = 1113). The Rasch Item Map method was…
Descriptors: Item Response Theory, Feedback (Response), Criterion Referenced Tests, Academic Standards
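Under the Rasch model used in item mapping, an item's difficulty is the ability level at which a student has a 50% chance of success, which is what lets items and persons share one scale. A minimal sketch of that relationship (illustrative only, not the authors' implementation):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given
    person ability theta and item difficulty b (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item "maps" at the ability where success probability is 0.5,
# i.e., exactly at theta == b.
p_at_difficulty = rasch_prob(1.0, 1.0)   # 0.5 by construction
p_easier_item = rasch_prob(1.0, -0.5)    # > 0.5 for this person
```

An item map places each item at its `b` value alongside the distribution of person `theta` estimates, which is the basis for the criterion-referenced feedback described above.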
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias