Showing all 7 results
Peer reviewed
Download full text (PDF on ERIC)
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine the effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
Peer reviewed
Direct link
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Peer reviewed
Direct link
Raymond, Mark R.; Stevens, Craig; Bucak, S. Deniz – Advances in Health Sciences Education, 2019
Research suggests that the three-option format is optimal for multiple choice questions (MCQs). This conclusion is supported by numerous studies showing that most distractors (i.e., incorrect answers) are selected by so few examinees that they are essentially nonfunctional. However, nearly all studies have defined a distractor as nonfunctional if…
Descriptors: Multiple Choice Tests, Credentials, Test Format, Test Items
Peer reviewed
Download full text (PDF on ERIC)
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Peer reviewed
Download full text (PDF on ERIC)
Hohensinn, Christine; Baghaei, Purya – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
In large-scale multiple-choice (MC) tests, alternate forms of a test may be developed to prevent cheating by changing the order of items or by changing the position of the response options. The assumption is that since the content of the test forms is the same, the order of items or the positions of the response options do not have any effect on…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
Peer reviewed
Direct link
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Regan, Blake B. – ProQuest LLC, 2012
This study examined the relationship between high school exit exams and mathematical proficiency. With the No Child Left Behind (NCLB) Act requiring all students to be proficient in mathematics by 2014, it is imperative that high-stakes assessments accurately evaluate all aspects of student achievement, appropriately set the yardstick by which…
Descriptors: Exit Examinations, Mathematics Achievement, High School Students, Test Items