| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 18 |
| Since 2022 (last 5 years) | 120 |
| Since 2017 (last 10 years) | 262 |
| Since 2007 (last 20 years) | 435 |
| Descriptor | Records |
| --- | --- |
| Test Format | 956 |
| Test Items | 956 |
| Test Construction | 363 |
| Multiple Choice Tests | 260 |
| Foreign Countries | 227 |
| Difficulty Level | 199 |
| Higher Education | 179 |
| Computer Assisted Testing | 160 |
| Item Response Theory | 151 |
| Item Analysis | 149 |
| Scores | 146 |
| Audience | Records |
| --- | --- |
| Practitioners | 62 |
| Teachers | 47 |
| Researchers | 32 |
| Students | 15 |
| Administrators | 13 |
| Parents | 6 |
| Policymakers | 5 |
| Community | 1 |
| Counselors | 1 |
| Location | Records |
| --- | --- |
| Turkey | 27 |
| Canada | 15 |
| Germany | 15 |
| Australia | 13 |
| Israel | 13 |
| Japan | 12 |
| Netherlands | 10 |
| United Kingdom | 10 |
| United States | 9 |
| Arizona | 6 |
| Iran | 6 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| Individuals with Disabilities… | 2 |
| No Child Left Behind Act 2001 | 2 |
| Elementary and Secondary… | 1 |
| Head Start | 1 |
| Job Training Partnership Act… | 1 |
| Perkins Loan Program | 1 |
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
Sheybani, Elias; Zeraatpishe, Mitra – International Journal of Language Testing, 2018
Test method is deemed to affect test scores along with examinee ability (Bachman, 1996). In this research, the role of the method facet in reading comprehension tests is studied. Bachman divided the method facet into five categories, one of which is the nature of the input and of the expected response. This study examined the role of the method effect in…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Test Format
Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K. – Practical Assessment, Research & Evaluation, 2015
In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…
Descriptors: Item Response Theory, Monte Carlo Methods, Scaling, Test Items
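The abstract above refers to the standard linear scale transformation for the three-parameter logistic (3PL) model, under which the lower-asymptote (c) parameter is indeed left unadjusted. A minimal sketch of that transformation (the function name and example values are illustrative, not drawn from the paper):

```python
def rescale_3pl(a, b, c, A, B):
    """Apply the linear scale transformation theta* = A*theta + B
    to 3PL item parameters (a, b, c)."""
    # Discrimination is divided by A, difficulty is stretched and shifted,
    # and the guessing parameter c passes through unchanged.
    return a / A, A * b + B, c

# Example: rescaling one item with hypothetical coefficients A = 1.2, B = -0.5
a_new, b_new, c_new = rescale_3pl(a=1.5, b=0.8, c=0.2, A=1.2, B=-0.5)
```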
Undersander, Molly A.; Kettler, Richard M.; Stains, Marilyne – Journal of Geoscience Education, 2017
Concept inventories have been determined to be useful assessment tools for evaluating students' knowledge, particularly in the sciences. However, these assessment tools must be validated to reflect as accurately as possible students' understanding of concepts. One possible threat to this validation is what previous literature calls the item order…
Descriptors: Earth Science, Test Items, Scientific Concepts, Science Tests
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Using Necessary Information to Identify Item Dependence in Passage-Based Reading Comprehension Tests
Baldonado, Angela Argo; Svetina, Dubravka; Gorin, Joanna – Applied Measurement in Education, 2015
Applications of traditional unidimensional item response theory models to passage-based reading comprehension assessment data have been criticized based on potential violations of local independence. However, simple rules for determining dependency, such as including all items associated with a particular passage, may overestimate the dependency…
Descriptors: Reading Tests, Reading Comprehension, Test Items, Item Response Theory
Edward Paul Getman – Online Submission, 2020
Despite calls for engaging assessments targeting young language learners (YLLs) between 8 and 13 years old, what makes assessment tasks engaging and how such task characteristics affect measurement quality have not been well studied empirically. Furthermore, there has been a dearth of validity research about technology-enhanced speaking tests for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Learner Engagement
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Saß, Steffani; Schütte, Kerstin – Journal of Psychoeducational Assessment, 2016
Solving test items might require abilities in test-takers other than the construct the test was designed to assess. Item and student characteristics such as item format or reading comprehension can impact the test result. This experiment is based on cognitive theories of text and picture comprehension. It examines whether integration aids, which…
Descriptors: Reading Difficulties, Science Tests, Test Items, Visual Aids
Stevenson, Claire E.; Heiser, Willem J.; Resing, Wilma C. M. – Journal of Psychoeducational Assessment, 2016
Multiple-choice (MC) analogy items are often used in cognitive assessment. However, in dynamic testing, where the aim is to provide insight into potential for learning and the learning process, constructed-response (CR) items may be of benefit. This study investigated whether training with CR or MC items leads to differences in the strategy…
Descriptors: Logical Thinking, Multiple Choice Tests, Test Items, Cognitive Tests
Boggan, Matthew; Wallin, Penny – Excellence in Education Journal, 2016
The mandate that teachers are accountable for student performance has driven states to initiate new evaluation instruments for administrator use to assess teachers. This study examined leaders' perceptions of a first-time statewide accountability instrument used for teacher assessment in one southern state, with 230 K-12 principals anonymously…
Descriptors: Principals, Administrator Attitudes, Teacher Evaluation, Scoring Rubrics
Carnegie, Jacqueline A. – Canadian Journal for the Scholarship of Teaching and Learning, 2017
Summative evaluation for large classes of first- and second-year undergraduate courses often involves the use of multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of those exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when…
Descriptors: Undergraduate Students, Student Evaluation, Multiple Choice Tests, Test Format
Schoen, Robert C.; Yang, Xiaotong; Liu, Sicong; Paek, Insu – Grantee Submission, 2017
The Early Fractions Test v2.2 is a paper-pencil test designed to measure mathematics achievement of third- and fourth-grade students in the domain of fractions. The purpose, or intended use, of the Early Fractions Test v2.2 is to serve as a measure of student outcomes in a randomized trial designed to estimate the effect of an educational…
Descriptors: Psychometrics, Mathematics Tests, Mathematics Achievement, Fractions
Katrina C. Roohr; HyeSun Lee; Jun Xu; Ou Lydia Liu; Zhen Wang – Numeracy, 2017
Quantitative literacy has been identified as an important student learning outcome (SLO) by both the higher education and workforce communities. This paper aims to provide preliminary evidence of the psychometric quality of the pilot forms for "HEIghten" quantitative literacy, a next-generation SLO assessment for students in higher…
Descriptors: Psychometrics, Numeracy, Test Items, Item Analysis
Kevelson, Marisol J. C. – ETS Research Report Series, 2019
This study presents estimates of Black-White, Hispanic-White, and income achievement gaps using data from two different types of reading and mathematics assessments: constructed-response assessments that were likely more cognitively demanding and state achievement tests that were likely less cognitively demanding (i.e., composed solely or largely…
Descriptors: Racial Differences, Achievement Gap, White Students, African American Students