Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 7 |
| Since 2017 (last 10 years) | 11 |
| Since 2007 (last 20 years) | 23 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Comparative Analysis | 29 |
| Mathematics Tests | 29 |
| Multiple Choice Tests | 29 |
| Test Items | 19 |
| Test Format | 13 |
| Item Response Theory | 11 |
| Foreign Countries | 9 |
| Scores | 8 |
| Computer Assisted Testing | 7 |
| Statistical Analysis | 7 |
| Item Analysis | 6 |
Author
| Author | Count |
| --- | --- |
| Li, Dongmei | 2 |
| Suh, Youngsuk | 2 |
| ALKursheh, Taha Okleh | 1 |
| Al-zboon, Habis Saad | 1 |
| AlNasraween, Mo'en Salman | 1 |
| Petrosino, Anthony | 1 |
| Ayaz, Hasan | 1 |
| Beserra, Vagner | 1 |
| Binici, Salih | 1 |
| Barzel, Bärbel | 1 |
| Cai, Li | 1 |
Audience
| Audience | Count |
| --- | --- |
| Administrators | 1 |
| Practitioners | 1 |
| Researchers | 1 |
Location
| Location | Count |
| --- | --- |
| Australia | 1 |
| Canada | 1 |
| Chile (Santiago) | 1 |
| Czech Republic | 1 |
| Germany | 1 |
| Indiana | 1 |
| Indonesia | 1 |
| Maryland | 1 |
| New Jersey | 1 |
| Nigeria | 1 |
| Pennsylvania (Pittsburgh) | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| SAT (College Admission Test) | 3 |
| ACT Assessment | 2 |
| Advanced Placement… | 1 |
| General Educational… | 1 |
| National Assessment of… | 1 |
| Program for International… | 1 |
| State of Texas Assessments of… | 1 |
| Work Keys (ACT) | 1 |
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC), and National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
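The snippet does not say which equating method the study applied; as background, here is a minimal sketch of mean-sigma linear equating, one standard way to place scores from two examinations on a common scale. The `linear_equate` helper and the WAEC/NABTEB score vectors are hypothetical illustrations, not data from the study.

```python
import numpy as np

def linear_equate(scores_x, scores_y):
    """Mean-sigma linear equating: map form-X scores onto the form-Y
    scale so the transformed scores share Y's mean and spread."""
    mx, sx = np.mean(scores_x), np.std(scores_x, ddof=1)
    my, sy = np.mean(scores_y), np.std(scores_y, ddof=1)
    slope = sy / sx
    return lambda x: slope * (x - mx) + my

# Hypothetical samples: scores from the two examination bodies.
waec = np.array([48, 55, 61, 70, 52, 66, 59])
nabteb = np.array([51, 58, 64, 73, 55, 69, 62])
to_nabteb = linear_equate(waec, nabteb)
print(to_nabteb(60))  # a WAEC score of 60 expressed on the NABTEB scale
```

This sketch assumes a random-groups design; the study may have used a different design or a nonlinear (e.g., equipercentile or IRT-based) method.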
Walter M. Stroup; Anthony Petrosino; Corey Brady; Karen Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Tests of statistical significance often play a decisive role in establishing the empirical warrant of evidence-based research in education. The results from pattern-based assessment items, as introduced in this paper, are categorical and multimodal and do not immediately support the use of measures of central tendency as typically related to…
Descriptors: Statistical Significance, Comparative Analysis, Research Methodology, Evaluation Methods
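Because pattern-based item results are categorical, a chi-squared test of homogeneity is one conventional alternative to mean-based comparisons; the sketch below is illustrative, not the authors' procedure, and the response-pattern counts are invented.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of students exhibiting each response pattern
# (categorical, multimodal outcomes) in two comparison groups.
observed = [[34, 12, 9],   # group A: pattern 1, 2, 3
            [21, 25, 10]]  # group B
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```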
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
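One standard screen for the kind of item-level group differences this paper examines is the Mantel-Haenszel common odds ratio; the paper's own method is not stated in the snippet, and the stratum counts below are invented.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across ability strata.
    Each stratum is (a, b, c, d): reference correct, reference
    incorrect, focal correct, focal incorrect on the studied item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical strata formed by matching examinees on total test score.
strata = [(40, 10, 35, 15), (30, 20, 22, 28), (15, 35, 10, 40)]
print(mantel_haenszel_or(strata))  # values near 1 suggest no DIF
```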
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: How to map items to skills if multiple skills are present? And how to infer the ability of new students that have not been part of the training data? Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
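As background for the IRT setup the abstract describes, here is a minimal sketch of ability estimation under the basic one-skill Rasch model (the paper's contribution goes beyond this case); the responses and difficulties are invented.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: P(correct) given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=25):
    """Maximum-likelihood ability estimate for one student via
    Newton-Raphson updates on the Rasch log-likelihood."""
    theta = 0.0
    for _ in range(iters):
        p = rasch_prob(theta, difficulties)
        grad = np.sum(responses - p)   # first derivative of log-likelihood
        info = np.sum(p * (1 - p))     # Fisher information (negative Hessian)
        theta += grad / info
    return theta

# Hypothetical: one student's scored (0/1) responses to five items.
resp = np.array([1, 1, 0, 1, 0])
diff = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(estimate_ability(resp, diff))
```

Note the MLE is undefined for all-correct or all-incorrect response vectors, a standard limitation of this basic model.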
Viskotová, Lenka; Hampel, David – Mathematics Teaching Research Journal, 2022
Computer-aided assessment is an important tool that reduces the workload of teachers and increases the efficiency of their work. The multiple-choice test is considered to be one of the most common forms of computer-aided testing and its application for mid-term has indisputable advantages. For the purposes of a high-quality and responsible…
Descriptors: Undergraduate Students, Mathematics Tests, Computer Assisted Testing, Faculty Workload
ALKursheh, Taha Okleh; Al-zboon, Habis Saad; AlNasraween, Mo'en Salman – International Journal of Instruction, 2022
This study compared the effect of two test item formats (multiple-choice and completion) on estimates of person ability, item parameters, and the test information function (TIF). To achieve this aim, two formats of a Mathematics (1) test were created, multiple-choice and completion; the final form consisted of 31 items. The…
Descriptors: Comparative Analysis, Test Items, Item Response Theory, Test Format
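For reference, the test information function the abstract mentions: under a two-parameter logistic IRT model (the snippet does not state which model the study fit), each item contributes information $I_i(\theta)$, the TIF is the sum across items, and its inverse square root is the standard error of the ability estimate:

$$I_i(\theta) = a_i^2\, P_i(\theta)\bigl(1 - P_i(\theta)\bigr), \qquad I(\theta) = \sum_{i=1}^{n} I_i(\theta), \qquad \mathrm{SE}(\hat\theta) = \frac{1}{\sqrt{I(\theta)}}$$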
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Falk, Carl F.; Cai, Li – Journal of Educational Measurement, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation
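The model described in this abstract can be written compactly (notation mine): with lower asymptote $c$ and a polynomial $m(\theta)$ constrained to be monotonically increasing,

$$P(X = 1 \mid \theta) = c + \frac{1 - c}{1 + e^{-m(\theta)}},$$

which reduces to the three-parameter logistic model when $m(\theta) = a(\theta - b)$.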
Beserra, Vagner; Nussbaum, Miguel; Grass, Antonio – Interactive Learning Environments, 2017
When using educational video games, particularly drill-and-practice video games, there are several ways of providing an answer to a quiz. The majority of paper-based options can be classified as being either multiple-choice or constructed-response. Therefore, in the process of creating an educational drill-and-practice video game, one fundamental…
Descriptors: Multiple Choice Tests, Drills (Practice), Educational Games, Video Games
Nikolov, Margaret C.; Withers, Wm. Douglas – PRIMUS, 2016
We propose a new course structure to address the needs of college students with previous calculus study but no course validations as an alternative to repeating the first year of calculus. Students are introduced directly to topics from Calculus III unpreceded by a formal review of topics from Calculus I or II, but with additional syllabus time…
Descriptors: Mathematics Instruction, College Mathematics, Undergraduate Study, Calculus
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
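The odds-ratio approach compared here can be illustrated on a single 2×2 table (the layout and counts are invented; the study's exact formulation may differ):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its standard error for a 2x2 table:
    rows = reference / focal group, columns = chose the studied
    distractor / chose another incorrect option. A 0.5 continuity
    correction guards against zero cells."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return lor, se

# Hypothetical counts among examinees who missed the item.
lor, se = log_odds_ratio(30, 70, 55, 45)
print(f"log-OR = {lor:.2f} +/- {1.96 * se:.2f}")  # CI excluding 0 flags DDF
```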
Hamdi, Syukrul; Kartowagiran, Badrun; Haryanto – International Journal of Instruction, 2018
The purpose of this study was to develop a testlet-model mathematics test instrument for classroom assessment at the elementary school level. A testlet is a group of multiple-choice questions eliciting similar information, scored with graded levels of response. This research was conducted in East Lombok, Indonesia. The design used was research…
Descriptors: Test Items, Models, Elementary School Mathematics, Mathematics Instruction
Terzi, Ragip; Suh, Youngsuk – Journal of Educational Measurement, 2015
An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Comparative Analysis
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
