Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 8 |
Author
| Author | Count |
| --- | --- |
| A. Lopez, Alexis | 1 |
| Ali, Usama S. | 1 |
| Basaraba, Deni L. | 1 |
| Bennett, Randy Elliot | 1 |
| Binici, Salih | 1 |
| Brese, Falk, Ed. | 1 |
| Chang, Hua-Hua | 1 |
| Crabtree, Ashleigh R. | 1 |
| Cuhadar, Ismail | 1 |
| Curtis, Deborah A. | 1 |
| Heller, Joan I. | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Reports - Research | 7 |
| Journal Articles | 6 |
| Reports - Descriptive | 2 |
| Dissertations/Theses -… | 1 |
| Guides - Non-Classroom | 1 |
| Information Analyses | 1 |
| Reports - Evaluative | 1 |
| Tests/Questionnaires | 1 |
Location
| Location | Count |
| --- | --- |
| Iowa | 1 |
| Kansas | 1 |
| Massachusetts | 1 |
| New Jersey | 1 |
| Oregon | 1 |
| Texas | 1 |
| United Kingdom | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of… | 1 |
Cuhadar, Ismail; Binici, Salih – Educational Measurement: Issues and Practice, 2022
This study employs the 4-parameter logistic item response theory model to account for the unexpected incorrect responses or slipping effects observed in a large-scale Algebra 1 End-of-Course assessment, including several innovative item formats. It investigates whether modeling the misfit at the upper asymptote has any practical impact on the…
Descriptors: Item Response Theory, Measurement, Student Evaluation, Algebra
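The 4-parameter logistic (4PL) model extends the 3PL by letting the upper asymptote fall below 1, so that even high-ability examinees have some probability of an unexpected incorrect response ("slipping"). A minimal sketch of the item response function, using the conventional a/b/c/d parameterization rather than any values from the study itself:

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response at ability theta.

    a: discrimination, b: difficulty,
    c: lower asymptote (guessing floor),
    d: upper asymptote < 1 (slipping ceiling).
    Setting d = 1 recovers the 3PL model.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is midway between the two asymptotes, (c + d) / 2; as theta grows it approaches d rather than 1, which is what "modeling the misfit at the upper asymptote" refers to.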
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; Ketterlin-Geller, Leanne R. – Practical Assessment, Research & Evaluation, 2020
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration of a test at the point where students are sufficiently unlikely to provide a correct response following a pattern of incorrect responses. Although widely employed in fixed-form tests in education, little research has been done to empirically…
Descriptors: Formative Evaluation, Test Format, Test Items, Difficulty Level
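A stopping rule of this kind can be sketched as a consecutive-miss cutoff on a form ordered by increasing difficulty; the threshold of 3 and the 0/1 response coding below are illustrative assumptions, not the rule evaluated in the study:

```python
def administer(responses, stop_after=3):
    """Administer a fixed-form test item by item and stop early
    after `stop_after` consecutive incorrect responses (0 = wrong,
    1 = correct). Returns the responses actually administered.
    """
    wrong_streak = 0
    administered = []
    for r in responses:
        administered.append(r)
        wrong_streak = wrong_streak + 1 if r == 0 else 0
        if wrong_streak >= stop_after:
            break  # further (harder) items are unlikely to be answered correctly
    return administered
```

For example, `administer([1, 0, 0, 0, 1])` halts after the third consecutive miss and never presents the final item.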
A. Lopez, Alexis – Journal of Latinos and Education, 2023
In this study, I examined how 34 Spanish-speaking English language learners (ELLs) used their linguistic resources (English and Spanish) and language modes (oral and written language) to demonstrate their knowledge of proportional reasoning in a dual language mathematics assessment task. The assessment allows students to see the item in both…
Descriptors: Spanish Speaking, English Language Learners, Language Usage, Mathematics Instruction
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Brese, Falk, Ed. – International Association for the Evaluation of Educational Achievement, 2012
The goal for selecting the released set of test items was to have approximately 25% of each of the full item sets for mathematics content knowledge (MCK) and mathematics pedagogical content knowledge (MPCK) that would represent the full range of difficulty, content, and item format used in the TEDS-M study. The initial step in the selection was to…
Descriptors: Preservice Teacher Education, Elementary School Teachers, Secondary School Teachers, Mathematics Teachers
Heller, Joan I.; Curtis, Deborah A.; Jaffe, Rebecca; Verboncoeur, Carol J. – Online Submission, 2005
This study investigated the relationship between instructional use of handheld graphing calculators and student achievement in Algebra 1. Three end-of-course test forms were administered (without calculators) using matrix sampling to 458 high-school students in two suburban school districts in Oregon and Kansas. Test questions on two forms were…
Descriptors: Test Items, Standardized Tests, Suburban Schools, Item Sampling
Wainer, Howard; And Others – 1990
The initial development of a testlet-based algebra test was previously reported (Wainer and Lewis, 1990). This account provides the details of this excursion into the use of hierarchical testlets and validity-based scoring. A pretest of two 15-item hierarchical testlets was carried out in which examinees' performance on a 4-item subset of each…
Descriptors: Adaptive Testing, Algebra, Comparative Analysis, Computer Assisted Testing
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992 (peer reviewed)
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science
Plake, Barbara S.; Wise, Steven L. – 1986
One question regarding the utility of adaptive testing is the effect of individualized item arrangements on examinee test scores. The purpose of this study was to analyze the item difficulty choices by examinees as a function of previous item performance. The examination was a 25-item test of basic algebra skills given to 36 students in an…
Descriptors: Adaptive Testing, Algebra, College Students, Computer Assisted Testing
