Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Aryadoust, Vahid; Ng, Li Ying; Sayama, Hiroki – Language Testing, 2021
Over the past decades, the application of Rasch measurement in language assessment has gradually increased. In the present study, we coded 215 papers using Rasch measurement published in 21 applied linguistics journals for multiple features. We found that seven Rasch models and 23 software packages were adopted in these papers, with many-facet…
Descriptors: Language Tests, Testing, Test Items, Network Analysis
Chandra Hawley Orrill; Martha Epstein; Kun Wang; Hamza Malik; Yasemin Copur-Gencturk – Grantee Submission, 2021
Measures of teacher mathematical knowledge are notoriously difficult to develop (e.g., Orrill et al., 2015). This is in part because of the multidimensional nature of teacher knowledge. As part of two separate projects being undertaken by this research team, we have attempted to write assessments of teacher pedagogical content knowledge (PCK) in…
Descriptors: Mathematics Instruction, Pedagogical Content Knowledge, Mathematics Tests, Thinking Skills
Heather Dorrian – ProQuest LLC, 2021
This mixed methods study aimed to categorize and analyze the frequencies and percentages of complex thinking in the PARCC practice assessments in grade 10 English Language Arts and Geometry. Hess' Cognitive Rigor Matrix was used in the first part of the study to code each of the PARCC assessment questions in grade 10 Language Arts and…
Descriptors: Thinking Skills, Drills (Practice), Standardized Tests, Grade 10
Sahin, Melek Gülsah; Yildirim, Yildiz; Boztunç Öztürk, Nagihan – Participatory Educational Research, 2023
A review of the literature shows that the development process of achievement tests is mainly investigated in dissertations. Moreover, a form that sheds light on developing an achievement test is expected to guide those who will administer it. Along these lines, the current study aims to create an "Achievement Test Development Process…
Descriptors: Achievement Tests, Test Construction, Records (Forms), Mathematics Achievement
Gantt, Allison L.; Paoletti, Teo; Corven, Julien – International Journal of Science and Mathematics Education, 2023
Covariational reasoning (or the coordination of two dynamically changing quantities) is central to secondary STEM subjects, but research has yet to fully explore its applicability to elementary and middle-grade levels within various STEM fields. To address this need, we selected a globally referenced STEM assessment--the Trends in International…
Descriptors: Incidence, Abstract Reasoning, Mathematics Education, Science Education
Alexander James Kwako – ProQuest LLC, 2023
Automated assessment using Natural Language Processing (NLP) has the potential to make English speaking assessments more reliable, authentic, and accessible. Yet without careful examination, NLP may exacerbate social prejudices based on gender or native language (L1). Current NLP-based assessments are prone to such biases, yet research and…
Descriptors: Gender Bias, Natural Language Processing, Native Language, Computational Linguistics
Aleyna Altan; Zehra Taspinar Sener – Online Submission, 2023
This research aimed to develop a valid and reliable test for detecting sixth-grade students' misconceptions and errors regarding fractions. A misconception diagnostic test was developed that covers the concept of fractions, different representations of fractions, ordering and comparing fractions, equivalence of…
Descriptors: Diagnostic Tests, Mathematics Tests, Fractions, Misconceptions
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Zijlmans, Eva A. O.; Tijmstra, Jesper; van der Ark, L. Andries; Sijtsma, Klaas – Educational and Psychological Measurement, 2018
Reliability is usually estimated for a total score, but it can also be estimated for item scores. Item-score reliability can be useful to assess the repeatability of an individual item score in a group. Three methods to estimate item-score reliability are discussed, known as method MS, method λ6, and method CA. The item-score…
Descriptors: Test Items, Test Reliability, Correlation, Comparative Analysis
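The snippet names method λ6 among the three item-score reliability estimators but gives no formulas. For orientation, here is a minimal numpy sketch of the classical, total-score version of Guttman's λ6, which is based on each item's squared multiple correlation with the remaining items; the item-level adaptation studied in the article is not specified in the snippet, so the function name, data shape, and demo data below are illustrative assumptions.

```python
import numpy as np

def guttman_lambda6(scores):
    """Classical Guttman's lambda-6 for a persons-by-items score matrix."""
    X = np.asarray(scores, dtype=float)
    R = np.corrcoef(X, rowvar=False)             # inter-item correlation matrix
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # squared multiple correlation of each item
    item_var = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    error_var = item_var * (1.0 - smc)           # regression error variance per item
    return 1.0 - error_var.sum() / total_var

# hypothetical 0/1 item scores: 500 persons, 10 items
demo = np.random.default_rng(0).integers(0, 2, size=(500, 10))
print(guttman_lambda6(demo))
```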
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
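The note is about fitting the two-parameter logistic testlet model as a constrained bifactor model in Mplus; the Mplus syntax itself is not shown in the snippet. As a language-neutral illustration of the equivalence being exploited, the numpy sketch below writes the testlet model in the common Bradlow-Wainer-Wang form and checks that a bifactor 2PL with the within-item constraint (specific loading equal to the general loading, with the testlet effect left unstandardized) gives the same response probability; the parameter values are made up.

```python
import numpy as np

def p_testlet(theta, gamma_d, a, b):
    """2PL testlet model: the testlet effect gamma_d enters with the same
    discrimination a as the general ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta + gamma_d - b)))

def p_bifactor(theta, s_d, a_gen, a_spec, d):
    """Bifactor 2PL in slope-intercept form, with separate loadings on the
    general factor (theta) and the testlet-specific factor (s_d)."""
    return 1.0 / (1.0 + np.exp(-(a_gen * theta + a_spec * s_d + d)))

# With a_spec constrained equal to a_gen (and intercept d = -a * b), the
# bifactor model reproduces the testlet model exactly.
theta, gamma, a, b = 0.5, 0.3, 1.2, -0.4
assert np.isclose(p_testlet(theta, gamma, a, b),
                  p_bifactor(theta, gamma, a, a, -a * b))
```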
Fay, Derek M.; Levy, Roy; Mehta, Vandhana – Journal of Educational Measurement, 2018
A common practice in educational assessment is to construct multiple forms of an assessment that consists of tasks with similar psychometric properties. This study utilizes a Bayesian multilevel item response model and descriptive graphical representations to evaluate the psychometric similarity of variations of the same task. These approaches for…
Descriptors: Psychometrics, Performance Based Assessment, Bayesian Statistics, Item Response Theory
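The abstract refers to a Bayesian multilevel item response model for variations of the same task without spelling out its structure. The PyMC sketch below shows one plausible reading, a Rasch-style model in which each variant's difficulty is drawn from a task-level distribution; the priors, variable names, and toy data are assumptions for illustration, not the authors' specification.

```python
import numpy as np
import pymc as pm

# toy setup (assumed): 12 items are variants of 4 parent tasks, answered by 200 persons
rng = np.random.default_rng(1)
n_persons, n_items, n_tasks = 200, 12, 4
task_of_item = np.repeat(np.arange(n_tasks), n_items // n_tasks)
y = rng.integers(0, 2, size=(n_persons, n_items))  # placeholder 0/1 responses

with pm.Model() as multilevel_rasch:
    theta = pm.Normal("theta", 0.0, 1.0, shape=n_persons)     # person ability
    mu_task = pm.Normal("mu_task", 0.0, 1.0, shape=n_tasks)   # task-level mean difficulty
    sigma_task = pm.HalfNormal("sigma_task", 1.0)             # spread of variants within a task
    b = pm.Normal("b", mu_task[task_of_item], sigma_task)     # difficulty of each variant
    pm.Bernoulli("y", logit_p=theta[:, None] - b[None, :], observed=y)
    idata = pm.sample(500, tune=500, target_accept=0.9)
```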
Embretson, Susan E.; Kingston, Neal M. – Journal of Educational Measurement, 2018
The continual supply of new items is crucial to maintaining quality for many tests. Automatic item generation (AIG) has the potential to rapidly increase the number of items that are available. However, the efficiency of AIG will be mitigated if the generated items must be submitted to traditional, time-consuming review processes. In two studies,…
Descriptors: Mathematics Instruction, Mathematics Achievement, Psychometrics, Test Items
Lu, Ru; Guo, Hongwen – ETS Research Report Series, 2018
In this paper we compare the newly developed pseudo-equivalent groups (PEG) linking method with linking methods based on the traditional nonequivalent groups with anchor test (NEAT) design and illustrate how to use the PEG methods under imperfect equating conditions. To do this, we propose a new method that combines the features of PEG…
Descriptors: Equated Scores, Comparative Analysis, Test Items, Background
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items into a formatted test form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
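The abstract credits mixed-integer programming with automating the grouping and ordering of items on a formatted form but does not give the formulation. The PuLP sketch below illustrates one simplified version of such a layout problem: assigning items to pages under a per-page length budget while keeping items that share a stimulus on the same page. The data, constraints, objective, and library choice are assumptions for illustration, not the authors' model.

```python
import pulp

# hypothetical data: item length in print lines and the stimulus set each item belongs to
item_lines = {"i1": 6, "i2": 4, "i3": 8, "i4": 5, "i5": 7}
item_set = {"i1": "A", "i2": "A", "i3": "B", "i4": "B", "i5": "B"}
pages, lines_per_page = ["p1", "p2"], 20

prob = pulp.LpProblem("form_layout", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (item_lines, pages), cat="Binary")  # x[item][page] = 1 if placed there

# each item is placed on exactly one page
for i in item_lines:
    prob += pulp.lpSum(x[i][p] for p in pages) == 1

# page length budget
for p in pages:
    prob += pulp.lpSum(item_lines[i] * x[i][p] for i in item_lines) <= lines_per_page

# items sharing a stimulus must appear on the same page
for s in set(item_set.values()):
    members = [i for i in item_lines if item_set[i] == s]
    for i in members[1:]:
        for p in pages:
            prob += x[members[0]][p] == x[i][p]

# objective: fill earlier pages first
prob += pulp.lpSum(idx * item_lines[i] * x[i][p]
                   for idx, p in enumerate(pages) for i in item_lines)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: next(p for p in pages if x[i][p].value() > 0.5) for i in item_lines})
```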

