Publication Date
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
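The semantic-distance approach described above is typically implemented as one minus the cosine similarity between embedding vectors for the prompt and the response. A minimal sketch of that scoring step, assuming the sentence-transformers package; the model name and example responses are illustrative, not taken from the paper:

```python
# Semantic-distance scoring for an Alternate Uses Task (AUT) response.
# Assumption: any sentence-embedding model works here; this one is arbitrary.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_distance(prompt: str, response: str) -> float:
    """Originality proxy: 1 - cosine similarity of the two embeddings."""
    p, r = model.encode([prompt, response])
    cos = float(np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r)))
    return 1.0 - cos

# A common use should sit closer to the prompt than an original one.
print(semantic_distance("brick", "build a wall"))
print(semantic_distance("brick", "grind it into red pigment"))
```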
Vicky Chondrogianni; Morna Butcher – Studies in Second Language Acquisition, 2023
This study investigated the psycholinguistic and child-related variables that modulate vocabulary development and the so-called receptive-expressive gap in child L2 learners of Gaelic with English as their L1. In total, 50 6- to 8-year-old English-Gaelic bilingual children attending Gaelic-medium immersion education were administered the English…
Descriptors: Psycholinguistics, Children, Bilingual Students, English
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
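One plausible reading of "item-specific learning parameters" is a Rasch-type logit that grows with each practice opportunity at an item-specific rate; the exact parameterization of the ISLP model is given in the article, so the symbols below are assumptions for illustration only:

```python
# Hypothetical growth-IRT form: P(correct) rises with practice opportunities
# t at an item-specific rate gamma_j. Not necessarily the paper's exact model.
import math

def p_correct(theta: float, b_j: float, gamma_j: float, t: int) -> float:
    """Probability of a correct response after t prior attempts at item j."""
    logit = theta - b_j + gamma_j * t   # gamma_j: item-specific learning rate
    return 1.0 / (1.0 + math.exp(-logit))

# An item with a large learning parameter is mastered quickly.
for t in range(4):
    print(t, round(p_correct(theta=0.0, b_j=1.0, gamma_j=0.8, t=t), 3))
```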
Chioma C. Ezeh – AERA Online Paper Repository, 2023
Culturally relevant assessments (CRA) account for multiple socio-cultural identities, experiences, and values that mediate how students know, think, and respond to test items. Given the diversity of modern classrooms, it is critical that education researchers and practitioners understand and strive to implement CRA practices. This systematic…
Descriptors: Educational Practices, Culturally Relevant Education, Culture Fair Tests, Classroom Techniques
Benton, Tom – Research Matters, 2020
This article reviews the evidence on the extent to which experts' perceptions of item difficulties, captured using comparative judgement, can predict empirical item difficulties. This evidence is drawn from existing published studies on this topic and also from statistical analysis of data held by Cambridge Assessment. Having reviewed the…
Descriptors: Test Items, Difficulty Level, Expertise, Comparative Analysis
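Comparative judgement in this setting asks experts which of two items is harder and fits a Bradley-Terry-type model to the paired outcomes, producing a perceived-difficulty scale that can then be correlated with empirical difficulties. A small illustrative fit; the judgement data and the simple fitting loop are assumptions, not taken from the article:

```python
# Bradley-Terry fit via minorization-maximization (Hunter, 2004).
# Each pair records (judged-harder, judged-easier); data are invented.
judgements = [("Q1", "Q2"), ("Q1", "Q3"), ("Q2", "Q3"), ("Q1", "Q2"), ("Q3", "Q2")]

items = sorted({q for pair in judgements for q in pair})
strength = {q: 1.0 for q in items}

for _ in range(200):
    wins = {q: 0.0 for q in items}
    denom = {q: 0.0 for q in items}
    for harder, easier in judgements:
        wins[harder] += 1.0
        inv = 1.0 / (strength[harder] + strength[easier])
        denom[harder] += inv
        denom[easier] += inv
    strength = {q: wins[q] / denom[q] for q in items}
    total = sum(strength.values())
    strength = {q: s / total for q, s in strength.items()}  # normalize

print(strength)  # larger value = judged harder more consistently
```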
Das, Syaamantak; Mandal, Shyamal Kumar Das; Basu, Anupam – Contemporary Educational Technology, 2020
Cognitive learning complexity identification of assessment questions is an essential task in the domain of education, as it helps both the teacher and the learner to discover the thinking process required to answer a given question. Bloom's Taxonomy cognitive levels are considered a benchmark standard for the classification of cognitive…
Descriptors: Classification, Difficulty Level, Test Items, Identification
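Classifiers in this area often start from the question's main verb, since each Bloom level is keyed to characteristic action verbs. A deliberately minimal keyword sketch of that idea; the verb lists are abbreviated and illustrative, not the paper's method:

```python
# Toy Bloom's-taxonomy classifier: match the question's words against
# level-typical verbs. Real systems use far richer NLP features.
BLOOM_VERBS = {
    "Remember":   {"define", "list", "name", "state"},
    "Understand": {"explain", "summarize", "classify", "describe"},
    "Apply":      {"solve", "use", "demonstrate", "compute"},
    "Analyze":    {"compare", "differentiate", "examine"},
    "Evaluate":   {"justify", "critique", "assess"},
    "Create":     {"design", "construct", "formulate"},
}

def bloom_level(question: str) -> str:
    words = set(question.lower().replace("?", "").replace(".", "").split())
    for level, verbs in BLOOM_VERBS.items():
        if words & verbs:
            return level
    return "Unclassified"

print(bloom_level("Explain why the sky is blue."))      # Understand
print(bloom_level("Design an experiment to test it."))  # Create
```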
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
A new index of item discrimination power (IDP), dimension-corrected Somers' D (D2), is proposed. Somers' D is one of the superior alternatives to item-total (Rit) and item-rest (Rir) correlation in reflecting the real IDP of items with scales 0/1 and 0/1/2, that is, up to three categories. D also reaches the extreme values +1 and -1 correctly…
Descriptors: Item Analysis, Correlation, Test Items, Simulation
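Somers' D expresses discrimination as the ordinal association between item score g and total score X, counted over examinee pairs. A direct pair-counting sketch with toy data, without the dimension correction (D2) that the article proposes:

```python
def somers_d(x, y):
    """Somers' D(Y|X): pairs tied on X are excluded from the denominator."""
    conc = disc = tied_y = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                continue            # tied on the conditioning variable
            if dy == 0:
                tied_y += 1
            elif dx * dy > 0:
                conc += 1
            else:
                disc += 1
    return (conc - disc) / (conc + disc + tied_y)

item = [0, 0, 1, 1, 2, 2, 1, 0]        # polytomous item, scale 0/1/2
total = [3, 5, 9, 11, 15, 17, 10, 4]   # total scores; invented data
print(somers_d(total, item))           # D(g|X), item conditioned on score
```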
Bimpeh, Yaw; Pointer, William; Smith, Ben Alexander; Harrison, Liz – Applied Measurement in Education, 2020
Many high-stakes examinations in the United Kingdom (UK) use both constructed-response items and selected-response items. We need to evaluate the inter-rater reliability for constructed-response items that are scored by humans. While there are a variety of methods for evaluating rater consistency across ratings in the psychometric literature, we…
Descriptors: Scoring, Generalizability Theory, Interrater Reliability, Foreign Countries
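In its simplest form, a generalizability analysis of rater consistency treats examinees crossed with raters as a one-facet design, estimates variance components from the ANOVA mean squares, and reports a G coefficient. A toy sketch of that computation; the scores are invented and the study's actual design is richer:

```python
import numpy as np

scores = np.array([      # rows = examinees, columns = raters; invented data
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
], dtype=float)
n_p, n_r = scores.shape
grand = scores.mean()

ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()   # persons
ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()   # raters
ss_pr = ((scores - grand) ** 2).sum() - ss_p - ss_r       # residual

ms_p = ss_p / (n_p - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                              # person x rater interaction + error
var_p = max((ms_p - ms_pr) / n_r, 0.0)      # true person variance

g_coef = var_p / (var_p + var_pr / n_r)     # relative G coefficient
print(round(g_coef, 3))
```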
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
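The computational burden the authors describe arises in the item-selection step, which at each stage searches the pool for the item most informative at the current ability estimate. A generic unidimensional 2PL version of that loop, shown for the mechanism only; the parameters are illustrative and none of the article's ranking-specific refinements are included:

```python
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])    # discriminations (illustrative)
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # difficulties

def fisher_info(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)             # 2PL item information

theta_hat = 0.0
used = []
for step in range(3):
    info = fisher_info(theta_hat)
    info[used] = -np.inf                    # never re-administer an item
    nxt = int(np.argmax(info))
    used.append(nxt)
    print(f"step {step}: administer item {nxt}")
    # ...administer the item, then update theta_hat from the response...
```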
Pugh, Debra; De Champlain, André; Gierl, Mark; Lai, Hollis; Touchie, Claire – Research and Practice in Technology Enhanced Learning, 2020
The purpose of this study was to compare the quality of multiple choice questions (MCQs) developed using automated item generation (AIG) versus traditional methods, as judged by a panel of experts. The quality of MCQs developed using two methods (i.e., AIG or traditional) was evaluated by a panel of content experts in a blinded study. Participants…
Descriptors: Computer Assisted Testing, Test Construction, Multiple Choice Tests, Test Items
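AIG typically works from an item model: a stem with constrained slots that is expanded combinatorially into many variants. A toy illustration of the mechanism; the template and fill values are invented, not from the study:

```python
# Minimal template-based item generation: one stem, two slots.
import itertools
import random

stem = ("A {age}-year-old patient presents with {symptom}. "
        "What is the most likely diagnosis?")
ages = [25, 40, 65]
symptoms = ["chest pain on exertion", "sudden unilateral weakness"]

items = [stem.format(age=a, symptom=s)
         for a, s in itertools.product(ages, symptoms)]
print(len(items), "generated items")
print(random.choice(items))
```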
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
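The two model families differ in how the dimensions combine: a compensatory model sums ability contributions inside a single logistic, while a noncompensatory model multiplies per-dimension probabilities, so a deficit on one dimension cannot be offset by strength on the other. A two-dimensional sketch with illustrative parameters:

```python
import math

def compensatory(th1, th2, a1, a2, d):
    return 1 / (1 + math.exp(-(a1 * th1 + a2 * th2 + d)))

def noncompensatory(th1, th2, a1, a2, b1, b2):
    p1 = 1 / (1 + math.exp(-a1 * (th1 - b1)))
    p2 = 1 / (1 + math.exp(-a2 * (th2 - b2)))
    return p1 * p2

# High theta1 offsets low theta2 only in the compensatory model.
print(compensatory(2.0, -2.0, 1.0, 1.0, 0.0))          # ~0.50
print(noncompensatory(2.0, -2.0, 1.0, 1.0, 0.0, 0.0))  # ~0.10
```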
Guo, Hongwen; Dorans, Neil J. – Journal of Educational Measurement, 2020
We make a distinction between the operational practice of using an observed score to assess differential item functioning (DIF) and the concept of departure from measurement invariance (DMI) that conditions on a latent variable. DMI and DIF indices of effect sizes, based on the Mantel-Haenszel test of common odds ratio, converge under restricted…
Descriptors: Weighted Scores, Test Items, Item Response Theory, Measurement
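The Mantel-Haenszel procedure stratifies examinees on the observed score, forms a 2x2 group-by-correctness table within each stratum, and pools a common odds ratio; a value near 1 indicates no DIF. A sketch with invented counts:

```python
import math

# Each stratum (matched on total score): (A, B, C, D) =
# reference-correct, reference-incorrect, focal-correct, focal-incorrect.
strata = [
    (30, 10, 25, 15),
    (40, 20, 35, 25),
    (20, 30, 15, 35),
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den                      # common odds ratio; 1.0 = no DIF
delta_mh = -2.35 * math.log(alpha_mh)     # ETS delta scale
print(round(alpha_mh, 3), round(delta_mh, 3))
```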
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M. – Educational Measurement: Issues and Practice, 2020
In test-centered standard-setting methods, borderline performance can be represented by many different profiles of strengths and weaknesses. As a result, asking panelists to estimate item or test performance for a hypothetical group of borderline examinees, or a typical borderline examinee, may be an extremely difficult task and one that can…
Descriptors: Standard Setting (Scoring), Cutting Scores, Testing Problems, Profiles
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
Pearson product-moment correlation coefficient between item g and test score X, known as item-test or item-total correlation ("Rit"), and item-rest correlation ("Rir") are two of the most used classical estimators for item discrimination power (IDP). Both "Rit" and "Rir" underestimate IDP caused by the…
Descriptors: Correlation, Test Items, Scores, Difficulty Level
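Both estimators are direct to compute: Rit correlates the item with the total score, while Rir removes the item from the total first, avoiding part-whole inflation. A toy computation with invented response data:

```python
import numpy as np

items = np.array([   # rows = examinees, columns = dichotomous items
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
total = items.sum(axis=1)

g = items[:, 0]                          # the item under study
rit = np.corrcoef(g, total)[0, 1]        # item-total correlation
rir = np.corrcoef(g, total - g)[0, 1]    # item-rest correlation
print(round(rit, 3), round(rir, 3))      # Rir < Rit; both underestimate IDP
```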
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Grantee Submission, 2020
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Item Response Theory, Test Bias, Test Items
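The concern applies to any IRT-based ability tracking: if item parameters are mis-calibrated because the items were never piloted, ability estimates shift. A small 2PL illustration, with invented responses and an arbitrary perturbation standing in for calibration bias:

```python
import numpy as np
from scipy.optimize import minimize_scalar

a_true = np.array([1.0, 1.4, 0.8, 1.2, 1.1])    # discriminations (invented)
b_true = np.array([-0.5, 0.0, 0.3, 0.8, -1.0])  # difficulties
x = np.array([1, 1, 0, 0, 1])                   # one student's responses

def neg_loglik(theta, a, b):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

a_biased = a_true * 1.5    # stand-in for unpiloted, mis-calibrated parameters
b_biased = b_true + 0.6
for a, b, label in [(a_true, b_true, "true"), (a_biased, b_biased, "biased")]:
    est = minimize_scalar(neg_loglik, bounds=(-4, 4), args=(a, b),
                          method="bounded")
    print(label, round(est.x, 2))   # the ability estimate drifts under bias
```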
