Publication Date

| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 55 |
| Since 2022 (last 5 years) | 369 |
| Since 2017 (last 10 years) | 922 |
| Since 2007 (last 20 years) | 2707 |
Descriptor

| Descriptor | Records |
| --- | --- |
| Models | 3650 |
| Item Response Theory | 1169 |
| Feedback (Response) | 1129 |
| Foreign Countries | 627 |
| Responses | 540 |
| Emotional Response | 437 |
| Teaching Methods | 385 |
| Test Items | 385 |
| Comparative Analysis | 342 |
| Simulation | 323 |
| Evaluation Methods | 320 |
Audience

| Audience | Records |
| --- | --- |
| Teachers | 41 |
| Researchers | 32 |
| Practitioners | 24 |
| Administrators | 14 |
| Counselors | 12 |
| Policymakers | 4 |
| Support Staff | 4 |
| Media Staff | 3 |
| Parents | 3 |
| Students | 2 |
Location

| Location | Records |
| --- | --- |
| Australia | 73 |
| United Kingdom | 40 |
| Canada | 39 |
| California | 36 |
| Germany | 36 |
| China | 34 |
| United States | 33 |
| United Kingdom (England) | 29 |
| Taiwan | 27 |
| Netherlands | 25 |
| Florida | 23 |
What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Does not meet standards | 2 |
Rico-Juan, Juan Ramon; Sanchez-Cartagena, Victor M.; Valero-Mas, Jose J.; Gallego, Antonio Javier – IEEE Transactions on Learning Technologies, 2023
Online Judge (OJ) systems are typically considered within programming-related courses as they yield fast and objective assessments of the code developed by the students. Such an evaluation generally provides a single decision based on a rubric, most commonly whether the submission successfully accomplished the assignment. Nevertheless, since in an…
Descriptors: Artificial Intelligence, Models, Student Behavior, Feedback (Response)
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring, and the use of suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Juliette Woodrow; Sanmi Koyejo; Chris Piech – International Educational Data Mining Society, 2025
High-quality feedback requires understanding of a student's work, insights into what concepts would help them improve, and language that matches the preferences of the specific teaching team. While Large Language Models (LLMs) can generate coherent feedback, adapting these responses to align with specific teacher preferences remains an open…
Descriptors: Feedback (Response), Artificial Intelligence, Teacher Attitudes, Preferences
Anthony R. Reibel – Journal of School Administration Research and Development, 2025
Traditional assessment design focuses on outcomes and often disregards how students perceive their abilities, process emotions, or self-express. This indifference can undermine assessment outcomes and evaluation reliability (Hattie, 2023; Nilson, 2023; Reibel, 2022). This paper introduces "empathetic assessment design" (EAD), a framework…
Descriptors: Empathy, Evaluation Methods, Student Evaluation, Models
Lindsay Maffei-Almodovar; Peter Sturmey; Joshua Jessel – Journal of Behavioral Education, 2025
Pyramidal training is an effective model for disseminating behavior analytic skills. However, pyramidal training research is often conducted in controlled university settings. Further, research that has evaluated the effectiveness of pyramidal training in classroom settings (see Pence et al. 2014) often focuses on improving the use of one…
Descriptors: Functional Behavioral Assessment, Training Methods, Training, Program Effectiveness
Xiaowen Liu – International Journal of Testing, 2024
Differential item functioning (DIF) often arises from multiple sources. Within the context of multidimensional item response theory, this study examined DIF items with varying secondary dimensions using the three DIF methods: SIBTEST, Mantel-Haenszel, and logistic regression. The effect of the number of secondary dimensions on DIF detection rates…
Descriptors: Item Analysis, Test Items, Item Response Theory, Correlation
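The abstract names three DIF detection methods. As a minimal sketch of one of them, the Mantel-Haenszel procedure compares the odds of a correct response across groups within total-score strata; the counts below are hypothetical and purely illustrative:

```python
# Mantel-Haenszel common odds ratio for DIF screening: examinees are
# stratified by total score; within each stratum we tabulate correct and
# incorrect responses for the reference and focal groups.
def mantel_haenszel_odds_ratio(strata):
    """strata: list of (A, B, C, D) counts per score stratum, where
    A/B = reference correct/incorrect, C/D = focal correct/incorrect."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical counts for two score strata.
strata = [(30, 10, 20, 20), (40, 10, 30, 20)]
alpha_mh = mantel_haenszel_odds_ratio(strata)
# alpha_mh > 1 suggests the item favors the reference group in these data.
```

A value near 1 indicates little DIF; in practice the log of this ratio is rescaled (e.g., to the ETS delta metric) before flagging items.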
Dubravka Svetina Valdivia; Shenghai Dai – Journal of Experimental Education, 2024
Applications of polytomous IRT models in applied fields (e.g., health, education, psychology) abound. However, little is known about the impact of the number of categories and sample size requirements for precise parameter recovery. In a simulation study, we investigated the impact of the number of response categories and required sample size…
Descriptors: Item Response Theory, Sample Size, Models, Classification
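For readers unfamiliar with polytomous IRT, a common example is Samejima's graded response model, in which each cumulative probability P(X >= k) follows a 2PL curve and category probabilities are differences of adjacent curves. The item parameters below are hypothetical:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: P(X >= k) is a 2PL curve with threshold b_k;
    category probabilities are differences of adjacent cumulative curves."""
    # Cumulative P(X >= k) for k = 1..K-1, bracketed by 1 and 0.
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical 4-category item: discrimination 1.2, ordered thresholds.
probs = grm_category_probs(theta=0.0, a=1.2, thresholds=[-1.0, 0.0, 1.0])
# The category probabilities sum to 1 by construction.
```

Increasing the number of thresholds adds response categories, which is exactly the design factor the simulation study varies.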
Austin M. Shin; Ayaan M. Kazerouni – ACM Transactions on Computing Education, 2024
Background and Context: Students' programming projects are often assessed on the basis of their tests as well as their implementations, most commonly using test adequacy criteria like branch coverage, or, in some cases, mutation analysis. As a result, students are implicitly encouraged to use these tools during their development process (i.e., so…
Descriptors: Feedback (Response), Programming, Student Projects, Computer Software
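The abstract mentions mutation analysis as a test-adequacy criterion. As a minimal sketch of the idea (not the authors' tooling), a mutant is a small, deliberate change to the implementation, and a test suite's mutation score is the fraction of mutants it "kills" by observing a different output:

```python
# Minimal mutation-analysis sketch: a mutant is "killed" if at least one
# test input distinguishes it from the original implementation.
def original(a, b):
    return max(a, b)

mutants = [
    lambda a, b: min(a, b),   # comparison operator effectively flipped
    lambda a, b: a,           # second argument ignored
]

def kills(tests, mutant):
    return any(original(*args) != mutant(*args) for args in tests)

def mutation_score(tests):
    return sum(kills(tests, m) for m in mutants) / len(mutants)

weak_suite = [(3, 3)]            # max == min here, so no mutant is killed
strong_suite = [(3, 3), (1, 5)]  # the second input distinguishes both mutants
```

A suite with full branch coverage can still score poorly here, which is why mutation analysis is treated as the stronger adequacy criterion.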
Xiangyi Liao; Daniel M Bolt – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
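The 3PL/4PL models the abstract contrasts against can be sketched in a few lines: the lower asymptote c captures guessing and the upper asymptote d < 1 captures slipping. The parameter values below are hypothetical:

```python
import math

def four_pl(theta, a, b, c, d):
    """4PL item response function: c is the lower asymptote (guessing),
    d < 1 is the upper asymptote (slipping caps success probability)."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item parameters for illustration.
p = four_pl(theta=0.0, a=1.5, b=0.0, c=0.2, d=0.95)
# At theta == b the probability is midway between the two asymptotes.
```

In the traditional formulation these asymptotes are fixed item properties, i.e., slips and guesses act as random events; the paper's contribution is to make both processes proficiency-related instead.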
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
Panadero, Ernesto – Educational Psychologist, 2023
As the articles in this special issue on "Psychological Perspectives on the Effects and Effectiveness of Assessment Feedback" have shown, feedback is a key factor in education. Although there exists a substantial body of research on the topic, it is imperative to continue advancing the field. My aim is to outline five steps to solidify…
Descriptors: Educational Change, Feedback (Response), Educational Research, Models
Ho, Eric Ming-Yin – ProQuest LLC, 2023
Personalized learning, which has the potential to raise student achievement, requires understanding the competencies of students. Visualizations can help provide this understanding. Jeon et al. (2021) presented a latent space model that creates interaction maps visualizing response patterns from item response data. My dissertation proposes…
Descriptors: Visual Aids, Individualized Instruction, Responses, Models
Kim, Stella Y. – Educational Measurement: Issues and Practice, 2022
In this digital ITEMS module, Dr. Stella Kim provides an overview of multidimensional item response theory (MIRT) equating. Traditional unidimensional item response theory (IRT) equating methods impose the sometimes untenable restriction on data that only a single ability is assessed. This module discusses potential sources of multidimensionality…
Descriptors: Item Response Theory, Models, Equated Scores, Evaluation Methods
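As background for the equating the module discusses, a standard unidimensional IRT linking method is mean-sigma: common-item difficulties on two forms are used to estimate a linear transformation between the scales. The difficulty values below are hypothetical:

```python
import statistics

def mean_sigma_link(b_old, b_new):
    """Mean-sigma linking: find A, B such that b_new ~= A * b_old + B,
    from the means and standard deviations of common-item difficulties."""
    A = statistics.pstdev(b_new) / statistics.pstdev(b_old)
    B = statistics.mean(b_new) - A * statistics.mean(b_old)
    return A, B

# Hypothetical common-item difficulties on two test forms.
b_old = [-1.0, 0.0, 1.0]
b_new = [-0.8, 0.2, 1.2]
A, B = mean_sigma_link(b_old, b_new)
# Transform: theta_new = A * theta_old + B; a_new = a_old / A.
```

MIRT equating generalizes this single-ability linking to several ability dimensions at once, which is the extension the module covers.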
Moira McDonald; Michael-Anne Noble; Brigitte Harris; Valeria Cortés; Ken Jeffery – Papers on Postsecondary Learning and Teaching, 2024
Educators within post-secondary institutions receive input in the form of course evaluations from their students. The aim of receiving student input is to improve the teaching and learning experience for all. There are, however, inherent problems with the current methods of obtaining students' views through course evaluations. In this pilot study,…
Descriptors: Equal Education, Feedback (Response), Learning Experience, Postsecondary Education
Lubbe, Dirk; Schuster, Christof – Journal of Educational and Behavioral Statistics, 2020
Extreme response style is the tendency of individuals to prefer the extreme categories of a rating scale irrespective of item content. It has been shown repeatedly that individual response style differences affect the reliability and validity of item responses and should, therefore, be considered carefully. To account for extreme response style…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Models

