Publication Date

| Date Range | Records |
| --- | --- |
| In 2026 | 4 |
| Since 2025 | 1746 |
| Since 2022 (last 5 years) | 8902 |
| Since 2017 (last 10 years) | 20747 |
| Since 2007 (last 20 years) | 42058 |
Audience

| Audience | Records |
| --- | --- |
| Teachers | 1491 |
| Practitioners | 997 |
| Researchers | 608 |
| Administrators | 233 |
| Students | 150 |
| Policymakers | 126 |
| Parents | 125 |
| Counselors | 106 |
| Media Staff | 28 |
| Support Staff | 19 |
| Community | 15 |
Location

| Location | Records |
| --- | --- |
| Australia | 1574 |
| United Kingdom | 1112 |
| Canada | 1071 |
| China | 969 |
| Turkey | 897 |
| United Kingdom (England) | 665 |
| United States | 629 |
| Germany | 618 |
| California | 523 |
| Netherlands | 508 |
| Taiwan | 407 |
What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 35 |
| Meets WWC Standards with or without Reservations | 50 |
| Does Not Meet Standards | 49 |
Fu, Qiang; Guo, Xin; Land, Kenneth C. – Sociological Methods & Research, 2020
Grouped and right-censored count responses have long been used in surveys to study a variety of behaviors, statuses, and attitudes. Yet grouping and right-censoring decisions for count responses still rely on arbitrary choices made by researchers. We develop a new method for evaluating grouping and right-censoring decisions of count responses…
Descriptors: Surveys, Artificial Intelligence, Evaluation Methods, Probability
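To make the practice under evaluation concrete, here is a minimal sketch of grouping and right-censoring a raw count response; the bin edges and function below are hypothetical illustrations, not the authors' method:

```python
def group_and_censor(count: int, edges=(0, 1, 3, 5)) -> str:
    """Map a raw count to a grouped category, right-censoring at the top.

    edges=(0, 1, 3, 5) yields the categories "0", "1-2", "3-4", "5+".
    """
    if count < 0:
        raise ValueError("counts must be non-negative")
    if count >= edges[-1]:
        return f"{edges[-1]}+"  # right-censored top category
    for lo, hi in zip(edges, edges[1:]):
        if lo <= count < hi:
            return str(lo) if hi - lo == 1 else f"{lo}-{hi - 1}"

# e.g., responses to "How many times did you volunteer last month?"
print([group_and_censor(c) for c in [0, 2, 4, 7]])  # ['0', '1-2', '3-4', '5+']
```

The arbitrariness the paper targets lives in `edges`: a different cut or censoring point yields a different categorical variable from the same underlying counts.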
Quintelier, Amy; De Maeyer, Sven; Vanhoof, Jan – Educational Assessment, Evaluation and Accountability, 2020
Feedback acceptance and use are often seen as requirements for teacher change after a school inspection. Research outside education, however, points to feedback recipients' willingness to use the feedback they receive as an intermediate phase between accepting and using it. It also postulates the importance of a recipient's…
Descriptors: Feedback (Response), Observation, Elementary School Teachers, Foreign Countries
Hope, Michelle – Educational Leadership, 2020
Grading should be more about the feedback--and less about the score, writes assistant principal Michelle Hope. By focusing attention on the steps toward mastery, teachers can offer students a more complete picture of progress.
Descriptors: Grading, Feedback (Response), Mastery Learning, Progress Monitoring
Besser, Erin D.; Newby, Timothy J. – TechTrends: Linking Research and Practice to Improve Learning, 2020
There is growing interest in how various technological tools can be used to support the instructional process for both teaching and learning. Digital badges are a visual representation of learning and skills, and have been used as a way to reduce gaps in knowledge (Bowen and Thomas, "Change," 46(1), 21-25, 2014; Guskey…
Descriptors: Feedback (Response), Recognition (Achievement), Information Storage, Evaluation Methods
Duncan, David A. – Review of Education, 2020
Supporting, caring for, and working with bereaved children is both daunting and challenging, yet little is known about how schools can help children cope with death and dying. The main objective of this study was to identify approaches used to support children who are grieving, and to explore implications for teachers. The use of retrospective…
Descriptors: Grief, Coping, Children, Death
Fujimoto, Ken A.; Neugebauer, Sabina R. – Educational and Psychological Measurement, 2020
Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Factor Structure
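For readers unfamiliar with the structure these models impose, here is a minimal sketch of the bifactor response function in its standard 2PL form, where every item loads on a general dimension plus exactly one specific dimension; the parameter values are hypothetical, and this is not the authors' estimation code:

```python
import math

def bifactor_p(theta_g: float, theta_s: float,
               a_g: float, a_s: float, d: float) -> float:
    """P(correct) for a 2PL bifactor item: general factor plus one specific factor."""
    return 1.0 / (1.0 + math.exp(-(a_g * theta_g + a_s * theta_s + d)))

# Hypothetical item: general discrimination 1.2, specific 0.8, intercept -0.5
print(bifactor_p(theta_g=0.5, theta_s=-0.3, a_g=1.2, a_s=0.8, d=-0.5))  # ~0.47
```

Each added specific dimension is another latent variable to integrate over, which is why these MIRT models become computationally demanding in practice.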
Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items
Walker, Cindy M.; Göçer Sahin, Sakine – Educational and Psychological Measurement, 2020
The purpose of this study was to investigate a new way of evaluating interrater reliability that allows one to determine whether two raters differ with respect to their ratings on a polytomous rating scale or constructed-response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared…
Descriptors: Test Bias, Interrater Reliability, Responses, Correlation
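One way to read the design (my paraphrase of the abstract, not the authors' code): rater identity plays the role of the "group" variable in a DIF analysis, conditioning on examinees' total scores, so a between-rater gap in item scores at the same ability level signals rater disagreement. A toy sketch with hypothetical data:

```python
from collections import defaultdict

# (total_score, rater, item_score) rows; all values hypothetical
rows = [(24, "A", 3), (24, "B", 2), (18, "A", 1), (18, "B", 1),
        (24, "A", 3), (24, "B", 3), (18, "A", 2), (18, "B", 1)]

# Stratify by total score, then contrast raters within each stratum,
# mimicking how DIF conditions on ability before comparing groups.
by_stratum = defaultdict(lambda: {"A": [], "B": []})
for total, rater, item in rows:
    by_stratum[total][rater].append(item)

for total, groups in sorted(by_stratum.items()):
    gap = (sum(groups["A"]) / len(groups["A"])
           - sum(groups["B"]) / len(groups["B"]))
    # A crude illustration, not a formal DIF statistic: a persistent
    # nonzero gap at fixed ability suggests the raters disagree.
    print(f"total={total}: mean rater gap = {gap:+.2f}")
```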
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
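The compensatory/noncompensatory distinction the study turns on is easy to see in code. A minimal sketch of the two standard two-dimensional forms, using textbook formulations with hypothetical parameters rather than the authors' simulation code:

```python
import numpy as np

def compensatory_p(theta, a, d):
    """2D compensatory 2PL: abilities add, so high theta1 can offset low theta2."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

def noncompensatory_p(theta, a, b):
    """Noncompensatory: per-dimension probabilities multiply, so every
    dimension must be adequate; a strength cannot offset a weakness."""
    terms = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return float(np.prod(terms))

theta = np.array([2.0, -1.0])  # strong on dimension 1, weak on dimension 2
print(compensatory_p(theta, a=np.array([1.0, 1.0]), d=0.0))          # ~0.73
print(noncompensatory_p(theta, a=np.array([1.0, 1.0]),
                        b=np.array([0.0, 0.0])))                     # ~0.24
```

The same examinee succeeds under the compensatory model but likely fails under the noncompensatory one, which is why fitting the wrong family distorts estimates of the latent correlation.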
Guo, Hongwen; Dorans, Neil J. – Journal of Educational Measurement, 2020
We make a distinction between the operational practice of using an observed score to assess differential item functioning (DIF) and the concept of departure from measurement invariance (DMI) that conditions on a latent variable. DMI and DIF indices of effect sizes, based on the Mantel-Haenszel test of common odds ratio, converge under restricted…
Descriptors: Weighted Scores, Test Items, Item Response Theory, Measurement
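For reference, the Mantel-Haenszel common odds ratio underlying these DIF/DMI indices has a simple closed form. A minimal sketch using the standard MH formulas and hypothetical counts, not the authors' code:

```python
import math

# stratum (score level) -> ((ref_right, ref_wrong), (focal_right, focal_wrong))
strata = {
    0: ((30, 20), (25, 25)),
    1: ((40, 10), (35, 15)),
    2: ((45, 5),  (42, 8)),
}

num = den = 0.0
for (a, b), (c, d) in strata.values():
    n = a + b + c + d
    num += a * d / n  # reference right * focal wrong, weighted by stratum size
    den += b * c / n  # reference wrong * focal right, weighted by stratum size
alpha_mh = num / den                   # MH common odds ratio across strata
delta_mh = -2.35 * math.log(alpha_mh)  # ETS delta-scale effect size (MH D-DIF)
print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {delta_mh:.3f}")
```

The paper's distinction is about what the strata condition on: observed total scores (operational DIF practice) versus the latent variable itself (measurement invariance).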
Kontorovich, Igor' – Educational Studies in Mathematics, 2020
Spurred by Kilpatrick's (1987) "Where do good problems come from?", this study explores problem-posing triggers of experienced problem posers for mathematics competitions. Triggers are conceived as instances of noticing, where an impulse draws a poser's attention and "triggers off" a mathematical re-action, one of the outcomes…
Descriptors: Problem Solving, Mathematics Education, Competition, Mathematics Skills
Lancaster, Gary; Bayless, Sarah; Punia, Ricky – Psychology Teaching Review, 2020
We explored whether the academic grade a student sees influences how positively or negatively they interpret written assessment feedback. Specifically, an experimental design was used where N = 94 psychology students each read an identical passage of neutrally worded feedback. Depending upon which of three experimental conditions they had been…
Descriptors: Grades (Scholastic), Student Attitudes, Feedback (Response), Psychology
Naresh, Aparna; Short, Mary K.; Fienup, Daniel M. – Analysis of Verbal Behavior, 2020
A goal of behavior-analytic interventions is to produce behavior that is maintained under naturalistic conditions. In this experiment, we studied the effects of a speaker immersion protocol (SIP) on the number of speaker responses (tacts and mands) emitted by 3 preschool students under naturalistic, not directly targeted, conditions. During the…
Descriptors: Verbal Communication, Intervention, Speech Communication, Responses
Hanson, Jana M.; Florestano, Megan – New Directions for Teaching and Learning, 2020
Classroom assessment techniques (CATs) give instructors feedback about what students are learning and where they need help, and offer an important tool for tailoring course design to support student learning.
Descriptors: Instructional Design, Teaching Methods, Student Evaluation, Evaluation Methods
Myyry, Liisa; Karaharju-Suvanto, Terhi; Vesalainen, Marjo; Virtala, Anna-Maija; Raekallio, Marja; Salminen, Outi; Vuorensola, Katariina; Nevgi, Anne – Assessment & Evaluation in Higher Education, 2020
The aim of this study was to examine the emotions higher education teachers associate with assessment and the factors in their teaching environment that triggered these emotions. As a starting point, Frenzel's model of teacher emotions and Pekrun's control-value theory of achievement emotions were used. The sample consisted of 16 experienced and…
Descriptors: Psychological Patterns, College Faculty, Student Evaluation, Foreign Countries
Rocconi, Louis M.; Dumford, Amber D.; Butler, Brenna – Research in Higher Education, 2020
Researchers, assessment professionals, and faculty in higher education increasingly depend on survey data from students to make pivotal curricular and programmatic decisions. The surveys collecting these data often require students to judge frequency (e.g., how often), quantity (e.g., how much), or intensity (e.g., how strongly). The response…
Descriptors: Student Surveys, College Students, Rating Scales, Response Style (Tests)
