Showing 1 to 15 of 2,082 results
Peer reviewed
Download full text (PDF on ERIC)
Mücahit Öztürk – Open Praxis, 2024
This study examined the problems that pre-service teachers face in the online assessment process and their suggestions for solving these problems. The participants were 136 pre-service teachers with extensive experience of online assessment who took the Foundations of Open and Distance Learning course. This research is a…
Descriptors: Foreign Countries, Preservice Teacher Education, Preservice Teachers, Distance Education
Peer reviewed
Direct link
Wind, Stefanie A. – Educational and Psychological Measurement, 2023
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) function in psychometrically useful ways. When rating scales function as expected, researchers can interpret ratings in the intended direction (i.e.,…
Descriptors: Rating Scales, Testing Problems, Item Response Theory, Models
Peer reviewed
Direct link
Carlos Cinelli; Andrew Forney; Judea Pearl – Sociological Methods & Research, 2024
Many students of statistics and econometrics express frustration with the way a problem known as "bad control" is treated in the traditional literature. The issue arises when the addition of a variable to a regression equation produces an unintended discrepancy between the regression coefficient and the effect that the coefficient is…
Descriptors: Regression (Statistics), Robustness (Statistics), Error of Measurement, Testing Problems
Peer reviewed
Direct link
Chvál, Martin; Vondrová, Nada; Novotná, Jarmila – Educational Studies in Mathematics, 2021
The goal of this study is to show a novel way of using large-scale data (N = 6203) to identify pupils' strategies when solving missing value number equations. It is based on the assumption that wrong numerical results appearing more frequently than would be the case if they were consequences of random guessing can be expected to be underlain by a…
Descriptors: Learning Strategies, Problem Solving, Equations (Mathematics), Error Patterns
Peer reviewed
Direct link
Lewis, Jennifer; Sireci, Stephen G. – Educational Measurement: Issues and Practice, 2022
This module is designed for educators, educational researchers, and psychometricians who would like to develop an understanding of the basic concepts of validity theory, test validation, and documenting a "validity argument." It also describes how an in-depth understanding of the purposes and uses of educational tests sets the foundation…
Descriptors: Test Validity, Tests, Testing Problems, Faculty Development
Peer reviewed
Direct link
Alex Buckley – Studies in Higher Education, 2024
Despite a large body of critical research literature, traditional examinations continue to be widely used in higher education. This article reviews recent literature to assess how the approaches adopted by researchers contribute to the gap between research on exams and the way exams are used. Viviane Robinson's 'problem-based…
Descriptors: Literature Reviews, Testing, Higher Education, Testing Problems
Peer reviewed
Direct link
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Peer reviewed
Download full text (PDF on ERIC)
Gökhan Iskifoglu – Turkish Online Journal of Educational Technology - TOJET, 2024
This research paper investigated the importance of conducting measurement invariance analysis when developing measurement tools for assessing differences between and among study variables. Most studies that develop an inventory to assess an attitude, behavior, belief, IQ, or intuition in a person's…
Descriptors: Testing, Testing Problems, Error of Measurement, Attitude Measures
Peer reviewed
Direct link
James D. Weese; Ronna C. Turner; Allison Ames; Xinya Liang; Brandon Crawford – Journal of Experimental Education, 2024
In this study, a standardized effect size was created for use with the SIBTEST procedure. Using this standardized effect size, a single set of heuristics was developed that is appropriate for data fitting different item response models (e.g., 2-parameter logistic, 3-parameter logistic). The standardized effect size rescales the raw beta-uni value…
Descriptors: Test Bias, Test Items, Item Response Theory, Effect Size
Peer reviewed
Direct link
Curdt, Wiebke; Schreiber-Barsch, Silke – International Review of Education, 2020
In the past decade, the numeracy component in adult basic education has gained scholarly attention. The issue has been addressed by large-scale assessments of adults' skills and intergovernmental policy agendas, but also by qualitative research into numeracy from the perspective of social practice theory. However, some aspects of numeracy are…
Descriptors: Participatory Research, Numeracy, Adult Basic Education, Testing
Peer reviewed
Direct link
Brunfaut, Tineke – Language Testing, 2023
In this invited Viewpoint on the occasion of the 40th anniversary of the journal "Language Testing," I argue that the core of future challenges and opportunities for the field--in both scholarly and operational respects--remains basic questions and principles in language testing and assessment. Despite the high levels of sophistication…
Descriptors: Language Tests, Testing, Language Usage, Testing Problems
Peer reviewed
Direct link
Pornphan Sureeyatanapas; Panitas Sureeyatanapas; Uthumporn Panitanarak; Jittima Kraisriwattana; Patchanan Sarootyanapat; Daniel O'Connell – Language Testing in Asia, 2024
Ensuring consistent and reliable scoring is paramount in education, especially in performance-based assessments. This study delves into the critical issue of marking consistency, focusing on speaking proficiency tests in English language learning, which often face greater reliability challenges. While existing literature has explored various…
Descriptors: Foreign Countries, Students, English Language Learners, Speech
Peer reviewed
Direct link
Linda Borger; Stefan Johansson; Rolf Strietholt – Educational Assessment, Evaluation and Accountability, 2024
PISA aims to serve as a "global yardstick" for educational success, as measured by student performance. For comparisons to be meaningful across countries or over time, PISA samples must be representative of the population of 15-year-old students in each country. Exclusions and non-response can undermine this representativeness and…
Descriptors: Achievement Tests, International Assessment, Foreign Countries, Secondary School Students
Peer reviewed
Direct link
Brittany N. Zakszeski; Heather E. Ormiston; Malena A. Nygaard; Kane Carlock – School Psychology Review, 2025
Despite the widespread use of school-based universal screening systems for social, emotional, and behavioral risk, limited research has examined discrepancies in ratings provided by teachers and their secondary students. Using the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS; teacher report) and mySAEBRS (student report) scores…
Descriptors: Middle School Students, Middle School Teachers, Screening Tests, Affective Behavior
Peer reviewed
Direct link
Coggeshall, Whitney Smiley – Educational Measurement: Issues and Practice, 2021
The continuous testing framework, where both successful and unsuccessful examinees have to demonstrate continued proficiency at frequent prespecified intervals, is a framework that is used in noncognitive assessment and is gaining in popularity in cognitive assessment. Despite the rigorous advantages of this framework, this paper demonstrates that…
Descriptors: Classification, Accuracy, Testing, Failure