Showing 1 to 15 of 152 results
Peer reviewed
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
Peer reviewed
Castellano, Katherine E.; McCaffrey, Daniel F. – Journal of Educational Measurement, 2020
The residual gain score has been of historical interest, and its percentile rank has been of interest more recently given its close correspondence to the popular Student Growth Percentile. However, these estimators suffer from low accuracy and systematic bias (bias conditional on prior latent achievement). This article explores three…
Descriptors: Accuracy, Student Evaluation, Measurement Techniques, Evaluation Methods
Peer reviewed
Looney, Marilyn A. – Measurement in Physical Education and Exercise Science, 2018
The purpose of this article was two-fold: (1) to provide an overview of the commonly reported and under-reported absolute agreement indices for continuous data in the kinesiology literature; and (2) to present examples of these indices for hypothetical data along with recommendations for future use. It is recommended that three types of information be…
Descriptors: Interrater Reliability, Evaluation Methods, Kinetics, Indexes
Greifer, Noah – ProQuest LLC, 2018
There has been some research on the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study…
Descriptors: Evaluation Methods, Probability, Scores, Statistical Analysis
Peer reviewed
Schlauch, Robert S.; Carney, Edward – Journal of Speech, Language, and Hearing Research, 2018
Purpose: Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the one list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition…
Descriptors: Word Recognition, Computer Simulation, Scores, Phonemes
Peer reviewed
Rideout, Candice A. – Assessment & Evaluation in Higher Education, 2018
A flexible approach to assessment may promote students' engagement and academic achievement by allowing them to personalise their learning experience, even in the context of large undergraduate classes. However, studies reporting flexible assessment strategies and their impact are limited. In this paper, I present a feasible and effective approach…
Descriptors: Undergraduate Students, Academic Achievement, Evaluation Methods, Grading
Peer reviewed
Rajagopal, Prabha; Ravana, Sri Devi – Information Research: An International Electronic Journal, 2017
Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…
Descriptors: Information Retrieval, Documentation, Scores, Information Systems
Peer reviewed
Dimitrov, Dimiter M. – Measurement and Evaluation in Counseling and Development, 2017
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals with a syntax code in Mplus.
Descriptors: Test Bias, Item Response Theory, Factor Analysis, Evaluation Methods
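The entry above mentions bias-corrected bootstrap confidence intervals for DIF effects. A minimal Python sketch of the bias-corrected (BC) bootstrap interval follows; it is a generic illustration of the interval itself, not the Mplus syntax the article provides, and the statistic used (a sample mean) is a stand-in chosen only for demonstration:

```python
import numpy as np
from scipy.stats import norm

def bc_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, rng=None):
    """Bias-corrected (BC) bootstrap confidence interval for stat(data)."""
    rng = np.random.default_rng(rng)
    theta_hat = stat(data)
    # Resample with replacement and recompute the statistic each time.
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    # Bias-correction factor z0: how far the bootstrap distribution
    # sits from the point estimate (0 if exactly half fall below it).
    z0 = norm.ppf((boots < theta_hat).mean())
    # Adjust the quantile levels before reading off the percentiles.
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return np.quantile(boots, [lo, hi])
```

When the bootstrap distribution is symmetric around the estimate, z0 is near zero and the BC interval reduces to the ordinary percentile interval; the correction matters when the resampled statistics are skewed, which is common for DIF effect estimates in small samples.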
Peer reviewed
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Peer reviewed
Priyogi, Bilih; Santoso, Harry B.; Berliyanto; Hasibuan, Zainal A. – Turkish Online Journal of Educational Technology - TOJET, 2017
The concept of Open Education (OE) is based on the philosophy of e-learning, which aims to provide a learning environment anywhere, anytime, and for anyone. One of the main issues in the development of OE services is the availability of a quality assurance mechanism. This study proposes a metric for measuring the quality of OE service. Based on…
Descriptors: Open Education, Educational Quality, Electronic Learning, Guidelines
Peer reviewed
Khuzwayo, Mamsi Ethel – South African Journal of Education, 2018
This article reports the findings of a study carried out with a group of 100 students at a South African university. The study examines the group's assignments as a way of gathering evidence about pre-service teachers' achievements in the process of education and training. The empirical study was based on a comparative analysis of scores obtained by…
Descriptors: Foreign Countries, College Students, Teacher Education Programs, Preservice Teacher Education
Peer reviewed
List, Marit K.; Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Nagy, Gabriel – Large-scale Assessments in Education, 2017
Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to…
Descriptors: Evaluation Methods, Student Evaluation, Item Response Theory, Mathematics Tests
Peer reviewed
Chan, Wendy – Journal of Research on Educational Effectiveness, 2017
Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…
Descriptors: Generalization, Inferences, Probability, Educational Research
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
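The entry above uses Latent Semantic Analysis (LSA) to flag weak essays. A minimal sketch of the core LSA step, assuming scikit-learn: project TF-IDF vectors into a low-rank semantic space and rank essays by similarity to a reference text. The texts, dimensionality, and flagging criterion below are invented for illustration and are not taken from the study:

```python
# LSA sketch: TF-IDF + truncated SVD, then cosine similarity in the
# reduced semantic space. Corpus and parameters are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

essays = [
    "photosynthesis converts light energy into chemical energy in plants",
    "plants use sunlight to make sugar through photosynthesis",
    "my favourite holiday was at the beach last summer",
]
reference = "photosynthesis turns light into chemical energy stored as sugar"

# Fit TF-IDF on essays plus the reference answer.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(essays + [reference])

# LSA is a truncated SVD of the TF-IDF matrix (2 latent dimensions here).
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Similarity of each essay to the reference in the latent space;
# the lowest-scoring essays would be flagged for feedback.
sims = cosine_similarity(Z[:-1], Z[-1:]).ravel()
for text, s in zip(essays, sims):
    print(f"{s:+.2f}  {text[:45]}")
```

The off-topic third essay shares no vocabulary with the reference, so it lands far from it in the latent space; in the study's setting, essays below some similarity threshold would be the ones routed to an instructor for feedback.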
Peer reviewed
Potter, Kyle; Lewandowski, Lawrence; Spenceley, Laura – Assessment & Evaluation in Higher Education, 2016
Standardised and other multiple-choice examinations often require the use of an answer sheet with fill-in bubbles (i.e. "bubble" or Scantron sheet). Students with disabilities causing impairments in attention, learning and/or visual-motor skill may have difficulties with multiple-choice examinations that employ such a response style.…
Descriptors: Testing Accommodations, Disabilities, Multiple Choice Tests, Vocabulary