Showing 1 to 15 of 102 results
Peer reviewed
Gyamfi, Abraham; Acquaye, Rosemary – Acta Educationis Generalis, 2023
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows the estimation of students' ability from any set of items. IRT also allows the difficulty and discrimination levels of each item on the test to be estimated. In the framework of IRT, item characteristics are…
Descriptors: Item Response Theory, Models, Test Items, Difficulty Level
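The difficulty and discrimination parameters the abstract refers to are most naturally read as a two-parameter logistic (2PL) IRT model; a minimal sketch, with illustrative parameter values rather than estimates from the study:

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2PL item response function: P(correct | theta) =
    1 / (1 + exp(-a * (theta - b))), where a is the item's
    discrimination and b its difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values (not from the study): an average-difficulty
# item (b = 0.0) with moderate discrimination (a = 1.2).
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(p_correct_2pl(theta, a=1.2, b=0.0), 3))
```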
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach aims to reduce faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
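For readers unfamiliar with the underlying machinery, choice probabilities in Thurstonian models come from comparing latent utilities; a minimal sketch of the Case V form, with made-up utilities (the forced-choice models in the article are far richer):

```python
from scipy.stats import norm

def p_prefer(mu_i, mu_j, var_i=1.0, var_j=1.0):
    """Thurstonian choice probability: item i is preferred to item j
    when the latent utility difference exceeds zero, so
    P(i > j) = Phi((mu_i - mu_j) / sqrt(var_i + var_j))."""
    return norm.cdf((mu_i - mu_j) / (var_i + var_j) ** 0.5)

# Illustrative latent means, not estimates from the article:
print(round(p_prefer(0.8, 0.2), 3))  # ~0.664
```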
Peer reviewed
Seyda Aydin-Karaca; Mustafa Serdar Köksal; Bilkay Bi – Journal of Psychoeducational Assessment, 2024
This study aimed to develop a parent rating scale (PRSG) for screening children for further gifted identification. The participants were 255 parents of gifted and non-gifted students. The PRSG, consisting of 30 items, was created by consulting parents and reviewing existing instruments in the literature. As…
Descriptors: Rating Scales, Parent Attitudes, Scores, Comparative Analysis
Peer reviewed
Song, Yoon Ah; Lee, Won-Chan – Applied Measurement in Education, 2022
This article examines the performance of item response theory (IRT) models when double ratings, rather than single ratings, are used as item scores in the presence of rater effects. Study 1 examined the influence of the number of ratings on the accuracy of proficiency estimation in the generalized partial credit model (GPCM). Study 2 compared the accuracy of…
Descriptors: Item Response Theory, Item Analysis, Scores, Accuracy
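The GPCM named in the abstract defines category probabilities from cumulative step differences; a minimal sketch with assumed step difficulties (the studies' simulation designs are not reproduced here):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model (GPCM) category probabilities
    for k = 0..m: P(X = k | theta) is proportional to
    exp(sum_{v<=k} a * (theta - b_v)), with b[0] fixed at 0."""
    cum = np.cumsum(a * (theta - np.asarray(b)))
    num = np.exp(cum - cum.max())  # subtract max for stability
    return num / num.sum()

# Illustrative 3-category item; step difficulties are assumptions:
print(gpcm_probs(theta=0.5, a=1.0, b=[0.0, -0.5, 0.7]).round(3))
```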
Wu, Tong – ProQuest LLC, 2023
This three-article dissertation aims to address three methodological challenges to ensure comparability in educational research, including scale linking, test equating, and propensity score (PS) weighting. The first study intends to improve test scale comparability by evaluating the effect of six missing data handling approaches, including…
Descriptors: Educational Research, Comparative Analysis, Equated Scores, Weighted Scores
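Of the three challenges listed, propensity score weighting is the most compact to illustrate; a minimal inverse-probability-of-treatment-weighting (IPTW) sketch on simulated data, not the dissertation's procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated covariates X and group indicator z (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
z = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))

# Estimate propensity scores, then form inverse-probability weights.
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
w = np.where(z == 1, 1.0 / ps, 1.0 / (1.0 - ps))
print(w[:5].round(2))
```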
Peer reviewed
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
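IRTree models expand each Likert response into a sequence of binary pseudo-items along a decision tree; a sketch of one common midpoint/direction/extremity tree for a 5-point item (the article's exact tree and its vignette anchoring are not shown):

```python
def irtree_pseudo_items(r):
    """Map a 5-point response r (1..5) to pseudo-items
    (midpoint, direction, extremity); None = node not reached."""
    if r == 3:
        return (1, None, None)         # midpoint chosen, tree stops
    direction = 1 if r > 3 else 0      # agree vs. disagree side
    extreme = 1 if r in (1, 5) else 0  # endpoint category
    return (0, direction, extreme)

for r in range(1, 6):
    print(r, irtree_pseudo_items(r))
```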
Peer reviewed
Mehtap Aktas; Nezaket Bilge Uzun; Bilge Bakir Aygar – International Journal of Contemporary Educational Research, 2023
This study examines the measurement invariance of the Social Media Addiction Scale (SMAS) with respect to gender, time spent on social media, and the number of social media accounts. The invariance analyses were carried out on 672 participants. Measurement invariance studies were examined…
Descriptors: Addictive Behavior, Scores, Comparative Analysis, Measures (Individuals)
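Measurement invariance is typically established by comparing nested multi-group models (configural, metric, scalar) with likelihood-ratio tests; a minimal chi-square difference sketch with assumed fit statistics, not values from the SMAS study:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_constrained, df_constrained,
                    chisq_free, df_free):
    """Chi-square difference test for nested invariance models:
    the constrained model should not fit significantly worse."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Illustrative statistics for metric vs. configural models:
print(chisq_diff_test(235.4, 110, 210.9, 98))
```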
Peer reviewed
Farshad Effatpanah; Purya Baghaei; Mona Tabatabaee-Yazdi; Esmat Babaii – Language Testing, 2025
This study proposes a new method for scoring C-Tests as measures of general language proficiency. In this approach, the unit of analysis is sentences rather than gaps or passages. That is, the gaps correctly reformulated in each sentence were aggregated into a sentence score, and then each sentence was entered into the analysis as a polytomous…
Descriptors: Item Response Theory, Language Tests, Test Items, Test Construction
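The scoring idea, aggregating correct gaps within a sentence into one polytomous item, is easy to sketch; the records below are invented for illustration:

```python
import pandas as pd

# One record per (person, sentence, gap); values are illustrative.
gaps = pd.DataFrame({
    "person":   [1, 1, 1, 1, 2, 2, 2, 2],
    "sentence": [1, 1, 2, 2, 1, 1, 2, 2],
    "correct":  [1, 0, 1, 1, 1, 1, 0, 0],
})

# Sum correct gaps within each sentence: a persons x sentences matrix
# of polytomous sentence scores, ready for a polytomous IRT model.
print(gaps.groupby(["person", "sentence"])["correct"]
          .sum()
          .unstack("sentence"))
```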
Peer reviewed
Hayat, Bahrul – Cogent Education, 2022
The purpose of this study is threefold: (1) to calibrate the Basic Statistics Test for Indonesian undergraduate psychology students using the Rasch model, (2) to test the impact of adjustment for guessing on item parameters, person parameters, test reliability, and the distribution of item difficulty and person ability, and (3) to compare person scores…
Descriptors: Guessing (Tests), Statistics Education, Undergraduate Students, Psychology
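A minimal sketch of the two ingredients the abstract names: the Rasch response function and a guessing adjustment. The classical formula-scoring correction below is one standard adjustment; the article's own procedure may differ:

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def guessing_corrected_score(n_right, n_wrong, n_options):
    """Classical correction for guessing: R - W / (k - 1)."""
    return n_right - n_wrong / (n_options - 1)

print(round(rasch_p(theta=0.3, b=-0.2), 3))
print(guessing_corrected_score(n_right=32, n_wrong=8, n_options=4))
```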
Peer reviewed
Uminski, Crystal; Hubbard, Joanna K.; Couch, Brian A. – CBE - Life Sciences Education, 2023
Biology instructors use concept assessments in their courses to gauge student understanding of important disciplinary ideas. Instructors can choose to administer concept assessments based on participation (i.e., lower stakes) or the correctness of responses (i.e., higher stakes), and students can complete the assessment in an in-class or…
Descriptors: Biology, Science Tests, High Stakes Tests, Scores
Peer reviewed
Soland, James; Kuhfeld, Megan; Register, Brennan – Educational Assessment, 2023
Much of what we know about how children develop is based on survey data. In order to estimate growth across time and, thereby, better understand that development, short survey scales are typically administered at repeated timepoints. Before estimating growth, those repeated measures must be put onto the same scale. Yet, little research examines…
Descriptors: Comparative Analysis, Social Emotional Learning, Scaling, Effect Size
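Putting repeated measures onto the same scale is a linking problem; a sketch of simple mean-sigma linking under assumed score vectors (the article compares more sophisticated approaches):

```python
import numpy as np

def mean_sigma_link(theta_new, theta_ref):
    """Mean-sigma linking: linear transform A * theta + B with
    A = sd(ref) / sd(new) and B = mean(ref) - A * mean(new)."""
    A = np.std(theta_ref) / np.std(theta_new)
    B = np.mean(theta_ref) - A * np.mean(theta_new)
    return A * np.asarray(theta_new) + B

# Illustrative score vectors, not the article's data:
ref = np.array([-0.5, 0.0, 0.4, 1.1])
new = np.array([-1.0, -0.2, 0.3, 1.5])
print(mean_sigma_link(new, ref).round(2))
```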
Peer reviewed
Jurij Selan; Mira Metljak – Center for Educational Policy Studies Journal, 2023
Since research integrity is not external to research but an integral part of it, it should be integrated into research training. However, contemporary research integrity education faces several obstacles. To address them, we have developed a competency profile for teaching and learning research integrity based on four assumptions: 1) to…
Descriptors: Profiles, Integrity, Content Validity, Questionnaires
Peer reviewed
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
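To see why DOMC reduces item exposure, consider its delivery logic: options appear one at a time in random order, and the item can end before all options are shown. A toy simulation, with a simplified stopping rule that need not match the study's:

```python
import random

def administer_domc(options, key):
    """Toy DOMC administration: reveal options one at a time in
    random order; here the item ends once the keyed option is
    answered, so later options are never exposed."""
    exposed = []
    for opt in random.sample(options, len(options)):
        exposed.append(opt)    # this option's content is shown
        if opt == key:         # keyed option reached: item ends
            return exposed
    return exposed

print(administer_domc(["A", "B", "C", "D"], key="C"))
```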
Peer reviewed
John B. Buncher; Jayson M. Nissen; Ben Van Dusen; Robert M. Talbot – Physical Review Physics Education Research, 2025
Research-based assessments (RBAs) allow researchers and practitioners to compare student performance across different contexts and institutions. In recent years, research attention has focused on the student populations these RBAs were initially developed with because much of that research was done with "samples of convenience" that were…
Descriptors: Science Tests, Physics, Comparative Analysis, Gender Differences
Peer reviewed
Akhtar, Hanif – International Association for Development of the Information Society, 2022
When examinees perceive a test as low stakes, it is reasonable to assume that some of them will not put forth maximum effort. This complicates the validity of the test results. Although many studies have investigated motivational fluctuation across tests during a testing session, only a small number of studies have…
Descriptors: Intelligence Tests, Student Motivation, Test Validity, Student Attitudes
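Test-taking effort under low stakes is often quantified from response times; a sketch of the response-time effort (RTE) index (the proportion of non-rapid responses), one common measure that may differ from the study's:

```python
import numpy as np

def response_time_effort(rt_seconds, threshold=5.0):
    """Response-time effort: share of items whose response time
    exceeds a rapid-guessing threshold (here an assumed 5 s)."""
    return float(np.mean(np.asarray(rt_seconds) > threshold))

print(response_time_effort([2.1, 8.4, 12.0, 3.9, 15.2]))  # 0.6
```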