Showing 76 to 90 of 2,935 results
Peer reviewed
Kim, Seonghoon; Kolen, Michael J. – Applied Measurement in Education, 2019
In applications of item response theory (IRT), fixed parameter calibration (FPC) has been used to estimate the item parameters of a new test form on the existing ability scale of an item pool. This paper presents an application of FPC to test data from multiple examinee groups that are linked to the item pool via anchor items, and investigates…
Descriptors: Item Response Theory, Item Banks, Test Items, Computation
Peer reviewed
Lee, Hyung Rock; Lee, Sunbok; Sung, Jaeyun – International Journal of Assessment Tools in Education, 2019
Applying single-level statistical models to multilevel data typically produces underestimated standard errors, which may lead to misleading conclusions. This study examined the impact of ignoring multilevel data structure on the estimation of item parameters and their standard errors for the Rasch, two-, and three-parameter logistic models in…
Descriptors: Item Response Theory, Computation, Error of Measurement, Test Bias
Peer reviewed
Castellano, Katherine E.; McCaffrey, Daniel F. – Journal of Educational Measurement, 2020
The residual gain score has been of historical interest, and its percentile rank has been of interest more recently given its close correspondence to the popular Student Growth Percentile. However, these estimators suffer from low accuracy and systematic bias (bias conditional on prior latent achievement). This article explores three…
Descriptors: Accuracy, Student Evaluation, Measurement Techniques, Evaluation Methods
Peer reviewed
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
Peer reviewed
Falk, Carl F.; Monroe, Scott – Educational and Psychological Measurement, 2018
Lagrange multiplier (LM) or score tests have seen renewed interest for the purpose of diagnosing misspecification in item response theory (IRT) models. LM tests can also be used to test whether parameters differ from a fixed value. We argue that the utility of LM tests depends on both the method used to compute the test and the degree of…
Descriptors: Item Response Theory, Matrices, Models, Statistical Analysis
Peer reviewed
Goya-Maldonado, Roberto; Keil, Maria; Brodmann, Katja; Gruber, Oliver – Creativity Research Journal, 2018
Humans possess an invaluable capacity for self-expression that extends into visual, literary, musical, and many other fields of creation. More than members of any other profession, artists are in close contact with this subdomain of creativity. Probably one of the most intriguing aspects of creativity is its negative correlation with the availability of…
Descriptors: Rewards, Artists, Creativity, Adults
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
da Silva, Marcelo A.; Liu, Ren; Huggins-Manley, Anne C.; Bazán, Jorge L. – Educational and Psychological Measurement, 2019
Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits…
Descriptors: Item Response Theory, Matrices, Models, Bayesian Statistics
Peer reviewed
Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2019
M-fluctuation tests are a recently proposed method for detecting differential item functioning in Rasch models. This article discusses a generalization of this method to two additional item response theory models: the two-parameter logistic model and the three-parameter logistic model with a common guessing parameter. The Type I error rate and…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Maximum Likelihood Statistics
Peer reviewed
Vriens, Ingrid; Moors, Guy; Gelissen, John; Vermunt, Jeroen K. – Sociological Methods & Research, 2017
Measuring values in sociological research sometimes involves the use of ranking data. A disadvantage of a ranking task is that the order in which the items are presented might influence respondents' choice preferences regardless of the content being measured. The standard procedure to rule out such effects is to randomize the order of…
Descriptors: Evaluation Methods, Social Science Research, Sociology, Structural Equation Models
Peer reviewed
To, Jessica; Panadero, Ernesto; Carless, David – Assessment & Evaluation in Higher Education, 2022
The analysis of exemplars of different quality is a potentially powerful tool in enabling students to understand assessment expectations and appreciate academic standards. Through a systematic review methodology, this paper synthesises exemplar-based research designs, exemplar implementation and the educational effects of exemplars. The review of…
Descriptors: Research Design, Scoring Rubrics, Peer Evaluation, Self Evaluation (Individuals)
Peer reviewed
Choi, Jinnie – Journal of Educational and Behavioral Statistics, 2017
This article reviews PROC IRT, which was added to Statistical Analysis Software (SAS) in 2014. We provide an introductory overview of a free version of SAS, describe what PROC IRT offers for item response theory (IRT) analysis and how one can use it, and discuss how other SAS macros and procedures may complement the IRT functionality of PROC IRT.
Descriptors: Item Response Theory, Computer Software, Statistical Analysis, Computation
Peer reviewed
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018
A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…
Descriptors: Models, Statistical Analysis, Error of Measurement, Test Bias
Peer reviewed
Landmann, Helen; Hess, Ursula – Journal of Moral Education, 2018
Moral foundation theory posits that specific moral transgressions elicit specific moral emotions. To test this claim, participants (N = 195) were asked to rate their emotions in response to moral violation vignettes. We found that compassion and disgust were associated with care and purity respectively as predicted by moral foundation theory.…
Descriptors: Moral Values, Emotional Response, Psychological Patterns, Foreign Countries
Peer reviewed
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items