Showing 1 to 15 of 24 results
Peer reviewed
Kylie Gorney; Sandip Sinharay – Educational and Psychological Measurement, 2025
Test-takers, policymakers, teachers, and institutions are increasingly demanding that testing programs provide more detailed feedback regarding test performance. As a result, there has been a growing interest in the reporting of subscores that potentially provide such detailed feedback. Haberman developed a method based on classical test theory…
Descriptors: Scores, Test Theory, Test Items, Testing
Peer reviewed
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2023
When constructing measurement scales, regular and reversed items are often used (e.g., "I am satisfied with my job"/"I am not satisfied with my job"). Some methodologists recommend excluding reversed items because they are more difficult to understand and therefore engender a second, artificial factor distinct from the…
Descriptors: Test Items, Difficulty Level, Test Construction, Construct Validity
Peer reviewed
Ranger, Jochen; Brauer, Kay – Journal of Educational and Behavioral Statistics, 2022
The generalized S-X² test is a test of item fit for items with a polytomous response format. The test is based on a comparison of the observed and expected number of responses in strata defined by the test score. In this article, we make four contributions. We demonstrate that the performance of the generalized S-X² test…
Descriptors: Goodness of Fit, Test Items, Statistical Analysis, Item Response Theory
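As background, in the dichotomous case introduced by Orlando and Thames (2000), the statistic compares observed and model-expected proportions correct within summed-score strata,

$$S\text{-}X^2_i \;=\; \sum_{k=1}^{n-1} N_k \, \frac{(O_{ik} - E_{ik})^2}{E_{ik}\,(1 - E_{ik})},$$

where $N_k$ is the number of examinees with summed score $k$, and $O_{ik}$ and $E_{ik}$ are the observed and model-expected proportions answering item $i$ correctly in stratum $k$. The generalized test studied in this article extends the same comparison to polytomous response categories.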
Peer reviewed
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
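As background, the Rasch model underlying these trees specifies

$$P(X_{pi} = 1 \mid \theta_p, \beta_i) \;=\; \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)},$$

with person ability $\theta_p$ and item difficulty $\beta_i$. DIF means the $\beta_i$ differ between covariate-defined subgroups, and a Rasch tree splits a node exactly where a significance test flags such parameter instability, which is why, as the abstract notes, even small DIF can trigger a split in large samples.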
Peer reviewed
Fährmann, Katharina; Köhler, Carmen; Hartig, Johannes; Heine, Jörg-Henrik – Large-scale Assessments in Education, 2022
When scaling psychological tests with methods of item response theory, it is necessary to investigate to what extent the responses correspond to the model predictions. In addition to the statistical evaluation of item misfit, the question arises as to its practical significance. Although item removal is undesirable for several reasons, its…
Descriptors: Psychological Testing, Scaling, Test Items, Item Response Theory
Sinharay, Sandip; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
Response time models (RTMs) are of increasing interest in educational and psychological testing. This article focuses on the lognormal model for response times, which is one of the most popular RTMs. Several existing statistics for testing normality and the fit of factor analysis models are repurposed for testing the fit of the lognormal model. A…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
Sinharay, Sandip; van Rijn, Peter – Grantee Submission, 2020
Response-time models are of increasing interest in educational and psychological testing. This paper focuses on the lognormal model for response times (van der Linden, 2006), which is one of the most popular response-time models. Several existing statistics for testing normality and the fit of factor-analysis models are repurposed for testing the…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
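As a rough illustration of the idea behind these two papers (a minimal sketch in Python, not the authors' procedure): under van der Linden's (2006) lognormal model, the logarithms of an item's response times are normally distributed, so a generic normality test applied to log times can serve as a crude item-level fit check. Ignoring person speed and choosing the Shapiro-Wilk test are simplifying assumptions made here, and the data are simulated, not real.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: response times (seconds) of 500 test-takers on one item,
# simulated from the lognormal model itself, so the fit check should pass.
times = rng.lognormal(mean=3.0, sigma=0.5, size=500)

log_t = np.log(times)
beta_hat = log_t.mean()               # time-intensity parameter of the item
alpha_hat = 1.0 / log_t.std(ddof=1)   # discrimination: inverse SD of log times

# Under the lognormal model the log times are normal, so a generic
# normality test doubles as an item-level fit check.
w_stat, p_value = stats.shapiro(log_t)
print(f"beta = {beta_hat:.2f}, alpha = {alpha_hat:.2f}")
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}  (small p suggests misfit)")

In a fuller treatment, one would first adjust each log time for the person's speed parameter before testing normality, which is closer in spirit to the repurposed fit statistics the papers describe.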
Peer reviewed
Mousavi, Amin; Cui, Ying – Education Sciences, 2020
Often, important decisions regarding accountability and placement of students in performance categories are made on the basis of test scores; therefore, it is important to evaluate the validity of the inferences derived from test results. One of the threats to the validity of such inferences is aberrant responding. Several…
Descriptors: Student Evaluation, Educational Testing, Psychological Testing, Item Response Theory
Peer reviewed
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM) that is gaining attention in the field of educational and psychological measurement and large-scale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Peer reviewed
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. The core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Farley, Frank H.; Cohen, Arie – Journal of Research in Personality, 1980
Most psychological tests and inventories, particularly in personality and attitude measurement, contain common items. The present report specifically considered common-item contributions to test internal consistency reliability using extant data from the California Psychological Inventory. No negative contribution of item overlap was found.
Descriptors: College Students, Item Analysis, Personality Measures, Psychological Testing
Northrop, Lois C. – 1977
The factor-analytic history of the isolation and identification of the cognitive ability construct known as Number or Numerical Facility is described. Empirical documentation from the psychometric literature is discussed, and examples of items from a variety of tests which have defined and been associated with the Number factor are included to…
Descriptors: Arithmetic, Cognitive Ability, Cognitive Measurement, Factor Analysis
Veccia, Ellen M.; Schroeder, David H. – 1990
A set of 150 experimental personality items was constructed for an alternate form of the word association personality worksample developed by the Johnson O'Connor Research Foundation. The items were intended to possess several semantic properties hypothesized to facilitate discrimination between objective and subjective examinees. Specifically,…
Descriptors: Adults, Correlation, Objectivity, Personality Measures
Wetzel, C. Douglas; McBride, James R. – 1983
Computer simulation was used to assess the effects of item parameter estimation errors on different item selection strategies used in adaptive and conventional testing. To determine whether these effects reduced the advantages of certain optimal item selection strategies, simulations were repeated in the presence and absence of item parameter…
Descriptors: Adaptive Testing, Computer Assisted Testing, Latent Trait Theory, Occupational Tests