Publication Date
In 2025: 1
Since 2024: 3
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 14
Descriptor
Psychological Testing: 63
Test Validity: 17
Factor Analysis: 13
Higher Education: 12
Test Reliability: 11
Personality Measures: 10
Rating Scales: 10
Item Analysis: 9
Psychometrics: 9
Factor Structure: 8
Scores: 8
Source
Educational and Psychological…: 63
Author
Cizek, Gregory J.: 2
Lowe, Patricia A.: 2
McQuitty, Louis L.: 2
Reynolds, Cecil R.: 2
Wang, Wen-Chung: 2
Abbott, Robert D.: 1
Ace, Merle E.: 1
Agnello, Jessica: 1
Ana Hernández-Dorado: 1
Andrews, Glenda: 1
Austin, J. Sue: 1
Publication Type
Journal Articles: 40
Reports - Research: 23
Reports - Evaluative: 10
Book/Product Reviews: 4
Reports - Descriptive: 2
Speeches/Meeting Papers: 2
Tests/Questionnaires: 1
Education Level
Higher Education: 3
Postsecondary Education: 3
Adult Basic Education: 1
Location
Austria: 1
Canada: 1
Germany: 1
New Zealand: 1
Taiwan: 1
Kylie Gorney; Sandip Sinharay – Educational and Psychological Measurement, 2025
Test-takers, policymakers, teachers, and institutions are increasingly demanding that testing programs provide more detailed feedback regarding test performance. As a result, there has been a growing interest in the reporting of subscores that potentially provide such detailed feedback. Haberman developed a method based on classical test theory…
Descriptors: Scores, Test Theory, Test Items, Testing
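The abstract above trails off, but the decision rule at the heart of Haberman's method can be sketched: a subscore has added value only when the observed subscore predicts the examinee's true subscore better, in terms of proportional reduction in mean squared error (PRMSE), than the total score does. The simulation below is illustrative only; all distributions and parameter values are invented, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
tau = rng.normal(0, 1, n)                   # true subscore
theta = 0.8 * tau + rng.normal(0, 0.6, n)   # true total score, correlated with tau
s = tau + rng.normal(0, 0.7, n)             # observed subscore (with measurement error)
x = theta + rng.normal(0, 0.5, n)           # observed total score (with measurement error)

def prmse(predictor, target):
    # Proportional reduction in mean squared error from a linear prediction
    # of `target` from `predictor`, relative to just predicting the target mean.
    b = np.cov(predictor, target)[0, 1] / np.var(predictor)
    resid = target - (target.mean() + b * (predictor - predictor.mean()))
    return 1 - np.mean(resid ** 2) / np.var(target)

print(prmse(s, tau))   # PRMSE from the observed subscore
print(prmse(x, tau))   # PRMSE from the total score
```

With these invented parameters the subscore's PRMSE (about 0.67) exceeds the total score's (about 0.51), so under Haberman's criterion reporting this subscore would be justified.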
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2023
When constructing measurement scales, regular and reversed items are often used (e.g., "I am satisfied with my job"/"I am not satisfied with my job"). Some methodologists recommend excluding reversed items because they are more difficult to understand and therefore engender a second, artificial factor distinct from the…
Descriptors: Test Items, Difficulty Level, Test Construction, Construct Validity
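As background for the regular/reversed distinction: before scoring, responses to reversed items are conventionally recoded so that all items point in the same direction. A minimal sketch (the function name and the 1-5 scale endpoints are assumptions, not from the article):

```python
def reverse_score(response, low=1, high=5):
    """Recode a reversed Likert item so high scores indicate more of the trait.

    `low` and `high` are the scale endpoints (a 1-5 Likert scale is assumed here).
    """
    return low + high - response

# A respondent strongly agreeing (5) with "I am not satisfied with my job"
# becomes a 1 on the satisfaction keying, and so on down the scale.
print([reverse_score(r) for r in [1, 2, 3, 4, 5]])  # → [5, 4, 3, 2, 1]
```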
Philippe Goldammer; Peter Lucas Stöckli; Yannik Andrea Escher; Hubert Annen; Klaus Jonas – Educational and Psychological Measurement, 2024
Indirect indices for faking detection in questionnaires make use of a respondent's deviant or unlikely response pattern over the course of the questionnaire to identify them as a faker. Compared with established direct faking indices (i.e., lying and social desirability scales), indirect indices have at least two advantages: First, they cannot be…
Descriptors: Identification, Deception, Psychological Testing, Validity
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Ranger, Jochen; Kuhn, Jörg Tobias; Ortner, Tuulia M. – Educational and Psychological Measurement, 2020
The hierarchical model of van der Linden is the most popular model for responses and response times in tests. It is composed of two separate submodels--one for the responses and one for the response times--that are joined at a higher level. The submodel for the response times is based on the lognormal distribution. The lognormal distribution is a…
Descriptors: Reaction Time, Tests, Statistical Distributions, Models
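The lognormal response-time submodel described above can be sketched in a few lines: the log of the time person i spends on item j is normal with mean β_j − τ_i (item time intensity minus person speed). The simulation below is only illustrative; the parameter values and the common discrimination are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 2000, 10
tau = rng.normal(0.0, 0.3, n_persons)    # person speed (higher = faster)
beta = rng.uniform(0.5, 1.5, n_items)    # item time intensity
alpha = 2.0                              # discrimination: 1 / residual SD of log time

# Lognormal submodel: log T_ij = beta_j - tau_i + eps_ij, eps ~ N(0, 1/alpha^2)
log_t = beta[None, :] - tau[:, None] + rng.normal(0.0, 1.0 / alpha, (n_persons, n_items))
times = np.exp(log_t)                    # simulated response times

# With person speeds centered at zero, column means of log T recover beta
beta_hat = np.log(times).mean(axis=0)
print(np.round(beta_hat - beta, 3))      # recovery error, close to zero
```

In the full hierarchical model, these time parameters are then joined to the response submodel's parameters at a higher level, as the abstract notes.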
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM), which is gaining attention in the field of educational and psychological measurement and large-scale assessments. In previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Pfeiffer, Nils; Hagemann, Dirk; Backenstrass, Matthias – Educational and Psychological Measurement, 2011
In response to the low standards in short form development, Smith, McCarthy, and Anderson (2000) introduced a set of guidelines for the construction and evaluation of short forms of psychological tests. One of their recommendations requires researchers to show that the variance overlap between the short form and its long form is adequate. This…
Descriptors: Psychological Testing, Computation, Test Length, Undergraduate Students
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Wang, Wen-Chung; Jin, Kuan-Yu – Educational and Psychological Measurement, 2010
In this study, the authors extend the standard item response model with internal restrictions on item difficulty (MIRID) to fit polytomous items using cumulative logits and adjacent-category logits. Moreover, the new model incorporates discrimination parameters and is rooted in a multilevel framework. It is a nonlinear mixed model so that existing…
Descriptors: Difficulty Level, Test Items, Item Response Theory, Generalization
Cizek, Gregory J.; Bowen, Daniel; Church, Keri – Educational and Psychological Measurement, 2010
This study followed up on previous work that examined the incidence of reporting evidence based on test consequences in "Mental Measurements Yearbook". In the present study, additional possible outlets for what has been called "consequential validity" evidence were investigated, including all articles published in the past 10 years in several…
Descriptors: Educational Research, Educational Assessment, Psychological Testing, Followup Studies
Cizek, Gregory J.; Rosenberg, Sharyn L.; Koons, Heather H. – Educational and Psychological Measurement, 2008
This study investigates aspects of validity reflected in a large and diverse sample of published measures used in educational and psychological testing contexts. The current edition of "Mental Measurements Yearbook" served as the data source for this study. The validity aspects investigated included perspective on validity represented, number and…
Descriptors: Psychological Testing, Test Validity, Testing, Test Theory

Abbott, Robert D. – Educational and Psychological Measurement, 1971
Descriptors: Factor Analysis, Personality Measures, Psychological Testing

Silverstein, A. B.; And Others – Educational and Psychological Measurement, 1976
Intercorrelations among Uzgiris and Hunt's scales for two samples of infants were subjected to cluster analysis and factor analysis. The results of these procedures were similar for both samples: three clusters from the first analysis resembled three factors from the second. The findings were also consistent with Uzgiris and Hunt's observations on the…
Descriptors: Cluster Analysis, Factor Analysis, Infants, Psychological Testing

Koson, D.; And Others – Educational and Psychological Measurement, 1970
Descriptors: Computers, Psychological Testing, Response Style (Tests), Testing