Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Comparative Testing | 61 |
| Higher Education | 27 |
| Test Validity | 23 |
| Test Reliability | 13 |
| Intelligence Tests | 12 |
| Construct Validity | 11 |
| Psychometrics | 10 |
| Undergraduate Students | 9 |
| College Students | 8 |
| Correlation | 8 |
| Item Analysis | 8 |
Source
| Source | Count |
| --- | --- |
| Educational and Psychological… | 61 |
Author
| Author | Count |
| --- | --- |
| Schriesheim, Chester A. | 2 |
| Whitworth, Randolph H. | 2 |
| Almond, Russell G. | 1 |
| Azen, Razia | 1 |
| Bandalos, Deborah | 1 |
| Baumgarten, Bruce S. | 1 |
| Beck, Brett | 1 |
| Beck, Frances Wi | 1 |
| Benson, Jeri | 1 |
| Beyler, Jane | 1 |
| Bhola, Dennison S. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 56 |
| Reports - Research | 48 |
| Reports - Evaluative | 9 |
| Speeches/Meeting Papers | 2 |
| Tests/Questionnaires | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 3 |
| Middle Schools | 2 |
| Adult Education | 1 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 8 | 1 |
| High Schools | 1 |
Laws, Policies, & Programs
| Law/Program | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
Nataly Beribisky; Gregory R. Hancock – Educational and Psychological Measurement, 2024
Fit indices are descriptive measures that can help evaluate how well a confirmatory factor analysis (CFA) model fits a researcher's data. In multigroup models, before between-group comparisons are made, fit indices may be used to evaluate measurement invariance by assessing the degree to which multiple groups' data are consistent with increasingly…
Descriptors: Factor Analysis, Research Methodology, Comparative Testing, Measurement
Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A. – Educational and Psychological Measurement, 2011
Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…
Descriptors: Cutting Scores, Standard Setting (Scoring), Models, Construct Validity
Donnellan, M. Brent – Educational and Psychological Measurement, 2008
The properties of the achievement goal inventories developed by Grant and Dweck (2003) and Elliot and McGregor (2001) were evaluated in two studies with a total of 780 participants. A four-factor specification for the Grant and Dweck inventory did not closely replicate results published in their original report. In contrast, the structure of the…
Descriptors: Academic Achievement, Psychometrics, Program Validation, Achievement Rating
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Peer reviewed: Nevo, B.; Oren, C. – Educational and Psychological Measurement, 1986
Concurrent validity of the Scholastic Aptitude Test (SAT) and the Israeli Inter-University Psychometric Entrance Test (IUPET) was supported by high correlations, including a correlation of .85 between SAT and IUPET total scores. Transformation equations were presented to estimate performance on one test from the other. (Author/LMO)
Descriptors: College Entrance Examinations, Comparative Testing, Correlation, Higher Education
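Transformation equations of the kind this abstract mentions are typically linear, regression-style score conversions. A minimal sketch in Python: the .85 correlation is taken from the abstract, but every mean and standard deviation below is a hypothetical placeholder, not a published SAT or IUPET statistic.

```python
# Regression-based linear conversion of a score x on test X to a
# predicted score on test Y:
#     y_hat = mean_y + r * (sd_y / sd_x) * (x - mean_x)
def predict_score(x, mean_x, sd_x, mean_y, sd_y, r):
    """Predict a test-Y score from a test-X score via linear regression."""
    return mean_y + r * (sd_y / sd_x) * (x - mean_x)

# Hypothetical scales: an SAT-like test (mean 1000, SD 200) mapped to an
# IUPET-like test (mean 500, SD 100), using the reported r = .85.
print(predict_score(1200, 1000, 200, 500, 100, 0.85))
```

A score one standard deviation above the X mean maps to only r standard deviations above the Y mean, which is the familiar regression-to-the-mean effect in test equating.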
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2007
In an Angoff standard setting procedure, judges estimate the probability that a hypothetical randomly selected minimally competent candidate will answer correctly each item in the test. In many cases, these item performance estimates are made twice, with information shared with the panelists between estimates. Especially for long tests, this…
Descriptors: Test Items, Probability, Item Analysis, Standard Setting (Scoring)
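The Angoff procedure described above reduces, computationally, to averaging each item's probability estimates across judges and summing those means into a recommended raw cut score. A minimal sketch with hypothetical ratings (plain Python, no external libraries):

```python
# Hypothetical Angoff ratings: each row holds one judge's estimated
# probability that a minimally competent candidate answers each item
# correctly (3 judges, 4 items).
ratings = [
    [0.6, 0.8, 0.5, 0.9],  # judge 1
    [0.7, 0.7, 0.4, 0.8],  # judge 2
    [0.5, 0.9, 0.6, 0.9],  # judge 3
]

def angoff_cut_score(ratings):
    """Mean rating per item, summed across items -> recommended raw cut score."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    item_means = [sum(judge[i] for judge in ratings) / n_judges
                  for i in range(n_items)]
    return sum(item_means)

print(round(angoff_cut_score(ratings), 2))
```

In a two-round procedure like the one studied, the same computation would simply be repeated on the second-round ratings after the panelists see shared item data.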
Peer reviewed: Spruill, Jean; Beck, Brett – Educational and Psychological Measurement, 1986
This study is a concurrent validation of Level II of the Wide Range Achievement Test-Revised (WRAT-R) using the Wechsler Adult Intelligence Scale (WAIS) Verbal, Performance, and Full Scale IQ scores as criterion measures. Forty-five subjects were administered the WAIS-R and WRAT-R and their scores correlated. (Author/LMO)
Descriptors: Achievement Tests, Adults, Comparative Testing, Concurrent Validity
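Concurrent validation of this kind comes down to computing Pearson correlations between paired scores on the two instruments. A self-contained sketch; the score vectors below are hypothetical, not the study's WAIS-R/WRAT-R data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores for four examinees on two tests.
print(round(pearson_r([100, 110, 95, 120], [98, 112, 97, 118]), 3))
```

On Python 3.10+ the standard library's `statistics.correlation` gives the same quantity; the explicit version above just makes the formula visible.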
Peer reviewed: Riviere, Michael S. – Educational and Psychological Measurement, 1973
Descriptors: Comparative Testing, Intelligence Tests, Mental Retardation, Test Reliability
Peer reviewed: Schriesheim, Chester A.; Novelli, Luke, Jr. – Educational and Psychological Measurement, 1989
Differences between recommended sets of equal-interval response anchors derived from scaling techniques using magnitude estimations and Thurstone Case III pair-comparison treatment of complete ranks were compared. Differences in results for 205 undergraduates reflected differences in the samples as well as in the tasks and computational…
Descriptors: Comparative Testing, Higher Education, Psychometrics, Rating Scales
Peer reviewed: Chew, Alex L.; Morris, John D. – Educational and Psychological Measurement, 1987
The validity of the Lollipop Test: A Diagnostic Screening Test of School Readiness was examined for 129 pre-kindergarten subjects using the Developmental Indicator for the Assessment of Learning as the criterion. Concurrent validity was demonstrated across the test batteries. The Lollipop Test appears to be an attractive alternative…
Descriptors: Comparative Testing, Concurrent Validity, Kindergarten, Preschool Children
Peer reviewed: Pascale, Pietro J. – Educational and Psychological Measurement, 1973
Descriptors: Comparative Testing, Measurement Techniques, Preschool Children, Preschool Tests
Peer reviewed: Goldstein, Marc B.; Bokoros, Michael A. – Educational and Psychological Measurement, 1992
It is argued that the axes of the Learning Style Inventory and Learning Style Questionnaire are not conceptually aligned. More accurate comparison involves the degree of similar classification of learning styles, as illustrated in the modest but similar degree of classification for 44 graduate and undergraduate students. (SLD)
Descriptors: Classification, Cognitive Style, Comparative Testing, Correlation
Sinharay, Sandip; Almond, Russell G. – Educational and Psychological Measurement, 2007
A cognitive diagnostic model uses information from educational experts to describe the relationships between item performances and posited proficiencies. When the cognitive relationships can be described using a fully Bayesian model, Bayesian model checking procedures become available. Checking models tied to cognitive theory of the domains…
Descriptors: Epistemology, Clinical Diagnosis, Job Training, Item Response Theory
Peer reviewed: Hull, Marc; Halloran, William – Educational and Psychological Measurement, 1976
Results show that the mean number of Occupational Aptitude Patterns (OAPs) generated for a sample of mentally retarded and borderline intelligence students is significantly greater for the Nonreading Aptitude Test Battery (NATB) than for the General Aptitude Test Battery (GATB). (DEP)
Descriptors: Comparative Testing, Intelligence Tests, Low Ability Students, Mental Retardation
Peer reviewed: Boldt, R. F. – Educational and Psychological Measurement, 1974
Descriptors: Comparative Testing, Equated Scores, National Norms, Raw Scores