Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Source
| Journal of Educational Measurement | 4 |
| Journal of Technology, Learning, and Assessment | 2 |
| Developmental Psychology | 1 |
| Journal of Research on Leadership Education | 1 |
Author
| Bridgeman, Brent | 3 |
| Rock, Donald A. | 2 |
| Attali, Yigal | 1 |
| Drake, Samuel | 1 |
| Freedle, Roy | 1 |
| Gerritz, Kalle | 1 |
| Gu, Lixiong | 1 |
| Kostin, Irene | 1 |
| Scheuneman, Janice Dowd | 1 |
| Stricker, Lawrence J. | 1 |
| Trapani, Catherine | 1 |
Publication Type
| Journal Articles | 8 |
| Reports - Research | 7 |
| Reports - Evaluative | 1 |
Education Level
| Higher Education | 3 |
| Postsecondary Education | 1 |
Audience
| Researchers | 1 |
Assessments and Surveys
| Graduate Record Examinations | 8 |
| SAT (College Admission Test) | 2 |
| Test of English as a Foreign Language | 1 |
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach to automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Stricker, Lawrence J.; Rock, Donald A. – Developmental Psychology, 1987 (peer reviewed)
Evaluated the extent to which the Graduate Record Examinations General Test measures the same constructs for older test takers as it does for younger examinees. Results suggest that the convergent validity of the test is similar across the age groups, but discriminant validity is somewhat different for older examinees. (Author/RWB)
Descriptors: Adults, Age Differences, Comparative Testing, Factor Analysis
Young, I. Phillip – Journal of Research on Leadership Education, 2008
Empirical studies addressing admission to and graduation from doctoral programs in educational leadership are noticeably absent from the professional literature, and this study seeks to partially fill that void by testing specific hypotheses. Archival data were used to conduct a three-group discriminant analysis where the…
Descriptors: Grade Point Average, Predictive Validity, Doctoral Programs, Sampling
Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993 (peer reviewed)
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate which constructs the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Freedle, Roy; Kostin, Irene – Journal of Educational Measurement, 1990 (peer reviewed)
The importance of item difficulty (equated delta) was explored as a predictor of differential item functioning between Black and White examinees for 4 verbal item types, using 13 Graduate Record Examination forms and 11 Scholastic Aptitude Test forms. Several significant racial differences were found. (TJH)
Descriptors: Black Students, College Bound Students, College Entrance Examinations, Comparative Testing
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Bridgeman, Brent – Journal of Educational Measurement, 1992 (peer reviewed)
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Scheuneman, Janice Dowd; Gerritz, Kalle – Journal of Educational Measurement, 1990 (peer reviewed)
Differential item functioning (DIF) methodology for revealing sources of item difficulty and the performance characteristics of different groups was explored. A total of 150 Scholastic Aptitude Test items and 132 Graduate Record Examination General Test items were analyzed. DIF was evaluated for males versus females and for Blacks versus Whites. (SLD)
Descriptors: Black Students, College Entrance Examinations, College Students, Comparative Testing