Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu – ACT, Inc., 2013
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Descriptors: Comparative Analysis, Error of Measurement, Scores, Scaling
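As background for the CSEM comparison above, a minimal sketch of a conditional standard error of measurement under Lord's binomial error model on the raw-score scale (this is illustrative background, not one of the scale-score methods the paper compares; the function name is made up):

```python
import math

def binomial_csem(raw_score: int, n_items: int) -> float:
    """Lord's binomial-error-model CSEM for a raw score on an n-item test:
    sqrt(x * (n - x) / (n - 1)). Largest near mid-range scores, zero at the
    extremes, which is why CSEM varies across the score scale."""
    return math.sqrt(raw_score * (n_items - raw_score) / (n_items - 1))

# A perfect or zero score has no measurement error under this model,
# while a mid-range score has the largest CSEM.
mid = binomial_csem(30, 60)
```

Scale-score CSEM methods, like those the paper compares, transform such raw-score error estimates through the raw-to-scale conversion rather than reporting them on the raw metric.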
Westrick, Paul A. – ACT, Inc., 2015
This study examined the effects of differential grading in science, technology, engineering, and mathematics (STEM) and non-STEM fields over eight consecutive semesters. Using data from 62,122 students at 26 four-year postsecondary institutions, students were subdivided by institutional admission selectivity levels, gender, and student major…
Descriptors: Grading, Student Evaluation, STEM Education, Meta Analysis
Chen, Hanwei; Cui, Zhongmin; Zhu, Rongchun; Gao, Xiaohong – ACT, Inc., 2010
The most critical feature of a common-item nonequivalent groups equating design is that the average score difference between the new and old groups can be accurately decomposed into a group ability difference and a form difficulty difference. Two widely used observed-score linear equating methods, the Tucker and the Levine observed-score methods,…
Descriptors: Equated Scores, Groups, Ability Grouping, Difficulty Level
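The abstract describes decomposing the new/old group score difference in a common-item nonequivalent groups design. A hedged sketch of the Tucker observed-score linear method, following the standard synthetic-population formulas (function name, equal default weight, and sample data are illustrative assumptions, not ACT's implementation):

```python
import numpy as np

def tucker_linear_equating(x1, v1, y2, v2, w1=0.5):
    """Tucker observed-score linear equating for a common-item
    nonequivalent groups design.

    x1, v1 -- total and anchor (common-item) scores, group taking form X
    y2, v2 -- total and anchor scores, group taking form Y
    w1     -- weight of group 1 in the synthetic population
    Returns (slope, intercept) placing form-X scores on the form-Y scale.
    """
    w2 = 1.0 - w1
    # Within-group regression slopes of total score on anchor score;
    # these carry the group-ability adjustment through the anchor.
    g1 = np.cov(x1, v1, ddof=1)[0, 1] / np.var(v1, ddof=1)
    g2 = np.cov(y2, v2, ddof=1)[0, 1] / np.var(v2, ddof=1)
    dmu = np.mean(v1) - np.mean(v2)          # anchor mean difference
    dvar = np.var(v1, ddof=1) - np.var(v2, ddof=1)
    # Synthetic-population moments
    mu_x = np.mean(x1) - w2 * g1 * dmu
    mu_y = np.mean(y2) + w1 * g2 * dmu
    var_x = np.var(x1, ddof=1) - w2 * g1**2 * dvar + w1 * w2 * (g1 * dmu) ** 2
    var_y = np.var(y2, ddof=1) + w1 * g2**2 * dvar + w1 * w2 * (g2 * dmu) ** 2
    slope = np.sqrt(var_y / var_x)
    return slope, mu_y - slope * mu_x
```

The Levine observed-score method differs only in the gamma terms: it replaces the observed-score regression slopes with ratios based on a classical congeneric true-score model, which is exactly where the two methods' group/form decompositions diverge.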
Kang, Taehoon; Petersen, Nancy S. – ACT, Inc., 2009
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
Descriptors: Standards, Testing Programs, Test Items, Statistical Distributions
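For separate calibration with linking, as in the entry above, common-item parameter estimates from the new calibration must be transformed onto the base scale. A sketch of mean/sigma linking, a simpler alternative to the Stocking and Lord (1983) characteristic-curve method the abstract names (function names are illustrative):

```python
import statistics

def mean_sigma_linking(b_new, b_base):
    """Mean/sigma scale linking from common-item difficulty (b) estimates
    obtained in two separate calibrations. Returns (A, B) for the scale
    transformation theta_base = A * theta_new + B."""
    A = statistics.stdev(b_base) / statistics.stdev(b_new)
    B = statistics.mean(b_base) - A * statistics.mean(b_new)
    return A, B

def transform_item(a, b, A, B):
    """Place a new-calibration 2PL item (a, b) on the base scale:
    a* = a / A, b* = A * b + B."""
    return a / A, A * b + B
```

Concurrent calibration and fixed item parameter calibration avoid this explicit transformation step, which is the design trade-off the paper's comparison turns on; the characteristic-curve method instead chooses A and B to minimize differences between test characteristic curves rather than matching b-parameter moments.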
ACT, Inc., 2005
One of the most challenging issues a state must resolve in designing a statewide standards and college readiness assessment is that of how student scores should be reported. The ACT is an effective and reliable measure of student readiness for college and work, but in some cases states may wish to augment the ACT with tests of their own design. In…
Descriptors: Academic Achievement, Raw Scores, Achievement Rating, School Readiness


