Showing 1 to 15 of 29 results
Peer reviewed | Direct link
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Allen, Jeff – ACT, Inc., 2022
The COVID-19 pandemic caused widespread disruptions to the educational system in Arkansas and across the United States. At the onset of the pandemic in March 2020, schools in Arkansas were forced to replace on-site instruction with virtual instruction. During the 2020-2021 academic year, there were three student instructional options:…
Descriptors: COVID-19, Pandemics, Academic Achievement, Electronic Learning
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Keng, Leslie; Boyer, Michelle – National Center for the Improvement of Educational Assessment, 2020
ACT requested assistance from the National Center for the Improvement of Educational Assessment (Center for Assessment) to investigate declines of scores for states administering the ACT to its 11th grade students in 2018. This request emerged from conversations among state leaders, the Center for Assessment, and ACT in trying to understand the…
Descriptors: College Entrance Examinations, Scores, Test Score Decline, Educational Trends
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peer reviewed | Direct link
Smolinsky, Lawrence; Marx, Brian D.; Olafsson, Gestur; Ma, Yanxia A. – Journal of Educational Computing Research, 2020
Computer-based testing is an expanding use of technology offering advantages to teachers and students. We studied Calculus II classes for science, technology, engineering, and mathematics majors using different testing modes. Three sections with 324 students employed paper-and-pencil testing, computer-based testing, or both. Computer tests gave…
Descriptors: Test Format, Computer Assisted Testing, Paper (Material), Calculus
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed | PDF on ERIC: Download full text
Koretz, Daniel; Yu, Carol; Mbekeani, Preeya P.; Langi, Meredith; Dhaliwal, Tasmin; Braslow, David – AERA Open, 2016
The current focus on assessing "college and career readiness" raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of…
Descriptors: Predictor Variables, College Freshmen, Grade Point Average, College Entrance Examinations
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Peters, Joshua A. – ProQuest LLC, 2016
There is a lack of knowledge about whether results differ between paper-and-pencil high-stakes assessments and computer-based high-stakes assessments when considering race and/or free and reduced lunch status. The purpose of this study was to add new knowledge to this field of study by determining whether there is a…
Descriptors: Comparative Analysis, Computer Assisted Testing, Lunch Programs, High Stakes Tests
Peer reviewed | PDF on ERIC: Download full text
What Works Clearinghouse, 2016
Most colleges and universities in the United States require students to take the SAT or ACT as part of the college application process. These tests are high stakes in at least three ways. First, most universities factor scores on these tests into admissions decisions. Second, higher scores can increase a student's chances of being admitted to…
Descriptors: College Entrance Examinations, Test Preparation, College Applicants, High Stakes Tests
Ryan, Barbara A. – ProQuest LLC, 2012
Beginning with the No Child Left Behind federal legislation, states were required to use data to monitor and improve student achievement. For high schools, the Missouri Department of Elementary and Secondary Education chose End of Course Exams (EOC) to demonstrate levels of student achievement. The policy changed from school choice of paper-pencil…
Descriptors: Federal Government, Testing, Grade Point Average, Predictor Variables
Peer reviewed | PDF on ERIC: Download full text
Perez, Christina – Journal of College Admission, 2002
Spurred in part by University of California (UC) President Richard Atkinson's February 2001 proposal to drop the SAT I for UC applicants, more attention is being paid to other tests such as the SAT II and ACT. Proponents of these alternative exams argue that the SAT I is primarily an aptitude test measuring some vague concept of "inherent…
Descriptors: College Entrance Examinations, Test Reliability, Academic Achievement, Prediction
Peer reviewed
Plake, Barbara S.; Huntley, Renee M. – Educational and Psychological Measurement, 1984
Two studies examined the effect of making the correct answer of a multiple-choice test item grammatically consistent with the item stem. American College Testing Assessment experimental items were constructed to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…
Descriptors: Grammar, Higher Education, Item Analysis, Multiple Choice Tests