Showing all 10 results
Peer reviewed
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
Wiley, Andrew – College Board, 2009
Presented at the national conference of the American Educational Research Association (AERA) in 2009. This presentation discussed the development and implementation of the new SAT writing section.
Descriptors: Aptitude Tests, Writing Tests, Test Construction, Test Format
Sykes, Robert C.; Truskosky, Denise; White, Hillory – 2001
The purpose of this research was to study the effect of three different ways of increasing the number of points contributed by constructed response (CR) items on the reliability of test scores from mixed-item-format tests. The assumption of unidimensionality that underlies the accuracy of item response theory model-based standard error…
Descriptors: Constructed Response, Elementary Education, Elementary School Students, Error of Measurement
Read, John – 1990
This paper, a discussion of the use of written tests to assess second language proficiency and achievement, considers what constitutes a valid writing test task and addresses three questions: (1) To what extent is performance influenced by prior knowledge about the topic? (2) Does it make a difference how the writing task is specified on the test…
Descriptors: English for Academic Purposes, Foreign Countries, Higher Education, Language Proficiency
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards (AIMS), a Standards-Based test, provides educators and the public with valuable information regarding the progress of Arizona's students toward mastering Arizona's reading, writing, and mathematics Standards. This specific test, Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA), is…
Descriptors: Grade 8, Reference Materials, Test Items, Scoring
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
Peer reviewed
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
Hargett, Gary R. – 1998
The purposes and methods of testing in bilingual and English-as-a-Second-Language (ESL) education are discussed. Different instruments, including specific published tests, are listed and described briefly. They include language proficiency assessments, achievement tests, and assessments in special education. Introductory sections address topics…
Descriptors: Academic Standards, Bilingual Education, Classroom Observation Techniques, Cloze Procedure