Wang, Yan; Murphy, Kevin B. – National Center for Education Statistics, 2020
In 2018, the National Center for Education Statistics (NCES) administered two assessments--the National Assessment of Educational Progress (NAEP) Technology and Engineering Literacy (TEL) assessment and the International Computer and Information Literacy Study (ICILS)--to two separate nationally representative samples of 8th-grade students in the…
Descriptors: National Competency Tests, International Assessment, Computer Literacy, Information Literacy
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
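Mode-comparability studies like the one above typically ask whether paper and online score distributions differ enough to warrant statistical adjustment. A minimal sketch of one common summary, a standardized mean difference between modes (hypothetical scores, not ACT's operational analysis):

```python
from statistics import mean, stdev

def standardized_mean_difference(paper_scores, online_scores):
    """Cohen's d-style effect size between two testing modes,
    using a pooled standard deviation."""
    n1, n2 = len(paper_scores), len(online_scores)
    s1, s2 = stdev(paper_scores), stdev(online_scores)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(online_scores) - mean(paper_scores)) / pooled_sd

# Hypothetical composite scores for two randomly equivalent groups
paper = [20, 21, 19, 22, 20, 18, 21, 23]
online = [20, 22, 19, 21, 20, 19, 22, 23]
d = standardized_mean_difference(paper, online)
# A small |d| would suggest no mode adjustment is needed
```

Operational studies use more elaborate methods (equating, DIF analyses), but the question reduces to whether differences like `d` are practically negligible.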
Martin, Michael O., Ed.; von Davier, Matthias, Ed.; Mullis, Ina V. S., Ed. – International Association for the Evaluation of Educational Achievement, 2020
The chapters in this online volume comprise the TIMSS & PIRLS International Study Center's technical report of the methods and procedures used to develop, implement, and report the results of TIMSS 2019. There were various technical challenges because TIMSS 2019 was the initial phase of the transition to eTIMSS, with approximately half the…
Descriptors: Foreign Countries, Elementary Secondary Education, Achievement Tests, International Assessment
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
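The test equating that the entry above describes can be illustrated with the simplest case, linear equating between two forms taken by randomly equivalent groups (an illustrative sketch with made-up scores, not the operational AP procedure):

```python
from statistics import mean, stdev

def linear_equate(x, scores_x, scores_y):
    """Map a score x from form X onto form Y's scale so that equated
    scores match form Y's mean and standard deviation."""
    slope = stdev(scores_y) / stdev(scores_x)
    intercept = mean(scores_y) - slope * mean(scores_x)
    return slope * x + intercept

# Hypothetical scores from randomly equivalent groups on two forms
form_x = [10, 12, 14, 16, 18]
form_y = [12, 14, 16, 18, 20]
equated = linear_equate(14, form_x, form_y)  # form X's mean maps to form Y's mean
```

Mixed-format tests complicate this picture because multiple-choice and constructed-response sections may behave differently across administrations, which is exactly the challenge the report examines.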
Ferguson, Sarah Jane – Statistics Canada, 2016
Canada's knowledge-based economy--especially the fields of science, technology, engineering and mathematics (STEM)--continues to grow. Related changes in the economy, including shifts to globalized markets and an emphasis on innovation and technology, all mean that education is more and more an integral component of economic and social well-being.…
Descriptors: Foreign Countries, Womens Education, Educational Attainment, Qualifications
Verbic, Srdjan; Tomic, Boris; Kartal, Vesna – Online Submission, 2010
On-line trial testing of fourth-grade students was an exploratory study conducted as part of the project "Developing annual test of students' achievement in Nature & Society," carried out by the Institute for Education Quality and Evaluation. The main aims of the study were to explore the possibilities for on-line testing at the national level in…
Descriptors: Foreign Countries, Item Response Theory, High School Students, Computer Assisted Testing
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
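The item parameter invariance property mentioned above can be illustrated with the two-parameter logistic (2PL) IRT model: parameters calibrated on one sample are applied unchanged to examinees from another. A generic sketch with hypothetical parameters, not Pearson's operational model:

```python
import math

def prob_correct_2pl(theta, a, b):
    """2PL IRT: probability that an examinee with ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Item parameters estimated from one calibration sample (hypothetical)
a, b = 1.2, 0.5

# Under invariance, the same (a, b) apply to examinees from any other
# sample drawn from the same population
probs = [prob_correct_2pl(theta, a, b) for theta in (-1.0, 0.5, 2.0)]
# At theta == b the 2PL model gives p == 0.5 regardless of a
```

Computer-adaptive testing leans on this property: items are selected on the fly using parameters estimated long before the examinee ever sits down.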
DeCarlo, Lawrence T. – ETS Research Report Series, 2008
Rater behavior in essay grading can be viewed as a signal-detection task, in that raters attempt to discriminate between latent classes of essays, with the latent classes being defined by a scoring rubric. The present report examines basic aspects of an approach to constructed-response (CR) scoring via a latent-class signal-detection model. The…
Descriptors: Scoring, Responses, Test Format, Bias
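Treating essay scoring as a signal-detection task means each rater compares a noisy perception of essay quality against ordered criteria, with the latent classes defined by the rubric. A minimal equal-variance Gaussian sketch of that idea (hypothetical parameters, not DeCarlo's fitted model):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rating_probabilities(d_prime, criteria, latent_class):
    """Probability that a rater assigns each ordinal score category.
    latent_class is 0 or 1; the 'signal' class mean is shifted by d_prime."""
    mu = d_prime * latent_class
    cuts = [-math.inf] + list(criteria) + [math.inf]
    return [normal_cdf(cuts[k + 1] - mu) - normal_cdf(cuts[k] - mu)
            for k in range(len(cuts) - 1)]

# Two latent essay classes, three score categories, criteria at 0.0 and 1.5
probs_low = rating_probabilities(d_prime=2.0, criteria=(0.0, 1.5), latent_class=0)
probs_high = rating_probabilities(d_prime=2.0, criteria=(0.0, 1.5), latent_class=1)
# Each list sums to 1; the 'signal' class shifts mass to higher categories
```

In this framing, a rater's discrimination (`d_prime`) and criterion placement capture accuracy and severity/leniency separately, which is what makes the model useful for studying rater bias.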
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
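The precision loss from clustered randomization that this study addresses is commonly summarized by the design effect, DEFF = 1 + (m - 1) x ICC, where m is the cluster size and ICC the intraclass correlation. A back-of-envelope sketch with illustrative numbers:

```python
def design_effect(cluster_size, icc):
    """Variance inflation from randomizing clusters (schools, classrooms)
    instead of individual students: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_students, cluster_size, icc):
    """Number of independently randomized students the clustered design
    is worth, for precision purposes."""
    return n_students / design_effect(cluster_size, icc)

# 2,000 students in schools of 50, with a school-level ICC of 0.15
deff = design_effect(50, 0.15)
n_eff = effective_sample_size(2000, 50, 0.15)
```

Even a modest ICC can shrink the effective sample dramatically, which is why such designs lean on covariate adjustment (e.g., pretest scores) to recover power.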
Liu, Kimy; Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Tindal, Gerald – Behavioral Research and Teaching, 2008
BRT Math Screening Measures focus on students' mathematics performance in grade-level standards for students in grades 1-8. A total of 24 test forms are available with three test forms per grade corresponding to fall, winter, and spring testing periods. Each form contains computation problems and application problems. BRT Math Screening Measures…
Descriptors: Test Items, Test Format, Test Construction, Item Response Theory
George-Ezzelle, Carol E.; Hsu, Yung-chen – GED Testing Service, 2006
As GED (General Educational Development) Testing Service considers the feasibility of a computer administration of the GED Tests, one issue being considered is the difference in costs of supplying only a computer-based format vs. offering both computer-based and paper formats of the GED Tests. A significant concern then arises as to whether…
Descriptors: Familiarity, Computer Assisted Testing, High School Equivalency Programs, Computer Literacy