Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 7
Source
ACT, Inc.: 1
Behavioral Research and…: 1
College Board: 1
Grantee Submission: 1
International Association for…: 1
National Center for Education…: 1
Pearson: 1
Author
Allalouf, Avi: 1
Ben Seipel: 1
Boldt, R. F.: 1
Brennan, Robert L.: 1
Cho, YoungWoo: 1
Eignor, Daniel R.: 1
Goodman, Joshua: 1
Ketterlin-Geller, Leanne R.: 1
Lee, Eunjung: 1
Lee, Won-Chan: 1
Leitner, Dennis W.: 1
Publication Type
Numerical/Quantitative Data: 13
Reports - Research: 10
Speeches/Meeting Papers: 5
Reports - Evaluative: 3
Guides - General: 1
Education Level
Elementary Secondary Education: 3
Grade 8: 3
Elementary Education: 2
Grade 4: 2
Higher Education: 2
Junior High Schools: 2
Middle Schools: 2
Postsecondary Education: 2
Secondary Education: 2
Grade 1: 1
Grade 2: 1
Assessments and Surveys
ACT Assessment: 1
National Assessment of…: 1
SAT (College Admission Test): 1
Stanford Achievement Tests: 1
Trends in International…: 1
Wang, Yan; Murphy, Kevin B. – National Center for Education Statistics, 2020
In 2018, the National Center for Education Statistics (NCES) administered two assessments--the National Assessment of Educational Progress (NAEP) Technology and Engineering Literacy (TEL) assessment and the International Computer and Information Literacy Study (ICILS)--to two separate nationally representative samples of 8th-grade students in the…
Descriptors: National Competency Tests, International Assessment, Computer Literacy, Information Literacy
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Martin, Michael O., Ed.; von Davier, Matthias, Ed.; Mullis, Ina V. S., Ed. – International Association for the Evaluation of Educational Achievement, 2020
The chapters in this online volume comprise the TIMSS & PIRLS International Study Center's technical report of the methods and procedures used to develop, implement, and report the results of TIMSS 2019. There were various technical challenges because TIMSS 2019 was the initial phase of the transition to eTIMSS, with approximately half the…
Descriptors: Foreign Countries, Elementary Secondary Education, Achievement Tests, International Assessment
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
Liu, Kimy; Ketterlin-Geller, Leanne R.; Yovanoff, Paul; Tindal, Gerald – Behavioral Research and Teaching, 2008
BRT Math Screening Measures focus on students' mathematics performance in grade-level standards for students in grades 1-8. A total of 24 test forms are available with three test forms per grade corresponding to fall, winter, and spring testing periods. Each form contains computation problems and application problems. BRT Math Screening Measures…
Descriptors: Test Items, Test Format, Test Construction, Item Response Theory
Boldt, R. F. – 1992
The Test of Spoken English (TSE) is an internationally administered instrument for assessing nonnative speakers' proficiency in speaking English. The research foundation of the TSE examination described in its manual refers to two sources of variation other than the achievement being measured: interrater reliability and internal consistency.…
Descriptors: Adults, Analysis of Variance, Interrater Reliability, Language Proficiency
Price, Larry R.; Oshima, T. C. – 1998
Often, educational and psychological measurement instruments must be translated from one language to another when they are administered to different cultural groups. The translation process often introduces measurement inequivalence. Therefore, an examination may be said to exhibit differential functioning if the test provides a…
Descriptors: Certification, Cross Cultural Studies, Cultural Differences, Diving
Rapp, Joel; Allalouf, Avi – 2002
This study examined the cross-lingual equating process adopted by a large scale testing system in which target language (TL) forms are equated to the source language (SL) forms using a set of translated items. The focus was on evaluating the degree of error inherent in the routine cross-lingual equating of the Verbal Reasoning subtest of the…
Descriptors: College Applicants, College Entrance Examinations, Equated Scores, High Stakes Tests
Gender and Achievement--Understanding Gender Differences and Similarities in Mathematics Assessment.
Zhang, Liru; Manon, Jon – 2000
The primary objective of this study was to investigate overall patterns of gender differences and similarities of test performance in mathematics. To achieve that objective, observed test scores on the Delaware standards-based assessment were analyzed to examine: (1) gender differences and similarities across grades 3, 5, 8 and 10 over 2 years;…
Descriptors: Academic Standards, Elementary School Students, Elementary Secondary Education, Mathematics Achievement
Leitner, Dennis W.; And Others – 1979
To discover factors which contribute to a high response rate for questionnaire surveys, the preferences of 150 college teachers and teaching assistants were studied. Four different questionnaire formats using 34 common items were sent to the subjects: open-ended; Likert-type (five points, from "strong influence to return," to…
Descriptors: Check Lists, College Faculty, Comparative Testing, Higher Education
Eignor, Daniel R. – 1985
The feasibility of pre-equating, or establishing conversions from raw to scaled scores through the use of pretest data before operationally administering a test, was investigated for the Scholastic Aptitude Test (SAT). Item-response theory based equating methods were used to estimate item parameters on SAT pretest data, instead of using final form…
Descriptors: College Entrance Examinations, Equated Scores, Estimation (Mathematics), Feasibility Studies