Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Ivan D. Mardini G.; Christian G. Quintero M.; César A. Viloria N.; Winston S. Percybrooks B.; Heydy S. Robles N.; Karen Villalba R. – Education and Information Technologies, 2024
Today, reading comprehension is considered an essential skill in modern life; therefore, higher education students require more specific skills to understand, interpret, and evaluate texts effectively. Short-answer questions (SAQs) are among the relevant and appropriate tools for assessing reading comprehension skills. Unlike multiple-choice questions,…
Descriptors: Reading Comprehension, Reading Tests, Learning Strategies, Grading
Peer reviewed
Direct link
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Marini, Jessica P.; Westrick, Paul A.; Young, Linda; Shaw, Emily J. – College Board, 2022
This study examines relationships between digital SAT scores and other relevant educational measures, such as high school grade point average (HSGPA), PSAT/NMSQT Total score, and Average AP Exam score, and compares those relationships to current paper and pencil SAT score relationships with the same measures. This information can provide…
Descriptors: Scores, College Entrance Examinations, Comparative Analysis, Test Format
Peer reviewed
PDF on ERIC
Sharareh Sadat Sarsarabi; Zeinab Sazegar – International Journal of Language Testing, 2023
The stem of a multiple-choice question can be written using two types of sentences: interruptive (periodic) and cumulative (loose). This study deals with different kinds of stems in designing multiple-choice (MC) items. To fill the existing gap in the literature, two groups of student teachers taking general English courses…
Descriptors: Language Tests, Test Format, Multiple Choice Tests, Student Placement
Peer reviewed
Direct link
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Direct link
Liao, Ray J. T. – Language Testing, 2023
Among the variety of selected response formats used in L2 reading assessment, multiple-choice (MC) is the most commonly adopted, primarily due to its efficiency and objectiveness. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners' L2 reading…
Descriptors: Reading Tests, Language Tests, Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Ben-Yehudah, Gal; Eshet-Alkalai, Yoram – British Journal of Educational Technology, 2021
The use of digital environments for both learning and assessment is becoming prevalent. This often leads to incongruent situations, in which the study medium (e.g., a printed textbook) differs from the testing medium (e.g., online multiple-choice exams). Despite some evidence that incongruent study-test situations are associated with inferior…
Descriptors: Reading Comprehension, Reading Tests, Test Format, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Sanne Unger; Alanna Lecher – Journal of Effective Teaching in Higher Education, 2024
This action research project sought to understand how giving students a choice in how to demonstrate mastery of a reading would affect both grades and evaluations of the instructor, given that assessment choice might increase student engagement. We examined the effect of student assessment choice on grades and course evaluations, the two…
Descriptors: College Faculty, College Students, Alternative Assessment, Test Selection
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Peer reviewed
PDF on ERIC
Liao, Ray J. T. – Reading in a Foreign Language, 2021
This study compared the processes that second language learners used when completing reading tasks in a multiple-choice question (MCQ) format with those in a short-answer question (SAQ) format. Sixteen nonnative English-speaking students from a large Midwestern college in the United States were invited to complete MCQ and SAQ English…
Descriptors: Multiple Choice Tests, Test Format, Comparative Analysis, Reading Tests
Peer reviewed
PDF on ERIC
Hammad, Enas Abdullah – Arab World English Journal, 2021
Despite Palestinian university students' problems with the Test of English as a Foreign Language Internet-based Test, no researchers have approached this research area in the Palestinian English as a Foreign Language context. The present study attempted to answer a question focusing on Palestinian university students' problems with the reading sections…
Descriptors: College Seniors, Computer Assisted Testing, Paper (Material), Test Format
Peer reviewed
PDF on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment), identifies students who struggle with comprehension, and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peer reviewed
Direct link
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification