Showing 1 to 15 of 56 results
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice-multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Peer reviewed
Paleczek, Lisa; Seifert, Susanne; Schöfl, Martin – British Journal of Educational Technology, 2021
The current study digitalised an assessment instrument of receptive vocabulary knowledge, GraWo-KiGa, for use in Austrian kindergartens. Using a mixed-methods approach, this study looks at 85 kindergarteners in their last year (age M = 5.79 years, 51.8% male, 71.8% L1 German) to find out (a) whether the form of digital assessment employed meets…
Descriptors: Kindergarten, Receptive Language, Foreign Countries, Native Language
Jin Soo Choi – ProQuest LLC, 2022
Nonverbal behavior is essential in human interaction (Gullberg, de Bot, & Volterra, 2008; McNeill, 1992, 2005). For second language speakers, nonverbal features can be helpful for successful and efficient communication (e.g., Dahl & Ludvigsen, 2014). However, due to the complexity of nonverbal features, language testing institutions have…
Descriptors: Language Tests, Language Proficiency, Videoconferencing, Second Language Learning
Peer reviewed
Al Habbash, Maha; Alsheikh, Negmeldin; Liu, Xu; Al Mohammedi, Najah; Al Othali, Safa; Ismail, Sadiq Abdulwahed – International Journal of Instruction, 2021
This convergent mixed-methods study aimed to explore the English context of the widely used Emirates Standardized Test (EmSAT) by juxtaposing it with its sequel, the International English Language Testing System (IELTS). For this purpose, the study used the Common European Framework of Reference (CEFR) international standards, which are used as a…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Guidelines
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Peer reviewed
Akbay, Tuncer; Akbay, Lokman; Erol, Osman – Malaysian Online Journal of Educational Technology, 2021
Integration of e-learning and computerized assessments into many levels of educational programs has been increasing as digital technology progresses. Due to a handful of prominent advantages of computer-based testing (CBT), a rapid transition in test administration mode from paper-based testing (PBT) to CBT has emerged. Recently, many national and…
Descriptors: Computer Assisted Testing, Testing, High Stakes Tests, International Assessment
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peer reviewed
Solheim, Oddny Judith; Lundetrae, Kjersti – Assessment in Education: Principles, Policy & Practice, 2018
Gender differences in reading seem to increase throughout schooling and then decrease or even disappear with age, but the reasons for this are unclear. In this study, we explore whether differences in the way "reading literacy" is operationalised can add to our understanding of varying gender differences in international large-scale…
Descriptors: Achievement Tests, Foreign Countries, Grade 4, Reading Achievement
Peer reviewed
Hubbard, Joanna K.; Potts, Macy A.; Couch, Brian A. – CBE - Life Sciences Education, 2017
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover…
Descriptors: Test Format, Questioning Techniques, Undergraduate Students, Objective Tests
Peer reviewed
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure, evaluate, and authenticate learners' academic readiness, learning advancement, acquisition of skills, or instructional needs. This study sought to determine whether varied combinations of option arrangements and letter cases in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Peer reviewed
Kim, Ahyoung Alicia; Lee, Shinhye; Chapman, Mark; Wilmes, Carsten – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2019
This study aimed to investigate how Grade 1-2 English language learners (ELLs) differ in their performance on a writing test in two test modes: paper and online. Participants were 139 ELLs in the United States. They completed three writing tasks, representing three test modes: (1) a paper in which students completed their writing using a…
Descriptors: Elementary School Students, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Stenlund, Tova; Sundström, Anna; Jonsson, Bert – Educational Psychology, 2016
This study examined whether practice testing with short-answer (SA) items benefits learning over time compared to practice testing with multiple-choice (MC) items, and rereading the material. More specifically, the aim was to test the hypotheses of "retrieval effort" and "transfer appropriate processing" by comparing retention…
Descriptors: Short Term Memory, Long Term Memory, Test Format, Testing
Peer reviewed
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items