Showing 76 to 90 of 567 results
Peer reviewed
Sahlan; Alberth; Madil, Wahyudin; Hutnisyawati – Issues in Educational Research, 2021
Online testing is now becoming more prevalent internationally, across different educational levels. However, little is known about the impact of this mode of testing on male and female students' anxiety levels and test performance. Conducted in an Indonesian sociocultural context which is under-represented in the international literature, this…
Descriptors: Computer Assisted Testing, Test Anxiety, Test Format, Scores
Peer reviewed
Kathleen A. Paciga; Christina M. Cassano – AERA Open, 2024
Early literacy assessment has become commonplace in the preschool years, with phonological awareness constituting one component of emergent literacy targeted by such practices. This within-subjects experimental study examines the role of word familiarity on 93 dual language preschoolers' performance on phoneme-level awareness tasks in…
Descriptors: Emergent Literacy, Phonological Awareness, Bilingualism, Preschool Children
Peer reviewed
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and equally a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Peer reviewed
Baral, Sami; Botelho, Anthony; Santhanam, Abhishek; Gurung, Ashish; Cheng, Li; Heffernan, Neil – International Educational Data Mining Society, 2023
Teachers often rely on the use of a range of open-ended problems to assess students' understanding of mathematical concepts. Beyond traditional conceptions of student open-ended work, commonly in the form of textual short-answer or essay responses, the use of figures, tables, number lines, graphs, and pictographs are other examples of open-ended…
Descriptors: Mathematics Instruction, Mathematical Concepts, Problem Solving, Test Format
Peer reviewed
Jung Youn, Soo – Language Testing, 2023
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and…
Descriptors: Test Construction, Test Validity, Second Language Learning, Telecommunications
Peer reviewed
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice-multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Marini, Jessica P.; Westrick, Paul A.; Young, Linda; Shaw, Emily J. – College Board, 2022
This study examines relationships between digital SAT scores and other relevant educational measures, such as high school grade point average (HSGPA), PSAT/NMSQT Total score, and Average AP Exam score, and compares those relationships to current paper and pencil SAT score relationships with the same measures. This information can provide…
Descriptors: Scores, College Entrance Examinations, Comparative Analysis, Test Format
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify the length of tests being shortened whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
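The Benton (2021) entry above rests on the core mechanism of computer adaptive testing: each successive item is chosen to match the examinee's provisionally estimated ability. The sketch below illustrates that idea under a simple Rasch (1PL) IRT model. It is a minimal, hypothetical example written for this listing, not code from Benton's study; the item bank, ability values, and function names are all invented for illustration.

```python
# Illustrative sketch of adaptive item selection under a Rasch (1PL) IRT model.
# All names and numbers are hypothetical; nothing here comes from the studies listed above.
import math
import random

def p_correct(theta, b):
    """Rasch model: probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, administered):
    """Pick the unadministered item whose difficulty is closest to the current ability
    estimate (under 1PL this maximizes Fisher information)."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def update_theta(theta, responses):
    """One Newton-Raphson step toward the maximum-likelihood ability estimate,
    given (difficulty, correct) pairs observed so far."""
    grad = sum(u - p_correct(theta, b) for b, u in responses)
    info = sum(p_correct(theta, b) * (1 - p_correct(theta, b)) for b, _ in responses)
    return theta + grad / info if info > 0 else theta

def run_cat(difficulties, true_theta, test_length=10, seed=0):
    """Administer a short adaptive test against a simulated examinee."""
    rng = random.Random(seed)
    theta, administered, responses = 0.0, set(), []
    for _ in range(test_length):
        i = next_item(theta, difficulties, administered)
        administered.add(i)
        u = 1 if rng.random() < p_correct(true_theta, difficulties[i]) else 0
        responses.append((difficulties[i], u))
        theta = update_theta(theta, responses)
    return theta

if __name__ == "__main__":
    item_bank = [round(-3 + 0.1 * k, 2) for k in range(61)]  # difficulties from -3 to 3
    print("estimated ability:", round(run_cat(item_bank, true_theta=1.2), 2))
```

Under the 1PL model, selecting the item whose difficulty lies closest to the current ability estimate maximizes the information gained per response, which is the reasoning behind the claim that an adaptive test can match the reliability of a longer fixed-form test.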
Peer reviewed
Ali Amjadi – Reading & Writing Quarterly, 2024
Over the last few years, technology has offered new ways of teaching and learning, and educational systems are adopting the tools it provides. The abrupt upsurge of the COVID-19 pandemic accelerated this adoption and impelled educational systems to shift to online teaching and learning. Consequently, the offline…
Descriptors: Test Format, Reading Comprehension, Computer Assisted Testing, Reading Strategies
Peer reviewed
McLeod, Melissa; Cheng, Liying – Language Assessment Quarterly, 2023
The Canadian English Language Proficiency Index Program (CELPIP) Test was designed for immigration and citizenship in Canada. CELPIP is a computer-based English-language proficiency test which covers all four skills. This test review provides a description of the test and its construct, tasks, and delivery. Then, it appraises CELPIP for…
Descriptors: Language Tests, Language Proficiency, English (Second Language), Second Language Learning
Peer reviewed
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
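Sayin, Bozdag, and Gierl (2023) apply template-based AIG, in which an item model with variable slots is combined with permissible values to generate many item variants. The toy sketch below shows only that general template-filling idea with a hypothetical arithmetic stem; it does not reproduce the study's non-verbal, visual item models, and every name and value in it is invented.

```python
# Illustrative sketch of template-based automatic item generation (AIG):
# an item model with variable slots is crossed with permissible values to
# produce many item variants. Hypothetical example only.
from itertools import product

# Item model: stem template plus the permissible values for each slot.
STEM = "A train travels {speed} km/h for {hours} hours. How far does it go?"
SLOTS = {"speed": [40, 60, 80], "hours": [2, 3, 4]}

def generate_items(stem, slots):
    """Fill every combination of slot values into the stem and compute the key."""
    items = []
    names = list(slots)
    for values in product(*(slots[n] for n in names)):
        bindings = dict(zip(names, values))
        items.append({
            "stem": stem.format(**bindings),
            "key": bindings["speed"] * bindings["hours"],  # correct answer
        })
    return items

if __name__ == "__main__":
    for item in generate_items(STEM, SLOTS)[:3]:
        print(item["stem"], "->", item["key"])
```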
Peer reviewed
Mertens, Ute; Finn, Bridgid; Lindner, Marlit Annalena – Journal of Educational Psychology, 2022
Feedback is one of the most important factors for successful learning. Contemporary computer-based learning and testing environments allow the implementation of automated feedback in a simple and efficient manner. Previous meta-analyses suggest that different types of feedback are not equally effective. This heterogeneity might depend on learner…
Descriptors: Computer Assisted Testing, Feedback (Response), Electronic Learning, Network Analysis
Peer reviewed
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Chan, Jessica; Adlof, Suzanne M.; Duff, Dawna; Mitchell, Alexis; Ragunathan, Maalavika; Ehrhorn, Anna M. – Language, Speech, and Hearing Services in Schools, 2022
Purpose: The purpose of this study was to examine the relationship between parent concerns about children's oral language, reading, and related skills and their children's performance on standardized assessments of language and reading, with a particular focus on whether those relationships differed between children recruited for in-school versus…
Descriptors: Language Skills, Reading Ability, Oral Language, Parents
Peer reviewed
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis