Showing all 14 results
Peer reviewed
Foster, Colin; Woodhead, Simon; Barton, Craig; Clark-Wilson, Alison – Educational Studies in Mathematics, 2022
In this paper, we analyse a large, opportunistic dataset of responses (N = 219,826) to online, diagnostic multiple-choice mathematics questions, provided by 6-16-year-old UK school mathematics students (N = 7302). For each response, students were invited to indicate on a 5-point Likert-type scale how confident they were that their response was…
Descriptors: Foreign Countries, Elementary School Students, Secondary School Students, Multiple Choice Tests
Peer reviewed
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Peer reviewed
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
Combinations of different item formats are found quite often in large-scale assessments, and dimensionality analyses often indicate that such tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Peer reviewed
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
Peer reviewed
Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien – International Journal of Science and Mathematics Education, 2015
The purpose of this study was to develop a computer-based assessment for elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had 3 steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…
Descriptors: Listening Comprehension, Science Education, Computer Assisted Testing, Test Construction
Peer reviewed
Ling, Guangming – International Journal of Testing, 2016
To investigate possible iPad-related mode effects, we tested 403 8th graders in Indiana, Maryland, and New Jersey under three mode conditions through random assignment: a desktop computer, an iPad alone, and an iPad with an external keyboard. All students had used an iPad or computer for six months or longer. The 2-hour test included reading, math,…
Descriptors: Educational Testing, Computer Assisted Testing, Handheld Devices, Computers
Peer reviewed
Brantmeier, Cindy; Callender, Aimee; McDaniel, Mark – Hispania, 2013
The present study uses readings taken from social psychology texts to examine, by gender, the effects of embedded "what" questions and elaborative "why" questions on reading comprehension. During regular class time, 97 advanced second language (L2) learners of Spanish read two different vignettes, either with or without…
Descriptors: Reading Comprehension, Gender Differences, Spanish, Second Language Instruction
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Kay, Robin H.; Knaack, Liesel – Canadian Journal of Learning and Technology, 2009
The purpose of this study was to examine individual differences in attitudes toward Audience Response Systems (ARSs) in secondary school classrooms. Specifically, the impact of gender, grade, subject area, computer comfort level, participation level, and type of use were examined in 659 students. Males had significantly more positive attitudes…
Descriptors: Audience Response, Gender Differences, Secondary School Students, Feedback (Response)
Peer reviewed
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data has been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models
Peer reviewed
Wolfe, Edward W.; Manalo, Jonathan R. – ETS Research Report Series, 2005
This study examined scores from 133,906 operationally scored Test of English as a Foreign Language™ (TOEFL®) essays to determine whether the choice of composition medium has any impact on score quality for subgroups of test-takers. Results of analyses demonstrate that (a) scores assigned to word-processed essays are slightly more reliable than…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores