Showing 751 to 765 of 3,089 results
Peer reviewed
Reardon, Sean; Fahle, Erin; Kalogrides, Demetra; Podolsky, Anne; Zarate, Rosalia – Society for Research on Educational Effectiveness, 2016
Prior research demonstrates the existence of gender achievement gaps and the variation in the magnitude of these gaps across states. This paper characterizes the extent to which the variation in gender achievement gaps on standardized tests across the United States can be explained by differing state accountability test formats. A comprehensive…
Descriptors: Test Format, Gender Differences, Achievement Gap, Standardized Tests
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format
National Assessment Governing Board, 2016
Having a large-scale national assessment in the arts makes an important statement about the need for all children in our country to obtain the special benefits of learning that only the arts provide. In recognition of the importance of the arts in education, the National Assessment of Educational Progress (NAEP), also known as The Nation's Report…
Descriptors: Art Education, National Competency Tests, Guidelines, Test Content
Peer reviewed
Bae, Minryoung; Lee, Byungmin – English Teaching, 2018
This study examines the effects of text length and question type on Korean EFL readers' reading comprehension of fill-in-the-blank items in the Korean CSAT. A total of 100 Korean EFL college students participated in the study. After being divided into three proficiency groups, the participants took a reading comprehension test which consisted…
Descriptors: Test Items, Language Tests, Second Language Learning, Second Language Instruction
Deane, Paul; O'Reilly, Tenaha; Chao, Szu-Fu; Dreier, Kelsey – Grantee Submission, 2018
The purpose of the report is to explore some of the mechanisms involved in the writing process. In particular, we examine students' process data (keystroke log analysis) to uncover how students approach a knowledge-telling task using 2 different task types. In the first task, students were asked to list as many words as possible related to a…
Descriptors: Writing Processes, Prior Learning, Task Analysis, High School Students
Peer reviewed
Aviad-Levitzky, Tami; Laufer, Batia; Goldstein, Zahava – Language Assessment Quarterly, 2019
This article describes the development and validation of the new CATSS (Computer Adaptive Test of Size and Strength), which measures vocabulary knowledge in four modalities -- productive recall, receptive recall, productive recognition, and receptive recognition. In the first part of the paper we present the assumptions that underlie the test --…
Descriptors: Foreign Countries, Test Construction, Test Validity, Test Reliability
Peer reviewed
Jonick, Christine; Schneider, Jennifer; Boylan, Daniel – Accounting Education, 2017
The purpose of the research is to examine the effect of different response formats on student performance on introductory accounting exam questions. The study analyzes 1104 accounting students' responses to quantitative questions presented in two formats: multiple-choice and fill-in. Findings indicate that response format impacts student…
Descriptors: Introductory Courses, Accounting, Test Format, Multiple Choice Tests
Peer reviewed
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure and verify learners' academic readiness, learning progress, acquisition of skills, or instructional needs. This study sought to determine whether varied combinations of option arrangement and letter case in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Peer reviewed
Beserra, Vagner; Nussbaum, Miguel; Grass, Antonio – Interactive Learning Environments, 2017
When using educational video games, particularly drill-and-practice video games, there are several ways of providing an answer to a quiz. Most paper-based answer formats can be classified as either multiple-choice or constructed-response. Therefore, in the process of creating an educational drill-and-practice video game, one fundamental…
Descriptors: Multiple Choice Tests, Drills (Practice), Educational Games, Video Games
Peer reviewed
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
Karpicke, Jeffrey D. – Grantee Submission, 2017
Learning is often identified with the acquisition and encoding of new information. Reading a textbook, listening to a lecture, participating in a hands-on classroom activity, and studying a list of words in a laboratory experiment are all clear examples of learning events. Tests, on the other hand, are used to assess what was learned in a prior…
Descriptors: Learning Processes, Recall (Psychology), Testing, Retention (Psychology)
Peer reviewed
Bowles, Ryan P.; Pentimonti, Jill M.; Gerde, Hope K.; Montroy, Janelle J. – Journal of Psychoeducational Assessment, 2014
Letter name knowledge in the preschool ages is a strong predictor of later reading ability, but little is known about the psychometric characteristics of uppercase and lowercase letters considered together. Data from 1,113 preschoolers from diverse backgrounds on both uppercase and lowercase letter name knowledge were analyzed using Item Response…
Descriptors: Item Response Theory, Preschool Children, Alphabets, Difficulty Level
Bowles, Ryan P.; Pentimonti, Jill M.; Gerde, Hope K.; Montroy, Janelle J. – Grantee Submission, 2014
Letter name knowledge in the preschool ages is a strong predictor of later reading ability, but little is known about the psychometric characteristics of uppercase and lowercase letters considered together. Data from 1,113 preschoolers from diverse backgrounds on both uppercase and lowercase letter name knowledge were analyzed using Item Response…
Descriptors: Item Response Theory, Preschool Children, Alphabets, Difficulty Level
Peer reviewed
Lee, Guemin; Lee, Won-Chan – Applied Measurement in Education, 2016
The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate the relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…
Descriptors: Test Format, Multidimensional Scaling, Item Response Theory, Equated Scores
Peer reviewed
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff – Journal of Educational Measurement, 2016
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Sequential Approach