Showing 31 to 45 of 813 results
Peer reviewed
Direct link
Crowther, Gregory J.; Knight, Thomas A. – Advances in Physiology Education, 2023
The past approximately 15 years have seen increasing interest in defining disciplinary core concepts. Within the field of physiology, Michael, McFarland, Modell, and colleagues have published studies that defined physiology core concepts and have elaborated many of these as detailed conceptual frameworks. With such helpful definitions now in…
Descriptors: Test Format, Physiology, Higher Education, Concept Teaching
Peer reviewed
Direct link
Crisp, Victoria; Shaw, Stuart; Bramley, Tom – Assessment in Education: Principles, Policy & Practice, 2020
Item banking involves tests being constructed by selecting from a bank of pre-written questions. There are various examples of multiple-choice tests where item banking is used, but few examples involving other question types. This research explored the use of banking with structured questions. Three question writers were asked to construct…
Descriptors: Item Banks, Test Construction, Test Format, Foreign Countries
Peer reviewed
PDF on ERIC
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
What follows is a practical guide for establishing the validity of a survey for research purposes. The motivation for providing this guide is our observation that researchers, not necessarily being survey researchers per se, but wanting to use a survey method, lack a concise resource on validity. There is far more to know about surveys and survey…
Descriptors: Surveys, Test Validity, Test Construction, Test Items
Peer reviewed
PDF on ERIC
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam was used in this study and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
Peer reviewed
Direct link
Gerring, John; Pemstein, Daniel; Skaaning, Svend-Erik – Sociological Methods & Research, 2021
A key obstacle to measurement is the aggregation problem. Where indicators tap into common latent traits in theoretically meaningful ways, the problem may be solved by applying a data-informed ("inductive") measurement model, for example, factor analysis, structural equation models, or item response theory. Where they do not, researchers…
Descriptors: Test Construction, Measures (Individuals), Concept Formation, Social Science Research
Peer reviewed
Direct link
Muhammad Yoga Prabowo; Sarah Rahmadian – TEFLIN Journal: A publication on the teaching and learning of English, 2023
The outbreak of the COVID-19 pandemic has transformed the educational landscape in a way unseen before. Educational institutions are navigating between offline and online learning worldwide. Computer-based testing is rapidly taking over paper-and-pencil testing as the dominant mode of assessment. In some settings, computer-based and…
Descriptors: English (Second Language), Second Language Learning, Test Format, Language Tests
Peer reviewed
Direct link
Spratto, Elisabeth M.; Bandalos, Deborah L. – Journal of Experimental Education, 2020
Research suggests that certain characteristics of survey items may impact participants' responses. In this study we investigated the impact of several of these characteristics: vague wording, question-versus-statement phrasing, and full-versus-partial labeling of response options. We manipulated survey items per these characteristics and randomly…
Descriptors: Attitude Measures, Test Format, Test Construction, Factor Analysis
Peer reviewed
Direct link
Park, Yena; Lee, Senyung; Shin, Sun-Young – Language Testing, 2022
Despite consistent calls for authentic stimuli in listening tests for better construct representation, unscripted texts have been rarely adopted in high-stakes listening tests due to perceived inefficiency. This study details how a local academic listening test was developed using authentic unscripted audio-visual texts from the local target…
Descriptors: Listening Comprehension Tests, English for Academic Purposes, Test Construction, Foreign Students
Peer reviewed
PDF on ERIC
Sayin, Ayfer; Sata, Mehmet – International Journal of Assessment Tools in Education, 2022
The aim of the present study was to examine Turkish teacher candidates' competency levels in writing different types of test items by utilizing Rasch analysis. In addition, the effect of the expertise of the raters scoring the items written by the teacher candidates was examined within the scope of the study. 84 Turkish teacher candidates…
Descriptors: Foreign Countries, Item Response Theory, Evaluators, Expertise
Peer reviewed
PDF on ERIC
Duru, Erdinc; Ozgungor, Sevgi; Yildirim, Ozen; Duatepe-Paksu, Asuman; Duru, Sibel – International Journal of Assessment Tools in Education, 2022
The aim of this study is to develop a valid and reliable measurement tool that measures critical thinking skills of university students. Pamukkale Critical Thinking Skills Scale was developed as two separate forms; multiple choice and open-ended. The validity and reliability studies of the multiple-choice form were constructed on two different…
Descriptors: Critical Thinking, Cognitive Measurement, Test Validity, Test Reliability
Peer reviewed
Direct link
Jung Youn, Soo – Language Testing, 2023
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and…
Descriptors: Test Construction, Test Validity, Second Language Learning, Telecommunications
Peer reviewed
Direct link
Davis-Berg, Elizabeth C.; Minbiole, Julie – School Science Review, 2020
Completion rates were compared for long-form questions with a large blank answer space and for long-form questions whose answer space contained bullet-point prompts corresponding to the parts of the question. It was found that students were more likely to complete a question when bullet points were provided in the answer space.…
Descriptors: Test Format, Test Construction, Academic Achievement, Educational Testing
Peer reviewed
Direct link
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Christine G. Casey, Editor – Centers for Disease Control and Prevention, 2024
The "Morbidity and Mortality Weekly Report" ("MMWR") series of publications is published by the Office of Science, Centers for Disease Control and Prevention (CDC), U.S. Department of Health and Human Services. Articles included in this supplement are: (1) Overview and Methods for the Youth Risk Behavior Surveillance System --…
Descriptors: High School Students, At Risk Students, Health Behavior, National Surveys
Peer reviewed
Direct link
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
The computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. The multidimensional CAT (MCAT) designs differ in terms of different item selection, ability estimation, and termination methods being used. This study aims at investigating the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency