Showing 1 to 15 of 18 results
Peer reviewed
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
Mixed-format data commonly result from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
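The entry above concerns measurement models for mixed MC/CR data. As a rough, hypothetical illustration of how such mixed response types can share one likelihood (this is not the authors' model), the Python sketch below scores MC items with a dichotomous Rasch term and CR items with a partial credit term; all responses and item parameters are invented.

```python
import math

def p_mc(theta, b):
    # Dichotomous Rasch probability of a correct MC response
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_cr(theta, deltas):
    # Partial credit model: probabilities for CR scores 0..len(deltas)
    cum = [0.0]
    for d in deltas:
        cum.append(cum[-1] + (theta - d))
    exps = [math.exp(c) for c in cum]
    total = sum(exps)
    return [e / total for e in exps]

def log_lik(theta, mc, cr):
    # mc: list of (response in {0, 1}, item difficulty b)
    # cr: list of (observed score, step difficulties)
    ll = 0.0
    for x, b in mc:
        p = p_mc(theta, b)
        ll += math.log(p if x == 1 else 1.0 - p)
    for x, deltas in cr:
        ll += math.log(p_cr(theta, deltas)[x])
    return ll

# Hypothetical responses: two MC items, one CR item scored 0-2
print(log_lik(0.5, mc=[(1, -0.3), (0, 0.8)], cr=[(2, [-0.5, 0.6])]))
```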
Peer reviewed
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors with an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Grajzel, Katalin; Dumas, Denis; Acar, Selcuk – Journal of Creative Behavior, 2022
One of the best-known and most frequently used measures of creative idea generation is the Torrance Test of Creative Thinking (TTCT). The TTCT Verbal, assessing verbal ideation, contains two forms created to be used interchangeably by researchers and practitioners. However, the parallel forms reliability of the two versions of the TTCT Verbal has…
Descriptors: Test Reliability, Creative Thinking, Creativity Tests, Verbal Ability
Peer reviewed
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Peer reviewed
Springuel, R. Padraic; Wittmann, Michael C.; Thompson, John R. – Physical Review Physics Education Research, 2019
How data are collected and how they are analyzed are typically described in the literature, but how the data are encoded is often not described in detail. In this paper, we discuss how data typically gathered in PER are encoded and how the choice of encoding plays a role in data analysis. We describe the kinds of data that are found when using…
Descriptors: Physics, Educational Research, Science Education, Coding
Peer reviewed
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Peer reviewed
DeMara, Ronald F.; Bacanli, Salih S.; Bidoki, Neda; Xu, Jun; Nassiff, Edwin; Donnelly, Julie; Turgut, Damla – Journal of Educational Technology Systems, 2020
This research developed an approach to integrate the complementary benefits of digitized assessments and peer learning. Its basic premise and associated hypotheses are that pairing students into remediation peer-learning cohorts, based on fine-grained student assessments of correct and incorrect quiz answers, is an effective means of…
Descriptors: Undergraduate Students, Engineering Education, Computer Assisted Testing, Pilot Projects
Peer reviewed
Boone, William J. – CBE - Life Sciences Education, 2016
This essay describes Rasch analysis psychometric techniques and how such techniques can be used by life sciences education researchers to guide the development and use of surveys and tests. Specifically, Rasch techniques can be used to document and evaluate the measurement functioning of such instruments. Rasch techniques also allow researchers to…
Descriptors: Item Response Theory, Psychometrics, Science Education, Educational Research
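As background for the Rasch techniques the essay above describes, here is a minimal sketch (not taken from the essay) of estimating a person's ability from dichotomous responses by maximum likelihood under the Rasch model; the item difficulties and response pattern are hypothetical.

```python
import math

def estimate_theta(responses, difficulties, iters=20):
    """Newton-Raphson MLE of person ability under the dichotomous
    Rasch model, given 0/1 responses and known item difficulties."""
    theta = 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))  # dlogL/dtheta
        hess = -sum(p * (1.0 - p) for p in ps)            # d2logL/dtheta2
        theta -= grad / hess
    return theta

# Hypothetical: four items; the person answers the two easier ones correctly
print(estimate_theta([1, 1, 0, 0], [-1.0, -0.5, 0.5, 1.0]))
```

Note that the MLE is undefined for all-correct or all-incorrect response patterns, one reason operational Rasch software uses adjusted or Bayesian estimators.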
Peer reviewed
Lee, Guemin; Lee, Won-Chan – Applied Measurement in Education, 2016
The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…
Descriptors: Test Format, Multidimensional Scaling, Item Response Theory, Equated Scores
Yüksel, Hidayet Suha; Gündüz, Nevin – Online Submission, 2017
The purpose of this study is to examine the opinions of instructors working at three different universities in Ankara regarding assessment in education and the assessment methods they use in their courses within summative and formative assessment approaches. The population consists of instructors lecturing in School of Physical…
Descriptors: Foreign Countries, Formative Evaluation, Summative Evaluation, College Faculty
Peer reviewed
PDF on ERIC
Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A. – Journal of Education and Training Studies, 2015
Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…
Descriptors: Interviews, Interrater Reliability, Qualitative Research, Semi Structured Interviews
Peer reviewed
Post, Gerald V.; Hargis, Jace – Decision Sciences Journal of Innovative Education, 2012
Online education and computer-assisted instruction (CAI) have existed for years, but few general tools exist to help instructors create and evaluate lessons. Are these tools sufficient? Specifically, what elements do instructors want to see in online testing tools? This study asked instructors from various disciplines to identify and evaluate the…
Descriptors: Computer Assisted Testing, Computer Software, Test Construction, Design Preferences
Peer reviewed
Pae, Tae-Il – Language Testing, 2012
This study tracked gender differential item functioning (DIF) on the English subtest of the Korean College Scholastic Aptitude Test (KCSAT) over a nine-year period across three data points, using both the Mantel-Haenszel (MH) and item response theory likelihood ratio (IRT-LR) procedures. Further, the study identified two factors (i.e. reading…
Descriptors: Aptitude Tests, Academic Aptitude, Language Tests, Test Items
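For readers unfamiliar with the Mantel-Haenszel procedure named in the entry above, the sketch below (with invented counts, not the study's data) computes the MH common odds ratio for one item across matched ability strata and converts it to the ETS delta-scale DIF index.

```python
import math

# Hypothetical per-stratum 2x2 counts for one item:
# (reference correct, reference incorrect, focal correct, focal incorrect)
strata = [
    (40, 10, 35, 15),
    (30, 20, 25, 25),
    (15, 35, 10, 40),
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den                    # MH common odds ratio
mh_d_dif = -2.35 * math.log(alpha_mh)   # ETS delta-scale DIF index
print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {mh_d_dif:.3f}")
```

A value of alpha_MH near 1 (MH D-DIF near 0) indicates comparable odds of success for the two groups after matching on ability.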
Peer reviewed
Scarpati, Stanley E.; Wells, Craig S.; Lewis, Christine; Jirka, Stephen – Journal of Special Education, 2011
The purpose of this study was to use differential item functioning (DIF) and latent mixture model analyses to explore factors that explain performance differences on a large-scale mathematics assessment between examinees who were allowed to use a calculator or afforded item-presentation accommodations and those who did not receive the same…
Descriptors: Testing Accommodations, Test Items, Test Format, Validity
Peer reviewed
Pyle, Katie; Jones, Emily; Williams, Chris; Morrison, Jo – Educational Research, 2009
Background: All national curriculum tests in England are pre-tested as part of the development process. Differences in pupil performance between pre-test and live test are consistently found. This difference has been termed the pre-test effect. Understanding the pre-test effect is essential in the test development and selection processes and in…
Descriptors: Foreign Countries, Pretesting, Context Effect, National Curriculum