Peer reviewed
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring, and the use of suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Peer reviewed
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Grajzel, Katalin; Dumas, Denis; Acar, Selcuk – Journal of Creative Behavior, 2022
One of the best-known and most frequently used measures of creative idea generation is the Torrance Test of Creative Thinking (TTCT). The TTCT Verbal, assessing verbal ideation, contains two forms created to be used interchangeably by researchers and practitioners. However, the parallel forms reliability of the two versions of the TTCT Verbal has…
Descriptors: Test Reliability, Creative Thinking, Creativity Tests, Verbal Ability
Peer reviewed
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Peer reviewed
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Peer reviewed
DeMara, Ronald F.; Bacanli, Salih S.; Bidoki, Neda; Xu, Jun; Nassiff, Edwin; Donnelly, Julie; Turgut, Damla – Journal of Educational Technology Systems, 2020
This research developed an approach to integrate the complementary benefits of digitized assessments and peer learning. Its basic premise and associated hypotheses are that using fine-grained student assessments of correct and incorrect quiz answers to pair students into remediation peer-learning cohorts is an effective means of…
Descriptors: Undergraduate Students, Engineering Education, Computer Assisted Testing, Pilot Projects
Peer reviewed
Lee, Guemin; Lee, Won-Chan – Applied Measurement in Education, 2016
The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…
Descriptors: Test Format, Multidimensional Scaling, Item Response Theory, Equated Scores
Yüksel, Hidayet Suha; Gündüz, Nevin – Online Submission, 2017
The purpose of this study is to examine opinions of the instructors working in three different universities in Ankara regarding assessment in education and assessment methods they use in their courses within the summative assessment and formative assessment approaches. The population is formed by instructors lecturing in School of Physical…
Descriptors: Foreign Countries, Formative Evaluation, Summative Evaluation, College Faculty
Peer reviewed
Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A. – Journal of Education and Training Studies, 2015
Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…
Descriptors: Interviews, Interrater Reliability, Qualitative Research, Semi Structured Interviews
Peer reviewed
Post, Gerald V.; Hargis, Jace – Decision Sciences Journal of Innovative Education, 2012
Online education and computer-assisted instruction (CAI) have existed for years, but few general tools exist to help instructors create and evaluate lessons. Are these tools sufficient? Specifically, what elements do instructors want to see in online testing tools? This study asked instructors from various disciplines to identify and evaluate the…
Descriptors: Computer Assisted Testing, Computer Software, Test Construction, Design Preferences
Peer reviewed
Fauskanger, Janne; Mosvold, Reidar – North American Chapter of the International Group for the Psychology of Mathematics Education, 2012
The mathematical knowledge for teaching (MKT) measures have become widely used among researchers both within and outside the U.S. Despite the apparent success, the MKT measures and underlying framework have been subject to criticism. The multiple-choice format of the items has been criticized, and some critics have suggested that opening up the…
Descriptors: Foreign Countries, Elementary School Teachers, Secondary School Teachers, Mathematics Teachers
Peer reviewed
Scarpati, Stanley E.; Wells, Craig S.; Lewis, Christine; Jirka, Stephen – Journal of Special Education, 2011
The purpose of this study was to use differential item functioning (DIF) and latent mixture model analyses to explore factors that explain performance differences on a large-scale mathematics assessment between examinees allowed to use a calculator or who were afforded item presentation accommodations versus those who did not receive the same…
Descriptors: Testing Accommodations, Test Items, Test Format, Validity
Hart, Ray; Casserly, Michael; Uzzell, Renata; Palacios, Moses; Corcoran, Amanda; Spurgeon, Liz – Council of the Great City Schools, 2015
There has been little data collected on how much testing actually goes on in America's schools and how the results are used. So in the Spring of 2014, the Council staff developed and launched a survey of assessment practices. This report presents the findings from that survey and subsequent Council analysis and review of the data. It also offers…
Descriptors: Urban Schools, Student Evaluation, Testing Programs, Testing
Peer reviewed
Pyle, Katie; Jones, Emily; Williams, Chris; Morrison, Jo – Educational Research, 2009
Background: All national curriculum tests in England are pre-tested as part of the development process. Differences in pupil performance between pre-test and live test are consistently found. This difference has been termed the pre-test effect. Understanding the pre-test effect is essential in the test development and selection processes and in…
Descriptors: Foreign Countries, Pretesting, Context Effect, National Curriculum
Peer reviewed
Katz, Barry M.; McSweeney, Maryellen – Journal of Experimental Education, 1984
This paper developed and illustrated a technique to analyze categorical data when subjects can appear in any number of categories for multigroup designs. Post hoc procedures to be used in conjunction with the presented statistical test are also developed. The technique is a large sample technique whose small sample properties are as yet unknown.…
Descriptors: Data Analysis, Hypothesis Testing, Mathematical Models, Research Methodology