Showing 1 to 15 of 48 results
Peer reviewed
Huiming Ding; Matt Homer – Advances in Health Sciences Education, 2025
Summative assessments are often underused for feedback, despite being rich in data on students' applied knowledge and clinical and professional skills. To better inform teaching and student support, this study aims to gain insights from summative assessments by profiling students' performance patterns and identifying those students…
Descriptors: Summative Evaluation, Profiles, Statistical Analysis, Outcomes of Education
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Yosuke Mikami – Language Testing in Asia, 2024
This study proposes a classification method for multiple text reading test formats in English language proficiency tests. A preliminary study involving 11 proficiency tests revealed two tests that fit the scope of the main study. Results show that multiple text reading test formats use complementary texts rather than conflicting texts. As for…
Descriptors: Language Proficiency, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Peer reviewed
Kang, Hyeon-Ah; Han, Suhwa; Kim, Doyoung; Kao, Shu-Chuan – Educational and Psychological Measurement, 2022
The development of technology-enhanced innovative items calls for practical models that can describe polytomous testlet items. In this study, we evaluate four measurement models that can characterize polytomous items administered in testlets: (a) generalized partial credit model (GPCM), (b) testlet-as-a-polytomous-item model (TPIM), (c)…
Descriptors: Goodness of Fit, Item Response Theory, Test Items, Scoring
Peer reviewed
Sahu, Archana; Bhowmick, Plaban Kumar – IEEE Transactions on Learning Technologies, 2020
In this paper, we studied different automatic short answer grading (ASAG) systems to provide a comprehensive view of the feature spaces explored by previous works. While the performance reported in previous works has been encouraging, a systematic study of the features is lacking. Apart from providing systematic feature space exploration, we also…
Descriptors: Automation, Grading, Test Format, Artificial Intelligence
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used in noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options according to their preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Peer reviewed
Aryadoust, Vahid; Luo, Lan – Language Testing, 2023
This study reviewed conceptualizations and operationalizations of second language (L2) listening constructs. A total of 157 peer-reviewed papers published in 19 journals in applied linguistics were coded for (1) publication year, author, source title, location, language, and reliability and (2) listening subskills, cognitive processes, attributes,…
Descriptors: Test Format, Listening Comprehension Tests, Second Language Learning, Second Language Instruction
Peer reviewed
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information, and the effectiveness and economy of administering them may have further contributed to their popularity beyond educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
Mertens, Ute; Finn, Bridgid; Lindner, Marlit Annalena – Journal of Educational Psychology, 2022
Feedback is one of the most important factors for successful learning. Contemporary computer-based learning and testing environments allow the implementation of automated feedback in a simple and efficient manner. Previous meta-analyses suggest that different types of feedback are not equally effective. This heterogeneity might depend on learner…
Descriptors: Computer Assisted Testing, Feedback (Response), Electronic Learning, Network Analysis
Peer reviewed | Full text PDF on ERIC
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test scores. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed | Full text PDF on ERIC
Delican, Burak – International Journal of Curriculum and Instruction, 2022
This study examined the questions in the Turkish Course (2, 3, 4) Worksheets against various classification systems. To this end, the questions in the worksheets were evaluated using the document analysis technique, in keeping with the qualitative research design. During the research process, Turkish Course…
Descriptors: Worksheets, Elementary School Students, Turkish, Classification
Peer reviewed | Full text PDF on ERIC
Çiftçi, Sabahattin – International Electronic Journal of Elementary Education, 2019
Open-ended exams and multiple-choice exams are two examination formats widely preferred in the educational sciences. Each has several advantages as well as some limitations, and these affect the use of these exams both in national examinations and in the exams administered by…
Descriptors: Multiple Choice Tests, Test Format, Preservice Teachers, Figurative Language
Peer reviewed | Full text PDF on ERIC
Tim Stoeckel; Tomoko Ishii – Vocabulary Learning and Instruction, 2024
In an upcoming coverage-comprehension study, we plan to assess learners' meaning-recall knowledge of words as they occur in the study's reading passage. As several meaning-recall test formats exist, the purpose of this small-scale study (N = 10) was to determine which of three formats was most similar to a criterion interview regarding mean score…
Descriptors: Vocabulary Development, Language Tests, Second Language Learning, Classification
Peer reviewed
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests