Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Raymond, Mark R.; Stevens, Craig; Bucak, S. Deniz – Advances in Health Sciences Education, 2019
Research suggests that the three-option format is optimal for multiple choice questions (MCQs). This conclusion is supported by numerous studies showing that most distractors (i.e., incorrect answers) are selected by so few examinees that they are essentially nonfunctional. However, nearly all studies have defined a distractor as nonfunctional if…
Descriptors: Multiple Choice Tests, Credentials, Test Format, Test Items
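The nonfunctionality criterion alluded to in the abstract above is usually a selection-rate cutoff. A minimal sketch in Python, assuming the common (but not universal) 5% rule; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def nonfunctional_distractors(responses, key, n_options=4, cutoff=0.05):
    """Flag distractors chosen by fewer than `cutoff` of examinees.

    responses: 1-D array of selected options (0..n_options-1) for one item
    key: index of the correct option for that item
    """
    flagged = []
    for option in range(n_options):
        if option == key:
            continue  # the keyed answer is not a distractor
        rate = np.mean(responses == option)
        if rate < cutoff:
            flagged.append((option, rate))
    return flagged

# Example: option 2 is keyed; option 3 draws only ~2% of examinees
rng = np.random.default_rng(0)
item = rng.choice(4, size=1000, p=[0.18, 0.10, 0.70, 0.02])
print(nonfunctional_distractors(item, key=2))
```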
Farhat, Naha J.; Stanford, Courtney; Ruder, Suzanne M. – Journal of Chemical Education, 2019
Assessments can provide instructors and students with valuable information regarding students' level of knowledge and understanding, in order to improve both teaching and learning. In this study, we analyzed departmental assessment quizzes given to students at the start of Organic Chemistry 2 over an eight-year period. This assessment quiz was…
Descriptors: Organic Chemistry, Teaching Methods, Science Instruction, Science Tests
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
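One simple way to probe the position effect Albano et al. describe is to regress an item's difficulty (proportion correct) on its serial position across forms. A hedged sketch with illustrative toy values, not the study's data:

```python
import numpy as np

# Toy data: proportion correct for the same item administered at
# different serial positions (illustrative values only).
positions = np.array([1, 5, 10, 15, 20, 25, 30])
p_correct = np.array([0.82, 0.80, 0.77, 0.76, 0.72, 0.70, 0.66])

# Least-squares slope: change in p-value per one-position shift.
slope, intercept = np.polyfit(positions, p_correct, 1)
print(f"difficulty drift: {slope:.4f} per position")
```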
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with certain time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Oftentimes examinees do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
Petscher, Yaacov; Pfeiffer, Steven I. – Assessment for Effective Intervention, 2020
The authors evaluated measurement-level, factor-level, item-level, and scale-level revisions to the "Gifted Rating Scales-School Form" (GRS-S). Measurement-level considerations tested the extent to which treating the Likert-type scale rating as categorical or continuous produced different fit across unidimensional, correlated trait, and…
Descriptors: Psychometrics, Academically Gifted, Rating Scales, Factor Structure
Soto, Christian; Gutierrez de Blume, Antonio P.; Carrasco Bernal, Macarena Andrea; Contreras Castro, Marco Antonio – Journal of Research in Reading, 2020
We explored whether performance differences exist between proficient and poor readers on implicit text information. Next, we explored whether indices of meta-cognitive monitoring predicted reading performance. Finally, we examined whether poor and proficient readers exhibited distinct meta-cognitive profiles with respect to reading comprehension…
Descriptors: Cues, Metacognition, Reading Comprehension, Undergraduate Students
Michaelides, Michalis P.; Ivanova, Militsa; Nicolaou, Christiana – International Journal of Testing, 2020
The study examined the relationship between examinees' test-taking effort and their accuracy rate on items from the PISA 2015 assessment. The 10% normative threshold method was applied on Science multiple-choice items in the Cyprus sample to detect rapid guessing behavior. Results showed that the extent of rapid guessing across simple and complex…
Descriptors: Accuracy, Multiple Choice Tests, International Assessment, Achievement Tests
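The 10% normative threshold method named in the abstract sets a per-item response-time threshold at 10% of the item's mean response time; responses faster than that are flagged as rapid guesses. A minimal sketch, with the commonly used 10-second cap included as an assumption:

```python
import numpy as np

def nt10_flags(rt, cap=10.0):
    """Flag rapid guesses with the 10% normative threshold method.

    rt: examinee-by-item matrix of response times in seconds.
    Each item's threshold is 10% of its mean response time; the
    10-second cap follows common practice (an assumption here).
    """
    thresholds = np.minimum(0.10 * np.nanmean(rt, axis=0), cap)
    return rt < thresholds  # True where a response looks like a rapid guess

rng = np.random.default_rng(1)
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(200, 20))  # solution behavior
guess = rng.random((200, 20)) < 0.10                     # seed 10% rapid guesses
rt[guess] = rng.uniform(0.5, 2.0, size=guess.sum())
flags = nt10_flags(rt)
print("flagged rate per item:", flags.mean(axis=0).round(2)[:5])
```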
Zyzik, Eve – Language Learning, 2020
This article examines the performance of heritage speakers on a bimodal acceptability judgment task that targeted morphologically complex words. A major goal of the study was to compare participants' acceptance of conventional and creative words. Data were collected from 57 adult heritage speakers of Spanish who were subsequently divided into two…
Descriptors: Creativity, Bilingualism, Spanish, Comparative Analysis
Nakamura, Yoshie Tomozumi – European Journal of Training and Development, 2021
Purpose: The purpose of this study is to better understand what components impact the creation of organizational leaders' social capital. The study further seeks to illuminate the effects of participating in a leadership development seminar on the creation of social capital in global contexts. Design/methodology/approach: The data were collected…
Descriptors: Seminars, Social Capital, Leadership Training, Administrator Attitudes
Kuhlmann, Beatrice G.; Brubaker, Matthew S.; Pfeiffer, Theresa; Naveh-Benjamin, Moshe – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
Few studies have compared interference-based forgetting in item versus associative memory. The memory-system dependent forgetting hypothesis (Hardt, Nader, & Nadel, 2013) predicts that effects of interference on associative memory should be minimal because its hippocampal representation allows pattern separation even of highly similar…
Descriptors: Older Adults, Memory, Comparative Analysis, Interference (Learning)
Paul J. Walter; Edward Nuhfer; Crisel Suarez – Numeracy, 2021
We introduce an approach for making a quantitative comparison of the item response curves (IRCs) of any two populations on a multiple-choice test instrument. In this study, we employ simulated and actual data. We apply our approach to a dataset of 12,187 participants on the 25-item Science Literacy Concept Inventory (SLCI), which includes ample…
Descriptors: Item Analysis, Multiple Choice Tests, Simulation, Data Analysis
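The snippet does not give Walter et al.'s exact IRC comparison metric, but a standard empirical IRC plots an item's proportion correct within total-score bins. The sketch below compares two populations that way; the binning scheme and the toy one-parameter response model are assumptions:

```python
import numpy as np

def empirical_irc(total, correct, bins):
    """Proportion correct on one item within total-score bins."""
    idx = np.digitize(total, bins)
    return np.array([correct[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(1, len(bins))])

rng = np.random.default_rng(2)
n, k = 500, 25                             # examinees and items (SLCI-sized, illustrative)
theta = rng.normal(size=n)                 # latent ability
p = 1.0 / (1.0 + np.exp(-theta))           # toy success probability, equal items
resp = (rng.random((n, k)) < p[:, None]).astype(int)
total = resp.sum(axis=1)

bins = np.linspace(0, k, 6)                # five equal-width score bins
grp = rng.random(n) < 0.5                  # toy split into two populations
irc_a = empirical_irc(total[grp], resp[grp, 0], bins)
irc_b = empirical_irc(total[~grp], resp[~grp, 0], bins)
print("IRC difference by score bin:", np.round(irc_a - irc_b, 3))
```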
Ronen Kasperski; Merav E. Hemi – Assessment in Education: Principles, Policy & Practice, 2024
Educators' Social-Emotional Learning (SEL) is crucial for fostering positive, supportive, and effective learning environments. This study seeks to improve SEL assessment among educators by addressing limitations of the previous EduSEL questionnaire. Study 1 established convergent validity by comparing EduSEL with a validated SEL questionnaire.…
Descriptors: Social Emotional Learning, Factor Structure, Factor Analysis, Teacher Attitudes
Jechun An – ProQuest LLC, 2024
Students' responses to Word Dictation curriculum-based measurement (CBM) in writing tend to include many missing values, especially items not reached due to the three-minute test time limit. A large number of non-ignorable not-reached responses in Word Dictation can be accommodated using alternative item response theory (IRT) approaches. In…
Descriptors: Item Response Theory, Elementary School Students, Writing Difficulties, Writing Evaluation
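Alternative IRT treatments of not-reached responses typically differ in how those responses are scored before estimation: left missing, or scored as incorrect. A small sketch of the two codings, assuming a person-by-item 0/1 matrix in which trailing NaNs mark items not reached:

```python
import numpy as np

def score_not_reached(resp, as_incorrect=False):
    """Recode not-reached (NaN) responses before IRT estimation.

    as_incorrect=False leaves them missing, so estimation ignores them;
    as_incorrect=True scores them 0, the traditional harsher treatment.
    """
    out = resp.copy()
    if as_incorrect:
        out[np.isnan(out)] = 0.0
    return out

resp = np.array([[1, 0, 1, np.nan, np.nan],   # examinee stopped after item 3
                 [1, 1, 1, 1, 0]], dtype=float)
print(score_not_reached(resp, as_incorrect=True))
```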
Clark, D. Angus; Bowles, Ryan P. – Grantee Submission, 2018
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead, as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present…
Descriptors: Factor Analysis, Goodness of Fit, Factor Structure, Monte Carlo Methods
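Monte Carlo studies of this kind begin by simulating data from a known structure with categorical indicators. A minimal sketch of that generation step, assuming a unidimensional two-parameter logistic model; computing the fit indices themselves would be left to dedicated IFA software:

```python
import numpy as np

def simulate_2pl(n_persons, a, b, rng):
    """Dichotomous item responses under a unidimensional 2PL model."""
    theta = rng.normal(size=n_persons)                    # latent trait
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # response probabilities
    return (rng.random((n_persons, len(b))) < p).astype(int)

rng = np.random.default_rng(3)
a = rng.uniform(0.8, 2.0, size=10)   # item discriminations
b = rng.normal(size=10)              # item difficulties
data = simulate_2pl(1000, a, b, rng)
print(data.mean(axis=0).round(2))    # observed item p-values
```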
Dai, Shenghai – ProQuest LLC, 2017
This dissertation investigates the impact of missing data and evaluates the performance of five selected methods for handling missing responses in the implementation of Cognitive Diagnostic Models (CDMs). The five methods are: a) treating missing data as incorrect (IN), b) person mean imputation (PM), c) two-way imputation (TW), d)…
Descriptors: Data, Research Problems, Research Methodology, Models
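Two of the listed methods have simple closed forms: person-mean imputation substitutes the examinee's observed mean, and two-way imputation adjusts it by the item mean relative to the grand mean. A sketch assuming a 0/1 response matrix with NaN for missing entries; clipping to [0, 1] is an added convenience here, not part of the original definitions:

```python
import numpy as np

def two_way_impute(x):
    """Two-way imputation: person mean + item mean - grand mean."""
    pm = np.nanmean(x, axis=1, keepdims=True)   # person means
    im = np.nanmean(x, axis=0, keepdims=True)   # item means
    gm = np.nanmean(x)                          # grand mean
    filled = pm + im - gm
    out = np.where(np.isnan(x), filled, x)
    return np.clip(out, 0, 1)

x = np.array([[1, 0, np.nan, 1],
              [0, np.nan, 0, 1],
              [1, 1, 1, np.nan]], dtype=float)
print(two_way_impute(x).round(2))
```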