Publication Date
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet Standards | 1 |
Thacker, Nathan L. – ProQuest LLC, 2023
Organic chemistry is well known as a difficult course that is necessary for many careers in the sciences, and as a result it has garnered interest in research on ways to improve student learning and comprehension. One potential approach involves using eye-tracking techniques to understand how students visually examine questions. Organic chemistry involves…
Descriptors: Science Instruction, Multiple Choice Tests, Organic Chemistry, Science Tests
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Beth Doll; Farya Haider; Jay Jeffries – Journal of Psychoeducational Assessment, 2025
More than a decade after two comprehensive examinations of the technical properties of the ClassMaps Survey (CMS), this study reexamined the structure and internal consistency of the scale and, for the first time, examined its measurement invariance across gender and school levels. Participants were 1,083 elementary and middle level students from…
Descriptors: Student Surveys, Attitude Measures, Student Attitudes, Elementary School Students
Thomas K. F. Chiu; Murat Çoban; Ismaila Temitayo Sanusi; Musa Adekunle Ayanwale – Educational Technology Research and Development, 2025
Nurturing student artificial intelligence (AI) competency is crucial in the future of K-12 education. Students with strong AI competency should be able to ethically, safely, healthily, and productively integrate AI into their learning. Research on student AI competency is still in its infancy, primarily focusing on theoretical and professional…
Descriptors: Artificial Intelligence, Digital Literacy, Competence, Self Efficacy
Gamze Erdem Cosgun – British Educational Research Journal, 2025
Artificial intelligence (AI) plays a crucial role in the digitalisation of teacher training. Although AI has enormous potential, little is known about how pre-service teachers perceive and utilise AI tools in professional practice. Hence, this study, guided by the Unified Theory of Acceptance and Use of Technology framework,…
Descriptors: Artificial Intelligence, Digital Literacy, Preservice Teachers, Test Construction
Hauenstein, Clifford E.; Embretson, Susan E. – Journal of Cognitive Education and Psychology, 2020
The Concept Formation subtest of the Woodcock Johnson Tests of Cognitive Abilities represents a dynamic test due to continual provision of feedback from examiner to examinee. Yet, the original scoring protocol for the test largely ignores this dynamic structure. The current analysis applies a dynamic adaptation of an explanatory item response…
Descriptors: Test Items, Difficulty Level, Cognitive Tests, Cognitive Ability
Bayrakci, Mustafa; Karacaoglu, Ömer Cem – International Journal of Curriculum and Instruction, 2020
Learning outcomes are the first and most essential element of a curriculum, and their correct and rigorous determination is very important to ensure that formal education in schools is well planned and that curricula are designed and applied effectively, because the other elements of the curriculum, which are content,…
Descriptors: Foreign Countries, Occupational Tests, Curriculum Development, Teaching (Occupation)
Myszkowski, Nils – Journal of Intelligence, 2020
Raven's Standard Progressive Matrices (Raven 1941) is a widely used 60-item measure of general mental ability. It was recently suggested that, for situations where taking this test is too time consuming, a shorter version, comprising only the last series of the Standard Progressive Matrices (Myszkowski and Storme 2018), could be used, while…
Descriptors: Intelligence Tests, Psychometrics, Nonparametric Statistics, Item Response Theory
Partchev, Ivailo – Journal of Intelligence, 2020
We analyze a 12-item version of Raven's Standard Progressive Matrices test, traditionally scored with the sum score. We discuss some important differences between assessment in practice and psychometric modelling. We demonstrate some advanced diagnostic tools in the freely available R package, dexter. We find that the first item in the test…
Descriptors: Intelligence Tests, Scores, Psychometrics, Diagnostic Tests
Metsämuuronen, Jari – International Journal of Educational Methodology, 2020
Kelley's Discrimination Index (DI) is a simple and robust classical non-parametric shortcut for estimating item discrimination power (IDP) in practical educational settings. Unlike the item-total correlation, DI can reach the ultimate values of +1 and -1, and it is stable against outliers. Because of its computational ease, DI is…
Descriptors: Test Items, Computation, Item Analysis, Nonparametric Statistics
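The index described in the abstract above is the difference in an item's proportion correct between high- and low-scoring groups, conventionally the upper and lower 27% of examinees by total score. A minimal sketch, assuming that convention (the function name and the toy data are illustrative, not from the article):

```python
import numpy as np

def kelley_di(item_scores, total_scores, tail=0.27):
    """Kelley's Discrimination Index: proportion correct in the upper
    tail group minus proportion correct in the lower tail group
    (27% tails by convention)."""
    n = len(total_scores)
    k = max(1, int(round(n * tail)))
    order = np.argsort(total_scores)      # ascending by total score
    low, high = order[:k], order[-k:]
    return item_scores[high].mean() - item_scores[low].mean()

# An item that everyone in the top group answers correctly and everyone
# in the bottom group misses reaches the ultimate value of +1.
totals = np.arange(100)                   # 100 examinees, distinct totals
item = (totals >= 50).astype(float)       # perfectly discriminating item
print(kelley_di(item, totals))            # → 1.0
```

Reversing the item's scores (`1 - item`) yields the opposite extreme, -1, which the item-total correlation cannot attain exactly, as the abstract notes.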
Liu, Yue; Cheng, Ying; Liu, Hongyun – Educational and Psychological Measurement, 2020
The responses of non-effortful test-takers may have serious consequences, as non-effortful responses can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals and to improve item…
Descriptors: Item Response Theory, Test Wiseness, Response Style (Tests), Reaction Time
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
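The basic MCSS logic the module covers can be sketched as: simulate response data from known ("true") parameters, estimate, and summarize recovery across replications. In this sketch the crude proportion-based difficulty estimator is a deliberately simple stand-in (not the module's method), chosen so the simulation visibly measures its bias:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rasch(theta, b):
    """Generate 0/1 responses under the Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

n_persons, n_items, reps = 2000, 20, 50
true_b = np.linspace(-2, 2, n_items)      # known difficulties

# Monte Carlo loop: each replication draws new persons and new data,
# then the estimator's error is averaged across replications.
bias = np.zeros(n_items)
for _ in range(reps):
    theta = rng.normal(0, 1, n_persons)
    data = simulate_rasch(theta, true_b)
    p_correct = data.mean(axis=0)
    b_hat = -np.log(p_correct / (1 - p_correct))   # crude logit estimate
    bias += (b_hat - true_b) / reps
```

The averaged `bias` shows the crude estimator shrinking extreme difficulties toward zero, exactly the kind of finding an MCSS is designed to quantify when analytic solutions are impractical.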
Anderson, Daniel; Rowley, Brock; Stegenga, Sondra; Irvin, P. Shawn; Rosenberg, Joshua M. – Educational Measurement: Issues and Practice, 2020
Validity evidence based on test content is critical to meaningful interpretation of test scores. Within high-stakes testing and accountability frameworks, content-related validity evidence is typically gathered via alignment studies, with panels of experts providing qualitative judgments on the degree to which test items align with the…
Descriptors: Content Validity, Artificial Intelligence, Test Items, Vocabulary
Höhne, Jan Karem; Yan, Ting – International Journal of Social Research Methodology, 2020
Web surveys are an established data collection mode that uses written language to provide information. The written language is accompanied by visual elements, such as presentation formats and shapes. However, research has shown that visual elements influence response behavior because respondents sometimes use interpretive heuristics to make sense…
Descriptors: Heuristics, Visual Aids, Online Surveys, Response Style (Tests)
O'Neill, Thomas R.; Gregg, Justin L.; Peabody, Michael R. – Applied Measurement in Education, 2020
This study addresses equating issues with varying sample sizes using the Rasch model by examining how sample size affects the stability of item calibrations and person ability estimates. A resampling design was used to create 9 sample size conditions (200, 100, 50, 45, 40, 35, 30, 25, and 20), each replicated 10 times. Items were recalibrated…
Descriptors: Sample Size, Equated Scores, Item Response Theory, Raw Scores
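The resampling design described above can be sketched as follows. The centered-logit difficulty is a hypothetical stand-in for a real Rasch calibration (an actual study would use conditional or joint maximum likelihood), and the three sample sizes echo a subset of the nine conditions:

```python
import numpy as np

rng = np.random.default_rng(7)

def crude_difficulty(data):
    """Centered logit of the proportion incorrect: a rough stand-in
    for a Rasch item calibration."""
    p = data.mean(axis=0).clip(0.02, 0.98)   # guard the logit at extremes
    b = -np.log(p / (1 - p))
    return b - b.mean()                       # center: Rasch identification

# Simulate one large "full sample", calibrate once, then recalibrate on
# random subsamples of decreasing size and track drift from the full run.
n_items = 15
theta = rng.normal(0, 1, 5000)
b_true = np.linspace(-1.5, 1.5, n_items)
p = 1 / (1 + np.exp(-(theta[:, None] - b_true[None, :])))
data = (rng.random(p.shape) < p).astype(int)
b_full = crude_difficulty(data)

rmse_by_n = {}
for n in (200, 50, 20):                       # three of the nine conditions
    reps = []
    for _ in range(10):                       # 10 replications per condition
        idx = rng.choice(len(theta), size=n, replace=False)
        reps.append(np.sqrt(((crude_difficulty(data[idx]) - b_full) ** 2).mean()))
    rmse_by_n[n] = float(np.mean(reps))
```

The RMSE of the recalibrated difficulties grows as the subsample shrinks, which is the stability-versus-sample-size relationship the study examines.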
