Publication Date
| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Haeruddin; Prasetyo, Zuhdan Kun; Supahar – International Journal of Instruction, 2020
This study aims to develop an instrument for measuring metacognition in solving physics problems among college students. It used a Research and Development (R&D) approach together with a non-test instrument development model. The instrument took the form of a questionnaire whose items offered four response choices scored on a scale from 1 to 4.…
Descriptors: Physics, Problem Solving, Metacognition, Cognitive Tests
Carli, Marta; Lippiello, Stefania; Pantano, Ornella; Perona, Mario; Tormen, Giuseppe – Physical Review Physics Education Research, 2020
In this article, we discuss the development and the administration of a multiple-choice test, which we named "Test of Calculus and Vectors in Mathematics and Physics" (TCV-MP), aimed at comparing students' ability to answer questions on derivatives, integrals, and vectors in a purely mathematical context and in the context of physics.…
Descriptors: Mathematics Tests, Science Tests, Multiple Choice Tests, Calculus
Tsubaki, Michiko; Ogawara, Wataru; Tanaka, Kenta – International Electronic Journal of Mathematics Education, 2020
This study proposes and examines an analytical method aimed at improving the quality of education and learning by analyzing answers to full descriptive questions in probability and statistics, turning learners' comprehension of the learned content into variables expressed as answer characteristics based on actual student mistakes. First, we proposed…
Descriptors: Probability, Statistics, Comprehension, Learning Strategies
Ralph, Vanessa R.; Lewis, Scott E. – Journal of Chemical Education, 2020
Calls for assessments incorporating representations beyond the symbolic level (e.g., chemical reactions and formulas) have encouraged assessment designers to choose from a variety of representations in the design of chemistry assessments. This work expands on prior work in considering how representations are incorporated within assessments. First,…
Descriptors: Chemistry, Science Instruction, Science Tests, Test Construction
Council of Chief State School Officers, 2020
Any body of research evolves over time. Previous understandings become more nuanced, ideas are supported or refuted, and, eventually, we arrive at a clearer view of the issue. The research on score comparability across computerized devices is no exception. CCSSO [Council of Chief State School Officers] and the Center for Assessment have published…
Descriptors: Computer Assisted Testing, Scores, Intermode Differences, Influence of Technology
Jinjin Huang – ProQuest LLC, 2020
Measurement invariance is crucial for an effective and valid measure of a construct. Invariance holds when the latent trait varies consistently across subgroups; in other words, the mean differences among subgroups are only due to true latent ability differences. Differential item functioning (DIF) occurs when measurement invariance is violated.…
Descriptors: Robustness (Statistics), Item Response Theory, Test Items, Item Analysis
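To make the abstract's point concrete, here is a minimal sketch of one common DIF screen, the Mantel-Haenszel procedure, which flags items whose behavior differs between groups after matching on overall performance. The simulated data, the 0.6-logit DIF size, and the use of the rest score for matching are illustrative assumptions, not the specific conditions examined in the dissertation.

```python
import numpy as np

def mantel_haenszel_or(x, group, match):
    """Common odds ratio for the studied item across matching-score strata.
    x: 0/1 item responses; group: 0 = reference, 1 = focal; match: stratifier."""
    num = den = 0.0
    for s in np.unique(match):
        m = match == s
        ref, foc = x[m & (group == 0)], x[m & (group == 1)]
        n = m.sum()
        if len(ref) == 0 or len(foc) == 0:
            continue                                     # stratum has only one group
        num += ref.sum() * (len(foc) - foc.sum()) / n    # A_k * D_k / N_k
        den += (len(ref) - ref.sum()) * foc.sum() / n    # B_k * C_k / N_k
    return num / den                                     # near 1.0 => little DIF

rng = np.random.default_rng(5)
theta = rng.normal(size=2000)                            # person abilities
group = rng.integers(0, 2, size=2000)                    # 0 = reference, 1 = focal
B = np.tile(np.linspace(-1.5, 1.5, 10), (2000, 1))       # person x item difficulties
B[:, 0] += 0.6 * group                                   # item 1 is harder for the focal group (DIF)
X = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - B))))
rest = X.sum(axis=1) - X[:, 0]                           # matching score excluding the studied item
print(mantel_haenszel_or(X[:, 0], group, rest))          # well above 1 because the focal group is disadvantaged
```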
Arikan, Çigdem Akin – International Journal of Progressive Education, 2018
The main purpose of this study is to compare the performance of test forms equated with a midi anchor test versus a mini anchor test, based on item response theory. The research used simulated data generated according to the Rasch model. In order to equate the two test forms, the anchor-item nonequivalent groups design (internal anchor test) was…
Descriptors: Equated Scores, Comparative Analysis, Item Response Theory, Tests
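A minimal sketch of the kind of simulated data this study describes: dichotomous responses generated under the Rasch model for two forms administered to nonequivalent groups and linked by an internal anchor block. Form lengths, anchor length, and the group-mean difference are illustrative assumptions, not the study's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(42)

def rasch_responses(theta, b):
    """0/1 responses with P(correct) = 1 / (1 + exp(-(theta - b)))."""
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return rng.binomial(1, p)

b_anchor = rng.normal(0, 1, size=10)       # internal anchor items, common to both forms
b_unique_x = rng.normal(0, 1, size=30)     # items unique to Form X
b_unique_y = rng.normal(0, 1, size=30)     # items unique to Form Y

theta_x = rng.normal(0.0, 1, size=1000)    # group taking Form X
theta_y = rng.normal(0.3, 1, size=1000)    # nonequivalent group taking Form Y

form_x = rasch_responses(theta_x, np.concatenate([b_unique_x, b_anchor]))
form_y = rasch_responses(theta_y, np.concatenate([b_unique_y, b_anchor]))
print(form_x.shape, form_y.shape)          # (1000, 40) (1000, 40)
```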
Timofte, Roxana S.; Siminiciuc, Laura – Acta Didactica Napocensia, 2018
The aim of this article was to develop and validate an instrument to measure chemistry students' ability regarding 'physical bonding'. Twenty-four items were developed by mapping items to the cognitive levels described by the Marzano taxonomy, and N = 73 students were evaluated. Four items exhibited an MNSQ > 1.3 and were eliminated…
Descriptors: Item Response Theory, Test Construction, Science Tests, Taxonomy
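For readers unfamiliar with the "MNSQ > 1.3" elimination rule mentioned above, here is a minimal sketch of the Rasch infit mean-square statistic. In practice theta and b would come from a Rasch calibration of real responses; here they are simulated, so the fit values hover around 1 and few or no items are flagged.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = rng.normal(size=73)                  # the abstract reports N = 73 students
b = rng.normal(size=24)                      # 24 developed items
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))   # Rasch model probabilities
X = rng.binomial(1, p)                       # simulated 0/1 responses

w = p * (1 - p)                              # binomial variance of each response
infit = ((X - p) ** 2).sum(axis=0) / w.sum(axis=0)     # information-weighted fit per item
flagged = np.where(infit > 1.3)[0]           # items a 1.3 cutoff would eliminate
print(infit.round(2), flagged)
```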
Kelly, William E.; Daughtry, Don – College Student Journal, 2018
This study developed an abbreviated form of Barron's (1953) Ego Strength Scale for use in research among college student samples. A version of Barron's scale was administered to 100 undergraduate college students. Using item-total score correlations and internal consistency, the scale was reduced to 18 items (Es18). The Es18 possessed adequate…
Descriptors: Undergraduate Students, Self Concept Measures, Test Length, Scores
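A minimal sketch (not the authors' code) of the two statistics named in the abstract: corrected item-total correlations and internal consistency (Cronbach's alpha). The simulated dichotomous response matrix and the 0.30 retention cutoff mentioned in the final comment are illustrative assumptions.

```python
import numpy as np

def corrected_item_total(X):
    """Correlation of each item with the sum of the remaining items."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))                                     # common construct
X = (trait + rng.normal(scale=0.8, size=(100, 24)) > 0).astype(float) # 100 x 24 true/false responses
r = corrected_item_total(X)
print(r.round(2), cronbach_alpha(X))   # items with low r (e.g. < .30) would be candidates for removal
```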
Ilhan, Mustafa; Guler, Nese – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to compare difficulty indices calculated for open-ended items in accordance with the classical test theory (CTT) and the Many-Facet Rasch Model (MFRM). Although theoretical differences between CTT and MFRM occupy much space in the literature, the number of studies empirically comparing the two theories is quite limited.…
Descriptors: Difficulty Level, Test Items, Test Theory, Item Response Theory
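The CTT side of this comparison reduces to a simple ratio. Below is a minimal sketch of the classical-test-theory difficulty index for open-ended (polytomous) items: the mean observed score divided by the maximum possible score. The score matrix and the 10-point maximum are illustrative assumptions; the MFRM side of the comparison requires dedicated Rasch software.

```python
import numpy as np

scores = np.array([
    [8, 3, 10, 6],
    [5, 2,  7, 4],
    [9, 4,  9, 8],
    [6, 1,  8, 5],
])                                    # hypothetical examinees x items, each item scored 0-10
max_score = 10
p = scores.mean(axis=0) / max_score   # CTT difficulty index per item (higher = easier)
print(p)                              # array([0.7, 0.25, 0.85, 0.575])
```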
Braxton, John M.; Francis, Clay H. – New Directions for Higher Education, 2018
This chapter describes research findings that show a positive relationship between higher order examination questions and core concepts of empirically supported theories of college student persistence for both residential and commuter colleges and universities.
Descriptors: College Students, Academic Persistence, Student Experience, Thinking Skills
Vijver, Fons J. R. – Educational Measurement: Issues and Practice, 2018
A conceptual framework of measurement bias in cross-cultural comparisons, distinguishing between construct, method, and item bias (differential item functioning), is used to describe a methodological framework addressing assessment of noncognitive variables in international large-scale studies. It is argued that the treatment of bias, coming from…
Descriptors: Educational Assessment, Achievement Tests, Foreign Countries, International Assessment
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing
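A minimal sketch of what item parameter drift means at the item level: for the affected examinees, the item behaves as if its difficulty had shifted, which biases their response probabilities and hence their ability estimates. The 0.5-logit drift is an illustrative assumption, not a value from the study above.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1 / (1 + np.exp(-(theta - b)))

theta = 0.0                                  # examinee at the item's original difficulty
b_original, drift = 0.0, 0.5
print(p_correct(theta, b_original))          # 0.50 before drift
print(p_correct(theta, b_original + drift))  # ~0.38 once the item drifts harder
```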
Silber, Henning; Roßmann, Joss; Gummer, Tobias – International Journal of Social Research Methodology, 2018
In this article, we present the results of three question design experiments on inter-item correlations, which tested a grid design against a single-item design. The first and second experiments examined the inter-item correlations of a set with five and seven items, respectively, and the third experiment examined the impact of the question design…
Descriptors: Foreign Countries, Online Surveys, Experiments, Correlation
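The quantity compared across the grid and single-item designs is the inter-item correlation matrix of the item set. A minimal sketch, using an illustrative simulated five-item battery rather than the experiments' survey data:

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 1))                       # shared attitude underlying the items
items = latent + rng.normal(scale=1.0, size=(500, 5))    # five related survey items
R = np.corrcoef(items, rowvar=False)                     # 5 x 5 inter-item correlation matrix
print(R.round(2))
```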
Sünbül, Seçil Ömür; Asire, Semih – International Journal of Progressive Education, 2018
This study aimed to evaluate the effects of various factors, such as sample size, the percentage of misfitting items in the test, and item quality (item discrimination), on item and model fit when the Q matrix is misspecified. Data were generated in accordance with the DINA model. The Q matrix was specified for 4 attributes and 15 items. While data…
Descriptors: Clinical Diagnosis, Cognitive Ability, Problem Solving, Models
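A minimal sketch of generating DINA-model responses for 4 attributes and 15 items, the setup described in the abstract above. The random Q matrix, slip and guess ranges, and mastery profiles are illustrative assumptions; the study's actual Q matrix and parameter values differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, n_attrs = 1000, 15, 4
Q = rng.integers(0, 2, size=(n_items, n_attrs))          # 1 = attribute required by the item
Q[Q.sum(axis=1) == 0, 0] = 1                             # ensure every item requires >= 1 attribute
alpha = rng.integers(0, 2, size=(n_persons, n_attrs))    # examinee mastery profiles
slip = rng.uniform(0.05, 0.20, size=n_items)
guess = rng.uniform(0.05, 0.20, size=n_items)

# eta = 1 only if the examinee masters every attribute the item requires
eta = (alpha @ Q.T) == Q.sum(axis=1)                      # persons x items, boolean
p_correct = np.where(eta, 1 - slip, guess)                # DINA response probability
responses = rng.binomial(1, p_correct)
print(responses.shape, responses.mean())
```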
