Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 190 |
| Since 2022 (last 5 years) | 1057 |
| Since 2017 (last 10 years) | 2567 |
| Since 2007 (last 20 years) | 4928 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory obtained under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
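The kind of comparison the entry above describes can be sketched in miniature. The following is a minimal, hypothetical illustration of the 2PL item response model with maximum-likelihood ability estimation by grid search; the item parameters, true ability, and grid bounds are assumptions for illustration, not values from the study.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response: 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(responses, a, b):
    """Grid-search maximum-likelihood estimate of ability for one examinee."""
    grid = np.linspace(-4, 4, 801)
    logliks = [
        np.sum(responses * np.log(p_correct(t, a, b))
               + (1 - responses) * np.log(1 - p_correct(t, a, b)))
        for t in grid
    ]
    return grid[int(np.argmax(logliks))]

# Hypothetical item parameters: discriminations (a) and difficulties (b).
rng = np.random.default_rng(42)
a = np.array([1.0, 1.5, 0.8, 1.2, 2.0])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
true_theta = 0.7
# Simulate one examinee's binary responses under the 2PL, then recover theta.
resp = (rng.random(5) < p_correct(true_theta, a, b)).astype(int)
print(mle_theta(resp, a, b))
```

With only five items the estimate is noisy; studies like the one above vary test length and sample size precisely to quantify that noise.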
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding how many factors to retain, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
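One widely used factor-retention rule of the kind compared in studies like the one above is Horn's parallel analysis: retain a factor only if its observed eigenvalue exceeds the corresponding eigenvalue from random data. The sketch below uses synthetic two-factor data; it is an illustration of the general technique, not the procedure from this particular study.

```python
import numpy as np

def parallel_analysis(data, n_sim=100, seed=0):
    """Horn's parallel analysis: count eigenvalues of the observed
    correlation matrix that exceed the mean eigenvalue from random
    normal data of the same shape."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rng = np.random.default_rng(seed)
    rand = np.zeros(p)
    for _ in range(n_sim):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_sim
    return int(np.sum(obs > rand))

# Synthetic data: six observed variables driven by two latent factors.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.9, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.9]])
x = f @ loadings.T + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # recovers the two-factor structure
```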
Meike Akveld; George Kinnear – International Journal of Mathematical Education in Science and Technology, 2024
Many universities use diagnostic tests to assess incoming students' preparedness for mathematics courses. Diagnostic test results can help students to identify topics where they need more practice and give lecturers a summary of strengths and weaknesses in their class. We demonstrate a process that can be used to make improvements to a mathematics…
Descriptors: Mathematics Tests, Diagnostic Tests, Test Items, Item Analysis
Marta Montenegro-Rueda; José María Fernández-Batanero – European Journal of Special Needs Education, 2024
Instruments for evaluating teachers' digital competence are abundant; however, there is still a lack of instruments oriented to the context of Special Education. In this sense, this study presents the validation process of an instrument that aims to determine the level of knowledge and digital competence of Special Education teachers…
Descriptors: Teacher Competencies, Technological Literacy, Special Education Teachers, Test Construction
Lauritz Schewior; Marlit Annalena Lindner – Educational Psychology Review, 2024
Studies have indicated that pictures in test items can impact item-solving performance, information processing (e.g., time on task) and metacognition as well as test-taking affect and motivation. The present review aims to better organize the existing and somewhat scattered research on multimedia effects in testing and problem solving while…
Descriptors: Multimedia Materials, Computer Assisted Testing, Test Items, Pictorial Stimuli
Hannes M. Körner; Franz Faul; Antje Nuthmann – Cognitive Research: Principles and Implications, 2024
Observers' memory for a person's appearance can be compromised by the presence of a weapon, a phenomenon known as the weapon-focus effect (WFE). According to the unusual-item hypothesis, attention shifts from the perpetrator to the weapon because a weapon is an unusual object in many contexts. To test this assumption, we monitored participants'…
Descriptors: Weapons, Eye Movements, Observation, Familiarity
Marjolein Muskens; Willem E. Frankenhuis; Lex Borghans – npj Science of Learning, 2024
In many countries, standardized math tests are important for achieving academic success. Here, we examine whether the content of items, the story that explains a mathematical question, biases the performance of low-SES students. In a large-scale cohort study of the Trends in International Mathematics and Science Study (TIMSS)--including data from 58…
Descriptors: Mathematics Tests, Standardized Tests, Test Items, Low Income Students
Ondrej Klíma; Martin Lakomý; Ekaterina Volevach – International Journal of Social Research Methodology, 2024
We tested the impacts of Hofstede's cultural factors and mode of administration on item nonresponse (INR) for political questions in the European Values Study (EVS). We worked with the integrated European Values Study dataset, using descriptive analysis and multilevel binary logistic regression models. We concluded that (1) modes of administration…
Descriptors: Cultural Influences, Testing, Test Items, Responses
Svihla, Vanessa; Gallup, Amber – Practical Assessment, Research & Evaluation, 2021
In making validity arguments, a central consideration is whether the instrument fairly and adequately covers intended content, and this is often evaluated by experts. While common procedures exist for quantitatively assessing this, the effect of loss aversion--a cognitive bias that would predict a tendency to retain items--on these procedures has…
Descriptors: Content Validity, Anxiety, Bias, Test Items
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
Edwards, Ashley A.; Joyner, Keanan J.; Schatschneider, Christopher – Educational and Psychological Measurement, 2021
The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors with varying…
Descriptors: Reliability, Computation, Accuracy, Sample Size
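The first estimator named in the entry above, Cronbach's alpha, is simple enough to compute directly from an item-score matrix. The sketch below uses synthetic data (five parallel items built from a common true score); the sample size and error level are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total),
    where scores has one row per person and one column per item."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic data: five items = common true score + independent error.
rng = np.random.default_rng(7)
true_score = rng.standard_normal((300, 1))
items = true_score + 0.8 * rng.standard_normal((300, 5))
print(round(cronbach_alpha(items), 2))
```

For this generating model the Spearman-Brown formula predicts alpha near 0.89, which is what the sample estimate should approximate; studies like the one above probe how such estimates behave when the data violate the model's assumptions.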
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
Cum, Sait – International Journal of Assessment Tools in Education, 2021
In this study, it was claimed that ROC analysis, which is used to determine how well medical diagnostic tests discriminate between patients and non-patients, can also be used to examine the discrimination of binary-scored items in cognitive tests. In order to obtain various evidence for this claim, the 2x2 contingency table used in…
Descriptors: Test Items, Item Analysis, Discriminant Analysis, Item Response Theory
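The general idea in the entry above can be sketched as follows: treat each examinee's score on one binary item as the "diagnosis" and the rest-of-test total as the predictor, then use the area under the ROC curve (AUC) as a discrimination index. The data below are synthetic and the setup is a hypothetical illustration, not the study's actual procedure.

```python
import numpy as np

def auc(predictor, outcome):
    """Mann-Whitney formulation of AUC: the probability that a randomly
    chosen positive case has a higher predictor value than a randomly
    chosen negative case (ties count half)."""
    pos = predictor[outcome == 1]
    neg = predictor[outcome == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic test: latent ability drives both a 20-item rest score and
# one binary target item, so the item should discriminate (AUC > 0.5).
rng = np.random.default_rng(3)
theta = rng.standard_normal(400)
rest_total = (theta[:, None] > rng.standard_normal((400, 20))).sum(axis=1)
item = (theta > rng.standard_normal(400)).astype(int)
print(round(auc(rest_total, item), 2))
```

An AUC of 0.5 means the item carries no information about the rest of the test; values approaching 1.0 indicate strong discrimination, paralleling how diagnostic accuracy is read in the medical setting.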
Zeynep Uzun; Tuncay Ögretmen – Large-scale Assessments in Education, 2025
This study aimed to evaluate item-model fit by equating forms of the PISA 2018 mathematics subtest with concurrent common-item equating in samples from Türkiye, the UK, and Italy. The answers given in mathematics subtest Forms 2, 8, and 12 were used in this context. Analyses were performed using the Dichotomous Rasch Model in the WINSTEPS…
Descriptors: Item Response Theory, Test Items, Foreign Countries, Mathematics Tests
Mahdi Ghorbankhani; Keyvan Salehi – SAGE Open, 2025
Academic procrastination, the tendency to delay academic tasks without reasonable justification, has significant implications for students' academic performance and overall well-being. To measure this construct, numerous scales have been developed, among which the Academic Procrastination Scale (APS) has shown promise in assessing academic…
Descriptors: Psychometrics, Measures (Individuals), Time Management, Foreign Countries

