Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 3 |
| Since 2022 (last 5 years) | 7 |
Source
| Source | Count |
| --- | --- |
| Journal of Creative Behavior | 2 |
| Grantee Submission | 1 |
| Journal of Educational Data… | 1 |
| Journal of Educational… | 1 |
| Journal of Educational and… | 1 |
| Large-scale Assessments in… | 1 |
Author
| Author | Count |
| --- | --- |
| B. Barbot | 1 |
| B. Goecke | 1 |
| Chun Wang | 1 |
| Clavel, Jose G. | 1 |
| David Kaplan | 1 |
| David Rutkowski | 1 |
| Flannery, Darragh | 1 |
| Gilleece, Lorraine | 1 |
| Gongjun Xu | 1 |
| Jing Lu | 1 |
| Jiwei Zhang | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Journal Articles | 6 |
| Reports - Research | 5 |
| Reports - Evaluative | 2 |
Education Level
| Level | Count |
| --- | --- |
| Secondary Education | 7 |
Location
| Location | Count |
| --- | --- |
| Ireland | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Program for International… | 7 |
Mingya Huang; David Kaplan – Journal of Educational and Behavioral Statistics, 2025
The issue of model uncertainty has been gaining interest in the education and social science communities over the years, and the dominant methods for handling it are based on Bayesian inference, particularly Bayesian model averaging. However, Bayesian model averaging assumes that the true data-generating model is within the…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, Statistical Inference, Predictor Variables
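The abstract above notes that Bayesian model averaging (BMA) weights each candidate model by its posterior probability. A minimal sketch of how such weights are commonly approximated with BIC under equal model priors (illustrative only; not the method developed in the paper, and the BIC values and predictions below are hypothetical):

```python
import math

def bma_weights(bics):
    """Posterior model probabilities from BIC values, assuming equal
    model priors: p(M_k | y) is proportional to exp(-BIC_k / 2)."""
    best = min(bics)
    # Shift by the smallest BIC before exponentiating, for numerical stability.
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical BICs for three candidate models; lower BIC -> higher weight.
weights = bma_weights([100.0, 102.0, 110.0])

# The model-averaged prediction is the weighted sum of per-model predictions.
preds = [1.2, 1.5, 2.0]  # hypothetical point predictions from each model
bma_pred = sum(w * p for w, p in zip(weights, preds))
```

Note that this weighting only spreads probability over the models supplied, which is exactly the "true model is in the candidate set" assumption the abstract questions.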
Leslie Rutkowski; David Rutkowski – Journal of Creative Behavior, 2025
The Programme for International Student Assessment (PISA) introduced creative thinking as an innovative domain in 2022. This paper examines the unique methodological issues in international assessments and the implications of measuring creative thinking within PISA's framework, including stratified sampling, rotated form designs, and a distinct…
Descriptors: Creativity, Creative Thinking, Measurement, Sampling
B. Goecke; S. Weiss; B. Barbot – Journal of Creative Behavior, 2025
The present paper questions the content validity of the eight creativity-related self-report scales available in PISA 2022's context questionnaire and provides a set of considerations for researchers interested in using these indexes. Specifically, we point out some threats to the content validity of these scales (e.g., "creative thinking…
Descriptors: Creativity, Creativity Tests, Questionnaires, Content Validity
Flannery, Darragh; Gilleece, Lorraine; Clavel, Jose G. – Large-scale Assessments in Education, 2023
Background: The existence of a multiplier, compositional, or social context effect is debated extensively in the school-effectiveness literature and also relates to the wider issue of equity in educational outcomes. However, comparatively little attention has been given to whether the association between student achievement and school…
Descriptors: Foreign Countries, Secondary School Students, International Assessment, Achievement Tests
Sainan Xu; Jing Lu; Jiwei Zhang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
With the growing attention on large-scale educational testing and assessment, the ability to process substantial volumes of response data becomes crucial. Current estimation methods within item response theory (IRT), despite their high precision, often pose considerable computational burdens with large-scale data, leading to reduced computational…
Descriptors: Educational Assessment, Bayesian Statistics, Statistical Inference, Item Response Theory
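The abstract above concerns estimation in item response theory (IRT) at scale. For readers unfamiliar with the model class, a minimal sketch of the standard two-parameter logistic (2PL) response function and its log-likelihood (textbook form, not the estimation method proposed in the paper; parameter values are illustrative):

```python
import math

def p_correct(theta, a, b):
    """2PL probability that a person with ability theta answers an item
    with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Sum of Bernoulli log-likelihoods over (a, b) item parameters
    and 0/1 scored responses, for a single person."""
    ll = 0.0
    for (a, b), u in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll

# Hypothetical three-item test; a person at theta = 0 gets items 1 and 3 right.
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 1.0)]
ll = log_likelihood(0.0, items, [1, 0, 1])
```

Estimating theta and the item parameters jointly over millions of response vectors is what makes the computational burden the abstract describes.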
Lundgren, Erik – Journal of Educational Data Mining, 2022
Response process data have the potential to provide a rich description of test-takers' thinking processes. However, retrieving insights from these data presents a challenge for educational assessments and educational data mining as they are complex and not well annotated. The present study addresses this challenge by developing a computational…
Descriptors: Problem Solving, Classification, Accuracy, Foreign Countries
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
