Publication Date
In 2025: 1
Since 2024: 6
Since 2021 (last 5 years): 52
Descriptor
Statistical Analysis: 52
Item Response Theory: 35
Test Items: 21
Models: 14
Foreign Countries: 12
Computation: 8
Comparative Analysis: 7
Equated Scores: 7
Responses: 7
Sample Size: 7
Scores: 7
Author
Raykov, Tenko: 3
Molenaar, Dylan: 2
Pusic, Martin: 2
Singh, Sarjinder: 2
Sedory, Stephen A.: 1
Akin-Arikan, Çigdem: 1
Alahmadi, Sarah: 1
Ames, Allison: 1
Barry, Carol L.: 1
Bazán, Jorge L.: 1
Kelcey, Benjamin: 1
Publication Type
Journal Articles: 45
Reports - Research: 35
Reports - Evaluative: 8
Reports - Descriptive: 5
Dissertations/Theses -…: 3
Information Analyses: 2
Speeches/Meeting Papers: 1
Tests/Questionnaires: 1
Location
Canada: 2
China: 2
Turkey: 2
United States: 2
Australia: 1
Belgium: 1
China (Guangzhou): 1
Florida: 1
Germany: 1
Hong Kong: 1
Israel: 1
Assessments and Surveys
Program for International…: 2
Trends in International…: 2
Early Childhood Longitudinal…: 1
Wu, Tong; Kim, Stella Y.; Westine, Carl – Educational and Psychological Measurement, 2023
For large-scale assessments, data are often collected with missing responses. Despite the wide use of item response theory (IRT) in many testing programs, the existing literature offers little insight into the effectiveness of various approaches to handling missing responses in the context of scale linking. Scale linking is commonly used…
Descriptors: Data Analysis, Responses, Statistical Analysis, Measurement
Molenaar, Dylan; Cúri, Mariana; Bazán, Jorge L. – Journal of Educational and Behavioral Statistics, 2022
Bounded continuous data are encountered in many applications of item response theory, including the measurement of mood, personality, and response times and in the analyses of summed item scores. Although different item response theory models exist to analyze such bounded continuous data, most models assume the data to be in an open interval and…
Descriptors: Item Response Theory, Data, Responses, Intervals
Jianbin Fu; TsungHan Ho; Xuan Tan – Practical Assessment, Research & Evaluation, 2025
Item parameter estimation using an item response theory (IRT) model with fixed ability estimates is useful in equating with small samples on anchor items. The current study explores the impact of three ability estimation methods (weighted likelihood estimation [WLE], maximum a posteriori [MAP], and posterior ability distribution estimation [PST])…
Descriptors: Item Response Theory, Test Items, Computation, Equated Scores
Smith, Ben O.; White, Dustin R.; Wagner, Jamie; Kuzyk, Patricia; Prera, Alex – Studies in Higher Education, 2023
Student Evaluations of Teaching (SETs) are an integral part of evaluating course outcomes. They are routinely used to evaluate teaching quality for the purposes of reappointment, promotion, and tenure (RPT), annual review, and the rehiring of adjunct faculty and lecturers. These evaluations are often based almost entirely on the mean or proportion…
Descriptors: Student Evaluation of Teacher Performance, Statistical Analysis, Response Rates (Questionnaires), Evaluation Methods
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Selena Wang – ProQuest LLC, 2022
A research question that is of interest across many disciplines is whether and how relationships in a network are related to the attributes of the nodes of the network. In this dissertation, we propose two joint frameworks for modeling the relationship between the network and attributes. In the joint latent space model in Chapter 2, shared latent…
Descriptors: Networks, Item Response Theory, Models, Statistical Analysis
Terry A. Beehr; Minseo Kim; Ian W. Armstrong – International Journal of Social Research Methodology, 2024
Previous research extensively studied reasons for and ways to avoid low response rates, but it largely ignored the primary research issue of the degree to which response rates matter, which we address. Methodological survey research on response rates has been concerned with how to increase responsiveness and with the effects of response rates on…
Descriptors: Surveys, Response Rates (Questionnaires), Effect Size, Research Methodology
Javed Iqbal; Tanweer Ul Islam – Educational Research and Evaluation, 2024
Economic efficiency demands accurate assessment of individual ability for selection purposes. This study investigates Classical Test Theory (CTT) and Item Response Theory (IRT) for estimating true ability and ranking individuals. Two Monte Carlo simulations and real data analyses were conducted. Results suggest a slight advantage for IRT, but…
Descriptors: Item Response Theory, Monte Carlo Methods, Ability, Statistical Analysis
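The CTT-versus-IRT comparison above can be illustrated with a tiny Monte Carlo sketch: simulate responses under a 2PL IRT model, then estimate ability both the CTT way (raw sum scores) and the IRT way (grid-search maximum likelihood), and compare how well each recovers the true ability ranking. This is a generic illustration, not the study's actual design; all sample sizes and parameter values are made up, and the IRT step cheats by reusing the true item parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20

theta = rng.normal(0, 1, n_persons)         # true abilities
a = rng.uniform(0.8, 2.0, n_items)          # 2PL discriminations (made up)
b = rng.normal(0, 1, n_items)               # 2PL difficulties (made up)

# Simulate dichotomous responses under the 2PL model.
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# CTT "ability" estimate: the raw sum score.
ctt = x.sum(axis=1)

# IRT ability estimate: grid-search ML, reusing the true item
# parameters (a shortcut; a real study would estimate them too).
grid = np.linspace(-4, 4, 161)
pg = 1 / (1 + np.exp(-a * (grid[:, None] - b)))        # (grid, items)
ll = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T     # (persons, grid)
irt = grid[ll.argmax(axis=1)]

def rank_corr(u, v):
    """Spearman-style rank correlation (no tie correction)."""
    ru = np.argsort(np.argsort(u))
    rv = np.argsort(np.argsort(v))
    return np.corrcoef(ru, rv)[0, 1]

r_ctt = rank_corr(theta, ctt)
r_irt = rank_corr(theta, irt)
print(f"rank correlation with true ability: CTT={r_ctt:.3f}, IRT={r_irt:.3f}")
```

With only 20 items both estimators rank examinees very similarly, which is consistent with the abstract's "slight advantage for IRT" framing.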
Chalmers, R. Philip – Journal of Educational Measurement, 2023
Several marginal effect size (ES) statistics suitable for quantifying the magnitude of differential item functioning (DIF) have been proposed in the area of item response theory; for instance, the Differential Functioning of Items and Tests (DFIT) statistics, signed and unsigned item difference in the sample statistics (SIDS, UIDS, NSIDS, and…
Descriptors: Test Bias, Item Response Theory, Definitions, Monte Carlo Methods
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Rachatasumrit, Napol; Koedinger, Kenneth R. – International Educational Data Mining Society, 2021
Student modeling is useful in educational research and technology development due to a capability to estimate latent student attributes. Widely used approaches, such as the Additive Factors Model (AFM), have shown satisfactory results, but they can only handle binary outcomes, which may yield potential information loss. In this work, we propose a…
Descriptors: Models, Student Characteristics, Feedback (Response), Error Correction
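The Additive Factors Model (AFM) mentioned above is, in its standard binary-outcome form, a logistic model: the log-odds of a correct response are the student's proficiency plus, for each knowledge component (KC) the item exercises, that KC's easiness plus its learning rate times the student's prior practice count. A minimal sketch of that baseline model follows; the KC names and parameter values are hypothetical, and this is the binary model the paper extends, not the paper's proposed extension.

```python
import math

def afm_p_correct(theta, item_kcs, beta, gamma, opportunities):
    """P(correct) under binary AFM: logit = proficiency + sum over the
    item's KCs of (KC easiness + KC learning rate * prior practice).
    All names and values in this sketch are hypothetical."""
    logit = theta + sum(beta[k] + gamma[k] * opportunities.get(k, 0)
                        for k in item_kcs)
    return 1 / (1 + math.exp(-logit))

beta = {"fractions": -0.5, "decimals": 0.2}   # KC easiness (made up)
gamma = {"fractions": 0.3, "decimals": 0.1}   # KC learning rates (made up)

# Success probability grows with practice on the item's KC.
p_first = afm_p_correct(0.0, ["fractions"], beta, gamma, {"fractions": 0})
p_later = afm_p_correct(0.0, ["fractions"], beta, gamma, {"fractions": 5})
print(f"{p_first:.3f} -> {p_later:.3f}")
```

Because the outcome enters only as a 0/1 correctness flag, partial-credit or graded information is discarded, which is the information loss the abstract refers to.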
Weese, James D.; Turner, Ronna C.; Liang, Xinya; Ames, Allison; Crawford, Brandon – Educational and Psychological Measurement, 2023
A study was conducted to implement the use of a standardized effect size and corresponding classification guidelines for polytomous data with the POLYSIBTEST procedure and compare those guidelines with prior recommendations. Two simulation studies were included. The first identifies new unstandardized test heuristics for classifying moderate and…
Descriptors: Effect Size, Classification, Guidelines, Statistical Analysis
Zapata, Zakry; Sedory, Stephen A.; Singh, Sarjinder – Sociological Methods & Research, 2022
In this article, we consider the use of the zero-truncated binomial distribution as a randomization device while estimating the population proportion of a sensitive characteristic. The resultant new estimator based on the zero-truncated binomial distribution is then compared to its competitors from both the efficiency and the protection point of…
Descriptors: Social Science Research, Research Methodology, Comparative Analysis, Statistical Analysis
Cole, Ki; Paek, Insu – Measurement: Interdisciplinary Research and Perspectives, 2022
Statistical Analysis Software (SAS) is a widely used tool for data management analysis across a variety of fields. The procedure for item response theory (PROC IRT) is one to perform unidimensional and multidimensional item response theory (IRT) analysis for dichotomous and polytomous data. This review provides a summary of the features of PROC…
Descriptors: Item Response Theory, Computer Software, Item Analysis, Statistical Analysis
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
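One widely used indicator for flagging the careless responding described above is the "longstring" index: the length of the longest run of identical consecutive answers in a respondent's record, with long runs suggesting straightlining. The sketch below is a generic illustration of that index, not the detection approach evaluated in the study.

```python
def longstring(responses):
    """Longest run of identical consecutive responses (non-empty list)."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Hypothetical 10-item Likert records for illustration.
attentive = [3, 4, 2, 5, 3, 1, 4, 2, 5, 3]
straightliner = [3, 3, 3, 3, 3, 3, 3, 2, 4, 3]
print(longstring(attentive), longstring(straightliner))
```

In practice a cutoff (absolute or relative to questionnaire length) is applied to flag suspicious records for follow-up rather than automatic removal.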