Publication Date
In 2025 | 0 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 10 |
Since 2016 (last 10 years) | 22 |
Since 2006 (last 20 years) | 27 |
Descriptor
Item Analysis | 27 |
Test Items | 11 |
Item Response Theory | 8 |
Comparative Analysis | 7 |
Correlation | 7 |
Factor Analysis | 6 |
Measures (Individuals) | 6 |
Psychometrics | 6 |
Test Validity | 6 |
Accuracy | 5 |
Evaluation Methods | 5 |
Source
Grantee Submission | 27 |
Author
Chun Wang | 5 |
Gongjun Xu | 5 |
Bowles, Ryan P. | 3 |
Catherine P. Bradshaw | 2 |
Dedrick, Robert F. | 2 |
Ferron, John | 2 |
Jing Lu | 2 |
Jingchen Liu | 2 |
Justice, Laura M. | 2 |
Khan, Kiren S. | 2 |
Piasta, Shayne B. | 2 |
Publication Type
Reports - Research | 25 |
Journal Articles | 10 |
Speeches/Meeting Papers | 4 |
Reports - Descriptive | 1 |
Reports - Evaluative | 1 |
Tests/Questionnaires | 1 |
Education Level
Secondary Education | 8 |
Elementary Education | 6 |
Middle Schools | 6 |
Intermediate Grades | 5 |
Elementary Secondary Education | 4 |
Grade 4 | 4 |
Junior High Schools | 4 |
Early Childhood Education | 3 |
Grade 5 | 3 |
High Schools | 3 |
Grade 3 | 2 |
Location
Florida | 3 |
United States | 2 |
Canada | 1 |
Colorado (Denver) | 1 |
Ireland | 1 |
Japan | 1 |
Netherlands | 1 |
New York (New York) | 1 |
North Carolina (Charlotte) | 1 |
Tennessee (Memphis) | 1 |
Texas | 1 |
Laws, Policies, & Programs
Every Student Succeeds Act… | 1 |
Jiaying Xiao; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Accurate item parameters and standard errors (SEs) are crucial for many multidimensional item response theory (MIRT) applications. A recent study proposed the Gaussian Variational Expectation Maximization (GVEM) algorithm to improve computational efficiency and estimation accuracy (Cho et al., 2021). However, the SE estimation procedure has yet to…
Descriptors: Error of Measurement, Models, Evaluation Methods, Item Analysis
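For context, the multidimensional two-parameter logistic (M2PL) model that GVEM-type estimators are typically applied to (a generic illustration, not necessarily the exact specification in this paper) gives the probability that examinee i answers item j correctly as

    P(Y_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{\exp(\mathbf{a}_j^\top \boldsymbol{\theta}_i + d_j)}{1 + \exp(\mathbf{a}_j^\top \boldsymbol{\theta}_i + d_j)},

where \boldsymbol{\theta}_i is the latent trait vector, \mathbf{a}_j the item slope (discrimination) vector, and d_j the item intercept; the standard errors at issue are those of the estimated \mathbf{a}_j and d_j.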
Chenchen Ma; Jing Ouyang; Gongjun Xu – Grantee Submission, 2023
Cognitive Diagnosis Models (CDMs) are a special family of discrete latent variable models that are widely used in educational and psychological measurement. A key component of CDMs is the Q-matrix characterizing the dependence structure between the items and the latent attributes. Additionally, researchers also assume in many applications certain…
Descriptors: Psychological Evaluation, Clinical Diagnosis, Item Analysis, Algorithms
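To make the Q-matrix concrete, the sketch below uses hypothetical items and attributes (not taken from this paper) to show the binary items-by-attributes structure and the conjunctive "ideal response" logic of a DINA-type CDM:

    import numpy as np

    # Hypothetical Q-matrix for 5 items and 3 latent attributes.
    # Q[j, k] = 1 means item j requires attribute k.
    Q = np.array([
        [1, 0, 0],   # item 1 requires attribute 1 only
        [0, 1, 0],   # item 2 requires attribute 2 only
        [0, 0, 1],   # item 3 requires attribute 3 only
        [1, 1, 0],   # item 4 requires attributes 1 and 2
        [1, 0, 1],   # item 5 requires attributes 1 and 3
    ])

    # Under a conjunctive (DINA-type) model, the ideal response is 1 only
    # when every attribute the item requires has been mastered.
    alpha = np.array([1, 0, 1])                      # hypothetical mastery profile
    ideal = np.all(alpha >= Q, axis=1).astype(int)
    print(ideal)                                     # -> [1 0 1 0 1]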
Martha L. Epstein; Hamza Malik; Kun Wang; Chandra Hawley Orrill – Grantee Submission, 2022
Response Process Validity (RPV) reflects the degree to which items are interpreted as intended by item developers. In this study, teacher responses to constructed response (CR) items to assess pedagogical content knowledge (PCK) of middle school mathematics teachers were evaluated to determine what types of teacher responses signaled weak RPV. We…
Descriptors: Teacher Response, Test Items, Pedagogical Content Knowledge, Mathematics Teachers
Chun Wang; Ruoyi Zhu; Gongjun Xu – Grantee Submission, 2022
Differential item functioning (DIF) analysis refers to procedures that evaluate whether an item's characteristic differs for different groups of persons after controlling for overall differences in performance. DIF is routinely evaluated as a screening step to ensure items behave the same across groups. Currently, the majority of DIF studies focus…
Descriptors: Models, Item Response Theory, Item Analysis, Comparative Analysis
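In IRT terms, item j is said to show DIF when its item response function differs between a reference group R and a focal group F at the same latent trait level; for a dichotomous item (a generic statement, not the specific models compared in this paper),

    P(Y_j = 1 \mid \theta, G = R) \neq P(Y_j = 1 \mid \theta, G = F) \quad \text{for some } \theta,

whereas genuine group differences in the distribution of \theta itself are impact, not DIF.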
Susu Zhang; Xueying Tang; Qiwei He; Jingchen Liu; Zhiliang Ying – Grantee Submission, 2024
Computerized assessments and interactive simulation tasks are increasingly popular and afford the collection of process data, i.e., an examinee's sequence of actions (e.g., clickstreams, keystrokes) that arises from interactions with each task. Action sequence data contain rich information on the problem-solving process but are in a nonstandard,…
Descriptors: Correlation, Problem Solving, Computer Assisted Testing, Prediction
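To make the nonstandard format concrete, the sketch below (hypothetical action labels, not drawn from this study's data) represents one examinee's action sequence and summarizes it with bigram counts, one simple way to turn variable-length sequences into fixed features:

    from collections import Counter

    # One examinee's hypothetical action sequence from an interactive task.
    actions = ["start", "open_tab", "click_A", "click_B", "click_A", "submit"]

    # Bigram (consecutive action pair) counts: a fixed-length summary
    # of a variable-length sequence.
    bigrams = Counter(zip(actions, actions[1:]))
    for pair, count in bigrams.items():
        print(pair, count)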
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
A Sequential Bayesian Changepoint Detection Procedure for Aberrant Behaviors in Computerized Testing
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
Changepoints are abrupt variations in a sequence of data in statistical inference. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
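As a schematic of the kind of changepoint structure involved (a generic formulation, not the authors' exact algorithm), suppose examinee i's log response times on a J-item test shift in mean after an unknown changepoint \tau_i:

    \log t_{ij} \sim \begin{cases} N(\mu_1, \sigma^2), & j \le \tau_i, \\ N(\mu_2, \sigma^2), & j > \tau_i, \end{cases}

with priors on (\mu_1, \mu_2, \sigma^2, \tau_i). A sequential Bayesian procedure updates the posterior over \tau_i as each response arrives and flags the examinee once the evidence for a shift (e.g., toward rapid guessing) exceeds a threshold.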
Bowles, Ryan P.; Justice, Laura M.; Khan, Kiren S.; Piasta, Shayne B.; Skibbe, Lori E.; Foster, Tricia D. – Grantee Submission, 2020
Purpose: Narrative skill, a child's ability to create a temporally sequenced account of an experience or event, is considered an important domain of children's language development. Narrative skill is strongly predictive of later language and literacy and is emphasized in curricula and educational standards. However, the need to transcribe a…
Descriptors: Narration, Video Technology, Language Acquisition, Literacy
Gin, Brian; Sim, Nicholas; Skrondal, Anders; Rabe-Hesketh, Sophia – Grantee Submission, 2020
We propose a dyadic Item Response Theory (dIRT) model for measuring interactions of pairs of individuals when the responses to items represent the actions (or behaviors, perceptions, etc.) of each individual (actor) made within the context of a dyad formed with another individual (partner). Examples of its use include the assessment of…
Descriptors: Item Response Theory, Generalization, Item Analysis, Problem Solving
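One simplified way to write a dyadic IRT model of this general kind (a sketch only; the published dIRT specification may include additional terms) is, for a binary response by actor a interacting with partner p on item j,

    \operatorname{logit} P(Y_{apj} = 1) = \theta_a + \delta_p + \gamma_{ap} - b_j,

where \theta_a is an actor effect, \delta_p a partner effect, \gamma_{ap} a dyad-specific interaction, and b_j the item difficulty.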
Clark, D. Angus; Bowles, Ryan P. – Grantee Submission, 2018
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present…
Descriptors: Factor Analysis, Goodness of Fit, Factor Structure, Monte Carlo Methods
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with certain time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Oftentimes examinees do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
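A common way to formalize the split between solution behavior and rapid guessing (a generic two-class mixture sketch, not this paper's full model, which also treats cheating) is to view each observed response time t as drawn from

    f(t) = \pi\, f_{\text{guess}}(t) + (1 - \pi)\, f_{\text{solution}}(t),

where \pi is the rapid-guessing proportion and the guessing component is concentrated at short times that are largely unrelated to item difficulty.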
Stephens, Ana; Stroud, Rena; Strachota, Susanne; Stylianou, Despina; Blanton, Maria; Knuth, Eric; Gardiner, Angela – Grantee Submission, 2021
This research focuses on the retention of students' algebraic understandings 1 year following a 3-year early algebra intervention. Participants included 1,455 Grade 6 students who had taken part in a cluster randomized trial in Grades 3-5. The results show that, as was the case at the end of Grades 3, 4, and 5, treatment students significantly…
Descriptors: Algebra, Mathematics Instruction, Intervention, Comparative Analysis
Yunxiao Chen; Xiaoou Li; Jingchen Liu; Gongjun Xu; Zhiliang Ying – Grantee Submission, 2017
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as hierarchical clustering, the K-means method, and latent-class…
Descriptors: Item Analysis, Classification, Graphs, Test Items
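As a minimal illustration of the classical item clustering the abstract refers to (hypothetical data; average-linkage hierarchical clustering on an item-correlation distance, not the authors' proposed method):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(500, 12))   # 500 examinees x 12 items (hypothetical)

    # Distance between items: 1 - |correlation| of their response columns.
    corr = np.corrcoef(responses, rowvar=False)
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)

    # Average-linkage hierarchical clustering, cut into 3 item clusters.
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(labels)   # cluster assignment for each of the 12 items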
Sarah Lindstrom Johnson; Ray E. Reichenberg; Kathan Shukla; Tracy E. Waasdorp; Catherine P. Bradshaw – Grantee Submission, 2019
The United States government has become increasingly focused on school climate, as recently evidenced by its inclusion as an accountability indicator in the "Every Student Succeeds Act". Yet, there remains considerable variability in both conceptualizing and measuring school climate. To better inform the research and practice related to…
Descriptors: Item Response Theory, Educational Environment, Accountability, Educational Legislation
Meredith P. Franco; Jessika H. Bottiani; Katrina J. Debnam; Wes Bonifay; Toshna Pandey; Juliana Karras; Catherine P. Bradshaw – Grantee Submission, 2024
There is growing interest in improving and assessing teachers' use of culturally responsive practices (CRP) in the classroom, yet relatively few research-based approaches exist to address these measurement gaps. This article presents findings on the psychometric properties of a newly developed classroom observation measure of CRP, called the CARES…
Descriptors: Culturally Relevant Education, Classroom Observation Techniques, Construct Validity, Educational Practices