Showing 1 to 15 of 72 results
Peer reviewed
Terry A. Ackerman; Deborah L. Bandalos; Derek C. Briggs; Howard T. Everson; Andrew D. Ho; Susan M. Lottridge; Matthew J. Madison; Sandip Sinharay; Michael C. Rodriguez; Michael Russell; Alina A. Davier; Stefanie A. Wind – Educational Measurement: Issues and Practice, 2024
This article presents the consensus of a National Council on Measurement in Education Presidential Task Force on Foundational Competencies in Educational Measurement. Foundational competencies are those that support future development of additional professional and disciplinary competencies. The authors develop a framework for foundational…
Descriptors: Educational Assessment, Competence, Skill Development, Communication Skills
Tianci Liu; Chun Wang; Gongjun Xu – Grantee Submission, 2022
Multidimensional Item Response Theory (MIRT) is widely used in educational and psychological assessment and evaluation. With the increasing size of modern assessment data, many existing estimation methods become computationally demanding and hence they are not scalable to big data, especially for the multidimensional three-parameter and…
Descriptors: Item Response Theory, Computation, Monte Carlo Methods, Algorithms
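For context on the model class named here: in the compensatory multidimensional three-parameter model, the probability of a correct response is a lower asymptote plus a logistic function of a weighted sum of the latent traits. A minimal sketch in Python (parameter names and values are illustrative, not taken from the article):

    import numpy as np

    def m3pl_prob(theta, a, b, c=0.0):
        # Compensatory multidimensional 3PL: P = c + (1 - c) * logistic(a'theta - b)
        z = np.dot(a, theta) - b
        return c + (1.0 - c) / (1.0 + np.exp(-z))

    theta = np.array([0.5, -0.2])   # two latent traits for one examinee
    a = np.array([1.2, 0.8])        # slope (discrimination) vector for one item
    b = 0.3                          # difficulty/intercept term
    c = 0.15                         # lower asymptote (guessing)
    print(m3pl_prob(theta, a, b, c))

Estimating a, b, and c for thousands of items while integrating over a multidimensional theta is what makes the problem computationally demanding at scale.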
Peer reviewed
Sainan Xu; Jing Lu; Jiwei Zhang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
With the growing attention on large-scale educational testing and assessment, the ability to process substantial volumes of response data becomes crucial. Current estimation methods within item response theory (IRT), despite their high precision, often pose considerable computational burdens with large-scale data, leading to reduced computational…
Descriptors: Educational Assessment, Bayesian Statistics, Statistical Inference, Item Response Theory
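To illustrate the kind of Bayesian computation that becomes expensive at scale (a generic sketch, not the authors' algorithm), a single random-walk Metropolis update of one examinee's ability under a Rasch model with a standard-normal prior might look like:

    import numpy as np

    def rasch_loglik(theta, responses, b):
        # Log-likelihood of one examinee's 0/1 responses under the Rasch model.
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    def metropolis_step(theta, responses, b, step=0.5, rng=np.random.default_rng()):
        # One random-walk Metropolis update with a standard-normal prior on theta.
        proposal = theta + rng.normal(0.0, step)
        log_accept = (rasch_loglik(proposal, responses, b) - 0.5 * proposal**2) \
                   - (rasch_loglik(theta, responses, b) - 0.5 * theta**2)
        return proposal if np.log(rng.uniform()) < log_accept else theta

    responses = np.array([1, 0, 1, 1, 0])          # invented response vector
    b = np.array([-0.5, 0.0, 0.2, 0.8, 1.5])        # invented item difficulties
    theta = 0.0
    for _ in range(1000):
        theta = metropolis_step(theta, responses, b)

Repeating such updates for every examinee and every item parameter over millions of responses is what drives the computational burden the abstract refers to.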
Peer reviewed
Orit Hazzan; Yael Erez – ACM Transactions on Computing Education, 2025
In this opinion piece, we explore the idea that GenAI has the potential to fundamentally disrupt computer science education (CSE) by drawing insights from 10 pedagogical and cognitive theories and models. We highlight how GenAI can improve CSE by making educational practices more effective while requiring less effort and time, all at a lower cost,…
Descriptors: Computer Science Education, Artificial Intelligence, Technology Uses in Education, Educational Change
Bailey, Paul; Emad, Ahmad; Zhang, Ting; Xie, Qingshu; Sikali, Emmanuel – American Institutes for Research, 2018
Correlation analysis has been used widely by researchers and analysts when analyzing large-scale assessment data. Limited research has provided reliable methods for estimating various correlations and their standard errors with the complex sampling design and multiple plausible values taken into account. This report introduces the methodology used by the…
Descriptors: Correlation, Educational Assessment, Measurement, Statistical Bias
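The standard recipe for combining an estimate computed separately with each plausible value follows Rubin's multiple-imputation rules. A simplified sketch, ignoring the replicate weights the report also handles (in practice correlations are usually Fisher-z transformed before combining):

    import numpy as np

    def combine_plausible_values(estimates, variances):
        # Rubin's rules: pool per-plausible-value estimates and sampling variances
        # into a single estimate and its total variance.
        estimates = np.asarray(estimates)
        variances = np.asarray(variances)
        m = len(estimates)
        point = estimates.mean()                   # combined estimate
        within = variances.mean()                  # average sampling variance
        between = estimates.var(ddof=1)            # variance across plausible values
        total_var = within + (1 + 1 / m) * between
        return point, total_var

    # e.g., a correlation computed with each of five plausible values (invented numbers)
    r_pv = [0.41, 0.39, 0.43, 0.40, 0.42]
    var_pv = [0.0012, 0.0011, 0.0013, 0.0012, 0.0012]
    print(combine_plausible_values(r_pv, var_pv))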
Peer reviewed
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
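"Simple structure" here means every item loads on exactly one latent dimension, which is what makes separate unidimensional calibrations per dimension a practical shortcut. A toy loading matrix showing such a pattern (values are invented):

    import numpy as np

    # Six items, two dimensions; each row has a single nonzero slope, so items
    # 1-3 measure dimension 1 only and items 4-6 measure dimension 2 only.
    A = np.array([
        [1.1, 0.0],
        [0.9, 0.0],
        [1.3, 0.0],
        [0.0, 0.8],
        [0.0, 1.2],
        [0.0, 1.0],
    ])
    # Separate UIRT calibration amounts to fitting items 1-3 and items 4-6 in two
    # independent unidimensional models, leaving the correlation between the two
    # latent variables (which the full MIRT model would estimate) out of the calibration.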
Roschelle, Jeremy, Ed.; Lester, James, Ed.; Fusco, Judi, Ed. – Digital Promise, 2020
This report is based on the discussion that emerged from a convening of a panel of 22 experts in artificial intelligence (AI) and in learning. It introduces three layers that can frame the meaning of AI for educators. First, AI can be seen as "computational intelligence," and this capability can be brought to bear on educational challenges as…
Descriptors: Artificial Intelligence, Learning, Computation, Futures (of Society)
Peer reviewed
Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim – Journal of Educational and Behavioral Statistics, 2018
Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…
Descriptors: Computation, Error of Measurement, Models, Cognitive Measurement
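As a concrete example of the kind of CDM in question, the DINA model gives an examinee credit for an item with probability 1 - slip if they have mastered every attribute the item requires, and with probability guess otherwise. A minimal sketch (names and values are illustrative):

    import numpy as np

    def dina_prob(alpha, q_row, guess, slip):
        # P(correct) under DINA: (1 - slip) if the examinee masters every attribute
        # the item requires (per its Q-matrix row), else guess.
        eta = np.all(alpha >= q_row)
        return (1.0 - slip) if eta else guess

    alpha = np.array([1, 0, 1])    # mastery profile over three attributes
    q_row = np.array([1, 0, 0])    # this item requires attribute 1 only
    print(dina_prob(alpha, q_row, guess=0.2, slip=0.1))   # -> 0.9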
Peer reviewed
Diao, Hongyu; Sireci, Stephen G. – Journal of Applied Testing Technology, 2018
Whenever classification decisions are made on educational tests, such as pass/fail, or basic, proficient, or advanced, the consistency and accuracy of those decisions should be estimated and reported. Methods for estimating the reliability of classification decisions made on the basis of educational tests are well-established (e.g., Rudner, 2001;…
Descriptors: Classification, Item Response Theory, Accuracy, Reliability
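A rough sketch of the general idea behind IRT-based estimates such as Rudner's (closer to a single-cut consistency calculation than to the exact published formulation): treat each examinee's ability estimate as normally distributed with its standard error, and accumulate the probability that a replication would fall on the same side of the cut score. All values below are invented:

    import numpy as np
    from scipy.stats import norm

    def expected_classification_consistency(theta_hat, se, cut):
        # For each examinee, probability that a replicated ability estimate,
        # assumed ~ Normal(theta_hat, se), lands in the same pass/fail category
        # as the current estimate; returns the average over examinees.
        theta_hat = np.asarray(theta_hat, dtype=float)
        se = np.asarray(se, dtype=float)
        p_pass = 1.0 - norm.cdf(cut, loc=theta_hat, scale=se)
        same_side = np.where(theta_hat >= cut, p_pass, 1.0 - p_pass)
        return same_side.mean()

    print(expected_classification_consistency([0.8, -0.3, 0.1], [0.25, 0.30, 0.28], cut=0.0))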
Gongjun Xu; Zhuoran Shang – Grantee Submission, 2018
This article focuses on a family of restricted latent structure models with wide applications in psychological and educational assessment, where the model parameters are restricted via a latent structure matrix to reflect prespecified assumptions on the latent attributes. Such a latent matrix is often provided by experts and assumed to be correct…
Descriptors: Psychological Evaluation, Educational Assessment, Item Response Theory, Models
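The "latent structure matrix" is commonly a Q-matrix: a 1 in row j, column k means item j's parameters may depend on attribute k. The restriction is that attribute profiles agreeing on an item's required attributes must share the same response probability for that item. A small illustration (Q-matrix entries are made up):

    import numpy as np
    from itertools import product

    # Rows are items, columns are latent attributes.
    Q = np.array([
        [1, 0, 0],
        [1, 1, 0],
        [0, 0, 1],
    ])

    # For each item, profiles that agree on the required attributes are
    # restricted to share one correct-response probability.
    profiles = np.array(list(product([0, 1], repeat=Q.shape[1])))
    for j, q in enumerate(Q):
        groups = {}
        for alpha in profiles:
            key = tuple(alpha[q == 1])      # only the required attributes matter
            groups.setdefault(key, []).append(tuple(alpha))
        print(f"item {j}: {len(groups)} distinct response probabilities")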
Peer reviewed
Chan, Wendy – Journal of Educational and Behavioral Statistics, 2018
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, which arise when the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
Descriptors: Computation, Generalization, Probability, Sample Size
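To make the propensity-score idea concrete (a generic illustration, not Chan's method): model each unit's probability of belonging to the experimental sample given covariates, then use those probabilities to reweight or subclassify the sample toward the target population. A minimal scikit-learn sketch with simulated data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Covariates for a small experimental sample and a much larger population.
    X_sample = rng.normal(0.5, 1.0, size=(40, 2))
    X_pop = rng.normal(0.0, 1.0, size=(2000, 2))

    X = np.vstack([X_sample, X_pop])
    in_sample = np.concatenate([np.ones(len(X_sample)), np.zeros(len(X_pop))])

    # Sampling propensity: probability of being in the experimental sample.
    model = LogisticRegression().fit(X, in_sample)
    p = model.predict_proba(X_sample)[:, 1]

    # Odds-style weights up-weight sample units that look under-represented
    # relative to the population.
    weights = (1 - p) / p
    print(weights.round(2))

With only 40 sample units, a few extreme weights can dominate the estimate, which is the small-sample instability the article targets.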
Peter Organisciak; Michele Newman; David Eby; Selcuk Acar; Denis Dumas – Grantee Submission, 2023
Purpose: Most educational assessments tend to be constructed in a closed-ended format, which is easier to score consistently and more affordable. However, recent work has leveraged computational text methods from the information sciences to make open-ended measurement more effective and reliable for older students. This study asks whether such text…
Descriptors: Learning Analytics, Child Language, Semantics, Age Differences
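A crude illustration of computational text scoring (the study relies on more sophisticated semantic models): treat a response's distance from the prompt in a vector space as a rough originality score. The TF-IDF vectors, cosine distance, and texts below are stand-ins, not the study's method or data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    prompt = "list unusual uses for a brick"
    responses = [
        "use the brick to build a garden wall",
        "grind the brick into pigment for cave-style paintings",
    ]

    vectors = TfidfVectorizer().fit_transform([prompt] + responses)
    # Distance from the prompt (1 - cosine similarity) as a rough originality score.
    similarities = cosine_similarity(vectors[0], vectors[1:])[0]
    for text, sim in zip(responses, similarities):
        print(f"{1.0 - sim:.2f}  {text}")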
Falk, Carl F.; Cai, Li – Grantee Submission, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation
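The model described can be written as P(theta) = c + (1 - c) / (1 + exp(-m(theta))), where c is the lower asymptote and m is a monotonic polynomial of the latent trait. A toy evaluation (this hand-picks an increasing polynomial rather than using the authors' parameterization, which guarantees monotonicity by construction):

    import numpy as np

    def logistic_monotonic_poly(theta, coefs, c=0.1):
        # Lower asymptote c plus (1 - c) times the logistic of a polynomial m(theta).
        m = np.polyval(coefs, theta)   # coefficients in numpy's highest-degree-first order
        return c + (1.0 - c) / (1.0 + np.exp(-m))

    # m(theta) = 0.1*theta**3 + 1.2*theta - 0.3; its derivative 0.3*theta**2 + 1.2
    # is positive everywhere, so the item response curve is increasing.
    theta = np.linspace(-3, 3, 7)
    print(logistic_monotonic_poly(theta, coefs=[0.1, 0.0, 1.2, -0.3]))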
Peer reviewed
DeMars, Christine – Applied Measurement in Education, 2015
In generalizability theory studies in large-scale testing contexts, sometimes a facet is very sparsely crossed with the object of measurement. For example, when assessments are scored by human raters, it may not be practical to have every rater score all students. Sometimes the scoring is systematically designed such that the raters are…
Descriptors: Educational Assessment, Measurement, Data, Generalizability Theory
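For a sense of the computation when raters are nested within students (each student scored by a different small set of raters), variance components in a balanced p x (r:p) design follow from the mean squares. A simplified sketch with invented scores; the article's concern is precisely the designs where this tidy, balanced structure does not hold:

    import numpy as np

    def variance_components_nested(scores):
        # Balanced p x (r:p) design: scores is persons-by-raters, with each
        # person's raters distinct (raters nested within persons). Returns the
        # person and rater-within-person variance components from mean squares.
        n_p, n_r = scores.shape
        person_means = scores.mean(axis=1)
        grand_mean = scores.mean()
        ms_p = n_r * np.sum((person_means - grand_mean) ** 2) / (n_p - 1)
        ms_rp = np.sum((scores - person_means[:, None]) ** 2) / (n_p * (n_r - 1))
        var_rp = ms_rp
        var_p = max((ms_p - ms_rp) / n_r, 0.0)
        return var_p, var_rp

    scores = np.array([[4, 5], [2, 3], [5, 5], [3, 2]], dtype=float)
    print(variance_components_nested(scores))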
Peer reviewed
Falk, Carl F.; Cai, Li – Journal of Educational Measurement, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation