Showing 1 to 15 of 19 results
Peer reviewed
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Peer reviewed
Shafipoor, Mahdieh; Ravand, Hamdollah; Maftoon, Parviz – Language Testing in Asia, 2021
The current study compared the model fit indices, skill mastery probabilities, and classification accuracy of six Diagnostic Classification Models (DCMs): a general model (G-DINA) against five specific models (LLM, RRUM, ACDM, DINA, and DINO). To do so, the response data from the grammar and vocabulary sections of a General English Achievement Test,…
Descriptors: Goodness of Fit, Models, Classification, Grammar
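The DINA model named in the abstract above has a simple closed-form item response function: an examinee answers correctly with probability 1 - slip when they master every attribute the item's Q-matrix row requires, and with the guessing probability otherwise. A minimal sketch (not code from the study; the parameter values are illustrative):

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct) for one item under the DINA model.

    alpha : tuple of 0/1 attribute-mastery indicators for the examinee
    q     : tuple of 0/1 Q-matrix entries (attributes the item requires)
    slip  : probability of an incorrect response despite full mastery
    guess : probability of a correct response without full mastery
    """
    # eta = 1 iff every required attribute is mastered
    eta = all(a >= k for a, k in zip(alpha, q))
    return (1 - slip) if eta else guess

# Mastering both required attributes: answer correctly with prob 1 - slip
print(dina_prob((1, 1), (1, 1), slip=0.1, guess=0.2))  # 0.9
# Missing a required attribute drops the probability to the guessing rate
print(dina_prob((1, 0), (1, 1), slip=0.1, guess=0.2))  # 0.2
```

The general G-DINA model compared in the study relaxes this all-or-nothing structure by giving each combination of mastered required attributes its own success probability.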
Peer reviewed
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Peer reviewed
Lin, Jing-Wen; Yu, Ruan-Ching – Asia Pacific Journal of Education, 2022
Modelling ability is one of the essential elements of the latest educational reforms, and Trends in International Mathematics and Science Study (TIMSS) is a curriculum-based assessment which allows educational systems worldwide to inspect the curricular influences. The aims of this study were to examine the role of modelling ability in the…
Descriptors: Grade 8, Educational Change, Cross Cultural Studies, Test Items
Peer reviewed
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Peer reviewed
Johnson, Martin; Rushton, Nicky – Educational Research, 2019
Background: The development of a set of questions is a central element of examination development, with the validity of an examination resting to a large extent on the quality of the questions that it comprises. This paper reports on the methods and findings of a project that explores how educational examination question writers engage in the…
Descriptors: Writing (Composition), Test Construction, Specialists, Protocol Analysis
Peer reviewed
McIntosh, James – Scandinavian Journal of Educational Research, 2019
This article examines whether the way that PISA models item outcomes in mathematics affects the validity of its country rankings. As an alternative to the PISA methodology, a two-parameter model is applied to PISA mathematics item data from Canada and Finland for the year 2012. In the estimation procedure, item difficulty and dispersion parameters are…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
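The two-parameter (2PL) model contrasted with the PISA methodology above adds a per-item discrimination parameter to the Rasch-style difficulty parameter. A minimal sketch of the 2PL item response function (illustrative only; not the estimation procedure used in the article):

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT: probability of a correct response.

    theta : examinee ability
    a     : item discrimination (slope)
    b     : item difficulty (location)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty, the probability is 0.5
# regardless of the discrimination parameter.
print(p_correct_2pl(0.0, a=1.5, b=0.0))  # 0.5
```

Constraining every item's discrimination a to a common constant recovers the one-parameter (Rasch-type) scaling that underlay earlier PISA cycles, which is why the comparison is informative for the ranking question.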
Peer reviewed
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available from Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study compared four differential item functioning (DIF) methods to examine their performance in accounting for dual dependency (i.e., person and item clustering effects) simultaneously via a simulation study, a question that has not been sufficiently studied in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
Peer reviewed
Oon, Pey-Tee; Fan, Xitao – International Journal of Science Education, 2017
Students' attitude towards science (SAS) is often a subject of investigation in science education research, and rating-scale surveys are commonly used to study it. The present study illustrates how Rasch analysis can be used to provide psychometric information about SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an…
Descriptors: Item Response Theory, Psychometrics, Attitude Measures, Rating Scales
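Rasch analysis of a rating scale, as in the study above, typically uses Andrich's rating scale model, in which category probabilities depend on the person location, the item location, and a common set of category thresholds. A minimal sketch of the category probabilities (illustrative parameter values; not the scale analyzed in the study):

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Category probabilities under Andrich's rating scale model.

    theta : person location (attitude level)
    delta : item location
    taus  : category thresholds shared across items
            (len(taus) = number of categories - 1)
    """
    # Cumulative logits: category 0 contributes 0; each step up adds
    # (theta - delta - tau_j) for the threshold it crosses.
    logits, s = [0.0], 0.0
    for t in taus:
        s += theta - delta - t
        logits.append(s)
    expd = [math.exp(v) for v in logits]
    total = sum(expd)
    return [e / total for e in expd]

# A 3-category item: the probabilities sum to 1 across categories.
probs = rating_scale_probs(theta=0.5, delta=0.0, taus=[-1.0, 1.0])
print(round(sum(probs), 12))  # 1.0
```

Rasch analysis then checks, among other things, whether these thresholds are ordered and whether observed category frequencies match the model's expectations.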
Peer reviewed
Verhelst, Norman D. – Scandinavian Journal of Educational Research, 2012
When using IRT models in Educational Achievement Testing, the model is as a rule too simple to catch all the relevant dimensions in the test. It is argued that a simple model may nevertheless be useful but that it can be complemented with additional analyses. Such an analysis, called profile analysis, is proposed and applied to the reading data of…
Descriptors: Multidimensional Scaling, Profiles, Item Response Theory, Achievement Tests
Peer reviewed
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Grosswald, Jules – 1975
Much of the intrinsic wealth of planning and instructional information available from achievement testing programs goes untapped in typical reporting procedures. Large-scale programs reporting only pupil scores and the results of aggregating those scores stop far short of the purposes intended and fail to realize the potential of such information.…
Descriptors: Achievement Tests, Data Analysis, Decision Making, Evaluation Methods
Bejar, Isaac I.; And Others – 1977
The applicability of item characteristic curve (ICC) theory to a multiple choice test item pool used to measure achievement is described. The rationale for attempting to use ICC theory in an achievement framework is summarized, and the adequacy for adaptive testing of a classroom achievement test item pool in a college biology class is studied.…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Biology
Friedman, David – 1972
Arguments which suggest that improved prediction of multiple criteria can be achieved employing pattern scoring of responses, as opposed to conventional methods, are examined. Models for improving prediction of single and multiple criteria were examined. The findings are: (1) simple linear combinations of predictor variables perform as well…
Descriptors: Academic Achievement, Achievement Tests, Factor Analysis, Factor Structure