Publication Date
| Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 4 |
| Since 2022 (last 5 years) | 16 |
| Since 2017 (last 10 years) | 35 |
| Since 2007 (last 20 years) | 59 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Accuracy | 59 |
| Test Length | 59 |
| Test Items | 38 |
| Item Response Theory | 35 |
| Sample Size | 23 |
| Computer Assisted Testing | 21 |
| Computation | 20 |
| Adaptive Testing | 18 |
| Simulation | 15 |
| Correlation | 14 |
| Classification | 13 |
Author
| Author | Records |
| --- | --- |
| Cheng, Ying | 4 |
| Wang, Chun | 3 |
| Baris Pekmezci, Fulya | 2 |
| Bradshaw, Laine | 2 |
| He, Wei | 2 |
| Huggins-Manley, Anne Corinne | 2 |
| Hull, Michael M. | 2 |
| Lathrop, Quinn N. | 2 |
| Mae, Naohiro | 2 |
| Svetina, Dubravka | 2 |
| Wolfe, Edward W. | 2 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 46 |
| Journal Articles | 45 |
| Dissertations/Theses -… | 10 |
| Reports - Evaluative | 3 |
| Speeches/Meeting Papers | 2 |
| Numerical/Quantitative Data | 1 |
Assessments and Surveys
| Assessment or Survey | Records |
| --- | --- |
| Program for International… | 2 |
| Force Concept Inventory | 1 |
| MacArthur Communicative… | 1 |
| National Assessment of… | 1 |
| Trends in International… | 1 |
Svetina, Dubravka – Educational and Psychological Measurement, 2013
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in noncompensatory multidimensional item response models using dimensionality assessment procedures based on DETECT (dimensionality evaluation to enumerate contributing traits) and NOHARM (normal ogive harmonic analysis robust method). Five…
Descriptors: Item Response Theory, Statistical Analysis, Computation, Test Length
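A minimal sketch of the kind of data such dimensionality studies simulate: item responses under a noncompensatory (partially compensatory) two-dimensional IRT model, where the probability of success is the product of per-dimension terms. Sample size, test length, and all parameter values below are illustrative assumptions, not those of the study.

```python
# Sketch: simulate responses under a noncompensatory two-dimensional IRT model.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_dims = 1000, 30, 2

theta = rng.multivariate_normal(np.zeros(n_dims), np.eye(n_dims), size=n_persons)
a = rng.uniform(0.8, 2.0, size=(n_items, n_dims))   # discriminations per dimension
b = rng.normal(0.0, 1.0, size=(n_items, n_dims))    # difficulties per dimension

def p_correct(theta, a, b):
    """Noncompensatory model: the product of per-dimension logistic terms,
    so a deficit on one trait cannot be offset by strength on the other."""
    logits = a[None, :, :] * (theta[:, None, :] - b[None, :, :])   # (persons, items, dims)
    return np.prod(1.0 / (1.0 + np.exp(-logits)), axis=2)

p = p_correct(theta, a, b)
responses = (rng.uniform(size=p.shape) < p).astype(int)           # 0/1 score matrix
print(responses.shape, responses.mean())
```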
Wang, Chun – Educational and Psychological Measurement, 2013
Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weaknesses of each examinee, whereas CAT algorithms choose items to determine those…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
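A minimal sketch of the classification step at the heart of cognitive diagnosis: updating the posterior over attribute-mastery profiles under the DINA model after each response, which is essentially what a CD-CAT engine does between item selections. The Q-matrix, guess/slip parameters, and response string are illustrative assumptions.

```python
# Sketch: DINA-model posterior classification of an attribute-mastery profile.
import itertools
import numpy as np

K = 3                                                       # number of attributes
profiles = np.array(list(itertools.product([0, 1], repeat=K)))   # 2^K candidate profiles

Q = np.array([[1, 0, 0],                                    # attributes each item requires
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
guess = np.full(len(Q), 0.15)     # P(correct | missing a required attribute)
slip  = np.full(len(Q), 0.10)     # P(incorrect | all required attributes mastered)

posterior = np.full(len(profiles), 1.0 / len(profiles))     # uniform prior over profiles

responses = [1, 0, 1, 1]                                    # observed item scores, in order
for item, x in enumerate(responses):
    eta = np.all(profiles >= Q[item], axis=1)               # 1 if profile has every required attribute
    p = np.where(eta, 1.0 - slip[item], guess[item])
    posterior *= p if x == 1 else (1.0 - p)
    posterior /= posterior.sum()

print("most likely profile:", profiles[np.argmax(posterior)])
```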
Lee, Eunjung – ProQuest LLC, 2013
The purpose of this research was to compare the equating performance of various equating procedures for multidimensional tests. To examine these procedures, simulated data sets were used that were generated under a multidimensional item response theory (MIRT) framework. Various equating procedures were examined, including…
Descriptors: Equated Scores, Tests, Comparative Analysis, Item Response Theory
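For contrast with the MIRT-based procedures the study examines (which go well beyond a short sketch), here is a simple unidimensional linear observed-score equating baseline on simulated scores; the score distributions and test length are illustrative assumptions.

```python
# Sketch: linear observed-score equating by matching means and SDs across forms.
import numpy as np

rng = np.random.default_rng(9)
scores_x = rng.binomial(40, 0.55, 2000)        # new form raw scores (simulated)
scores_y = rng.binomial(40, 0.60, 2000)        # reference form raw scores (simulated)

def linear_equate(x, x_scores, y_scores):
    """Map a score on form X to the form-Y scale by matching means and SDs."""
    slope = y_scores.std() / x_scores.std()
    return y_scores.mean() + slope * (x - x_scores.mean())

for raw in (15, 22, 30):
    print(f"form X score {raw} -> form Y equivalent {linear_equate(raw, scores_x, scores_y):.1f}")
```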
Su, Yu-Lan – ProQuest LLC, 2013
This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…
Descriptors: Models, Diagnostic Tests, Cognitive Processes, Thinking Skills
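A minimal sketch of the two ideas the abstract combines: the DINO ("or") gate and an attribute hierarchy that restricts the set of permissible mastery profiles. The linear hierarchy and parameter values below are assumed for illustration and do not reproduce the models' estimation machinery.

```python
# Sketch: DINO ("or") item response probabilities over profiles that respect an
# assumed linear attribute hierarchy A1 -> A2 -> A3.
import itertools
import numpy as np

K = 3
all_profiles = np.array(list(itertools.product([0, 1], repeat=K)))

# Keep only hierarchy-consistent profiles: an attribute can be mastered only if
# its prerequisite is mastered (A1 before A2, A2 before A3).
consistent = [p for p in all_profiles if p[1] <= p[0] and p[2] <= p[1]]
print("permissible profiles:", consistent)      # 000, 100, 110, 111

q = np.array([1, 1, 0])                         # attributes probed by one illustrative item
guess, slip = 0.2, 0.1

for profile in consistent:
    omega = int(np.any((profile == 1) & (q == 1)))   # DINO: any one required attribute suffices
    p_correct = (1 - slip) if omega else guess
    print(profile, round(p_correct, 2))
```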
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with a sufficient number of good-quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but so are the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
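A minimal sketch of why pool composition matters: maximum-information item selection under the 2PL from a simulated pool, where the items actually chosen depend directly on the distribution of item parameters. Pool size, parameter distributions, and test length are illustrative assumptions.

```python
# Sketch: maximum-Fisher-information item selection from a simulated 2PL pool.
import numpy as np

rng = np.random.default_rng(1)
pool_a = rng.lognormal(mean=0.0, sigma=0.3, size=500)    # discriminations
pool_b = rng.normal(0.0, 1.0, size=500)                  # difficulties

def fisher_info(theta, a, b):
    """2PL Fisher information: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

theta_hat, administered = 0.4, set()
for step in range(10):                                    # administer a 10-item CAT
    info = fisher_info(theta_hat, pool_a, pool_b)
    info[list(administered)] = -np.inf                    # never reuse an item
    administered.add(int(np.argmax(info)))

print("items chosen for theta =", theta_hat, ":", sorted(administered))
```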
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Analysis
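A minimal sketch of content-constrained selection in the spirit of a weighted-deviation approach: item information is traded against departures from content-area targets. The weights, targets, and simulated pool below are illustrative assumptions, not the specific procedures compared in the study.

```python
# Sketch: item selection balancing information against content-area deficits.
import numpy as np

rng = np.random.default_rng(2)
n_pool = 200
a = rng.lognormal(0.0, 0.3, n_pool)
b = rng.normal(0.0, 1.0, n_pool)
content = rng.integers(0, 3, n_pool)           # each item belongs to one of 3 content areas
target = np.array([8, 6, 6])                   # desired items per area on a 20-item test
weight = 2.0                                    # penalty weight per unit of unmet target

def info(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta_hat, chosen, counts = 0.0, [], np.zeros(3)
for _ in range(20):
    deficit = np.maximum(target - counts, 0)            # areas still short of their target
    score = info(theta_hat) + weight * deficit[content] # favor items from short areas
    score[chosen] = -np.inf                             # never reuse an item
    best = int(np.argmax(score))
    chosen.append(best)
    counts[content[best]] += 1

print("content counts after selection:", counts, "targets:", target)
```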
Md Desa, Zairul Nor Deana – ProQuest LLC, 2012
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores and to obtain subscore reliability and classification. Both the compensatory and partially compensatory MIRT…
Descriptors: Item Response Theory, Computation, Reliability, Classification
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
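A minimal sketch of how such stochastically censored data arise: items ordered by difficulty are administered until a fixed number of consecutive incorrect answers, so the missingness in later items depends on ability. The Rasch model and the discontinue rule used here are illustrative assumptions.

```python
# Sketch: generate stochastically censored responses from a discontinue rule.
import numpy as np

rng = np.random.default_rng(3)
difficulties = np.sort(rng.normal(0.0, 1.0, 40))     # items ordered easy -> hard
stop_after = 3                                        # stop after 3 consecutive misses

def administer(theta):
    responses = np.full(len(difficulties), np.nan)    # NaN = never administered (censored)
    misses = 0
    for i, b in enumerate(difficulties):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))        # Rasch response probability
        x = int(rng.uniform() < p)
        responses[i] = x
        misses = 0 if x else misses + 1
        if misses == stop_after:
            break                                     # everything after this point is missing
    return responses

data = np.array([administer(t) for t in rng.normal(0.0, 1.0, 500)])
print("proportion of censored (never administered) responses:", np.isnan(data).mean())
```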
Wyse, Adam E.; Hao, Shiqi – Applied Psychological Measurement, 2012
This article introduces two new classification consistency indices that can be used when item response theory (IRT) models have been applied. The new indices are shown to be related to Rudner's classification accuracy index and Guo's classification accuracy index. The Rudner- and Guo-based classification accuracy and consistency indices are…
Descriptors: Item Response Theory, Classification, Accuracy, Reliability
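A minimal sketch of a Rudner-style classification accuracy index: given IRT ability estimates and their conditional standard errors, it averages the normal-model probability that the true ability falls in the same category as the point estimate. The estimates, standard errors, and cut score below are illustrative assumptions.

```python
# Sketch: Rudner-style classification accuracy from ability estimates and SEs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
theta_hat = rng.normal(0.0, 1.0, 1000)          # ability estimates
se = rng.uniform(0.25, 0.40, 1000)              # conditional standard errors
cuts = np.array([-np.inf, 0.5, np.inf])         # one cut score -> fail/pass categories

def rudner_accuracy(theta_hat, se, cuts):
    """Mean probability that true ability falls in the same category as its
    estimate, assuming normally distributed estimation error."""
    cat = np.searchsorted(cuts, theta_hat) - 1
    lower, upper = cuts[cat], cuts[cat + 1]
    p_same = norm.cdf((upper - theta_hat) / se) - norm.cdf((lower - theta_hat) / se)
    return p_same.mean()

print("expected classification accuracy:", round(rudner_accuracy(theta_hat, se, cuts), 3))
```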
Shin, Chingwei David; Chien, Yuehmei; Way, Walter Denny – Pearson, 2012
Content balancing is one of the most important components of computerized adaptive testing (CAT), especially in K-12 large-scale tests, where a complex constraint structure is required to cover a broad spectrum of content. The purpose of this study is to compare the weighted penalty model (WPM) and the weighted deviation method (WDM) under…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Test Content, Models
Fu, Qiong – ProQuest LLC, 2010
This research investigated how the accuracy of person ability and item difficulty parameter estimation varied across five IRT models with respect to the presence of guessing, targeting, and varied combinations of sample sizes and test lengths. The data were simulated with 50 replications under each of the 18 combined conditions. Five IRT models…
Descriptors: Item Response Theory, Guessing (Tests), Accuracy, Computation
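A minimal sketch of the data-generation step in a parameter-recovery study of this kind, here under a 3PL model with a nonzero guessing parameter and crossed sample-size and test-length conditions. All distributions and condition levels are illustrative assumptions, not the study's 18 conditions.

```python
# Sketch: simulate 3PL response data across sample-size x test-length conditions.
import numpy as np

rng = np.random.default_rng(5)

def simulate_3pl(n_persons, n_items, c_mean=0.2):
    theta = rng.normal(0.0, 1.0, n_persons)
    a = rng.lognormal(0.0, 0.3, n_items)            # discrimination
    b = rng.normal(0.0, 1.0, n_items)               # difficulty
    c = np.full(n_items, c_mean)                    # lower asymptote (guessing)
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(int)

for n in (250, 500, 1000):
    for length in (20, 40):
        data = simulate_3pl(n, length)
        print(f"N={n:5d}, items={length:2d}, mean p-value={data.mean():.3f}")
```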
Deng, Nina – ProQuest LLC, 2011
Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, the Lee method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…
Descriptors: Item Response Theory, Test Theory, Computation, Classification
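A minimal sketch of a simulation-based benchmark for decision consistency and accuracy (not the LL, Lee, or HH methods themselves): two parallel administrations are simulated from the same true abilities and classification agreement is computed. The Rasch model, test length, and cut score are illustrative assumptions.

```python
# Sketch: simulated "true" decision consistency and accuracy for a pass/fail cut.
import numpy as np

rng = np.random.default_rng(8)
n_persons, n_items, cut = 5000, 40, 22          # pass if raw score >= cut

theta = rng.normal(0.0, 1.0, n_persons)
b = rng.normal(0.0, 1.0, n_items)

def raw_scores(theta, b):
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).sum(axis=1)

form1 = raw_scores(theta, b) >= cut              # first administration
form2 = raw_scores(theta, b) >= cut              # independent parallel administration

expected = (1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))).sum(axis=1)
true_status = expected >= cut                    # "true" status from expected scores

print("decision consistency:", (form1 == form2).mean())
print("decision accuracy:  ", (form1 == true_status).mean())
```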
Oranje, Andreas; Li, Deping; Kandathil, Mathew – ETS Research Report Series, 2009
Several complex sample standard error estimators based on linearization and resampling for the latent regression model of the National Assessment of Educational Progress (NAEP) are studied with respect to design choices such as number of items, number of regressors, and the efficiency of the sample. This paper provides an evaluation of the extent…
Descriptors: Error of Measurement, Computation, Regression (Statistics), National Competency Tests
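A minimal sketch of one resampling-based variance estimator of the kind compared in the report: a delete-one-group jackknife standard error for a regression slope. The data, grouping, and model here are illustrative assumptions, not NAEP's latent regression model.

```python
# Sketch: delete-one-group jackknife standard error for an OLS slope.
import numpy as np

rng = np.random.default_rng(6)
n, n_groups = 2000, 20
group = rng.integers(0, n_groups, n)             # e.g., sampled clusters
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

full = slope(x, y)
jack = np.array([slope(x[group != g], y[group != g]) for g in range(n_groups)])
# Delete-one-group jackknife variance: scaled sum of squared deviations.
var_jk = (n_groups - 1) / n_groups * np.sum((jack - jack.mean())**2)
print(f"slope={full:.3f}, jackknife SE={np.sqrt(var_jk):.3f}")
```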
Qian, Hong – ProQuest LLC, 2013
This dissertation includes three essays: one essay focuses on the effect of teacher preparation programs on teacher knowledge, while the other two focus on test-takers' response times on test items. Essay One addresses the problem of how opportunities to learn in teacher preparation programs influence future elementary mathematics teachers'…
Descriptors: Teacher Education Programs, Pedagogical Content Knowledge, Preservice Teacher Education, Preservice Teachers
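A minimal sketch of a lognormal response-time model of the kind commonly used in response-time research (log time equals item time intensity minus person speed plus noise); whether this dissertation uses this particular model is not stated here, and all parameter values are assumptions.

```python
# Sketch: generate item response times from a lognormal response-time model.
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 500, 30
tau = rng.normal(0.0, 0.3, n_persons)        # person speed (higher = faster)
beta = rng.normal(4.0, 0.5, n_items)         # item time intensity (log seconds)
sigma = 0.4                                   # residual SD on the log scale

log_t = beta[None, :] - tau[:, None] + rng.normal(0.0, sigma, (n_persons, n_items))
times = np.exp(log_t)                         # response times in seconds
print("median response time (s):", round(np.median(times), 1))
```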
