| Publication Date | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 31 |
| Since 2017 (last 10 years) | 79 |
| Since 2007 (last 20 years) | 136 |
| Author | Count |
|---|---|
| Wainer, Howard | 6 |
| Hambleton, Ronald K. | 4 |
| Wang, Wen-Chung | 4 |
| Berk, Ronald A. | 3 |
| Burton, Richard F. | 3 |
| Cohen, Allan S. | 3 |
| Huggins-Manley, Anne Corinne | 3 |
| Lee, Won-Chan | 3 |
| Lee, Yi-Hsuan | 3 |
| Pommerich, Mary | 3 |
| Reckase, Mark D. | 3 |
| Audience | Count |
|---|---|
| Researchers | 9 |
| Administrators | 1 |
| Community | 1 |
| Practitioners | 1 |
| Location | Count |
|---|---|
| Turkey | 2 |
| Alabama | 1 |
| Asia | 1 |
| Australia | 1 |
| Germany | 1 |
| Illinois (Chicago) | 1 |
| Indiana | 1 |
| Iran | 1 |
| Israel | 1 |
| Japan | 1 |
| Netherlands | 1 |
| Laws, Policies, & Programs | Count |
|---|---|
| Job Training Partnership Act… | 1 |
| Race to the Top | 1 |
Sunbul, Onder; Yormaz, Seha – International Journal of Evaluation and Research in Education, 2018
In this study, Type I error and power rates of the omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test-level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Descriptors: Difficulty Level, Cheating, Duplication, Test Length
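As a generic illustration of how empirical Type I error rates are estimated in simulation studies of this kind, the sketch below flags examinee pairs under a no-copying null and reports the flagging rate against the nominal alpha. The standard-normal null index is an assumption standing in for the actual omega and GBT computations, which are not reproduced here.

```python
# Generic sketch: empirical Type I error rate of a copying-detection index.
# The standard-normal null index is an assumption for illustration only;
# it is not the actual omega or GBT computation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2018)

def empirical_type_i_error(n_pairs=10_000, alpha=0.05):
    # Index values for examinee pairs with no actual copying (the null case).
    null_index = rng.standard_normal(n_pairs)
    # One-sided flagging rule at the nominal alpha level.
    flags = null_index > norm.ppf(1 - alpha)
    return flags.mean()

for alpha in (0.01, 0.05, 0.10):
    print(f"nominal alpha = {alpha:.2f}, empirical rate = "
          f"{empirical_type_i_error(alpha=alpha):.3f}")
```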
Precision of Single-Skill Math CBM Time-Series Data: The Effect of Probe Stratification and Set Size
Solomon, Benjamin G.; Payne, Lexy L.; Campana, Kayla V.; Marr, Erin A.; Battista, Carmela; Silva, Alex; Dawes, Jillian M. – Journal of Psychoeducational Assessment, 2020
Comparatively little research exists on single-skill math (SSM) curriculum-based measurements (CBMs) for the purpose of monitoring growth, as may be done in practice or when monitoring intervention effectiveness within group or single-case research. Therefore, we examined a common variant of SSM-CBM: 1 digit × 1 digit multiplication. Reflecting…
Descriptors: Curriculum Based Assessment, Mathematics Tests, Mathematics Skills, Multiplication
Ames, Allison J.; Leventhal, Brian C.; Ezike, Nnamdi C. – Measurement: Interdisciplinary Research and Perspectives, 2020
Data simulation and Monte Carlo simulation studies are important skills for researchers and practitioners of educational and psychological measurement, but there are few resources on the topic specific to item response theory. Even fewer resources exist on the statistical software techniques to implement simulation studies. This article presents…
Descriptors: Monte Carlo Methods, Item Response Theory, Simulation, Computer Software
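A minimal sketch of the kind of Monte Carlo data generation the article addresses: dichotomous responses simulated under a two-parameter logistic (2PL) IRT model. Sample sizes, parameter distributions, and function names are illustrative choices, not the authors' code.

```python
# Minimal sketch of Monte Carlo data generation under a 2PL IRT model.
# Sample sizes and parameter distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2020)

def simulate_2pl(n_persons=1000, n_items=40):
    theta = rng.standard_normal(n_persons)                 # person abilities
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # item discriminations
    b = rng.standard_normal(n_items)                        # item difficulties
    logits = a * (theta[:, None] - b)                       # 2PL response function
    p = 1.0 / (1.0 + np.exp(-logits))
    return rng.binomial(1, p), theta, a, b                  # 0/1 response matrix

responses, theta, a, b = simulate_2pl()
print(responses.shape, responses.mean())   # overall proportion correct
```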
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
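A hedged sketch of iterative detection and cleansing with a person-fit statistic. The lz statistic and cut-off below are common choices assumed for illustration; the paper's exact statistics, estimation, and decision rules may differ, and `naive_model` is only a crude placeholder for real IRT calibration.

```python
# Hedged sketch: flag aberrant respondents with the lz person-fit statistic,
# then iteratively remove them and re-estimate the model on the kept data.
import numpy as np

def lz_statistic(responses, p):
    """responses: (n_persons, n_items) 0/1 matrix; p: model-implied P(correct)."""
    log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p), axis=1)
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p), axis=1)
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2, axis=1)
    return (log_lik - expected) / np.sqrt(variance)

def naive_model(data):
    # Placeholder "model": observed item proportions, clipped away from 0 and 1.
    item_p = data.mean(axis=0).clip(0.05, 0.95)
    return np.tile(item_p, (data.shape[0], 1))

def iterative_cleanse(responses, cutoff=-1.645, max_iter=10):
    keep = np.ones(responses.shape[0], dtype=bool)
    for _ in range(max_iter):
        p = naive_model(responses[keep])
        lz = lz_statistic(responses[keep], p)
        flagged = lz < cutoff                  # low lz = person misfit
        if not flagged.any():
            break
        keep[np.where(keep)[0][flagged]] = False
    return keep

rng = np.random.default_rng(2019)
data = rng.binomial(1, 0.7, size=(500, 30))    # simulated placeholder responses
print(iterative_cleanse(data).sum(), "respondents retained")
```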
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Manna, Venessa F.; Gu, Lixiong – ETS Research Report Series, 2019
When using the Rasch model, equating with a nonequivalent groups anchor test design is commonly achieved by adjustment of new form item difficulty using an additive equating constant. Using simulated 5-year data, this report compares 4 approaches to calculating the equating constants and the subsequent impact on equating results. The 4 approaches…
Descriptors: Item Response Theory, Test Items, Test Construction, Sample Size
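For context, a minimal illustration of an additive (mean/mean) equating constant under a nonequivalent-groups anchor design, one plausible variant of the approaches the report compares; all item difficulties below are made-up values.

```python
# Illustration of an additive (mean/mean) Rasch equating constant under a
# nonequivalent-groups anchor design. All difficulty values are made up.
import numpy as np

# Difficulty estimates for the common (anchor) items on each form.
b_anchor_base = np.array([-0.8, -0.2, 0.1, 0.6, 1.1])
b_anchor_new = np.array([-0.6, 0.0, 0.3, 0.8, 1.3])

# Additive constant that shifts new-form difficulties onto the base scale.
constant = b_anchor_base.mean() - b_anchor_new.mean()

# Apply the constant to every new-form item difficulty.
b_new_form = np.array([-1.2, -0.5, 0.2, 0.9, 1.5, 0.4])
b_new_on_base_scale = b_new_form + constant
print(round(constant, 3), b_new_on_base_scale)
```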
Qiu, Yuxi; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
This study aimed to assess the accuracy of the empirical item characteristic curve (EICC) preequating method given the presence of test speededness. The simulation design of this study considered the proportion of speededness, speededness point, speededness rate, proportion of missing on speeded items, sample size, and test length. After crossing…
Descriptors: Accuracy, Equated Scores, Test Items, Nonparametric Statistics
Lance M. Kruse – ProQuest LLC, 2019
This study explores six item-reduction methodologies used to shorten an existing complex problem-solving non-objective test by evaluating how each shortened form performs across three sources of validity evidence (i.e., test content, internal structure, and relationships with other variables). Two concerns prompted the development of the present…
Descriptors: Educational Assessment, Comparative Analysis, Test Format, Test Length
Hamby, Tyler – Journal of Psychoeducational Assessment, 2018
In this study, the author examined potential mediators of the negative relationship between the absolute difference in items' lengths and their inter-item correlation size. Fifty-two randomly ordered items from five personality scales were administered to 622 university students, and 46 respondents from a survey website rated the items'…
Descriptors: Correlation, Personality Traits, Undergraduate Students, Difficulty Level
Bao, Yu; Bradshaw, Laine – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) can provide multidimensional diagnostic feedback about students' mastery levels of knowledge components or attributes. One advantage of using DCMs is the ability to accurately and reliably classify students into mastery levels with a relatively small number of items per attribute. Combining DCMs with…
Descriptors: Test Items, Selection, Adaptive Testing, Computer Assisted Testing
Kelly, William E.; Daughtry, Don – College Student Journal, 2018
This study developed an abbreviated form of Barron's (1953) Ego Strength Scale for use in research among college student samples. A version of Barron's scale was administered to 100 undergraduate college students. Using item-total score correlations and internal consistency, the scale was reduced to 18 items (Es18). The Es18 possessed adequate…
Descriptors: Undergraduate Students, Self Concept Measures, Test Length, Scores
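A small sketch of the two statistics named in the abstract, corrected item-total correlations and internal consistency (Cronbach's alpha), applied to simulated placeholder data; the 0.30 retention rule is illustrative, not the authors' criterion.

```python
# Sketch of corrected item-total correlations and Cronbach's alpha on
# simulated placeholder data; the 0.30 retention rule is illustrative.
import numpy as np

rng = np.random.default_rng(1953)
common = rng.standard_normal((100, 1))            # shared trait
items = common + rng.standard_normal((100, 30))   # 100 respondents, 30 items

def corrected_item_total(items):
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]  # item vs. rest of scale
        for j in range(items.shape[1])
    ])

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

r_it = corrected_item_total(items)
keep = r_it > 0.30                                # illustrative cut-off
print(f"alpha on {keep.sum()} retained items: {cronbach_alpha(items[:, keep]):.2f}")
```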
Gu, Lixiong; Ling, Guangming; Qu, Yanxuan – ETS Research Report Series, 2019
Research has found that the "a"-stratified item selection strategy (STR) for computerized adaptive tests (CATs) may lead to insufficient use of high a items at later stages of the tests and thus to reduced measurement precision. A refined approach, unequal item selection across strata (USTR), effectively improves test precision over the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Use, Test Items
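A rough sketch of the basic a-stratified (STR) idea: the item bank is partitioned into strata by the discrimination parameter a, and early stages of the test draw from low-a strata so that high-a items are reserved for later. The bank, strata count, and stage rule below are assumptions for illustration; the USTR refinement is not shown.

```python
# Rough sketch of a-stratified item selection (STR): sort the bank by the
# discrimination parameter a, split it into strata, and draw early test
# stages from low-a strata. Bank size and stage rule are assumptions.
import numpy as np

rng = np.random.default_rng(7)
a_params = rng.lognormal(0.0, 0.4, size=120)     # hypothetical item bank

def build_strata(a_params, n_strata=4):
    order = np.argsort(a_params)                 # low-a items first
    return np.array_split(order, n_strata)       # equal-sized strata of item indices

def stratum_for_stage(stage, test_length=40, n_strata=4):
    # Stage k of the test draws from progressively higher-a strata.
    return min(stage * n_strata // test_length, n_strata - 1)

strata = build_strata(a_params)
for stage in (0, 10, 20, 39):
    s = stratum_for_stage(stage)
    print(f"stage {stage:2d} -> stratum {s}, mean a = {a_params[strata[s]].mean():.2f}")
```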
Chai, Jun Ho; Lo, Chang Huan; Mayor, Julien – Journal of Speech, Language, and Hearing Research, 2020
Purpose: This study introduces a framework to produce very short versions of the MacArthur-Bates Communicative Development Inventories (CDIs) by combining the Bayesian-inspired approach introduced by Mayor and Mani (2019) with an item response theory-based computerized adaptive testing that adapts to the ability of each child, in line with…
Descriptors: Bayesian Statistics, Item Response Theory, Measures (Individuals), Language Skills
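As background, a generic maximum-information item-selection rule for an IRT-based CAT under a 2PL model; the article's CDI-specific procedure also incorporates a Bayesian prior following Mayor and Mani (2019), which this sketch omits, and all parameter values are invented.

```python
# Generic maximum-information item selection for a 2PL-based CAT.
# Parameter values are invented; the CDI procedure's Bayesian prior is omitted.
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(theta_hat, a, b, administered):
    p = p_2pl(theta_hat, a, b)
    info = a ** 2 * p * (1 - p)            # Fisher information at the current theta
    info[list(administered)] = -np.inf     # never re-administer an item
    return int(np.argmax(info))

a = np.array([0.8, 1.2, 1.5, 2.0, 1.0])
b = np.array([-1.0, -0.3, 0.2, 0.5, 1.2])
print(next_item(theta_hat=0.4, a=a, b=b, administered={3}))
```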
Gawliczek, Piotr; Krykun, Viktoriia; Tarasenko, Nataliya; Tyshchenko, Maksym; Shapran, Oleksandr – Advanced Education, 2021
The article deals with an innovative, cutting-edge solution within the language testing realm, namely computer adaptive language testing (CALT) in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements, for further implementation in the foreign language training of personnel of the Armed Forces of Ukraine (AF of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Language Tests, Second Language Instruction
Fu, Jianbin; Feng, Yuling – ETS Research Report Series, 2018
In this study, we propose aggregating test scores with unidimensional within-test structure and multidimensional across-test structure based on a 2-level, 1-factor model. In particular, we compare 6 score aggregation methods: average of standardized test raw scores (M1), regression factor score estimate of the 1-factor model based on the…
Descriptors: Comparative Analysis, Scores, Correlation, Standardized Tests
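The first aggregation method, M1 (the average of standardized test raw scores), is simple enough to show directly; the raw scores below are placeholders.

```python
# M1 from the abstract: average of standardized (z-scored) test raw scores.
# The raw scores below are made-up placeholders on three different scales.
import numpy as np

raw = np.array([
    [512, 48, 21],
    [430, 55, 18],
    [598, 40, 25],
    [475, 51, 19],
], dtype=float)   # rows = examinees, columns = three component tests

z = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)   # standardize each test
m1 = z.mean(axis=1)                                       # aggregate score per examinee
print(np.round(m1, 3))
```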

