Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 12 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Educational Testing | 12 |
| Simulation | 12 |
| Item Response Theory | 9 |
| Test Items | 7 |
| Models | 5 |
| Correlation | 4 |
| Comparative Analysis | 3 |
| Measurement | 3 |
| Measurement Techniques | 3 |
| Test Construction | 3 |
| Academic Ability | 2 |
Source
| Source | Count |
| --- | --- |
| ProQuest LLC | 12 |
Author
| Author | Count |
| --- | --- |
| Chen, Tzu-An | 1 |
| Cheng, Yi-Ling | 1 |
| Kim, Jihye | 1 |
| Lau, Abigail | 1 |
| McGuire, Leah Walker | 1 |
| O'Neil, Timothy P. | 1 |
| Ozge Ersan Cinar | 1 |
| Smith, Jessalyn | 1 |
| Sukin, Tia M. | 1 |
| Tian, Feng | 1 |
| Topczewski, Anna Marie | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Dissertations/Theses -… | 12 |
Education Level
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 2 |
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). The use of testlets is very common. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
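The entry above defines testlets and computerized adaptive testing only in passing. As a minimal sketch of the adaptive part (not drawn from this dissertation), the snippet below selects the next item from a hypothetical 2PL item pool by maximum Fisher information at the current ability estimate; the pool parameters, function names, and the already-administered set are all invented for illustration.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information for the 2PL model: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf  # mask items already given
    return int(np.argmax(info))

# Hypothetical 20-item pool: discriminations a and difficulties b.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=20)
b = rng.normal(0.0, 1.0, size=20)

item = next_item(theta_hat=0.0, a=a, b=b, administered={3, 7})
print("next item index:", item)
```

In an operational CAT this selection step would alternate with an ability update and would also respect content and exposure constraints, which the sketch deliberately omits.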
Cheng, Yi-Ling – ProQuest LLC, 2016
The present study explored the dimensionality of cognitive structure from two approaches. The first approach used the well-known relation between Visual Spatial Working Memory (VSWM) and calculation to demonstrate multidimensional item response analyses when the true dimensions are unknown. The second approach explored the detectability of dimensions by…
Descriptors: Cognitive Structures, Scores, Correlation, Spatial Ability
Zheng, Chunmei – ProQuest LLC, 2013
Educational and psychological constructs are normally multifaceted: the measured construct is defined and measured by a set of related subdomains. A bifactor model can accurately describe such data with both the measured construct and the related subdomains. However, a limitation of the bifactor model is the orthogonality…
Descriptors: Educational Testing, Measurement Techniques, Test Items, Models
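As a hedged illustration of the structure described above, the sketch below simulates item responses from a simple bifactor model in which every item loads on a general factor and on exactly one specific (subdomain) factor; drawing the latent variables independently reflects the orthogonality the abstract mentions as a limitation. All parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, n_specific = 500, 12, 3
items_per_spec = n_items // n_specific

# Orthogonal latent variables: one general factor, three specific factors.
general = rng.normal(size=n_persons)
specific = rng.normal(size=(n_persons, n_specific))

# Loadings (discriminations): each item loads on the general factor
# and on exactly one specific factor.
a_gen = rng.uniform(0.8, 1.5, size=n_items)
a_spec = rng.uniform(0.5, 1.2, size=n_items)
b = rng.normal(size=n_items)
spec_of_item = np.repeat(np.arange(n_specific), items_per_spec)

# Bifactor 2PL-style response probabilities and simulated 0/1 responses.
eta = a_gen * general[:, None] + a_spec * specific[:, spec_of_item] - b
prob = 1.0 / (1.0 + np.exp(-eta))
responses = rng.binomial(1, prob)
print(responses.shape)  # (500, 12)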
Topczewski, Anna Marie – ProQuest LLC, 2013
Developmental score scales represent the performance of students along a continuum, where, as students learn more, they move higher along that continuum. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method to create developmental score scales. Research has shown that UIRT vertical scaling methods can be…
Descriptors: Item Response Theory, Scaling, Scores, Student Development
Sukin, Tia M. – ProQuest LLC, 2010
The presence of outlying anchor items is an issue faced by many testing agencies. The decision to retain or remove an item is a difficult one, especially when item removal calls the content representation of the anchor set into question. Additionally, the reason for the aberrancy is not always clear, and if the performance of the…
Descriptors: Simulation, Science Achievement, Sampling, Data Analysis
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items, in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
Chen, Tzu-An – ProQuest LLC, 2010
This simulation study compared the performance of two multilevel measurement testlet (MMMT) models: Beretvas and Walker's (2008) two-level MMMT model and Jiao, Wang, and Kamata's (2005) three-level model. Several conditions were manipulated (including testlet length, sample size, and the pattern of the testlet effects) to assess the impact on the…
Descriptors: Simulation, Item Response Theory, Comparative Analysis, Models
O'Neil, Timothy P. – ProQuest LLC, 2010
With scant research to draw upon with respect to the maintenance of vertical scales over time, decisions about the creation and performance of those scales necessarily suffer from the lack of information. Undetected item parameter drift (IPD) presents one of the greatest threats to scale maintenance within an item response theory…
Descriptors: Scaling, Measures (Individuals), Item Response Theory, Educational Assessment
Kim, Jihye – ProQuest LLC, 2010
In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present, and a high probability of making such an error can weaken the validity of the assessment…
Descriptors: Test Bias, Test Length, Simulation, Testing
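The abstract gives the working definition of a Type I error rate in a DIF simulation: the proportion of truly non-DIF items that are nevertheless flagged. A minimal sketch of that bookkeeping, with a placeholder flagging rule standing in for a real DIF detection procedure, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_replications = 40, 100

# Ground truth used to generate the data: no item has DIF.
truly_dif = np.zeros(n_items, dtype=bool)

# Stand-in for a DIF detection procedure: each non-DIF item is flagged
# with probability 0.05 per replication (the nominal alpha level).
flags = rng.random((n_replications, n_items)) < 0.05

# Type I error rate: proportion of non-DIF items flagged, over replications.
type_i_rate = flags[:, ~truly_dif].mean()
print(f"empirical Type I error rate: {type_i_rate:.3f}")
```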
McGuire, Leah Walker – ProQuest LLC, 2010
Growth modeling using longitudinal data seems to be a promising direction for improving the methodology associated with the accountability movement. Longitudinal modeling requires that the measurements of ability be comparable over time and on the same scale. One way to create the vertical scale is through concurrent estimation with…
Descriptors: Simulation, Information Management, Personality, Measures (Individuals)
Smith, Jessalyn – ProQuest LLC, 2009
Currently, standardized tests are widely used as a method to measure how well schools and students meet academic standards. As a result, measurement issues have become an increasingly popular topic of study. Unidimensional item response models are used to model latent abilities and specific item characteristics. This class of models makes…
Descriptors: Item Response Theory, Models, Educational Testing, Guessing (Tests)
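Given the Guessing (Tests) descriptor above, a short sketch of the three-parameter logistic (3PL) model may help: a unidimensional IRT model that adds a lower-asymptote guessing parameter c to the discrimination a and difficulty b of the 2PL. The parameter values below are illustrative only.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL response probability: c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)              # ability grid
print(p_3pl(theta, a=1.2, b=0.5, c=0.2))   # probabilities bounded below by c = 0.2
```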
Lau, Abigail – ProQuest LLC, 2009
Test-takers can be required to complete a test form, but cannot be forced to demonstrate their knowledge. Even if an authority mandates completion of a test, examinees can still opt to enter responses randomly. When a test has important consequences for individuals, examinees are unlikely to behave this way. However, random responding becomes more…
Descriptors: Test Items, Simulation, Item Response Theory, Academic Ability
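As a small illustrative sketch (not this dissertation's actual design), random responding can be injected into simulated multiple-choice data by replacing model-based responses with chance-level guesses for a subset of examinees; the proportions and model below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items, n_options = 200, 30, 4

# Model-based responses under a simple Rasch-like model.
theta = rng.normal(size=n_persons)
b = rng.normal(size=n_items)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
responses = rng.binomial(1, prob)

# Make 10% of examinees pure random responders: each item is answered
# correctly with chance probability 1 / n_options.
random_mask = rng.random(n_persons) < 0.10
responses[random_mask] = rng.binomial(1, 1.0 / n_options,
                                      size=(random_mask.sum(), n_items))

print("mean score, model-based :", responses[~random_mask].mean().round(3))
print("mean score, random      :", responses[random_mask].mean().round(3))
```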
