Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 5
Descriptor
  Computer Assisted Testing: 11
  Test Format: 11
  Test Items: 7
  Test Construction: 6
  Adaptive Testing: 5
  Comparative Analysis: 4
  Item Response Theory: 3
  Multiple Choice Tests: 3
  Scores: 3
  Automation: 2
  Comparative Testing: 2
Source
  Journal of Educational Measurement: 11
Author
  van der Linden, Wim J.: 2
  Bergstrom, Betty A.: 1
  Borglum, Joshua: 1
  Braun, Henry I.: 1
  Bridgeman, Brent: 1
  Cai, Yan: 1
  Chang, Hua-Hua: 1
  Diao, Qi: 1
  Douglas, Jeff: 1
  Jodoin, Michael G.: 1
  Jones, Paul: 1
Publication Type
  Journal Articles: 11
  Reports - Research: 8
  Reports - Descriptive: 2
  Reports - Evaluative: 1
Assessments and Surveys
  Advanced Placement…: 1
  Graduate Record Examinations: 1

Jones, Paul; Tong, Ye; Liu, Jinghua; Borglum, Joshua; Primoli, Vince – Journal of Educational Measurement, 2022
This article studied two methods to detect mode effects in two credentialing exams. In Study 1, we used a "modal scale comparison approach," in which the same pool of items was calibrated separately, without transformation, within two test-center (TC) cohorts (TC1 and TC2) and one online-proctored (OP) cohort (OP1) matched on their pool-based scale score distributions. The…
Descriptors: Scores, Credentials, Licensing Examinations (Professions), Computer Assisted Testing

Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step in the typical process of developing educational and psychological tests is to lay the selected test items out as a formatted test form. This step involves grouping and ordering the items to meet a variety of formatting constraints. Because this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
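
The grouping-and-ordering task described above maps naturally onto a small assignment model. The sketch below is a minimal illustration of that kind of formulation, not the authors' actual model: it uses the open-source PuLP solver, and the item set, the shared-stimulus pair, and the lead-off objective are all invented for the example.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD, value

    items, slots = range(4), range(4)
    x = {(i, p): LpVariable(f"x_{i}_{p}", cat=LpBinary) for i in items for p in slots}
    # linear expression for the slot index assigned to each item
    pos = {i: lpSum(p * x[i, p] for p in slots) for i in items}

    prob = LpProblem("form_layout", LpMinimize)
    prob += pos[2]                       # objective: put lead-off item 2 as early as possible
    for i in items:                      # each item fills exactly one slot
        prob += lpSum(x[i, p] for p in slots) == 1
    for p in slots:                      # each slot holds exactly one item
        prob += lpSum(x[i, p] for i in items) == 1
    prob += pos[0] - pos[1] <= 1         # items 0 and 1 share a stimulus:
    prob += pos[1] - pos[0] <= 1         # keep them in adjacent slots

    prob.solve(PULP_CBC_CMD(msg=0))
    print(sorted(items, key=lambda i: value(pos[i])))  # e.g., [2, 0, 1, 3]
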
Liu, Shuchang; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2018
This study applied the mode of on-the-fly assembled multistage adaptive testing to cognitive diagnosis (CD-OMST). Several module assembly methods for CD-OMST were proposed and compared in terms of measurement precision, test security, and constraint management. The module assembly methods in the study included the maximum priority index…
Descriptors: Adaptive Testing, Monte Carlo Methods, Computer Security, Clinical Diagnosis
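
One of the methods named here, the maximum priority index, down-weights an item's (or module's) information by how much room remains under each content constraint. A minimal single-item version of that idea, with invented quotas, might look like:

    def priority_index(info, counts_toward, quotas, used):
        """MPI-style priority: information scaled by remaining-quota fractions.

        info          -- item information at the current ability estimate
        counts_toward -- set of constraint labels this item counts against
        quotas        -- dict: constraint label -> max items allowed
        used          -- dict: constraint label -> items already administered
        """
        pi = info
        for k in counts_toward:
            pi *= (quotas[k] - used[k]) / quotas[k]  # shrinks toward 0 as a quota fills
        return pi

    quotas = {"algebra": 3, "geometry": 2}
    used = {"algebra": 2, "geometry": 0}
    # a less informative geometry item now outranks a nearly quota-filled algebra one
    print(priority_index(1.2, {"algebra"}, quotas, used))   # 0.4
    print(priority_index(1.0, {"geometry"}, quotas, used))  # 1.0
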
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff – Journal of Educational Measurement, 2016
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Although both CAT and MST designs have exhibited strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better, because different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Sequential Approach
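
For readers unfamiliar with the two designs: CAT picks one item at a time from the full bank, while MST routes examinees through pre-assembled modules. A bare-bones CAT loop under the two-parameter logistic model is sketched below; the bank, the true ability, and the test length are invented, and item selection is maximum information with an EAP ability update.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.uniform(0.8, 2.0, 100)            # discriminations (invented bank)
    b = rng.normal(0.0, 1.0, 100)             # difficulties

    def p_2pl(theta, a, b):                   # probability of a correct response
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    grid = np.linspace(-4, 4, 81)
    post = np.exp(-grid**2 / 2)               # standard-normal prior on ability
    used, theta_hat, true_theta = [], 0.0, 1.0

    for _ in range(20):
        p = p_2pl(theta_hat, a, b)
        info = a**2 * p * (1 - p)             # Fisher information at theta_hat
        info[used] = -np.inf                  # never reuse an item
        j = int(np.argmax(info))              # maximum-information selection
        used.append(j)
        u = rng.random() < p_2pl(true_theta, a[j], b[j])       # simulated answer
        post *= p_2pl(grid, a[j], b[j]) if u else 1 - p_2pl(grid, a[j], b[j])
        theta_hat = float(np.sum(grid * post) / np.sum(post))  # EAP update

    print(f"estimate {theta_hat:.2f} for true ability {true_theta}")
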
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
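
The selection side of ATA is the textbook MIP: maximize the information the chosen items deliver at a target ability, subject to length and content constraints. A toy instance, with the bank, quotas, and solver choice all invented for illustration, again using PuLP:

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

    # invented bank: item id -> (information at target ability, content area)
    bank = {1: (0.9, "algebra"), 2: (1.4, "algebra"), 3: (0.7, "geometry"),
            4: (1.1, "geometry"), 5: (1.3, "algebra")}

    pick = {i: LpVariable(f"pick_{i}", cat=LpBinary) for i in bank}
    prob = LpProblem("ata_selection", LpMaximize)
    prob += lpSum(info * pick[i] for i, (info, _) in bank.items())  # total information
    prob += lpSum(pick.values()) == 3                               # test length
    prob += lpSum(pick[i] for i in bank if bank[i][1] == "geometry") >= 1

    prob.solve(PULP_CBC_CMD(msg=0))
    print([i for i in bank if pick[i].value() == 1])                # [2, 4, 5]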

Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees on two tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
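
The comparison rests on item information functions. For a three-parameter logistic item, the information at ability theta has the closed form below; a short check of it (parameter values invented) makes the "more information" comparison concrete. Polytomous innovative items scored with the graded response model would instead sum information across category boundaries.

    import numpy as np

    def info_3pl(theta, a, b, c):
        """Fisher information of a 3PL item: a^2 * (q/p) * ((p - c) / (1 - c))^2."""
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

    # guessing (c > 0) costs information near and below the difficulty b
    print(info_3pl(theta=0.0, a=1.3, b=0.0, c=0.20))  # ~0.28
    print(info_3pl(theta=0.0, a=1.3, b=0.0, c=0.00))  # ~0.42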

Lunz, Mary E.; Bergstrom, Betty A. – Journal of Educational Measurement, 1994
The impact of computerized adaptive test (CAT) administration formats on student performance was studied with 645 medical technology students who also took a paper-and-pencil test. Analysis of covariance indicates no significant interactions among test administration formats and provides evidence for adjusting the CAT to more familiar modalities…
Descriptors: Academic Achievement, Adaptive Testing, Analysis of Covariance, Computer Assisted Testing
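
The analysis named here, ANCOVA with the paper-and-pencil score as the covariate, is straightforward to reproduce in outline. A sketch with simulated data follows; the format labels, sample size, and effect sizes are all invented, and statsmodels' formula interface does the work.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    n = 120
    df = pd.DataFrame({
        "fmt": rng.choice(["review_allowed", "no_review"], n),  # invented formats
        "paper": rng.normal(50, 10, n),                         # paper-and-pencil covariate
    })
    # CAT score depends on the covariate but not on format, as in a null result
    df["cat"] = 0.8 * df["paper"] + rng.normal(0, 5, n)

    model = smf.ols("cat ~ C(fmt) + paper", data=df).fit()
    print(anova_lm(model, typ=2))   # F-test for fmt should be non-significant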

Spray, Judith A.; And Others – Journal of Educational Measurement, 1989
Findings of studies examining the effects of presentation media on item characteristics are reviewed. The effect of medium of presentation independent of adaptive methodology in computer-assisted testing was studied through tests of 763 Marine trainees. Conditions necessary for score equivalence between item presentation media are described. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Military Personnel

Braun, Henry I.; And Others – Journal of Educational Measurement, 1990
The accuracy with which expert systems (ESs) score a new non-multiple-choice, free-response test item was investigated, using 734 high school students who were administered an Advanced Placement computer science examination. ESs produced scores for 82 to 95 percent of the responses and displayed high agreement with a human reader on the…
Descriptors: Advanced Placement, Computer Assisted Testing, Computer Science, Constructed Response
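
Agreement between an automated scorer and a human reader on a small integer scale is typically reported as exact agreement plus a chance-corrected index such as Cohen's kappa. A sketch with simulated ratings follows; the 0-5 scale and the 90 percent agreement rate are invented, chosen only to echo the range reported above.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(2)
    human = rng.integers(0, 6, 200)                  # reader's 0-5 scores
    noise = rng.integers(0, 6, 200)
    machine = np.where(rng.random(200) < 0.9, human, noise)  # ES agrees ~90% of the time

    exact = float(np.mean(machine == human))
    print(f"exact agreement {exact:.2f}, kappa {cohen_kappa_score(human, machine):.2f}")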

Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing

Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT), in which students choose the difficulty level of their test items, was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level