| Descriptor | Results |
| --- | --- |
| Computer Assisted Testing | 13 |
| Adaptive Testing | 9 |
| Test Items | 9 |
| Responses | 6 |
| Timed Tests | 6 |
| Algorithms | 4 |
| Test Construction | 4 |
| Ability | 3 |
| Difficulty Level | 3 |
| Item Banks | 3 |
| Item Response Theory | 3 |
| Author | Results |
| --- | --- |
| Schnipke, Deborah L. | 13 |
| Scrams, David J. | 6 |
| Reese, Lynda M. | 4 |
| van der Linden, Wim J. | 3 |
| Luebke, Stephen W. | 1 |
| McLeod, Lori D. | 1 |
| Pashley, Peter J. | 1 |
| Publication Type | Results |
| --- | --- |
| Reports - Research | 6 |
| Speeches/Meeting Papers | 6 |
| Reports - Evaluative | 5 |
| Journal Articles | 2 |
| Reports - Descriptive | 2 |
| Opinion Papers | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Law School Admission Test | 2 |
| Armed Services Vocational… | 1 |
| Graduate Record Examinations | 1 |
Reese, Lynda M.; Schnipke, Deborah L.; Luebke, Stephen W. – 1999
Most large-scale testing programs that adopt computerized adaptive testing (CAT) face the challenge of maintaining extensive content requirements, but content constraints in CAT can compromise the precision and efficiency that a pure maximum-information adaptive testing algorithm could achieve. This…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Simulation
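The trade-off this abstract describes can be made concrete with a toy item-selection loop: a pure maximum-information rule always takes the most informative item, while a content quota can force a less informative pick. A minimal sketch, assuming a 2PL item model; the pool structure, field names, and quota scheme are invented for illustration, not taken from the study.

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta, pool, administered, content_needs):
    """Pick the most informative unused item whose content area
    still has an unmet quota; a pure max-info rule would drop
    the quota check."""
    best, best_info = None, -np.inf
    for i, item in enumerate(pool):
        if i in administered:
            continue
        if content_needs.get(item["area"], 0) <= 0:
            continue  # this area's quota is already filled
        info = item_information(theta, item["a"], item["b"])
        if info > best_info:
            best, best_info = i, info
    return best

pool = [{"a": 1.2, "b": -0.5, "area": "logic"},
        {"a": 0.8, "b": 0.3, "area": "reading"},
        {"a": 1.5, "b": 0.1, "area": "logic"}]
print(select_item(theta=0.0, pool=pool, administered=set(),
                  content_needs={"logic": 1, "reading": 1}))
```

Zeroing out a quota (for example, `content_needs={"logic": 0, "reading": 1}`) forces the less informative reading item, which is the precision cost the abstract refers to.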
Schnipke, Deborah L.; Reese, Lynda M. – 1999
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test taker ability. This study incorporated testlets (bundles of items) into two-stage and multistage designs, and compared the precision of the ability estimates derived from these designs with those derived from a standard computerized adaptive test (CAT)…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
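A two-stage design of the kind compared here routes each test taker from a common stage-one test to a second-stage testlet matched to their score. A minimal sketch; the cutoffs and testlet labels are invented, not values from the study.

```python
import numpy as np

def route(stage1_correct, cutoffs=(5, 10)):
    """Route a test taker to an easy/medium/hard second-stage
    testlet based on the number-correct stage-one score."""
    lo, hi = cutoffs
    if stage1_correct < lo:
        return "easy"
    elif stage1_correct < hi:
        return "medium"
    return "hard"

rng = np.random.default_rng(0)
scores = rng.binomial(n=15, p=0.6, size=5)  # simulated stage-one scores
print([(int(s), route(s)) for s in scores])
```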
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 2003
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Linear Programming
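One way to read the method this abstract sketches: a response-time model predicts how long each candidate item would take the current test taker, and items predicted to overrun the remaining time budget are screened out before the usual information-based selection. The sketch below assumes a lognormal response-time model and a 2PL information criterion; the parameter names and the per-item budget rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def predicted_time(item, speed):
    """Predicted response time under an assumed lognormal model:
    log T ~ Normal(beta_item - speed, sigma^2); return the median
    exp(beta - speed)."""
    return np.exp(item["beta"] - speed)

def select_feasible(theta, speed, pool, used, time_left, items_left):
    """Among items predicted to fit the per-item time budget,
    pick the one with maximum 2PL Fisher information."""
    budget = time_left / max(items_left, 1)
    best, best_info = None, -np.inf
    for i, item in enumerate(pool):
        if i in used or predicted_time(item, speed) > budget:
            continue
        p = 1.0 / (1.0 + np.exp(-item["a"] * (theta - item["b"])))
        info = item["a"]**2 * p * (1.0 - p)
        if info > best_info:
            best, best_info = i, info
    return best

pool = [{"a": 1.0, "b": 0.0, "beta": 4.0},   # ~55 s median at speed 0
        {"a": 1.4, "b": 0.2, "beta": 4.8}]   # ~120 s median at speed 0
print(select_feasible(theta=0.0, speed=0.0, pool=pool, used=set(),
                      time_left=600.0, items_left=8))
```

With 600 seconds left for 8 items, the more informative second item is predicted to overrun the budget, so the first is selected; a slower examinee would see the screening bite harder, which is how the differential effect of the time limit gets neutralized.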
Peer reviewed
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – Applied Psychological Measurement, 1999
Proposes an item-selection algorithm for neutralizing the differential effects of time limits on computerized adaptive test scores. Uses a statistical model for distributions of examinees' response times on items in a bank that is updated each time an item is administered. Demonstrates the method using an item bank from the Armed Services…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
Reese, Lynda M.; Schnipke, Deborah L. – 1999
A two-stage design provides a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and based on their scores, they are routed to tests of different difficulty levels in the second stage. This design provides some of the benefits of standard computer adaptive testing (CAT), such as increased…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
McLeod, Lori D.; Schnipke, Deborah L. – 1999
Because scores on high-stakes tests influence many decisions, tests need to be secure. Decisions based on scores affected by preknowledge of items are unacceptable. New methods are needed to detect the cheating strategies that computer-administered tests make possible, because item pools are typically reused over time, providing the potential opportunity…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, High Stakes Tests
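A simple instance of the kind of detection this abstract points toward: flag responses that are both correct and implausibly fast relative to an item's reference response-time distribution, since preknowledge tends to produce exactly that pattern. A hedged sketch with invented norms and cutoffs, not the paper's method.

```python
import numpy as np

def flag_preknowledge(log_times, correct, mean_log, sd_log, z_cut=-2.0):
    """Flag responses that are both correct and unusually fast
    relative to the item's reference log-time distribution."""
    z = (np.asarray(log_times) - mean_log) / sd_log
    return (z < z_cut) & np.asarray(correct, dtype=bool)

# One examinee's log response times on five items with known norms.
log_times = np.log([12.0, 8.0, 55.0, 6.0, 40.0])
correct   = [1, 1, 1, 1, 0]
mean_log, sd_log = np.log(45.0), 0.5
print(flag_preknowledge(log_times, correct, mean_log, sd_log))
```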
Peer reviewed
Schnipke, Deborah L.; Scrams, David J. – Journal of Educational Measurement, 1997
A method to measure speededness on tests is presented that reflects the tendency of examinees to guess rapidly on items as time expires. The method models response times with a two-state mixture model, as demonstrated with data from a computer-administered reasoning test taken by 7,218 examinees. (SLD)
Descriptors: Adults, Computer Assisted Testing, Guessing (Tests), Item Response Theory
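The two-state mixture idea can be illustrated by fitting a two-component mixture to log response times, which is equivalent to a mixture of lognormals on the raw times: one fast rapid-guessing state and one slower solution-behavior state. The data below are simulated stand-ins, not the 7,218-examinee dataset from the article.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated log response times: a fast rapid-guessing state and a
# slower solution-behavior state.
rng = np.random.default_rng(1)
log_rt = np.concatenate([
    rng.normal(1.0, 0.3, 300),   # rapid guesses (~3 s median)
    rng.normal(3.8, 0.5, 700),   # solution behavior (~45 s median)
]).reshape(-1, 1)

# A two-component Gaussian mixture on log times corresponds to a
# two-state lognormal mixture on the raw times.
gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt)
for mean, weight in zip(gm.means_.ravel(), gm.weights_):
    print(f"state median ~{np.exp(mean):.1f} s, proportion {weight:.2f}")
```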
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 1998
An item-selection algorithm to neutralize the differential effects of time limits on scores on computerized adaptive tests is proposed. The method is based on a statistical model for the response-time distributions of the examinees on items in the pool that is updated each time a new item has been administered. Predictions from the model are used…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Foreign Countries
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
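The question in the title can be posed concretely: estimate a per-examinee accuracy measure and a per-examinee speed measure and correlate them; a modest correlation would suggest the two capture distinct dimensions of performance. A sketch on simulated data, with an assumed relationship between ability and speed that is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
ability = rng.normal(size=n)                  # drives accuracy
speed = 0.3 * ability + rng.normal(size=n)    # only weakly related

prop_correct = 1 / (1 + np.exp(-ability)) + rng.normal(0, 0.05, n)
mean_log_rt = 4.0 - speed + rng.normal(0, 0.2, n)

# If accuracy and speed measured the same thing, this correlation
# would be near 1; a modest value points to separate dimensions.
r = np.corrcoef(prop_correct, -mean_log_rt)[0, 1]
print(f"accuracy-speed correlation: {r:.2f}")
```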
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
Schnipke, Deborah L. – 1995
Time limits on tests often prevent some examinees from finishing all of the items on the test; the extent of this effect has been called the "speededness" of the test. Traditional speededness indices focus on the number of unreached items, but other examinees in the same situation rapidly fill in answers in the hope of getting some of the…
Descriptors: Computer Assisted Testing, Educational Assessment, Evaluation Methods, Guessing (Tests)
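A rapid-guessing-based speededness index, in contrast to an unreached-item count, might measure how many end-of-test responses fall below a response-time threshold. A minimal sketch; the 5-second threshold is an illustrative assumption (in practice it could come from a fitted response-time mixture), not a value from the paper.

```python
import numpy as np

def speededness_index(response_times, threshold=5.0):
    """Fraction of end-of-test responses faster than a
    rapid-guessing threshold (in seconds)."""
    rt = np.asarray(response_times, dtype=float)
    return float(np.mean(rt < threshold))

# Last ten items of a timed test: the final answers come in fast,
# consistent with rapid guessing rather than unreached items.
tail_times = [38.0, 41.0, 29.0, 33.0, 4.0, 3.1, 2.5, 2.8, 1.9, 2.2]
print(speededness_index(tail_times))  # 0.6
```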
Schnipke, Deborah L.; Pashley, Peter J. – 1997
Differences in test performance on time-limited tests may be due in part to differential response-time rates between subgroups, rather than real differences in the knowledge, skills, or developed abilities of interest. With computer-administered tests, response times are available and may be used to address this issue. This study investigates…
Descriptors: Computer Assisted Testing, Data Analysis, English, High Stakes Tests
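A first-pass check for differential response-time rates is to compare the response-time distributions of two subgroups on the same items, for example with a rank-based test. A sketch on simulated log response times; the groups and effect size are invented, not results from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated per-item log response times for two subgroups; a shift
# in location would indicate differential response-time rates.
rng = np.random.default_rng(3)
group_a = rng.normal(3.6, 0.5, 200)
group_b = rng.normal(3.8, 0.5, 200)

stat, p = mannwhitneyu(group_a, group_b)
print(f"U = {stat:.0f}, p = {p:.4f}")
```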
Schnipke, Deborah L.; Scrams, David J. – 1999
The availability of item response times made possible by computerized testing represents an entirely new type of information about test items. This study explores the issue of how to represent response-time information in item banks. Empirical response-time distribution functions can be fit with statistical distribution functions with known…
Descriptors: Adaptive Testing, Admission (School), Arithmetic, College Entrance Examinations
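One concrete way to represent response-time information in an item bank, along the lines this abstract suggests, is to fit a parametric distribution to each item's observed times and store only the fitted parameters. A sketch using a lognormal fit; the bank-record fields are hypothetical.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)
observed_rt = rng.lognormal(mean=3.9, sigma=0.5, size=1000)  # one item

# Fit a lognormal to the item's response times; storing the two
# fitted parameters summarizes the whole empirical distribution.
shape, loc, scale = lognorm.fit(observed_rt, floc=0)
bank_record = {"item_id": "Q123", "rt_sigma": shape, "rt_median": scale}
print(bank_record, "90th pct:", lognorm.ppf(0.9, shape, loc, scale))
```

Any quantile of interest (for example, the time budget checks used in adaptive item selection) can then be recovered from the stored parameters without keeping the raw times.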