Showing 736 to 750 of 1,058 results
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Giraud, Gerald; Smith, Russel – Online Submission, 2005
This study examines the effect of item response time across 30 items on ability estimates in a high-stakes computer adaptive graduate admissions examination. Examinees were categorized according to 4 item response time patterns, and the categories were compared in terms of ability estimates. Significant differences between response time patterns…
Descriptors: Reaction Time, Test Items, Time Management, Adaptive Testing
Capar, Nilufer K.; Thompson, Tony; Davey, Tim – 2000
Information provided for computerized adaptive test (CAT) simulees was compared under two conditions on two moderately correlated trait composites, mathematics and reading comprehension. The first condition used information provided by in-scale items alone, while the second condition used information provided by in- and out-of-scale items together…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
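The comparison turns on how much Fisher information each item contributes to the ability estimate. As a rough illustration only (not the authors' simulation design, and with invented item parameters), the information a three-parameter logistic item supplies at a given ability can be computed as below; note that the study weights out-of-scale information through the correlation between the two composites, which this sketch omits.

import numpy as np

def p_3pl(theta, a, b, c):
    # Probability of a correct response under the 3PL model (D = 1.7)
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def info_3pl(theta, a, b, c):
    # Fisher information of a 3PL item at ability theta (Lord's formula)
    p = p_3pl(theta, a, b, c)
    return (1.7 * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

theta = 0.5
math_items = [(1.2, 0.0, 0.20), (0.8, 0.6, 0.15)]   # hypothetical in-scale (a, b, c)
reading_items = [(1.0, -0.3, 0.20)]                 # hypothetical out-of-scale items
in_scale = sum(info_3pl(theta, *it) for it in math_items)
combined = in_scale + sum(info_3pl(theta, *it) for it in reading_items)
print(in_scale, combined)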
Zhu, Renbang; Yu, Feng; Liu, Su – 2002
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Lau, C. Allen; Wang, Tianyou – 2000
This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…
Descriptors: Algorithms, Classification, Computer Assisted Testing, Elementary Secondary Education
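The abstract does not define the Information-Time index, but a natural reading is item information per unit of expected response time, so that informative-but-slow items no longer dominate selection. A minimal sketch under that assumption, with a hypothetical pool; the exposure-rate control and other practical constraints mentioned above are omitted.

import math

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

def select_next(theta_hat, pool, administered):
    # Choose the unused item with the highest information-per-second ratio
    best, best_ratio = None, -1.0
    for item_id, (a, b, seconds) in pool.items():
        if item_id in administered:
            continue
        ratio = info_2pl(theta_hat, a, b) / seconds
        if ratio > best_ratio:
            best, best_ratio = item_id, ratio
    return best

pool = {"i1": (1.2, 0.0, 90), "i2": (0.9, 0.3, 30), "i3": (1.5, -0.2, 120)}
print(select_next(0.0, pool, administered={"i3"}))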
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying students as a master/nonmaster or continuing testing and administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
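Sequential mastery testing is commonly driven by a likelihood-ratio stopping rule. The sketch below uses Wald's sequential probability ratio test under a unidimensional 2PL model with a hypothetical cutscore region; the paper's multidimensional IRT model and Bayesian decision-theoretic machinery are not reproduced here.

import math

def sprt_decision(responses, items, theta_master=0.5, theta_nonmaster=-0.5,
                  alpha=0.05, beta=0.05):
    # Wald's SPRT: classify as master/nonmaster or continue testing
    def p(theta, a, b):
        return 1 / (1 + math.exp(-a * (theta - b)))
    log_lr = 0.0
    for u, (a, b) in zip(responses, items):
        pm = p(theta_master, a, b)
        pn = p(theta_nonmaster, a, b)
        log_lr += u * math.log(pm / pn) + (1 - u) * math.log((1 - pm) / (1 - pn))
    if log_lr >= math.log((1 - beta) / alpha):
        return "master"
    if log_lr <= math.log(beta / (1 - alpha)):
        return "nonmaster"
    return "continue"

items = [(1.0, 0.0), (1.2, 0.2), (0.8, -0.1)]   # hypothetical (a, b) per item
print(sprt_decision([1, 1, 0], items))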
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles: An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Signer, Barbara – Computing Teacher, 1982
Describes a computer program designed to diagnose student arithmetic achievement in the following categories: number concepts, addition, subtraction, multiplication, and division. Capabilities of the program are discussed, including immediate diagnosis, tailored testing, test security (unique tests generated), generative responses (non-multiple-choice),…
Descriptors: Computer Assisted Testing, Computer Programs, Diagnostic Tests, Elementary Secondary Education
Peer reviewed
Ban, Jae-Chun; Hanson, Bradley A.; Yi, Qing; Harris, Deborah J. – Journal of Educational Measurement, 2002
Compared three online pretest calibration scaling methods through simulation: (1) marginal maximum likelihood with one expectation maximization (EM) cycle (OEM) method; (2) marginal maximum likelihood with multiple EM cycles (MEM); and (3) M. Stocking's method B. MEM produced the smallest average total error in parameter estimation; OEM yielded…
Descriptors: Computer Assisted Testing, Error of Measurement, Maximum Likelihood Statistics, Online Systems
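All three methods fit pretest item parameters to responses collected during operational CAT administrations, where abilities are already estimated from the operational items. A heavily simplified sketch of that idea, fitting only a Rasch difficulty by Newton-Raphson with the operational ability estimates treated as fixed and known; OEM, MEM, and Stocking's Method B instead run EM cycles over a latent ability distribution, which this omits.

import math

def calibrate_rasch_difficulty(thetas, responses, n_iter=25):
    # ML estimate of a pretest item's Rasch difficulty, abilities held fixed
    b = 0.0
    for _ in range(n_iter):
        ps = [1 / (1 + math.exp(-(t - b))) for t in thetas]
        score = sum(p - u for p, u in zip(ps, responses))   # d(loglik)/db
        info = sum(p * (1 - p) for p in ps)                 # -d2(loglik)/db2
        b += score / info                                   # Newton-Raphson step
    return b

thetas = [-1.0, -0.2, 0.1, 0.6, 1.3]   # hypothetical operational ability estimates
responses = [0, 0, 1, 1, 1]            # responses to the pretest item
print(calibrate_rasch_difficulty(thetas, responses))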
Peer reviewed
Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
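Where maximum-information selection evaluates each candidate item only at the current point estimate, the likelihood weighted criterion averages item information across abilities, weighted by the likelihood of the responses observed so far. A numerical sketch on a quadrature grid, assuming a 2PL model and invented parameters:

import numpy as np

def p_2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

def likelihood_weighted_info(a, b, grid, likelihood):
    # Item information averaged over the grid, weighted by L(theta)
    p = p_2pl(grid, a, b)
    info = a ** 2 * p * (1 - p)
    weights = likelihood / likelihood.sum()
    return float((weights * info).sum())

grid = np.linspace(-4, 4, 81)
# likelihood of a toy two-item response history: one correct, one incorrect
likelihood = p_2pl(grid, 1.0, 0.0) * (1 - p_2pl(grid, 1.2, 0.5))
pool = [(1.0, -0.5), (1.5, 0.2), (0.8, 1.0)]   # hypothetical (a, b) pairs
best = max(pool, key=lambda it: likelihood_weighted_info(it[0], it[1], grid, likelihood))
print(best)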
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
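This global criterion replaces Fisher information at the point estimate with Kullback-Leibler information accumulated over a neighborhood of the current estimate. A sketch under a 2PL model, with the interval half-width delta treated as a tuning constant and invented values throughout:

import math

def p_2pl(theta, a, b):
    return 1 / (1 + math.exp(-a * (theta - b)))

def kl_item(theta_hat, theta, a, b):
    # KL divergence between the item's response distributions at theta_hat and theta
    p0, p1 = p_2pl(theta_hat, a, b), p_2pl(theta, a, b)
    return p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

def average_global_info(theta_hat, a, b, delta=1.0, steps=50):
    # Average KL information over [theta_hat - delta, theta_hat + delta]
    h = 2 * delta / steps
    grid = [theta_hat - delta + h * (i + 0.5) for i in range(steps)]
    return sum(kl_item(theta_hat, t, a, b) for t in grid) / steps

print(average_global_info(0.0, a=1.2, b=0.3))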
Peer reviewed
Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees on 2 tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
Peer reviewed
Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
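A minimal sketch of the mixed integer programming idea for one stage, assuming the open-source PuLP package and an invented item bank; the paper's constraint set (inter-item dependencies, test composition categories) is considerably richer.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

# hypothetical bank: (information at the routing ability level, seconds to administer)
bank = [(0.9, 60), (1.2, 90), (0.7, 45), (1.0, 75), (1.4, 120), (0.6, 40)]

prob = LpProblem("routing_test", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(bank))]

prob += lpSum(x[i] * bank[i][0] for i in range(len(bank)))          # maximize information
prob += lpSum(x[i] for i in range(len(bank))) == 3                  # fixed test length
prob += lpSum(x[i] * bank[i][1] for i in range(len(bank))) <= 240   # administration time

prob.solve(PULP_CBC_CMD(msg=False))
selected = [i for i in range(len(bank)) if x[i].value() > 0.5]
print(selected)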
Peer reviewed
Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test scores of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with the effects of using a computer-administration calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Armstrong, Ronald D.; And Others – Journal of Educational Statistics, 1994
A network-flow model is formulated for constructing parallel tests based on classical test theory while using test reliability as the criterion. Practitioners can specify a test-difficulty distribution for values of item difficulties as well as test-composition requirements. An empirical study illustrates the reliability of generated tests. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
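The network-flow formulation itself requires an optimization solver; as a deliberately crude stand-in (explicitly not the authors' model), a serpentine deal can spread a sorted item bank across forms so their difficulty distributions stay close:

def deal_parallel_forms(items, n_forms=2):
    # Sort by difficulty, then deal in serpentine order to balance the forms
    forms = [[] for _ in range(n_forms)]
    for rank, item in enumerate(sorted(items, key=lambda it: it["difficulty"])):
        block, pos = divmod(rank, n_forms)
        idx = pos if block % 2 == 0 else n_forms - 1 - pos
        forms[idx].append(item)
    return forms

items = [{"id": i, "difficulty": d}
         for i, d in enumerate([0.20, 0.35, 0.50, 0.55, 0.70, 0.80])]
form_a, form_b = deal_parallel_forms(items)
print([it["id"] for it in form_a], [it["id"] for it in form_b])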