Showing 796 to 810 of 1,058 results
Lazarte, Alejandro A. – 1999
Two experiments reproduced, in a simulated computerized test-taking situation, the effects of two main determinants of answering a test item: the item's difficulty and the time available to answer it. A model is proposed for the time to respond to or abandon an item and for the probability of abandoning it or answering it correctly. In…
Descriptors: Computer Assisted Testing, Difficulty Level, Higher Education, Probability
Green, Bert F. – New Directions for Testing and Measurement, 1983
Computerized adaptive testing allows us to create a unique personalized test that matches the ability and knowledge of the test taker. (Author)
Descriptors: Adaptive Testing, Computer Assisted Testing, Individual Needs, Individual Testing
Peer reviewed
Roid, Gale; Haladyna, Tom – Review of Educational Research, 1980
A continuum of item-writing methods is proposed ranging from informal-subjective methods to algorithmic-objective methods. Examples of techniques include objective-based item writing, amplified objectives, item forms, facet design, domain-referenced concept testing, and computerized techniques. (Author/CP)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Criterion Referenced Tests
Peer reviewed
Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
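The abstract leaves the scoring engine unspecified; purely as a hypothetical illustration of scoring open-ended mathematical responses whose correct answers can take many surface forms, a symbolic-equivalence check along these lines (using sympy, not the authors' system) conveys the basic idea.

from sympy import simplify, sympify

def score_open_ended(response: str, key: str) -> bool:
    # Return True if the response is algebraically equivalent to the key.
    # A toy stand-in for automated scoring of open-ended responses whose
    # correct answers take many surface forms; not the study's actual system.
    try:
        return simplify(sympify(response) - sympify(key)) == 0
    except Exception:
        return False  # unparseable responses are scored as incorrect

# Surface variants of the keyed answer score as correct; a wrong answer does not.
for ans in ["2*x + 2", "2*(x + 1)", "x + x + 2", "2*x + 3"]:
    print(ans, "->", score_open_ended(ans, "2*(x + 1)"))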
Peer reviewed
Kobrin, Jennifer L.; Young, John W. – Applied Measurement in Education, 2003
Studied the cognitive equivalence of computerized and paper-and-pencil reading comprehension tests using verbal protocol analysis. Results for 48 college students indicate that the only significant difference between the computerized and paper-and-pencil tests was in the frequency of identifying important information in the passage. (SLD)
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Difficulty Level
Peer reviewed
Rocklin, Thomas R. – Applied Measurement in Education, 1994
Reviews the effects of self-adapted testing (SAT), in which examinees choose item difficulty themselves, on ability estimates, precision, and efficiency, as well as the mechanisms of SAT effects and examinee reactions to SAT. SAT is less efficient than computer-adapted testing but more efficient than fixed-item testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Brown, James Dean – Language Learning & Technology, 1997
Explores recent developments in the use of computers in language testing in four areas: (1) item banking; (2) computer-assisted language testing; (3) computerized-adaptive language testing; and (4) research on the effectiveness of computers in language testing. Examines educational measurement literature in an attempt to forecast the directions…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Language Research, Language Tests
Peer reviewed
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2005
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
Descriptors: Item Banks, Computer Assisted Testing, Monte Carlo Methods, Evaluation Methods
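As a rough sketch of the uniform-sampling idea the abstract describes (not the authors' implementation), a rejection-sampling assembler can draw random item subsets and keep the first one that meets the constraints; the pool, constraints, and test length below are invented for illustration.

import random

def monte_carlo_assemble(pool, test_length, is_feasible, max_draws=100000):
    # Repeatedly draw a random item subset of the required length and keep the
    # first one that satisfies all constraints. Because every subset is drawn
    # with equal probability, every feasible test is equally likely to be
    # returned -- the uniform-sampling property referred to in the abstract.
    for _ in range(max_draws):
        candidate = random.sample(pool, test_length)
        if is_feasible(candidate):
            return candidate
    return None  # no feasible test found within the draw budget

# Hypothetical pool: items with a content area and a difficulty parameter.
pool = [{"id": i, "area": random.choice("ABC"), "b": random.gauss(0, 1)}
        for i in range(200)]

def is_feasible(items):
    # Example constraints (illustrative only): at least three items per
    # content area and a mean difficulty near zero.
    areas = [it["area"] for it in items]
    mean_b = sum(it["b"] for it in items) / len(items)
    return all(areas.count(a) >= 3 for a in "ABC") and abs(mean_b) < 0.3

print(monte_carlo_assemble(pool, 20, is_feasible))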
Peer reviewed
Pomplun, Mark; Ritchie, Timothy – Journal of Educational Computing Research, 2004
This study investigated the statistical and practical significance of context effects for items randomized within testlets for administration during a series of computerized non-adaptive tests. One hundred and twenty-five items from four primary school reading tests were studied. Logistic regression analyses identified from one to four items for…
Descriptors: Psychometrics, Context Effect, Effect Size, Primary Education
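The abstract names logistic regression as the detection method but gives no detail; the following hypothetical sketch (simulated data and invented effect sizes) shows one way such an analysis of item position within a testlet could look, separating statistical from practical significance.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data for a single item: 500 examinees, each seeing the item at a
# randomized position within its testlet (hypothetical data, not the study's).
n = 500
ability = rng.normal(size=n)            # proxy for examinee ability (e.g., rest score)
position = rng.integers(1, 6, size=n)   # randomized within-testlet position, 1..5

# Generate responses with a small built-in position effect to detect.
eta = 0.8 * ability - 0.15 * (position - 3)
correct = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Logistic regression of correctness on ability and position: a significant
# position coefficient flags a statistically significant context effect; the
# change in predicted probability across positions gauges its practical size.
X = sm.add_constant(np.column_stack([ability, position]))
fit = sm.Logit(correct, X).fit(disp=False)
print(fit.params, fit.pvalues)

b = fit.params

def prob_at(pos, theta=0.0):
    # Predicted probability of a correct response at a given position,
    # evaluated at average ability by default.
    return 1 / (1 + np.exp(-(b[0] + b[1] * theta + b[2] * pos)))

print("P(correct) at position 1 vs 5:", prob_at(1), prob_at(5))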
Peer reviewed
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Stocking, Martha L. – 1996
The interest in the application of large-scale computerized adaptive testing has served to focus attention on issues that arise when theoretical advances are made operational. Some of these issues stem less from changes in testing conditions and more from changes in testing paradigms. One such issue is that of the order in which questions are…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Bejar, Isaac I. – 1996
Generative response modeling is an approach to test development and response modeling that calls for the creation of items in such a way that the parameters of the items on some response model can be anticipated through knowledge of the psychological processes and knowledge required to respond to the item. That is, the computer would not merely…
Descriptors: Ability, Computer Assisted Testing, Cost Effectiveness, Estimation (Mathematics)
Glas, Cees A. W. – 1998
In computerized adaptive testing, updating parameter estimates using adaptive testing data is often called online calibration. This paper studies how to evaluate whether the adaptive testing model used for online calibration sufficiently fits the item response model used. Three approaches are investigated, based on a Lagrange multiplier…
Descriptors: Adaptive Testing, Computer Assisted Testing, Foreign Countries, Item Response Theory
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
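Owen's closed-form moment expressions are not quoted in the abstract and are not reproduced here; the sketch below is a loose stand-in that keeps only the mean and variance of a normal approximation to the ability posterior, computed numerically under an assumed 2PL model with an invented item pool, to illustrate the flavor of the procedure.

import numpy as np
from scipy.stats import norm

def update_normal_posterior(mu, sigma, a, b, correct, grid=np.linspace(-6, 6, 601)):
    # Owen's (1975) procedure replaces the true ability posterior with a normal
    # distribution via closed-form first two moments. As a stand-in, this sketch
    # computes the posterior on a grid and keeps only its mean and variance.
    prior = norm.pdf(grid, mu, sigma)
    p = 1 / (1 + np.exp(-a * (grid - b)))   # 2PL response probability
    like = p if correct else 1 - p
    post = prior * like
    post /= np.trapz(post, grid)
    new_mu = np.trapz(grid * post, grid)
    new_var = np.trapz((grid - new_mu) ** 2 * post, grid)
    return new_mu, np.sqrt(new_var)

# Hypothetical adaptive run: choose the unused item whose difficulty is closest
# to the current posterior mean, administer it, and update the approximation.
rng = np.random.default_rng(1)
items = [{"a": 1.2, "b": float(d), "used": False} for d in rng.normal(size=30)]
theta_true, mu, sigma = 0.7, 0.0, 1.0
for _ in range(10):
    item = min((it for it in items if not it["used"]), key=lambda it: abs(it["b"] - mu))
    item["used"] = True
    p_correct = 1 / (1 + np.exp(-item["a"] * (theta_true - item["b"])))
    mu, sigma = update_normal_posterior(mu, sigma, item["a"], item["b"], rng.random() < p_correct)
print("posterior mean and sd after 10 items:", round(mu, 2), round(sigma, 2))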