Showing 1,366 to 1,380 of 3,126 results
Spray, Judith; Lin, Chuan-Ju; Chen, Troy T. – 2002
Automated test assembly is a technology for producing multiple, equivalent test forms from an item pool. An important consideration for test security in automated test assembly is the inclusion of the same items on these multiple forms. Although it is possible to use item selection as a formal constraint in assembling forms, the number of…
Descriptors: Computer Assisted Testing, Item Banks, Test Construction, Test Format
van der Linden, Wim J. – 2001
This report contains a review of procedures for computerized assembly of linear, sequential, and adaptive tests. The common approach to these test assembly problems is to view them as instances of constrained combinatorial optimization. For each testing format, several potentially useful objective functions and types of constraints are discussed.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Test Format
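The constrained-optimization framing in the entry above lends itself to a small worked sketch. The Python fragment below is not from the report; the item pool, information values, and content constraints are invented, and it uses brute-force enumeration where real assembly problems are handled with integer-programming solvers. It selects a fixed-length form that maximizes summed item information subject to minimum content coverage.
```python
# Illustrative sketch only: 0-1 selection of a test form that maximizes summed
# item information subject to simple content-balance constraints.
# The pool, information values, and constraints below are invented examples.
from itertools import combinations

pool = [  # (item_id, content_area, information_at_target_theta)
    ("i1", "algebra", 0.62), ("i2", "algebra", 0.48), ("i3", "geometry", 0.55),
    ("i4", "geometry", 0.71), ("i5", "number", 0.40), ("i6", "number", 0.66),
    ("i7", "algebra", 0.35), ("i8", "geometry", 0.52),
]
FORM_LENGTH = 4
MIN_PER_AREA = {"algebra": 1, "geometry": 1, "number": 1}

def feasible(form):
    """Check the content-balance constraints for a candidate form."""
    counts = {}
    for _, area, _ in form:
        counts[area] = counts.get(area, 0) + 1
    return all(counts.get(a, 0) >= m for a, m in MIN_PER_AREA.items())

best = max(
    (f for f in combinations(pool, FORM_LENGTH) if feasible(f)),
    key=lambda f: sum(info for _, _, info in f),
)
print([item_id for item_id, _, _ in best])
```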
van der Linden, Wim J.; Adema, Jos J. – 1997
An algorithm for the assembly of multiple test forms is proposed in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. At each step one form is assembled to its true specifications; the other form is a dummy assembled only to maintain a balance between the quality of the current form and the…
Descriptors: Algorithms, Foreign Countries, Higher Education, Linear Programming
Henson, Robin K. – 2000
The purpose of this paper is to highlight some psychometric cautions that should be observed when seeking to develop short form versions of tests. Several points are made: (1) score reliability is impacted directly by the characteristics of the sample and testing conditions; (2) sampling error has a direct influence on reliability and factor…
Descriptors: Factor Structure, Psychometrics, Reliability, Sampling
Li, Yuan H.; Lissitz, Robert W.; Yang, Yu Nu – 1999
Recent years have seen growing use of tests with mixed item formats, e.g., tests containing both dichotomously scored and polytomously scored items. A method of matching two test characteristic curves (CCM) for placing these mixed-format items on the same metric is described and evaluated in this paper under a common-item…
Descriptors: Equated Scores, Estimation (Mathematics), Item Response Theory, Test Format
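To make the characteristic-curve idea above concrete, here is a deliberately simplified sketch: dichotomous 2PL items only, invented parameter estimates, and a crude grid search, so it is not the CCM procedure evaluated in the paper. It finds the linear transformation of the new-form metric that minimizes the squared distance between the two test characteristic curves over a set of common items.
```python
# Simplified illustration of a characteristic-curve linking criterion for
# common items calibrated on two forms (2PL only; parameter values invented).
import math

# (a, b) estimates for the same common items from the reference and new forms.
ref_items = [(1.2, -0.5), (0.8, 0.3), (1.5, 1.0)]
new_items = [(1.1, -0.2), (0.9, 0.6), (1.4, 1.3)]

def p(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-1.7 * a * (theta - b)))

def tcc(items, theta):
    """Test characteristic curve: expected number-correct score at theta."""
    return sum(p(theta, a, b) for a, b in items)

thetas = [t / 10.0 for t in range(-40, 41)]  # ability grid from -4 to 4

def loss(A, B):
    """Squared TCC difference after rescaling the new-form parameters."""
    rescaled = [(a / A, A * b + B) for a, b in new_items]
    return sum((tcc(ref_items, t) - tcc(rescaled, t)) ** 2 for t in thetas)

# Crude grid search for the slope A and intercept B of the linking function.
A_grid = [x / 20.0 for x in range(10, 31)]   # 0.50 .. 1.50
B_grid = [x / 20.0 for x in range(-20, 21)]  # -1.00 .. 1.00
A_best, B_best = min(
    ((A, B) for A in A_grid for B in B_grid), key=lambda ab: loss(*ab)
)
print(A_best, B_best)
```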
Papanastasiou, Elena C. – 2002
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinee's point of view, is that in many…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Review (Reexamination)
Peer reviewed
Berndt, David J.; And Others – Journal of Consulting and Clinical Psychology, 1983
Obtained reading grade levels for depression scales by use of two empirically based readability formulae. Results showed that the Kovacs children's measure had the easiest reading level, the General Behavior Inventory was appropriate for college-level reading, and most other measures clustered at a fifth- to ninth-grade reading level. (WAS)
Descriptors: Affective Measures, Depression (Psychology), Readability, Readability Formulas
Peer reviewed
Ouellette, Sue E.; Sendelbaugh, Joseph W. – American Annals of the Deaf, 1982
Fifteen deaf students (18 to 24 years old) who received the standard written form of a reading comprehension test performed significantly better than 15 deaf Ss who received an American Sign Language version. There were no differences between Ss receiving the standard form and Ss receiving a Manually Coded English videotaped form. (CL)
Descriptors: College Students, Deafness, Performance Factors, Reading Comprehension
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined.…
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
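A back-of-the-envelope companion to the entry above (this is a plain binomial guessing argument with invented numbers, not Wilcox's k-out-of-n derivation): one can tabulate how many distractors are needed before the chance of reaching a cutoff score by blind guessing alone falls below a chosen tolerance.
```python
# Back-of-the-envelope sketch (not the article's k-out-of-n model): probability
# that blind guessing alone reaches a cutoff of k correct out of n items, and
# the fewest distractors keeping that probability below a chosen tolerance.
from math import comb

def p_guess_at_least_k(n_items, k, n_distractors):
    p = 1.0 / (n_distractors + 1)          # chance of guessing one item right
    return sum(comb(n_items, j) * p**j * (1 - p)**(n_items - j)
               for j in range(k, n_items + 1))

def min_distractors(n_items, k, tolerance, max_distractors=10):
    for d in range(1, max_distractors + 1):
        if p_guess_at_least_k(n_items, k, d) <= tolerance:
            return d
    return None

# Example: a 20-item test with a 15-correct cutoff and a 1% tolerance.
print(min_distractors(n_items=20, k=15, tolerance=0.01))
```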
Peer reviewed
Lange, Bob – Journal of Reading, 1981
Reviews materials in the ERIC database that can be used to teach students test-wiseness skills such as cue-using strategies, time use, error avoidance, guessing, deductive reasoning, answering analogies, and answering multiple-choice questions. (MKM)
Descriptors: Achievement, Guessing (Tests), Instructional Materials, Teaching Methods
Peer reviewed
Hoosain, Rumjahn – Perceptual and Motor Skills, 1980
In this study with bilingual Chinese college students, faster subjects in a word-judging task had lower Raven's Progressive Matrices scores. The distinction between test response time and actual mental speed as a correlate of intelligence is discussed. (Author/SJL)
Descriptors: Adults, Bilingual Students, Conceptual Tempo, Correlation
Peer reviewed
Robin, Frederic; Sireci, Stephen G.; Hambleton, Ronald K. – International Journal of Testing, 2003
Illustrates how multidimensional scaling (MDS) and differential item functioning (DIF) procedures can be used to evaluate the equivalence of different language versions of an examination. Presents examples of structural differences and DIF across languages. (SLD)
Descriptors: Item Bias, Licensing Examinations (Professions), Multidimensional Scaling, Multilingual Materials
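As a generic companion to the DIF procedures mentioned above (the standard Mantel-Haenszel odds-ratio index computed on invented counts, not the authors' analysis), the sketch below checks whether, within matched total-score strata, one language group has consistently lower odds of answering a studied item correctly.
```python
# Generic Mantel-Haenszel DIF sketch (illustrative counts; not the study's data).
# Each stratum is a matched total-score level with counts for one studied item:
# (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
import math

strata = [
    (30, 10, 25, 15),
    (40, 20, 30, 30),
    (20, 30, 12, 38),
]

def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio pooled across score strata."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

alpha_mh = mantel_haenszel_odds_ratio(strata)
# ETS delta scale: values near 0 suggest negligible DIF for the studied item.
delta_mh = -2.35 * math.log(alpha_mh)
print(round(alpha_mh, 3), round(delta_mh, 3))
```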
Peer reviewed
Gerber, Ben; Smith, Everett V., Jr.; Girotti, Mariela; Pelaez, Lourdes; Lawless, Kimberly; Smolin, Louanne; Brodsky, Irwin; Eiser, Arnold – Journal of Applied Measurement, 2002
Used Rasch measurement to study the psychometric properties of data obtained from a newly developed Diabetes Questionnaire designed to measure diabetes knowledge, attitudes, and self-care. Responses of 26 diabetes patients to the English version of the questionnaire and 24 patients to the Spanish version support the cross-form equivalence and…
Descriptors: Diabetes, English, Item Response Theory, Patients
Peer reviewed
Rushton, Patricia; Eggett, Dennis – Journal of Professional Nursing, 2003
Of four groups of medical-surgical nurses, 55 took one final and three midterm written exams, 150 took one each (written), 45 took an oral final, 92 took both written and oral, and 47 took a written test with licensure questions and an oral final. Oral exams resulted in higher scores, more effective study habits, and increased application. (SK)
Descriptors: Evaluation Methods, Higher Education, Nursing Education, Study Habits
Peer reviewed
Reilly, Carol A. – English in Texas, 1994
Discusses ways of designing a review for a test, or a test itself, as a treasure hunt. Offers suggestions for how to set up the game. Presents 10 sample clues for a grammar test and 10 sample clues for a literature test. (RS)
Descriptors: Class Activities, Educational Games, Grammar, Secondary Education