Showing 106 to 120 of 336 results
Henson, Robin K. – 2000
The purpose of this paper is to highlight some psychometric cautions that should be observed when seeking to develop short form versions of tests. Several points are made: (1) score reliability is impacted directly by the characteristics of the sample and testing conditions; (2) sampling error has a direct influence on reliability and factor…
Descriptors: Factor Structure, Psychometrics, Reliability, Sampling
Peer reviewed
Robin, Frederic; Sireci, Stephen G.; Hambleton, Ronald K. – International Journal of Testing, 2003
Illustrates how multidimensional scaling (MDS) and differential item functioning (DIF) procedures can be used to evaluate the equivalence of different language versions of an examination. Presents examples of structural differences and DIF across languages. (SLD)
Descriptors: Item Bias, Licensing Examinations (Professions), Multidimensional Scaling, Multilingual Materials
Peer reviewed
Reilly, Carol A. – English in Texas, 1994
Discusses ways of designing a review for a test, or a test itself, as a treasure hunt. Offers suggestions for how to set up the game. Presents 10 sample clues for a grammar test and 10 sample clues for a literature test. (RS)
Descriptors: Class Activities, Educational Games, Grammar, Secondary Education
Peer reviewed
Ritter, Leonora – Assessment & Evaluation in Higher Education, 2000
Describes and evaluates a "controlled assessment procedure" as a holistic approach to avoiding problems of administering and evaluating traditional exams. Key characteristics include: the question is known well in advance and is broad and open-ended, students are encouraged to respond in any self-selected written format, and the rationale and…
Descriptors: Alternative Assessment, Higher Education, Student Evaluation, Test Format
Peer reviewed
Armstrong, Ronald D.; Jones, Douglas H.; Koppel, Nicole B.; Pashley, Peter J. – Applied Psychological Measurement, 2004
A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of an examinee's answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible…
Descriptors: Law Schools, Adaptive Testing, Computer Assisted Testing, Test Format
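The multiple-form structure described in the Armstrong et al. abstract above can be pictured as a small network in which each testlet's score routes the examinee to the next testlet. The following is a minimal illustrative sketch, not the authors' implementation; the testlet names, score threshold, and routing rule are all hypothetical.

```python
# Hypothetical multiple-form structure (MFS): each node is a testlet,
# and the route taken depends on the number of correct answers in it.
# A route of None means the test ends at that testlet.
MFS = {
    "T1":  {"route": lambda score: "T2H" if score >= 3 else "T2L"},
    "T2H": {"route": lambda score: None},
    "T2L": {"route": lambda score: None},
}

def administer(responses_by_testlet, start="T1"):
    """Walk the network from the start testlet.

    responses_by_testlet maps a testlet id to a list of 0/1 item scores.
    Returns the path taken as (testlet id, score) pairs.
    """
    path, node = [], start
    while node is not None:
        score = sum(responses_by_testlet[node])
        path.append((node, score))
        node = MFS[node]["route"](score)
    return path
```

Enumerating all paths through such a network yields the set of possible test forms, which is what makes the structure adaptive while keeping the forms pre-assembled.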
Peer reviewed
Hoyt, Kenneth B. – Journal of Counseling & Development, 1986
The microcomputer version of the Ohio Vocational Interest Survey (OVIS II) differs from the machine-scored version in its ability to incorporate data from the OVIS II: Career Planner in its printed report. It differs from the hand-scored version in its ability to include data from the OVIS II: Work Characteristic Analysis in its printed report.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Microcomputers, Test Format
Shick, Jacqueline – Health Education (Washington D.C.), 1989
This article focuses on common errors associated with true-false, matching, completion, and essay questions as presented in textbook test manuals. Teachers should be able to select and/or adapt test questions which would be applicable to the content of their courses and which meet minimal standards for test construction. (JD)
Descriptors: Health Education, Higher Education, Secondary Education, Test Construction
Stape, Christopher J. – Performance and Instruction, 1995
Suggests methods for developing higher level objective test questions. Taxonomies that define learning outcomes are discussed; and examples for various test types are presented, including multiple correct answers; more complex forms, including classification and multiple true-false; relations and correlates; and interpretive exercises. (LRW)
Descriptors: Classification, Objective Tests, Outcomes of Education, Test Construction
Draaijer, S.; Hartog, R. J. M. – E-Journal of Instructional Science and Technology, 2007
A set of design patterns for digital item types has been developed in response to challenges identified in various projects by teachers in higher education. The goal of the projects in question was to design and develop formative and summative tests, and to develop interactive learning material in the form of quizzes. The subject domains involved…
Descriptors: Higher Education, Instructional Design, Test Format, Biological Sciences
Peer reviewed
Frey, Andreas; Hartig, Johannes; Rupp, Andre A. – Educational Measurement: Issues and Practice, 2009
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Descriptors: Measures (Individuals), Test Construction, Theory Practice Relationship, Design
Peay, Edmund R. – 1982
The method for questionnaire construction described in this paper makes it convenient to generate as many different forms for a questionnaire as there are respondents. The method is based on using the computer to produce the questionnaire forms themselves. In this way the items or subgroups of items of the questionnaire may be randomly ordered or…
Descriptors: Computer Assisted Testing, Computer Software, Questionnaires, Sampling
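The Peay abstract above describes using the computer to produce questionnaire forms so that item order can be randomized per respondent. A minimal sketch of that idea, with hypothetical items (seeding the shuffle by respondent id is an added convenience, not a detail from the paper, and makes each form reproducible):

```python
import random

# Hypothetical item pool for illustration only.
ITEMS = [
    "I find the course material engaging.",
    "The pacing of lectures suits me.",
    "Assessment criteria are clearly stated.",
    "I receive timely feedback on my work.",
]

def make_form(respondent_id):
    """Return the items in a random order derived from the respondent id,
    so every respondent gets a different form that can be regenerated."""
    rng = random.Random(respondent_id)
    order = ITEMS[:]
    rng.shuffle(order)
    return order
```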
Peer reviewed
Jeffrey, I. M. W.; Grieve, A. R. – Medical Teacher, 1987
The format of the final examinations in Conservative Dentistry in the Dental Schools of Great Britain and Ireland was investigated by means of a questionnaire sent to each of the dental schools in each country. Results are reported and the value of different parts of the examinations is discussed. (Author/RH)
Descriptors: Dentistry, Foreign Countries, Higher Education, Medical Education
Habick, Timothy – 1999
With the advent of computer-based testing (CBT) and the need to increase the number of items available in computer adaptive test pools, the idea of item variants was conceived. An item variant can be defined as an item with content based on an existing item to a greater or lesser degree. Item variants were first proposed as a way to enhance test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Peer reviewed
Simon, Alan J.; Joiner, Lee M. – Journal of Educational Measurement, 1976
The purpose of this study was to determine whether a Mexican version of the Peabody Picture Vocabulary Test could be improved by directly translating both forms of the American test, then using decision procedures to select the better item of each pair. The reliability of the simple translations suffered. (Author/BW)
Descriptors: Early Childhood Education, Spanish, Test Construction, Test Format
Peer reviewed
Kim, Jee-Seon; Hanson, Bradley A. – Applied Psychological Measurement, 2002
Presents a characteristic curve procedure for comparing transformations of the item response theory ability scale assuming the multiple-choice model. Illustrates the use of the method with an example equating American College Testing mathematics tests. (SLD)
Descriptors: Ability, Equated Scores, Item Response Theory, Mathematics Tests