Publication Date

| Period | Records |
| --- | --- |
| In 2025 | 34 |
| Since 2024 | 128 |
| Since 2021 (last 5 years) | 467 |
| Since 2016 (last 10 years) | 873 |
| Since 2006 (last 20 years) | 1353 |
Audience

| Audience | Records |
| --- | --- |
| Practitioners | 195 |
| Teachers | 159 |
| Researchers | 92 |
| Administrators | 49 |
| Students | 34 |
| Policymakers | 14 |
| Parents | 12 |
| Counselors | 2 |
| Community | 1 |
| Media Staff | 1 |
| Support Staff | 1 |
Location

| Location | Records |
| --- | --- |
| Canada | 62 |
| Turkey | 59 |
| Germany | 40 |
| United Kingdom | 36 |
| Australia | 35 |
| Japan | 35 |
| China | 32 |
| United States | 32 |
| California | 25 |
| United Kingdom (England) | 25 |
| Netherlands | 24 |
DiBattista, David; Mitterer, John O.; Gosse, Leanne – Teaching in Higher Education, 2004
Undergraduates completed a questionnaire after using the Immediate Feedback Assessment Technique (IFAT), a commercially available answer form for multiple-choice (MC) testing that can be used easily and conveniently with large classes. This simple new technique for MC testing provides immediate feedback for each item in an answer-until-correct…
Descriptors: Multiple Choice Tests, Testing, Feedback, Guessing (Tests)
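Answer-until-correct forms like the IFAT are typically scored by awarding decreasing partial credit the more options a student must scratch off before finding the correct one. A minimal sketch of one such scheme (the specific credit weights here are illustrative assumptions, not values taken from the study):

```python
def ifat_item_score(attempts_to_correct, weights=(1.0, 0.5, 0.25, 0.0)):
    """Score one answer-until-correct item.

    attempts_to_correct: 1-based count of options scratched up to and
    including the correct one.
    weights: credit for a correct answer on the 1st, 2nd, 3rd, ...
    attempt (illustrative values only).
    """
    if attempts_to_correct < 1:
        raise ValueError("attempts_to_correct must be >= 1")
    index = min(attempts_to_correct, len(weights)) - 1
    return weights[index]

def ifat_test_score(attempt_counts):
    """Total score is the sum of per-item partial credit."""
    return sum(ifat_item_score(n) for n in attempt_counts)
```

Under this scheme a student who answers two items on the first try and one on the third try would earn 1.0 + 1.0 + 0.25 points.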
Prestera, Gustavo E.; Clariana, Roy; Peck, Andrew – Journal of Educational Multimedia and Hypermedia, 2005
In this experimental study, 44 undergraduates completed five computer-based instructional lessons and either two multiple-choice tests or two fill-in-the-blank tests. Color-coded borders were displayed during the lesson, adjacent to the screen text and illustrations. In the experimental condition, corresponding border colors were shown at posttest.…
Descriptors: Experimental Groups, Computer Assisted Instruction, Instructional Effectiveness, Multiple Choice Tests
Osterlind, Steven J.; Miao, Danmin; Sheng, Yanyan; Chia, Rosina C. – International Journal of Testing, 2004
This study investigated the interaction between different cultural groups and item type, and the ensuing effect on construct validity for a psychological inventory, the Myers-Briggs Type Indicator (MBTI, Form G). The authors analyzed 94 items from 2 Chinese-translated versions of the MBTI (Form G) for factorial differences among groups of…
Descriptors: Test Format, Undergraduate Students, Cultural Differences, Test Validity
Liao, Yan; Fukuya, Yoshinori J. – Language Learning, 2004
This study investigates the avoidance of English phrasal verbs by Chinese learners. Six groups of Chinese learners (intermediate and advanced; a total of 70) took one of three tests (multiple-choice, translation, or recall), which included literal and figurative phrasal verbs, while 15 native speakers took the multiple-choice test. The results show that…
Descriptors: Test Format, Semantics, Native Speakers, Interlanguage
Erin, Jane N.; Hong, Sunggye; Schoch, Christina; Kuo, YaJu – Journal of Visual Impairment & Blindness, 2006
This study compared the test scores and time required by high school students who are blind, sighted, or have low vision to complete tests administered in written and oral formats. The quantitative results showed that the blind students performed better on multiple-choice tests in braille and needed more time while taking tests in braille. The…
Descriptors: High School Students, Testing, Multiple Choice Tests, Braille
East, Martin – Assessing Writing, 2006
Writing assessment essentially juxtaposes two elements: how "good writing" is to be defined, and how "good measurement" of that writing is to be carried out. The timed test is often used in large-scale L2 writing assessments because it is considered to provide reliable measurement. It is, however, highly inauthentic. One way of enhancing…
Descriptors: Writing Evaluation, Writing Tests, Timed Tests, Dictionaries
Ketterlin-Geller, Leanne R.; McCoy, Jan D.; Twyman, Todd; Tindal, Gerald – Assessment for Effective Intervention, 2006
Curriculum-based measurement is a system for monitoring students' progress and formatively evaluating instruction backed by 25 years of validation research. Most of this research has been conducted in elementary schools. In middle and high school classrooms, where there is an emphasis on mastering content knowledge, elementary-level measurements…
Descriptors: Curriculum Based Assessment, Academic Achievement, Cloze Procedure, Program Validation
Freedle, Roy; Kostin, Irene – 1993
Prediction of the difficulty (equated delta) of a large sample (n=213) of reading comprehension items from the Test of English as a Foreign Language (TOEFL) was studied using main idea, inference, and supporting statement items. A related purpose was to examine whether text and text-related variables play a significant role in predicting item…
Descriptors: Construct Validity, Difficulty Level, Multiple Choice Tests, Prediction
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Lunz, Mary E.; Bergstrom, Betty A. – 1995
The Board of Registry (BOR) certifies medical technologists and other laboratory personnel. The BOR has studied adaptive testing for over 6 years and now administers all 17 BOR certification examinations using computerized adaptive testing (CAT). This paper presents an overview of the major research efforts from 1989 to the present related to test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Decision Making, Equated Scores
Wainer, Howard; And Others – 1991
A series of computer simulations was run to measure the relationship between testlet validity and the factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Results confirmed the generality of earlier empirical findings of H. Wainer and others (1991) that making a testlet adaptive yields only marginal…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
Mislevy, Robert J.; Wu, Pao-Kuei – 1988
The basic equations of item response theory provide a foundation for inferring examinees' abilities and items' operating characteristics from observed responses. In practice, though, examinees will usually not have provided a response to every available item--for reasons that may or may not have been intended by the test administrator, and that…
Descriptors: Ability, Adaptive Testing, Equations (Mathematics), Estimation (Mathematics)
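For readers unfamiliar with the "basic equations of item response theory" this abstract refers to, the standard two-parameter logistic (2PL) model gives the probability of a correct response as a function of examinee ability:

```latex
% 2PL item response function: ability \theta, item discrimination a_i,
% item difficulty b_i
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

Inference about abilities and item parameters proceeds from the likelihood of the observed responses, \( \prod_i P_i(\theta)^{u_i}\,[1 - P_i(\theta)]^{1-u_i} \), where \( u_i \in \{0,1\} \) indicates a correct response; the paper's concern is how this machinery behaves when some responses are missing.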
Council of Chief State School Officers, Washington, DC. – 1997
This booklet presents the Reading Framework for the 1992, 1994, and 1998 National Assessment of Educational Progress (NAEP). Developed through a national consensus process as part of an effort to move assessment forward, the framework presented in the booklet is more consistent with contemporary knowledge about reading and more relevant to the…
Descriptors: Intermediate Grades, Literacy, Reading Achievement, Reading Skills

Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
Morrison, Carol A.; Fitzpatrick, Steven J. – 1992
An attempt was made to determine which item response theory (IRT) equating method results in the least amount of equating error or "scale drift" when equating scores across one or more test forms. An internal anchor test design was employed with five different test forms, each consisting of 30 items, 10 in common with the base test and 5…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Error of Measurement