| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 3 |
| Descriptor | Results |
| --- | --- |
| Scoring Formulas | 12 |
| Test Format | 12 |
| Multiple Choice Tests | 6 |
| Difficulty Level | 4 |
| Higher Education | 3 |
| Test Anxiety | 3 |
| Test Items | 3 |
| Test Reliability | 3 |
| Test Validity | 3 |
| Adults | 2 |
| Comparative Analysis | 2 |
| Author | Results |
| --- | --- |
| Plake, Barbara S. | 2 |
| Albanese, Mark A. | 1 |
| Arkin, Robert M. | 1 |
| Austin, Joe Dan | 1 |
| Bauer, Daniel | 1 |
| Fischer, Martin R. | 1 |
| Floyd, Harlee S. | 1 |
| Guttormsen, Sissel | 1 |
| Huwendiek, Sören | 1 |
| Isonio, Steven | 1 |
| Krebs, René | 1 |
| Publication Type | Results |
| --- | --- |
| Reports - Research | 12 |
| Journal Articles | 8 |
| Speeches/Meeting Papers | 1 |
| Education Level | Results |
| --- | --- |
| Higher Education | 1 |
| Postsecondary Education | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| State Trait Anxiety Inventory | 1 |
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
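The snippet above is cut off before it explains how simulation supports instrument-based decisions. Purely as an illustration of the general idea, and not the authors' design, a toy simulation might generate item responses from a latent trait, apply a cut score to the observed sum score, and check how often the resulting decision matches latent status; every parameter below is invented.

```python
# Toy simulation of decision accuracy for a 10-item instrument.
# All values (item difficulties, cut score) are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 5000, 10
difficulty = rng.normal(0.0, 1.0, n_items)            # made-up item difficulties

theta = rng.normal(0.0, 1.0, n_examinees)             # latent trait
prob_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty)))  # 1PL-style model
responses = rng.random((n_examinees, n_items)) < prob_correct

sum_scores = responses.sum(axis=1)
decision = sum_scores >= 6                             # hypothetical pass rule on observed score
truth = theta >= 0.0                                   # "true" status defined on the latent trait

print(f"agreement between decision and latent status: {np.mean(decision == truth):.3f}")
```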
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
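For readers unfamiliar with MTF scoring, two rules that commonly appear in this literature are sketched below. They are shown only as context; the abstract does not specify which algorithms the article actually compares.

```python
def score_dichotomous(correct_marks: int, n_statements: int) -> float:
    """All-or-nothing: credit only if every true/false statement is marked correctly."""
    return 1.0 if correct_marks == n_statements else 0.0

def score_partial_credit(correct_marks: int, n_statements: int) -> float:
    """Proportional credit for the share of statements marked correctly."""
    return correct_marks / n_statements

# An examinee marks 3 of 4 statements correctly:
print(score_dichotomous(3, 4), score_partial_credit(3, 4))  # 0.0 0.75
```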
Walstad, William B.; Miller, Laurie A. – Journal of Economic Education, 2016
Survey results from a national sample of economics instructors describe the grading policies and practices in principles of economics courses. The survey results provide insights about absolute and relative grading systems used by instructors, the course components and their weights that determine grades, and the type of assessment items used for…
Descriptors: Grades (Scholastic), Grading, Economics Education, Educational Policy
Peer reviewed: Austin, Joe Dan – Psychometrika, 1981
On distractor-identification tests, students mark as many distractors as possible on each test item. A grading scale is developed for this type of testing. The score is optimal in that it yields an unbiased estimate of the student's score as if no guessing had occurred. (Author/JKS)
Descriptors: Guessing (Tests), Item Analysis, Measurement Techniques, Scoring Formulas
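The abstract does not reproduce Austin's scale. For orientation only, the familiar correction-for-guessing formula used with conventional k-option multiple-choice items is shown below; Austin's paper develops an analogous unbiased score for distractor marking.

```python
def formula_score(num_right: int, num_wrong: int, k: int) -> float:
    """Classical correction for guessing on k-option items: R - W / (k - 1)."""
    return num_right - num_wrong / (k - 1)

# 30 right and 10 wrong on 4-option items:
print(formula_score(30, 10, 4))  # 26.67, the expected score with random guessing removed
```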
Peer reviewed: Rakowski, William; And Others – Perceptual and Motor Skills, 1980
Techniques for obtaining time perspective data were examined. Undergraduates responded to a questionnaire containing one of three formats for reporting anticipated future life-events, varying in the structure imposed on respondents. Temporal estimates of life-event occurrence were coded using two procedures, both permitting a near and a far value.…
Descriptors: Adults, Attitude Measures, College Students, Expectation
Peer reviewed: Albanese, Mark A. – Evaluation and the Health Professions, 1982
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Because the Type K format shows strong cluing effects that inflate the percentage of correct scores and reduce test reliability, the Type X format is recommended. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)
Peer reviewed: Plake, Barbara S.; And Others – Journal of Experimental Education, 1981
Number right and elimination scores were analyzed on a college level mathematics exam assembled from pretest data. Anxiety measures were administered along with the experimental forms to undergraduates. Results suggest that neither test scores nor attitudes are influenced by item order, knowledge thereof, or anxiety level. (Author/GK)
Descriptors: College Mathematics, Difficulty Level, Higher Education, Multiple Choice Tests
Suhadolnik, Debra; Weiss, David J. – 1983
The present study was an attempt to alleviate some of the difficulties inherent in multiple-choice items by having examinees respond to them in a probabilistic manner. Using this format, examinees are able to respond to each alternative and to provide indications of any partial knowledge they may possess concerning the item. The…
Descriptors: Confidence Testing, Multiple Choice Tests, Probability, Response Style (Tests)
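The abstract does not state how the probability responses were scored. As one familiar possibility, and strictly an assumption rather than the study's method, a proper scoring rule such as the logarithmic rule rewards the probability an examinee places on the keyed alternative.

```python
import math

def log_score(probs: list[float], keyed_index: int) -> float:
    """Logarithmic scoring rule: natural log of the probability placed on the keyed option."""
    return math.log(probs[keyed_index])

# Belief spread over four alternatives; option 1 is keyed:
print(log_score([0.1, 0.6, 0.2, 0.1], keyed_index=1))  # about -0.51
```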
Plake, Barbara S.; And Others – 1980
Number right and elimination scores were analyzed on a 48-item college level mathematics test that was assembled from pretest data in three forms by varying the item orderings: easy-hard, uniform, or random. Half of the forms contained information explaining the item arrangement and suggesting strategies for taking the test. Several anxiety…
Descriptors: Difficulty Level, Higher Education, Multiple Choice Tests, Quantitative Tests
Peer reviewed: Arkin, Robert M.; Walts, Elizabeth A. – Journal of Educational Psychology, 1983
The effects of corrective testing and how such feedback might affect high- and low-test-anxious students differently are indicated. Subjects were 286 college students in three classes--one using mastery testing and two using multiple choice tests. (Author/PN)
Descriptors: Attribution Theory, Feedback, Higher Education, Mastery Tests
Wolfe, Edward; And Others – 1993
The two studies described here compare essays composed on word processors with those composed with pen and paper for a standardized writing assessment. The following questions guided these studies: (1) Are there differences in test administration and writing processes associated with handwritten versus word-processor writing assessments? (2) Are…
Descriptors: Adults, Comparative Analysis, Computer Uses in Education, Essays
Isonio, Steven – 1993
During spring 1992, the Combined English Language Skills Assessment (CELSA) test was piloted with a sample of English-as-a-Second-Language (ESL) classes at Golden West College (GWC) in Huntington Beach, California. The CELSA, which utilizes a cloze format including parts of conversations and short dialogues, combines items from beginning,…
Descriptors: Community Colleges, Cutting Scores, English (Second Language), Language Tests
