| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 2 |
| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 22 |
| Scores | 22 |
| Testing Problems | 22 |
| Higher Education | 8 |
| Guessing (Tests) | 7 |
| Item Analysis | 5 |
| Response Style (Tests) | 5 |
| Standardized Tests | 5 |
| Test Items | 5 |
| Test Validity | 5 |
| Test Construction | 4 |
| Source | Records |
| --- | --- |
| Journal of Educational… | 2 |
| Illinois School Research and… | 1 |
| Journal of Economic Education | 1 |
| Journal of Educational… | 1 |
| LEARN Journal: Language… | 1 |
| Measurement:… | 1 |
| NASSP Bulletin | 1 |
| Author | Records |
| --- | --- |
| Anderson, Paul S. | 1 |
| Boldt, Robert F. | 1 |
| Bolus, Roger | 1 |
| Breland, Hunter M. | 1 |
| Bresnock, Anne E. | 1 |
| Capell, Frank | 1 |
| Denoyer, Richard A. | 1 |
| Drasgow, Fritz | 1 |
| Durost, Walter N. | 1 |
| Frary, Robert B. | 1 |
| Gilmer, Jerry S. | 1 |
| Publication Type | Records |
| --- | --- |
| Reports - Research | 13 |
| Journal Articles | 7 |
| Speeches/Meeting Papers | 6 |
| Reports - Evaluative | 4 |
| Opinion Papers | 3 |
| Tests/Questionnaires | 1 |
| Education Level | Records |
| --- | --- |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Secondary Education | 1 |
| Audience | Records |
| --- | --- |
| Researchers | 4 |
| Location | Records |
| --- | --- |
| New Hampshire | 1 |
| Thailand | 1 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| Elementary and Secondary… | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| SAT (College Admission Test) | 2 |
| Graduate Record Examinations | 1 |
| National Assessment of… | 1 |
| Test of Standard Written… | 1 |
Imsa-ard, Pariwat – LEARN Journal: Language Education and Acquisition Research Network, 2020
The Ordinary National Educational Test (O-NET), the national examination in Thailand, serves as a high-stakes test at the upper secondary school level, since it is used for several purposes in education, such as a gatekeeper for university entry and a measure for evaluating teaching quality. English, out of the five core subjects in…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Teachers
Denoyer, Richard A.; White, Michael – NASSP Bulletin, 1990 (peer reviewed)
Presuming that test scores can accurately reflect educational quality is naive and potentially dangerous. Sophisticated statistical procedures cannot fully separate the effects of confounding background variables (ethnicity, language proficiency, or poverty) from test scores. A broad-based assessment model relying on multiple indices and…
Descriptors: Academic Achievement, Multiple Choice Tests, Scores, Secondary Education
Wang, Jianjun – 1995
Effects of blind guessing on the probability of passing true-false and multiple-choice tests are investigated under a stochastic binomial model. Critical values of guessing are thresholds that signify when the effect of guessing is negligible. By checking a table of critical values assembled in this paper, one can make a decision with 95% confidence…
Descriptors: Bayesian Statistics, Grading, Guessing (Tests), Models
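Wang's published table of critical values is not reproduced in this snippet, but the binomial reasoning behind it can be sketched. The code below is an illustrative reconstruction under an assumed interpretation, not Wang's method or table: given n independent items, each guessed correctly with probability 1/(number of options), it finds the smallest score that a blind guesser fails to reach with at least 95% probability.

```python
# Illustrative sketch (assumed interpretation, not Wang's published table):
# find the smallest score c such that a pure guesser scores below c with
# probability >= the stated confidence level.
from math import comb

def guessing_critical_value(n_items: int, n_choices: int, confidence: float = 0.95) -> int:
    """Smallest score c with P(blind-guessing score < c) >= confidence."""
    p = 1.0 / n_choices                      # per-item success probability for a blind guess
    cumulative = 0.0
    for k in range(n_items + 1):
        # binomial probability of exactly k correct guesses
        cumulative += comb(n_items, k) * p**k * (1 - p)**(n_items - k)
        if cumulative >= confidence:
            return k + 1                     # scores of k+1 or more are unlikely by guessing alone
    return n_items + 1

# Example: a 20-item test with 4 options per item
print(guessing_critical_value(20, 4))        # -> 9
```

On a 20-item, four-option test, for example, a blind guesser reaches 9 or more correct answers less than 5% of the time under this model.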
Frary, Robert B.; And Others – 1985
Students in an introductory college course (n=275) responded to equivalent 20-item halves of a test under number-right and formula-scoring instructions. Formula scores of those who omitted items averaged about one point lower than their comparable (formula adjusted) scores on the test half administered under number-right instructions. In contrast,…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Questionnaires
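For readers unfamiliar with the terminology, conventional formula scoring (the correction for guessing) penalizes wrong answers but not omissions; the abstract does not state the exact variant used in this study, but the standard form is

$$S_{\text{formula}} = R - \frac{W}{C - 1},$$

where $R$ is the number of right answers, $W$ the number of wrong answers, and $C$ the number of options per item. Under this rule a blind guess has zero expected gain, so omitting and guessing blindly are equivalent in expectation.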
Kennedy, Rob – 1994
The purpose of this study was to investigate the relationship between the scores students earned on multiple choice tests and the number of minutes students required to complete the tests. The 5 tests were made up of 20 randomly drawn questions from a large pool of questions about research methods. Students were allowed an unlimited amount of time…
Descriptors: Graduate Students, Graduate Study, Higher Education, Multiple Choice Tests
Reiling, Eldon; Taylor, Ryland – Journal of Educational Measurement, 1972 (peer reviewed)
The hypothesis that it is unwise to change answers to multiple choice questions was tested using multiple regression analysis. The hypothesis was rejected as results showed that there are gains to be made by changing responses. (Author/CK)
Descriptors: Guessing (Tests), Hypothesis Testing, Measurement Techniques, Multiple Choice Tests
Klein, Stephen P.; Bolus, Roger – 1983
A solution to reduce the likelihood of one examinee copying another's answers on large scale tests that require all examinees to answer the same set of questions is to use multiple test forms that differ in terms of item ordering. This study was conducted to determine whether varying the sequence in which blocks of items were presented to…
Descriptors: Adults, Cheating, Cost Effectiveness, Item Analysis
National Center for Fair and Open Testing (FairTest), Cambridge, MA. – 1989
This paper is prompted by the Joint Statement issued from the Education Summit, and addresses the need to change testing policy from reliance on standardized, multiple-choice testing to the use of more authentic methods of assessing educational performance and progress. The governors are warned against increasing the use of standardized tests and…
Descriptors: Achievement Tests, Educational Objectives, Educational Policy, Elementary Secondary Education
Boldt, Robert F. – 1971
One formulation of confidence scoring requires the examinee to indicate, as a number, his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, the expected value of a linear transformation of the logarithm of the probability assigned to the correct response is maximized if the examinee reports his personal probabilities accurately. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
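To make the optimality claim concrete (the constants here are illustrative, not necessarily Boldt's): suppose the examinee reports probabilities $r_1, \dots, r_k$ over the $k$ alternatives, with $\sum_i r_i = 1$, and is scored $a + b \log r_c$ for $b > 0$, where $c$ is the correct alternative. If his true personal probabilities are $p_1, \dots, p_k$, the expected score is

$$\mathbb{E}[\text{score}] = \sum_{i=1}^{k} p_i \left( a + b \log r_i \right),$$

which, by Gibbs' inequality, is maximized over the reported $r_i$ exactly at $r_i = p_i$; truthful reporting is therefore the examinee's best strategy under this logarithmic rule.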
Quellmalz, Edys; Capell, Frank – 1979
The purpose of this study was to examine the stability of measures of student writing performance across types of discourse (genres) and across response modes (selected response: multiple choice; constructed response: single paragraph, and full length essay). The study addressed the following: (1) the relationship/stability of writing scores…
Descriptors: Correlation, Essay Tests, Literary Genres, Models
Anderson, Paul S.; And Others – Illinois School Research and Development, 1985 (peer reviewed)
Concludes that the Multi-Digit Test stimulates better retention than multiple choice tests while offering the advantage of computerized scoring and analysis. (FL)
Descriptors: Comparative Analysis, Computer Assisted Testing, Educational Research, Higher Education
Breland, Hunter M.; Griswold, Philip A. – Journal of Educational Psychology, 1982 (peer reviewed)
The relationships among scores on traditional college entrance tests and scores on an essay placement test for women and men and four ethnic groups were examined. The tests correlated highly with essay performance. However, women tended to be underestimated and men and ethnic minorities overestimated by these measures. (Author/PN)
Descriptors: College Entrance Examinations, Essay Tests, Higher Education, Multiple Choice Tests
Smith, Malbert, III; And Others – Journal of Educational Measurement, 1979 (peer reviewed)
Results of multiple-choice tests in educational psychology were examined to discover the effects on students' scores of changing their original answer choices after reconsideration. Eighty-six percent of the students changed one or more answers, and six out of seven students who made changes improved their scores by doing so. (Author/CTM)
Descriptors: Academic Ability, Difficulty Level, Error Patterns, Guessing (Tests)
Bresnock, Anne E.; And Others – Journal of Economic Education, 1989 (peer reviewed)
Investigates the effects on multiple choice test performance of altering the order and placement of questions and responses. Shows that changing the response pattern appears to alter significantly the apparent degree of difficulty. Response patterns become more dissimilar under certain types of response alterations. (LS)
Descriptors: Cheating, Economics Education, Educational Research, Grading
Lenel, Julia C.; Gilmer, Jerry S. – 1986
In some testing programs an early item analysis is performed before final scoring in order to validate the intended keys. As a result, some items which are flawed and do not discriminate well may be keyed so as to give credit to examinees no matter which answer was chosen. This is referred to as all-keying. This research examined how varying the…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Licensing Examinations (Professions)