Showing 1 to 15 of 16 results
Peer reviewed
Holster, Trevor A.; Lake, J. – Language Assessment Quarterly, 2016
Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…
Descriptors: Guessing (Tests), Item Response Theory, Vocabulary, Language Tests
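The "guessing" parameter at issue in this exchange is the lower asymptote c of the three-parameter logistic (3PL) model. A minimal sketch of the standard 3PL item response function (an illustration of the general model, not of either paper's analysis; parameter values are made up):

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL IRT item response function:
    P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a is discrimination, b is difficulty, and c is the lower
    asymptote -- the "guessing" parameter the abstract mentions."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# At very low ability the probability floors at c rather than 0:
print(round(p_correct_3pl(-10.0, 1.0, 0.0, 0.25), 4))  # 0.25
# At very high ability it approaches 1:
print(round(p_correct_3pl(10.0, 1.0, 0.0, 0.25), 4))   # 1.0
```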
Peer reviewed
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
Peer reviewed
Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of use of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Using test characteristics from several studies shows that some examinees would increase scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1980
Technical problems in achievement testing associated with using latent structure models to estimate the probability that examinees guess correct responses are studied, along with the absence of such problems when Wilcox's formula score is used. Maximum likelihood estimates are derived which may be applied when items are hierarchically related.…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Maximum Likelihood Statistics
Peer reviewed
Reid, Frank J. – Journal of Economic Education, 1976
Examines the conventional scoring formula for multiple-choice tests and proposes an alternative scoring formula which takes into account the situation in which the student does not know the right answer but is able to eliminate one or more of the incorrect alternatives. (Author/AV)
Descriptors: Economics Education, Guessing (Tests), Higher Education, Multiple Choice Tests
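Reid's proposed alternative is only alluded to in the abstract, but the conventional correction-for-guessing formula he examines, and the partial-elimination situation it mishandles, can be sketched (function names are illustrative):

```python
def formula_score(rights: int, wrongs: int, alternatives: int) -> float:
    """Conventional corrected-for-guessing score: S = R - W / (A - 1),
    derived under the assumption of blind guessing among all A options."""
    return rights - wrongs / (alternatives - 1)

def expected_gain_from_guessing(options_remaining: int, alternatives: int) -> float:
    """Expected formula-score change from guessing on one item after
    eliminating distractors, leaving `options_remaining` candidates."""
    p = 1 / options_remaining
    return p * 1 + (1 - p) * (-1 / (alternatives - 1))

# Blind guessing on a 4-option item is score-neutral in expectation...
print(round(expected_gain_from_guessing(4, 4), 10))  # 0.0
# ...but after eliminating two distractors, guessing pays off,
# which is the situation the conventional formula does not model:
print(round(expected_gain_from_guessing(2, 4), 4))   # 0.3333
```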
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
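The Gilman-Ferry scoring procedure described above can be sketched as a simple sum of attempts (a minimal illustration; the reliability analysis itself is not reproduced here):

```python
def answer_until_correct_score(attempts_per_item: list[int]) -> int:
    """Gilman-Ferry style score: the total number of responses needed
    to get every item correct. Lower is better; the minimum equals the
    number of items (one attempt each)."""
    return sum(attempts_per_item)

# Five items: three answered correctly on the first try, one on the
# second, one on the third.
print(answer_until_correct_score([1, 1, 2, 3, 1]))  # 8
```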
Boldt, Robert F. – 1971
This paper presents the development of scoring functions for use in conjunction with standard multiple-choice items. In addition to the usual indication of the correct alternative, the examinee is to indicate his personal probability of the correctness of his response. Both linear and quadratic polynomial scoring functions are examined for…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, the expected value of a linear transformation of the logarithm of the probability assigned to the correct response is maximized if the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
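The logarithmic rule described here is a proper scoring rule: reporting one's true personal probability maximizes the expected score. A small sketch for the two-alternative case (the transformation coefficients and probabilities are illustrative, not taken from the paper):

```python
import math

def log_confidence_score(prob_correct: float, a: float = 1.0, b: float = 0.0) -> float:
    """Linear transformation of the log of the probability the examinee
    assigned to the correct alternative: score = a * ln(p) + b."""
    return a * math.log(prob_correct) + b

# Two-alternative item: the first option is correct with probability p = 0.7.
p = 0.7

def expected_score(report: float) -> float:
    """Expected score when the examinee reports probability `report`
    for the first option (and 1 - report for the second)."""
    return p * log_confidence_score(report) + (1 - p) * log_confidence_score(1 - report)

# Honest reporting beats misreporting in expectation:
print(expected_score(0.7) > expected_score(0.9))  # True
print(expected_score(0.7) > expected_score(0.5))  # True
```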
Peer reviewed
Penfield, Douglas A.; Koffler, Stephen L. – Journal of Experimental Education, 1978
Three nonparametric alternatives to the parametric Bartlett test are presented for handling the K-sample equality of variance problem. The two-sample Siegel-Tukey test, Mood test, and Klotz test are extended to the multisample situation by Puri's methods. These K-sample scale tests are illustrated and compared. (Author/GDC)
Descriptors: Comparative Analysis, Guessing (Tests), Higher Education, Mathematical Models
Peer reviewed
Frary, Robert B.; Hutchinson, T.P. – Educational and Psychological Measurement, 1982
Alternate versions of Hutchinson's theory were compared, and one which implies the existence of partial knowledge was found to be better than one which implies that an appropriate measure of ability is obtained by applying the conventional correction for guessing. (Author/PN)
Descriptors: Guessing (Tests), Latent Trait Theory, Multiple Choice Tests, Scoring Formulas
Lowry, Stephen R. – 1977
The effects of luck and misinformation on the ability of multiple-choice test scores to estimate examinee ability were investigated. Two measures of examinee ability were defined. Misinformation was shown to have little effect on the ability of raw scores, but a substantial effect on the ability of corrected-for-guessing scores, to estimate examinee ability.…
Descriptors: Ability, College Students, Guessing (Tests), Multiple Choice Tests
Cross, Lawrence H.; Frary, Robert B. – 1976
It has been demonstrated that corrected-for-guessing scores will be superior to number-right scores in providing estimates of examinee standing on the trait measured by a multiple-choice test, if it can be assumed that examinees can and will comply with the appropriate directions. The purpose of the present study was to test the validity of that…
Descriptors: Achievement Tests, Guessing (Tests), Individual Characteristics, Multiple Choice Tests
Bruno, James E. – Journal of Computer-Based Instruction, 1987
Reports preliminary findings of a study which used a modified Admissible Probability Measurement (APM) test scoring system in the design of computer based instructional management systems. The use of APM for curriculum analysis is discussed, as well as its value in enhancing individualized learning. (Author/LRW)
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Curriculum Evaluation, Design
Cross, Lawrence H. – 1975
A novel scoring procedure was investigated in order to obtain scores from a conventional multiple-choice test that would be free of the guessing component or contain a known guessing component even though examinees were permitted to guess at will. Scores computed with the experimental procedure are based not only on the number of items answered…
Descriptors: Algebra, Comparative Analysis, Guessing (Tests), High Schools
Donlon, Thomas F. – 1975
This study empirically determined the optimizing weight to be applied to the Wrongs Total Score in scoring rubrics of the general form S = R - kW, where S is the Score, R the Rights Total, k the weight, and W the Wrongs Total, if reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is…
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
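The general rubric S = R - kW named in the abstract, with the traditional blind-guessing choice k = 1/(A - 1) as a special case, can be sketched as follows (Donlon's empirical optimization of k is not reproduced; the numbers are illustrative):

```python
def weighted_formula_score(rights: int, wrongs: int, k: float) -> float:
    """Score of the general form S = R - k * W studied in the paper."""
    return rights - k * wrongs

# Traditional choice: k = 1/(A - 1) for A alternatives, derived from a
# blind-guessing model. Donlon instead asks which k maximizes reliability.
A = 5
k_traditional = 1 / (A - 1)

# 40 right, 8 wrong on a 5-option test:
print(weighted_formula_score(40, 8, k_traditional))  # 38.0
```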