Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 15 |
Author
| Author | Records |
| --- | --- |
| Boldt, Robert F. | 2 |
| Cross, Lawrence H. | 2 |
| Frary, Robert B. | 2 |
| Albanese, Mark A. | 1 |
| Bayuk, Robert J. | 1 |
| Berk, Ronald A. | 1 |
| Bormuth, John R. | 1 |
| Brown, Thomas A. | 1 |
| Bruno, James E. | 1 |
| Campbell, Brian | 1 |
| Cetin, Bayram | 1 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 2 |
Location
| Location | Records |
| --- | --- |
| United Kingdom | 3 |
| Germany | 2 |
| Japan | 2 |
| Bosnia and Herzegovina… | 1 |
| Costa Rica | 1 |
| Ecuador | 1 |
| Estonia | 1 |
| Italy | 1 |
| Nicaragua | 1 |
| Poland | 1 |
| Portugal | 1 |
Nunes, Miguel Baptista, Ed.; McPherson, Maggie, Ed. – International Association for Development of the Information Society, 2016
These proceedings contain the papers of the International Conference e-Learning 2016, which was organised by the International Association for Development of the Information Society, 1-3 July, 2016. This conference is part of the Multi Conference on Computer Science and Information Systems 2016, 1-4 July. The e-Learning (EL) 2016 conference aims…
Descriptors: Professional Associations, Conferences (Gatherings), Electronic Learning, Computer Science Education
Peer reviewed: Lord, Frederic M. – Educational and Psychological Measurement, 1973
A group of 21 students was tested under a time limit considerably shorter than should have been allowed. This report describes a tryout of a method for estimating the "power" scores that would have been obtained if the students had had enough time to finish. (Author/CB)
Descriptors: Mathematical Models, Scoring Formulas, Statistical Analysis, Theories
Peer reviewed: Pandey, Tej N.; Shoemaker, David M. – Educational and Psychological Measurement, 1975
Described herein are formulas and computational procedures for estimating the mean and second through fourth central moments of universe scores through multiple matrix sampling. Additionally, procedures are given for approximating the standard error associated with each estimate. All procedures are applicable when items are scored either…
Descriptors: Error of Measurement, Item Sampling, Matrices, Scoring Formulas
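The core estimator behind matrix-sampling results like these is standard: when item subsets are randomly assigned to examinee subgroups, the population mean score on the full K-item test can be estimated as K times the observed mean proportion correct over all sampled item-examinee pairs. The sketch below illustrates only that mean estimate, not the paper's higher-moment or standard-error formulas; the function name and data are illustrative.

```python
import numpy as np

def matrix_sampling_mean(responses_by_subtest, total_items):
    """Estimate the population mean total-test score from matrix-sampled data.

    responses_by_subtest: list of 0/1 arrays, one per subtest, each of shape
        (examinees_in_subgroup, items_in_subtest); each subgroup saw only its
        own subset of the K total items.
    total_items: K, the number of items on the complete test.
    """
    # Pool every observed item-by-examinee response; K times the mean
    # proportion correct estimates the mean score on the full K-item test.
    all_obs = np.concatenate([np.asarray(r).ravel() for r in responses_by_subtest])
    return total_items * all_obs.mean()

# Illustrative data: two 5-item subtests sampled from a 10-item universe.
rng = np.random.default_rng(0)
sub1 = rng.integers(0, 2, size=(30, 5))
sub2 = rng.integers(0, 2, size=(25, 5))
print(matrix_sampling_mean([sub1, sub2], total_items=10))  # roughly 5 for random 0/1 data
```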
Peer reviewed: Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of use of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Using test characteristics from several studies shows that some examinees would increase scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
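The correction at issue is the conventional formula score, R − W/(k − 1) for k-choice items, which matches the expected number-right score only if omitted items would have been answered purely at random. The sketch below shows how much an examinee's expected formula score can rise by answering omitted items with better-than-chance partial knowledge; the function names and numbers are illustrative, not drawn from the article.

```python
def formula_score(rights, wrongs, k):
    """Conventional correction-for-guessing score for k-choice items:
    rights minus a fraction of wrongs; omitted items neither add nor subtract."""
    return rights - wrongs / (k - 1)

def expected_gain_from_answering_omits(omits, p_correct, k):
    """Expected change in the formula score if all omitted items are answered,
    each with probability p_correct of being right (p_correct > 1/k means
    better-than-random partial knowledge)."""
    exp_rights = omits * p_correct
    exp_wrongs = omits * (1 - p_correct)
    return exp_rights - exp_wrongs / (k - 1)

print(formula_score(rights=30, wrongs=12, k=4))                               # 26.0
print(expected_gain_from_answering_omits(omits=10, p_correct=0.40, k=4))      # about 2 points gained
print(expected_gain_from_answering_omits(omits=10, p_correct=0.25, k=4))      # 0.0 at pure chance
```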
Peer reviewed: Campbell, Brian; Wilson, Bradley J. – Journal of School Psychology, 1986
Investigated Kaufman's procedures for determining intersubtest scatter on the Wechsler Intelligence Scale for Children-Revised by means of Sattler's revised tables for determining significant subtest fluctuations. Results indicated that Sattler's revised tables yielded more conservative estimates of subtest scatter than those originally reported…
Descriptors: Intelligence Tests, Scoring Formulas, Statistical Analysis, Statistical Distributions
Peer reviewed: Clampit, M. K.; Silver, Stephen J. – Journal of School Psychology, 1986
Presents four tables for the statistical interpretation of factor scores on the Wechsler Intelligence Scale for Children-Revised. Provides the percentile equivalents of factor scores; the significance of differences between factor scores; the frequency with which specified discrepancies occur; the significance of differences between a factor score…
Descriptors: Factor Analysis, Intelligence Tests, Scores, Scoring Formulas
Peer reviewed: Zimmerman, Donald W. – Educational and Psychological Measurement, 1972
Although a great deal of attention has been devoted over a period of years to the estimation of reliability from item statistics, there are still gaps in the mathematical derivation of the Kuder-Richardson results. The main purpose of this paper is to fill some of these gaps, using language consistent with modern probability theory. (Author)
Descriptors: Mathematical Applications, Probability, Scoring Formulas, Statistical Analysis
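The Kuder-Richardson result most often meant here is KR-20, r = [k/(k − 1)]·(1 − Σ p_j q_j / σ_X²), with p_j the proportion passing item j, q_j = 1 − p_j, and σ_X² the total-score variance. A minimal sketch assuming dichotomously scored (0/1) items; the data below are random and illustrative.

```python
import numpy as np

def kr20(item_scores):
    """KR-20 reliability estimate from an examinees-by-items 0/1 matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    p = item_scores.mean(axis=0)                      # item difficulties
    q = 1.0 - p
    total_var = item_scores.sum(axis=1).var(ddof=0)   # total-score variance
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

rng = np.random.default_rng(1)
data = (rng.random((200, 20)) < 0.6).astype(int)      # unrelated random items
print(kr20(data))   # near zero for unrelated items; higher for a coherent test
```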
Berk, Ronald A. – 1980
Seventeen statistics for measuring the reliability of criterion-referenced tests were critically reviewed. The review was organized into two sections: (1) a discussion of preliminary considerations to provide a foundation for choosing the appropriate category of "reliability" (threshold loss function, squared-error loss function, or…
Descriptors: Criterion Referenced Tests, Cutting Scores, Scoring Formulas, Statistical Analysis
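Two widely used threshold-loss indices in this literature are the raw agreement coefficient p0 (the proportion of examinees given the same mastery classification on two parallel administrations) and Cohen's kappa, which corrects p0 for chance agreement. A sketch under those standard definitions; the scores and cutting score below are made up.

```python
import numpy as np

def classification_consistency(scores1, scores2, cut):
    """Raw agreement p0 and Cohen's kappa for mastery classifications made
    from two parallel administrations and a common cutting score."""
    m1 = np.asarray(scores1) >= cut
    m2 = np.asarray(scores2) >= cut
    p0 = np.mean(m1 == m2)
    # Chance agreement from the marginal mastery rates on each form.
    pc = m1.mean() * m2.mean() + (1 - m1.mean()) * (1 - m2.mean())
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

form_a = [14, 9, 17, 11, 16, 8, 15, 13]
form_b = [15, 10, 16, 9, 17, 7, 12, 11]
print(classification_consistency(form_a, form_b, cut=12))   # (0.875, 0.75)
```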
Peer reviewed: Gordon, Leonard V. – Educational and Psychological Measurement, 1971
Results indicate that extremeness response sets at the two ends of the continuum differentially contribute to scale validity. (MS)
Descriptors: Attitude Measures, Rating Scales, Response Style (Tests), Scoring Formulas
Veldhuijzen, Niels H. – Evaluation in Education: International Progress, 1982
Setting a cutting score is a key problem in criterion-referenced measurement which is discussed within a decision theoretic approach when just one student is considered. A minimum information solution is given and compared with approaches when there is information about a group of students. Formulas illustrate the discussion. (CM)
Descriptors: Criterion Referenced Tests, Cutting Scores, Educational Testing, Measurement Techniques
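A minimal version of the single-student decision problem (a sketch, not the paper's derivation): given a probability that the student is a true master and separate losses for a false pass and a false fail, pass exactly when the expected loss of passing is the smaller of the two, which amounts to a threshold of L_fp/(L_fp + L_fn) on the mastery probability. The loss values below are hypothetical.

```python
def pass_decision(p_master, loss_false_pass, loss_false_fail):
    """Return True (pass) when the expected loss of passing a possible
    non-master is below the expected loss of failing a possible master."""
    expected_loss_pass = (1 - p_master) * loss_false_pass
    expected_loss_fail = p_master * loss_false_fail
    return expected_loss_pass < expected_loss_fail

# With these losses the implied threshold is 2/3 on the mastery probability.
print(pass_decision(p_master=0.60, loss_false_pass=2.0, loss_false_fail=1.0))  # False: below 2/3
print(pass_decision(p_master=0.70, loss_false_pass=2.0, loss_false_fail=1.0))  # True: above 2/3
```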
Peer reviewed: Wilcox, Rand R. – Educational and Psychological Measurement, 1980
Technical problems in achievement testing associated with using latent structure models to estimate the probability that examinees guess correct responses are studied, as is the absence of such problems when Wilcox's formula score is used. Maximum likelihood estimates are derived which may be applied when items are hierarchically related.…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Maximum Likelihood Statistics
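For orientation, the simplest latent-structure account of guessing (the classical knowledge-or-random-guessing model, not Wilcox's formula score itself) assumes an examinee either knows an item or guesses among its k options with success probability 1/k, so the latent proportion who know the item follows directly from the proportion answering it correctly. A hedged sketch with an illustrative function name:

```python
def proportion_knowing(p_correct, k):
    """Latent proportion of examinees who truly know an item under the
    knowledge-or-random-guessing model for k-choice items:
    p_correct = zeta + (1 - zeta)/k  =>  zeta = (p_correct - 1/k) / (1 - 1/k)."""
    zeta = (p_correct - 1.0 / k) / (1.0 - 1.0 / k)
    return max(0.0, min(1.0, zeta))   # clip to the admissible [0, 1] range

print(proportion_knowing(p_correct=0.70, k=4))   # about 0.6
print(proportion_knowing(p_correct=0.20, k=4))   # clipped to 0.0 (below chance)
```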
Peer reviewed: Reid, Frank J. – Journal of Economic Education, 1976
Examines the conventional scoring formula for multiple-choice tests and proposes an alternative scoring formula which takes into account the situation in which the student does not know the right answer but is able to eliminate one or more of the incorrect alternatives. (Author/AV)
Descriptors: Economics Education, Guessing (Tests), Higher Education, Multiple Choice Tests
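The conventional formula Reid examines is S = R − W/(k − 1), which assumes every wrong answer was a blind guess among all k options. One way to see why it under-corrects for partial knowledge (an illustration of the general point, not necessarily Reid's exact alternative) is to compute the expected formula-score gain from guessing after eliminating some distractors:

```python
def expected_gain_per_informed_guess(k, k_remaining):
    """Expected conventional formula-score gain from guessing one item after
    eliminating all but k_remaining of the k options (random among the rest).
    Positive whenever k_remaining < k, i.e. the usual formula under-corrects
    for partial knowledge."""
    p_right = 1.0 / k_remaining
    return p_right - (1.0 - p_right) / (k - 1)

print(expected_gain_per_informed_guess(k=5, k_remaining=5))   # 0.0    blind guess
print(expected_gain_per_informed_guess(k=5, k_remaining=2))   # 0.375  after eliminating 3 distractors
```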
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
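The Gilman-and-Ferry-style score referred to here counts responses under answer-until-correct directions: the examinee keeps choosing on each item until hitting the keyed answer, and the test score is the total number of responses that took. A minimal sketch of that scoring rule with hypothetical data:

```python
def answer_until_correct_score(attempts_per_item):
    """Total number of responses needed to get every item correct.
    attempts_per_item[i] = how many choices the examinee tried on item i,
    including the correct one; 1 means right on the first try."""
    return sum(attempts_per_item)

# Five items: right immediately on three, second try on one, third try on one.
print(answer_until_correct_score([1, 1, 2, 1, 3]))   # 8; lower is better
```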
Doppelt, Jerome E. – Test Service Bulletin, 1956
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
Descriptors: Bulletins, Error of Measurement, Measurement Techniques, Reliability
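The classical formula behind the bulletin is SEM = SD·sqrt(1 − r), where SD is the standard deviation of the observed scores and r the test's reliability; roughly two-thirds of observed scores fall within one SEM of the corresponding true scores. A short worked sketch with illustrative values:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical SEM: the standard deviation of observed scores around true scores."""
    return sd * math.sqrt(1.0 - reliability)

sem = standard_error_of_measurement(sd=15.0, reliability=0.91)
print(round(sem, 1))            # 4.5
# A band of roughly +/- 1 SEM around an observed score of 100:
print(100 - sem, 100 + sem)     # about 95.5 to 104.5
```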
Peer reviewed: Marks, Edmond; Martin, Charles G. – American Educational Research Journal, 1973
The purpose of this study was to examine the effects of the true change-true initial score correlation on one aspect of the true simple change estimate, namely its error variance. (Authors/CB)
Descriptors: Analysis of Variance, Mathematical Applications, Measurement Techniques, Scoring Formulas
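For reference, the classical error variance of a simple post-minus-pre change score, before the true change-true initial correlation the authors examine enters the picture, is the sum of the two tests' error variances: sigma²_E = sigma²_X(1 − r_XX) + sigma²_Y(1 − r_YY). A quick numeric sketch with illustrative values:

```python
def change_score_error_variance(sd_pre, rel_pre, sd_post, rel_post):
    """Classical error variance of a simple post-minus-pre difference score."""
    return sd_pre**2 * (1 - rel_pre) + sd_post**2 * (1 - rel_post)

print(change_score_error_variance(sd_pre=10, rel_pre=0.80, sd_post=10, rel_post=0.80))  # 40.0
```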


