Showing 46 to 60 of 68 results
Cross, Lawrence H.; Frary, Robert B. – 1976
It has been demonstrated that corrected-for-guessing scores will be superior to number-right scores in providing estimates of examinee standing on the trait measured by a multiple-choice test, if it can be assumed that examinees can and will comply with the appropriate directions. The purpose of the present study was to test the validity of that…
Descriptors: Achievement Tests, Guessing (Tests), Individual Characteristics, Multiple Choice Tests
Bruno, James E. – Journal of Computer-Based Instruction, 1987
Reports preliminary findings of a study which used a modified Admissible Probability Measurement (APM) test scoring system in the design of computer based instructional management systems. The use of APM for curriculum analysis is discussed, as well as its value in enhancing individualized learning. (Author/LRW)
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Curriculum Evaluation, Design
Cross, Lawrence H. – 1975
A novel scoring procedure was investigated in order to obtain scores from a conventional multiple-choice test that would be free of the guessing component or contain a known guessing component even though examinees were permitted to guess at will. Scores computed with the experimental procedure are based not only on the number of items answered…
Descriptors: Algebra, Comparative Analysis, Guessing (Tests), High Schools
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
Peer reviewed
Donnelly, Michael B.; And Others – Journal of Medical Education, 1983
A statistical comparison of Medical College Admission Test scores with National Board of Medical Examiners (NBME) Part I scores of one school's students supported simple averaging of subtest scores rather than weighting as a predictor of NBME Part I performance. Similar results were obtained using basic science course examination scores. (MSE)
Descriptors: College Entrance Examinations, Comparative Analysis, Higher Education, Medical Education
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Powell, J. C. – 1979
The educational significance of wrong answers on multiple choice tests was investigated in over 4,000 subjects, aged 7 to 20. Gorham's Proverbs Test--which requires the interpretation of a proverb sentence--was administered and repeated five months later. Four questions were addressed: (1) what can the pattern of answer choice, across age, using…
Descriptors: Age Differences, Cognitive Development, Cognitive Processes, Elementary Secondary Education
Donlon, Thomas F. – 1975
This study empirically determined the optimal weight to be applied to the Wrongs Total in scoring rubrics of the general form S = R - kW, where S is the Score, R the Rights Total, k the weight, and W the Wrongs Total, if reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is…
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
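The formula score in Donlon's abstract is simple enough to sketch directly. The snippet below is an illustration only: the function name and the example counts are invented, and the classical value k = 1/(c - 1) for c-choice items is the textbook correction for guessing, whereas Donlon's study searched empirically for the k that maximizes reliability.

```python
# Formula scoring of the general form S = R - k*W, where R is the number
# right, W the number wrong, and k the penalty weight. Omitted items are
# neither rewarded nor penalized.

def formula_score(rights: int, wrongs: int, k: float) -> float:
    """Return the formula score S = R - k*W."""
    return rights - k * wrongs

# Hypothetical example: 40 right, 15 wrong, 5 omitted on a 5-choice test.
classical_k = 1 / (5 - 1)  # textbook guessing correction for 5 choices
print(formula_score(40, 15, classical_k))  # 36.25
```

With k = 0, the formula reduces to the number-right score that several of the studies above use as a baseline.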
Sibley, William L. – 1974
This paper reviews the use of computers in the testing, selection, and placement processes for military training programs. Also discussed are the motivational and theoretical foundations of admissible probability testing, the role of the computer in admissible probability testing, and the authors' experience…
Descriptors: Computer Oriented Programs, Computers, Interaction, Military Training
Echternacht, Gary; Plas, Jeanne M. – NCME, 1977
While most school districts believe they understand grade equivalent scores, teachers, parents, and measurement specialists frequently misinterpret this apparently simple statistical expression. Echternacht's article describes the construction, application, and interpretation of grade equivalent scores from the test publisher's perspective.…
Descriptors: Achievement Rating, Achievement Tests, Elementary Education, Grade Equivalent Scores
Lawrence, Ida M.; Schmidt, Amy Elizabeth – College Entrance Examination Board, 2001
The SAT® I: Reasoning Test is administered seven times a year. Primarily for security purposes, several different test forms are given at each administration. How is it possible to compare scores obtained from different test forms and from different test administrations? The purpose of this paper is to provide an overview of the statistical…
Descriptors: Scores, Comparative Analysis, Standardized Tests, College Entrance Examinations
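The overview Lawrence and Schmidt describe concerns score equating across test forms. As a minimal sketch of one standard approach, linear equating matches standardized scores across forms; the means and standard deviations below are made-up illustrative values, not SAT statistics, and the function name is invented for this example.

```python
# Linear equating: map a raw score x from form X onto the scale of form Y
# by requiring that equated scores have equal z-scores on the two forms.

def linear_equate(x: float, mean_x: float, sd_x: float,
                  mean_y: float, sd_y: float) -> float:
    """Return the form-Y equivalent of a form-X raw score."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Hypothetical forms: X has mean 50, SD 10; Y has mean 47, SD 12.
# A form-X score of 60 (one SD above the X mean) maps to 47 + 12 = 59.
print(linear_equate(60, 50, 10, 47, 12))  # 59.0
```

Operational equating programs use more elaborate designs (e.g., anchor items shared across forms), but the z-score-matching idea above is the core of the linear method.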
McNeil, Judy T. – 1975
The method of using person vectors in regression analysis to test repeated-measures hypotheses is discussed. These hypotheses involve designs with pre- and post-scores for a single group and for multiple groups. Based on these analyses with person vectors, the paper has two major focuses: a proposed solution to the problem of…
Descriptors: Analysis of Covariance, Criterion Referenced Tests, Data Processing, Hypothesis Testing
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
Legg, Sue M. – 1982
A case study of the Florida Teacher Certification Examination (FTCE) program was described to assist others launching the development of large scale item banks. FTCE has four subtests: Mathematics, Reading, Writing, and Professional Education. Rasch calibrated item banks have been developed for all subtests except Writing. The methods used to…
Descriptors: Cutting Scores, Difficulty Level, Field Tests, Item Analysis
Sands, William A. – 1975
In order to develop tools for use in the selection and vocational-educational guidance of U.S. Naval Academy midshipmen, three empirically keyed scales based on the Strong Vocational Interest Blank (SVIB) were developed to predict three criteria: (1) disenrollment for academic reasons, (2) disenrollment for motivational reasons, and (3)…
Descriptors: Admission (School), Career Guidance, College Students, Comparative Analysis