Showing 4,096 to 4,110 of 5,170 results
King, Robert – 1981
The preparation and use of an objective strength analysis report (OSAR) from the Iowa Test of Basic Skills (ITBS) are described. The OSAR was designed to answer two needs: (1) a format which allows the school to easily answer the question "which reading objectives need more emphasis and/or better materials or techniques in order to raise its…
Descriptors: Achievement Tests, Educational Assessment, Elementary Education, Evaluation Methods
Schratz, Mary K.; Wellens, Barrie – 1981
This study describes the role of an advisory panel of minority educators in the development of the seventh edition of Stanford Achievement Test (SAT) and the second editions of Stanford Early School Achievement Test (SESAT) and Stanford Test of Academic Skills (TASK). Detailed information on the formation of this panel and the review process in…
Descriptors: Advisory Committees, Ethnic Bias, Item Analysis, Language Usage
Quinn, Jimmy L. – 1978
A logistic model was used to generate data to serve as a proxy for an immediate retest from item responses to a fourth grade standardized reading comprehension test of 45 items. Assuming that the actual test may be considered a pretest and the proxy data may be considered a retest, the effect of regression was investigated using a percentage of…
Descriptors: Correlation, Error of Measurement, Intermediate Grades, Item Analysis
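The abstract does not give Quinn's exact generating model; a minimal Python sketch of the general idea, using a one-parameter (Rasch-type) logistic item response function and invented abilities, difficulties, and sample size, might look like this:

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, n_students = 45, 1000            # 45 items as in the entry; sample size invented
    theta = rng.normal(0, 1, n_students)      # latent reading ability
    b = rng.normal(0, 1, n_items)             # item difficulties

    def simulate(theta, b):
        """Draw 0/1 item responses from a logistic (Rasch-type) model."""
        p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
        return (rng.random(p.shape) < p).astype(int)

    pretest = simulate(theta, b).sum(axis=1)  # observed pretest scores
    retest = simulate(theta, b).sum(axis=1)   # model-generated proxy for an immediate retest

    # Regression toward the mean: students selected for low pretest scores
    # score closer to the overall mean on the proxy retest.
    low = pretest <= np.percentile(pretest, 25)
    print(pretest[low].mean(), retest[low].mean(), pretest.mean())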
Berk, Ronald A. – 1978
Sixteen item statistics recommended for use in the development of criterion-referenced tests were evaluated. There were two major criteria: (1) practicability in terms of ease of computation and interpretation and (2) meaningfulness in the context of the development process. Most of the statistics were based on a comparison of performance changes…
Descriptors: Achievement Tests, Criterion Referenced Tests, Difficulty Level, Guides
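The abstract does not list the sixteen statistics; one widely cited example of the pretest-posttest performance comparison it describes is the simple difference index, sketched below in Python with invented proportions:

    import numpy as np

    # Proportions answering each item correctly before and after instruction (invented data).
    p_pre = np.array([0.20, 0.35, 0.60])
    p_post = np.array([0.85, 0.70, 0.65])

    # Pretest-posttest difference index: large positive values suggest the item
    # is sensitive to instruction, a common criterion-referenced selection criterion.
    difference_index = p_post - p_pre
    print(difference_index)                   # [0.65 0.35 0.05]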
Ree, Malcolm James – 1978
The computer can assist test construction in the following four ways: (1) storage or banking of test items; (2) banking of item attributes; (3) test construction; and (4) test printing. Automated Item Banking (AIB) is a computerized item storage and test construction system which illustrates these capabilities. It was developed, implemented, and…
Descriptors: Aptitude Tests, Computer Assisted Testing, Computers, Higher Education
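AIB itself is not documented here; the following Python sketch, with hypothetical function names and a toy item, only illustrates the four capabilities listed in the entry (item banking, attribute banking, test construction, and test printing):

    # Hypothetical sketch of an item-banking workflow, not the actual AIB system.
    bank = []

    def add_item(stem, options, key, **attributes):
        """Item and attribute banking: store an item with metadata such as difficulty or content area."""
        bank.append({"stem": stem, "options": options, "key": key, **attributes})

    def build_test(predicate, n):
        """Test construction: select up to n banked items matching a predicate."""
        return [item for item in bank if predicate(item)][:n]

    def print_test(test):
        """Test printing: render the selected items as numbered questions."""
        for i, item in enumerate(test, 1):
            print(f"{i}. {item['stem']}")
            for letter, option in zip("ABCD", item["options"]):
                print(f"   {letter}. {option}")

    add_item("2 + 2 = ?", ["3", "4", "5", "6"], key="B", difficulty=0.2, area="arithmetic")
    print_test(build_test(lambda item: item["area"] == "arithmetic", n=1))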
Haebara, Tomokazu – 1979
It is proposed that statistical tests for item bias, when they are based on the three-parameter logistic item characteristic curve, should begin by testing for differences in the third parameter, the pseudo-chance level or "guessing" parameter. Other item-bias studies that have not considered the third parameter are discussed. Asymptotic…
Descriptors: Achievement Tests, Black Students, Grade 9, Item Analysis
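For reference, the three-parameter logistic item characteristic curve the entry refers to is commonly written with a discrimination a_i, a difficulty b_i, and the pseudo-chance ("guessing") parameter c_i:

    P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp[-D a_i (\theta - b_i)]}, \qquad D \approx 1.7,

so testing for bias in c_i amounts to asking whether the lower asymptote of this curve differs across groups.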
Secolsky, Charles – 1980
Undergraduates responded to an objective test in electronics and classified each item by domain (one of 14 topics covered in their text), and by type of knowledge (definition, fact, principle, or interpretation). These judgments were compared to their instructor's "standard" judgments. From these data, an index of item-domain divergence…
Descriptors: Ambiguity, Criterion Referenced Tests, Electronics, Higher Education
Ree, Malcolm James; Jensen, Harald E. – 1980
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
Descriptors: Computer Assisted Testing, Equated Scores, Error of Measurement, Item Analysis
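The report's simulation code is not reproduced here; a minimal Python sketch of response generation under Birnbaum's three-parameter logistic model, with invented item parameters and the sample sizes named in the entry, would be:

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.uniform(0.5, 2.0, 80)     # discriminations for 20 equating + 60 additional items
    b = rng.normal(0, 1, 80)          # difficulties
    c = np.full(80, 0.2)              # pseudo-chance levels (all parameter values invented)

    def simulate(n):
        """Simulate n examinees' 0/1 responses under the three-parameter logistic model."""
        theta = rng.normal(0, 1, n)
        p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))
        return (rng.random(p.shape) < p).astype(int)

    # Classical item difficulties (proportions correct) stabilize as the sample grows.
    for n in (250, 500, 1000, 2000):
        print(n, simulate(n).mean(axis=0)[:5].round(2))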
Cohen, Allan S.; Kappy, Kathleen A. – 1980
The ability of the Rasch model to provide item difficulties and achievement test scores which are invariant is studied. Data for the study were obtained from students in grades 3 through 7 who took the Sequential Tests of Educational Progress (STEP III) Reading and Mathematics Concepts tests during a spring norming study. Each test contained 50…
Descriptors: Achievement Tests, Difficulty Level, Elementary Education, Item Analysis
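The invariance claim under study can be stated directly from the Rasch model, in which the probability of a correct response depends only on the difference between person ability \theta_j and item difficulty b_i:

    P(X_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}.

If the model holds, the b_i estimated from different calibration samples should agree up to a shift of origin, which is the kind of invariance the study examines across grades 3 through 7.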
Broyles, David – 1966
To encourage political science students to read properly and continuously, the author gives frequent short quizzes based on the assigned readings. For ease in administration and scoring, he uses mark-sense cards, on which the student marks designated areas to indicate his number and his choice of answers. To emphasize the value of continued high…
Descriptors: Computer Oriented Programs, Data Processing, Educational Testing, Item Analysis
Woodford, Protase E. – 1980
The Test of English for International Communication (TOEIC) is a multiple-choice test of English for non-native speakers of English. It consists of two sections: listening comprehension and reading. There are 100 questions in each section. Separate scaled scores are provided for each section, the part score scales ranging from five to 495 and the…
Descriptors: English (Second Language), Foreign Countries, Item Analysis, Language Tests
Leary, Mark R.; And Others – 1980
Since its appearance in 1974, the Snyder Self-Monitoring Scale has been employed in research dealing with self-presentation, attribution, and attitude expression. The Scale was developed to measure the degree to which people are concerned with the social appropriateness of their behavior, are aware of relevant social cues, and regulate their…
Descriptors: Adults, Attribution Theory, Behavior Rating Scales, Factor Analysis
Mauger, Paul A.; And Others – 1978
The differences between aggressiveness and assertiveness were examined using the Interpersonal Behavior Survey (IBS), a 136-item self-report questionnaire which was developed to distinguish between assertive and aggressive behaviors. Item level factor analysis was used in scale construction. Results indicated that: (1) the correlation between the…
Descriptors: Adults, Assertiveness, Behavior Patterns, Behavior Rating Scales
Benson, Jeri – 1979
Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…
Descriptors: Comparative Analysis, Computer Programs, Costs, Efficiency
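BICAL's own fit formulas are not reproduced in the abstract; a common form of the unweighted (outfit) mean square used for this kind of Rasch-based item screening, sketched in Python with invented data, is:

    import numpy as np

    def outfit_mean_square(x, theta, b):
        """Unweighted (outfit) mean square for one Rasch item:
        the average squared standardized residual across examinees."""
        p = 1 / (1 + np.exp(-(theta - b)))    # Rasch expected score
        return np.mean((x - p) ** 2 / (p * (1 - p)))

    rng = np.random.default_rng(2)
    theta = rng.normal(0, 1, 500)             # invented abilities
    b = 0.3                                   # invented item difficulty
    x = (rng.random(500) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
    print(round(outfit_mean_square(x, theta, b), 2))   # values near 1.0 indicate good fit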
Fruchter, Dorothy A.; Ree, Malcolm James – 1977
In order to meet the needs of all the Armed Services, new forms of the Armed Services Vocational Aptitude Battery (ASVAB) must periodically be developed, refined, and standardized on an appropriate normative sample. Since one of the uses of the ASVAB is to determine candidate suitability for military service, it is necessary for the…
Descriptors: Aptitude Tests, Armed Forces, Equated Scores, Item Analysis