Showing 3,286 to 3,300 of 5,131 results
Rosner, Jerome; Simon, Dorothea P. – 1970
A new test of auditory perception (the Auditory Analysis Test) was given to 284 kindergarten through grade 6 children. The instrument, consisting of 40 items, asks the testee to repeat a spoken word, then to say it again omitting certain specified phonemic elements, such as a beginning, ending, or medially positioned sound. Seven categories of item…
Descriptors: Auditory Perception, Auditory Tests, Correlation, Elementary School Students
Villano, Maurice W. – 1973
The purpose of this study was to assess the construct validity of the REPI. The judgmental data of 25 content specialists were examined. The response data of 416 students were item analyzed and submitted to principal components analyses with orthogonal and oblique rotations. The items were generally representative of the content universe, and 50 items optimally…
Descriptors: Attitude Measures, Educational Philosophy, Factor Analysis, Item Analysis
Emrick, John A. – 1971
The validity of an evaluation model for mastery testing applications was investigated. Three variables were tested in an experiment using 96 third grade subjects--amount of training, number of alternates in an item, and number of items. The concept hierarchy involved an orderly progression from a concept involving one relevant of three varying…
Descriptors: Achievement Tests, Cognitive Measurement, Item Analysis, Mathematical Models
Lynch, Daniel O.; Smith, Billie C. – 1972
Results of research conducted to ascertain the effect of changing answer choices on test grades are presented. The main questions examined were: (1) Does changing responses to test items (presumably after reconsidering them) result in better test scores? (2) Is the number of changes related to the score a person receives…
Descriptors: College Students, Educational Research, Item Analysis, Objective Tests
Edmonston, Leon P.; Randall, Robert S. – 1972
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
Descriptors: Criterion Referenced Tests, Decision Making, Evaluation Methods, Item Analysis
Scheuneman, Janice – 1976
Test bias has traditionally been defined in terms of an outside criterion measure of the performance being predicted by the test. In test construction, where criterion-related validity data are usually not collected until after the test is completed, assessment of bias in the absence of outside criteria has become a vital issue. Here, an unbiased…
Descriptors: Evaluation Criteria, Item Analysis, Response Style (Tests), Statistical Analysis
Denson, Teri A. – 1976
Cognitive style research is hindered by a variety of poorly described measures and style classifications and the use of small samples. This study describes and compares three measures of three postulated dimensions of cognitive style: impulsivity/reflectivity; internal/external locus of control; and field dependence/field independence to determine…
Descriptors: Cognitive Style, Comparative Analysis, Correlation, Grade 7
Frederiksen, Norman – 1976
A number of different ways of ascertaining whether or not a test measures the same thing in different cultures are examined. Methods range from some that are obvious and simple to those requiring statistical and psychological sophistication. Simpler methods include such things as having candidates "think aloud" and interviewing them about how they…
Descriptors: Analysis of Covariance, Culture Fair Tests, Factor Analysis, Item Analysis
Peer reviewed
Dwyer, Carol A. – Reading Teacher, 1976
Descriptors: Educational Research, Elementary Education, Item Analysis, Reading Ability
Peer reviewed
Sheehan, Daniel S.; Davis, Robbie G. – School Science and Mathematics, 1979
The steps discussed are (a) stating the purpose of the battery, (b) specifying performance objectives, (c) generating an item pool, (d) item analysis, (e) item selection, (f) determining cut-off scores, and (g) validating the battery. (MP)
Descriptors: Criterion Referenced Tests, Elementary Secondary Education, Item Analysis, Mastery Tests
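The battery-construction steps listed in this entry can be made concrete with a small sketch of step (d), item analysis, which typically drives the item selection of step (e). The function below computes the two classical statistics involved: item difficulty (proportion correct) and item discrimination (point-biserial correlation with the total score). This is our own minimal illustration, not code from the article; the retention thresholds are assumed values chosen only for demonstration.

```python
# Classical item analysis on a 0/1 response matrix
# (rows = examinees, columns = items). Stdlib-only sketch;
# the difficulty range and discrimination cutoff are illustrative.

def item_analysis(responses, difficulty_range=(0.2, 0.8), min_discrimination=0.2):
    n = len(responses)                      # number of examinees
    k = len(responses[0])                   # number of items
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    keep = []
    for j in range(k):
        col = [row[j] for row in responses]
        p = sum(col) / n                    # difficulty: proportion correct
        # point-biserial correlation between item score and total score
        cov = sum((c - p) * (t - mean_t) for c, t in zip(col, totals)) / n
        var_i = p * (1 - p)
        r_pb = cov / (var_i * var_t) ** 0.5 if var_i > 0 and var_t > 0 else 0.0
        if difficulty_range[0] <= p <= difficulty_range[1] and r_pb >= min_discrimination:
            keep.append(j)
    return keep                             # indices of items retained
```

Items that everyone passes (or fails) carry no information about individual differences, which is why the difficulty range screens them out before the discrimination check is applied.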
Peer reviewed
Browning, Robert; And Others – Psychology in the Schools, 1979
Effects that item order and basal and ceiling rules have on test means, variances, and internal consistency estimates for the Peabody Individual Achievement Test mathematics and reading recognition subtests were examined. Items on the math and reading recognition subtests were significantly easier or harder than test placements indicated. (Author)
Descriptors: Achievement Tests, Elementary Education, Individual Testing, Item Analysis
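To make the notion of a ceiling rule in this entry concrete, here is a simplified, hypothetical scoring function; it is not the PIAT's actual rule, whose specifics differ. Under this sketch, testing stops once the examinee misses a fixed number of consecutive items in the difficulty-ordered sequence, and each item passed before that point earns credit.

```python
# Hypothetical ceiling rule for an individually administered test:
# items are ordered by difficulty, and administration stops after
# `ceiling_misses` consecutive errors. Illustrative only.

def ceiling_score(item_results, ceiling_misses=5):
    """Return items credited before the ceiling is reached.

    item_results: sequence of 1 (correct) / 0 (incorrect), in
    administration order.
    """
    misses = 0
    score = 0
    for correct in item_results:
        if correct:
            score += 1
            misses = 0                  # a pass resets the consecutive-miss run
        else:
            misses += 1
            if misses == ceiling_misses:
                break                   # ceiling reached; stop testing
    return score
```

The point the abstract makes follows directly from such a rule: if items are easier or harder than their placement suggests, the ceiling triggers too early or too late, shifting the obtained means and variances.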
Peer reviewed
Strang, Harold R. – Journal of Educational Measurement, 1977
The effects of option familiarity, length, and technicality on guessing on multiple choice items were investigated in two experiments. In general, the college undergraduates tested tended to favor familiar, non-technical, and longer options when guessing on multiple choice tests. (JKS)
Descriptors: Cues, Females, Guessing (Tests), Higher Education
Peer reviewed
Weber, Margaret B. – Educational and Psychological Measurement, 1977
Bilevel dimensionality of probability was examined via factor analysis, Rasch latent trait analysis, and classical item analysis. Results suggest that when nonstandardized measures are the criteria for achievement, relying solely on estimates of content validity may lead to erroneous interpretation of test score data. (JKS)
Descriptors: Achievement, Achievement Tests, Factor Analysis, Item Analysis
Peer reviewed
Dengerink, Joan E.; Bean, Roxanne E. – Language, Speech, and Hearing Services in Schools, 1988
Author-supplied item labels for two common speech discrimination tests were compared with those given spontaneously by 40 children (median age 5:5). Agreement between subjects' and authors' labels was 76.3 percent on the Word Intelligibility by Picture Identification test and 75 percent on the Northwestern University Children's Perception of…
Descriptors: Auditory Discrimination, Expressive Language, Item Analysis, Language Handicaps
Peer reviewed
Luecht, Richard M. – Educational and Psychological Measurement, 1987
Test Pac, a test scoring and analysis computer program for moderate-sized sample designs using dichotomous response items, performs comprehensive item analyses and multiple reliability estimates. It also performs single-facet generalizability analysis of variance, single-parameter item response theory analyses, test score reporting, and computer…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Item Analysis
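One of the "multiple reliability estimates" a program like the one this entry reviews typically reports is KR-20, the standard internal-consistency coefficient for dichotomous items. The sketch below is our own illustration of the KR-20 formula, not Test Pac code.

```python
# KR-20 internal-consistency reliability for dichotomous (0/1) items.
# 'responses' is rows = examinees, columns = items. Assumes at least
# two items and nonzero total-score variance. Illustrative sketch.

def kr20(responses):
    n = len(responses)                  # number of examinees
    k = len(responses[0])               # number of items
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n          # item difficulty
        sum_pq += p * (1 - p)                             # item variance p*q
    return (k / (k - 1)) * (1 - sum_pq / var_t)
```

KR-20 is the special case of Cronbach's alpha for 0/1 scoring: it compares the summed item variances against the variance of total scores, so it rises as items covary more strongly with one another.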