Showing 1,936 to 1,950 of 2,533 results
Mandeville, Garrett K.
Results of a comparative study of the F and Q tests in a randomized block design with one replication per cell are presented. In addition to these two procedures, a multivariate test was also considered. The model and test statistics, data generation and parameter selection, results, and summary and conclusions are presented. Ten tables contain the…
Descriptors: Comparative Analysis, Data Analysis, Mathematical Models, Models
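As a rough illustration of the two procedures this entry compares, the sketch below computes the randomized-block F statistic for a layout with one observation per cell and, assuming the "Q test" here is Cochran's Q for dichotomous responses (the usual pairing in this literature), the corresponding Q statistic. All data are simulated; nothing below reproduces Mandeville's design or parameter selections.

```python
import numpy as np
from scipy import stats

# Simulated randomized block layout: b blocks (rows) x k treatments (columns),
# one observation per cell. Effect sizes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
b, k = 10, 4
y = rng.normal(size=(b, k)) + 0.5 * np.arange(k)   # add a treatment trend

grand = y.mean()
ss_treat = b * ((y.mean(axis=0) - grand) ** 2).sum()
ss_block = k * ((y.mean(axis=1) - grand) ** 2).sum()
ss_error = ((y - grand) ** 2).sum() - ss_treat - ss_block   # residual SS
df_treat, df_error = k - 1, (b - 1) * (k - 1)
F = (ss_treat / df_treat) / (ss_error / df_error)
print("F p-value:", stats.f.sf(F, df_treat, df_error))

# Cochran's Q on the same layout, artificially dichotomized at the grand mean.
x = (y > grand).astype(int)
R, C, N = x.sum(axis=1), x.sum(axis=0), x.sum()
Q = (k - 1) * (k * (C ** 2).sum() - N ** 2) / (k * N - (R ** 2).sum())
print("Q p-value:", stats.chi2.sf(Q, k - 1))
```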
Willoughby, Lee; And Others – 1976
This study compared a domain-referenced approach with a traditional psychometric approach to test construction. Results of the December 1975 Quarterly Profile Exam (QPE), administered to 400 examinees at a university, were the source of data. The 400-item QPE is a five-alternative multiple-choice test of information a "safe"…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Norm Referenced Tests, Statistical Analysis
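As a rough sketch of the contrast this entry draws (simulated data; neither the QPE items nor the study's actual procedures are reproduced), a traditional psychometric approach keeps the items that best discriminate on total score, while a domain-referenced approach samples items to represent the domain regardless of their statistics:

```python
import numpy as np

# Simulated responses: 400 examinees (as in the study) x 50 hypothetical items,
# generated from a simple logistic model. All parameters are invented.
rng = np.random.default_rng(4)
theta = rng.normal(size=400)
X = (rng.random((400, 50)) <
     1 / (1 + np.exp(-(theta[:, None] - rng.normal(0, 1, 50))))).astype(int)

total = X.sum(axis=1)
p = X.mean(axis=0)                                 # item difficulty
# Point-biserial discrimination: correlation of each item with the total score.
r_pb = np.array([np.corrcoef(X[:, i], total)[0, 1] for i in range(X.shape[1])])

# Traditional psychometric selection: keep the most discriminating items.
psychometric_pick = np.argsort(r_pb)[-20:]
# Domain-referenced selection: sample items from the domain irrespective of r_pb.
domain_pick = rng.choice(X.shape[1], size=20, replace=False)
print(len(set(psychometric_pick) & set(domain_pick)), "items in common")
```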
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
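The three notes above all turn on how the completely randomized and randomized block designs compare. As a minimal illustration of why blocking matters (a simulation sketch, not the authors' argument about score reliabilities), the code below contrasts the sampling variance of a two-treatment effect estimate under each design when a between-subject variance component is present; all variances and sample sizes are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n, tau = 5000, 20, 0.5          # replications, units per group, true effect
sigma_b, sigma_e = 1.0, 1.0             # between-subject and residual SDs

crd, rbd = [], []
for _ in range(n_sims):
    # Completely randomized: different subjects per group, so subject
    # variance stays in the error term of the treatment contrast.
    g1 = rng.normal(0, sigma_b, n) + rng.normal(0, sigma_e, n)
    g2 = tau + rng.normal(0, sigma_b, n) + rng.normal(0, sigma_e, n)
    crd.append(g2.mean() - g1.mean())
    # Randomized block: each subject serves as a block, so subject effects
    # cancel in the within-block difference.
    s = rng.normal(0, sigma_b, n)
    rbd.append((tau + s + rng.normal(0, sigma_e, n)
                - (s + rng.normal(0, sigma_e, n))).mean())

print("variance of effect estimate, CRD:", np.var(crd))
print("variance of effect estimate, RBD:", np.var(rbd))
```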
Peer reviewed
Rounds, James B., Jr.; And Others – Applied Psychological Measurement, 1978
Two studies compared multiple rank order and paired comparison methods in terms of psychometric characteristics and user reactions. Individual and group item responses, preference counts, and Thurstone normal transform scale values obtained by the multiple rank order method were found to be similar to those obtained by paired comparisons.…
Descriptors: Higher Education, Measurement, Rating Scales, Response Style (Tests)
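For readers unfamiliar with the Thurstone normal transform scale values this entry mentions, the sketch below computes Case V scale values from a matrix of paired-comparison proportions. The proportions are invented; the conversion of multiple rank order responses into implied paired comparisons is not shown.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical paired-comparison data: P[i, j] = proportion of judges preferring
# stimulus j over stimulus i (diagonal fixed at 0.5). All values are made up.
P = np.array([
    [0.50, 0.70, 0.80, 0.90],
    [0.30, 0.50, 0.65, 0.80],
    [0.20, 0.35, 0.50, 0.70],
    [0.10, 0.20, 0.30, 0.50],
])

# Thurstone Case V: transform proportions to unit normal deviates, then
# average each column to get that stimulus's scale value.
Z = norm.ppf(P)
scale = Z.mean(axis=0)
scale -= scale.min()          # anchor the lowest stimulus at zero for readability
print(np.round(scale, 3))
```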
Peer reviewed
Morris, John D. – Educational and Psychological Measurement, 1978
Three algorithms for selecting a subset of the originally available items so as to maximize coefficient alpha were compared, using nine data sets, on the size of the resulting alpha and the computation time required. The characteristics of a computer program to perform these item analyses are described. (Author/JKS)
Descriptors: Comparative Analysis, Computer Programs, Item Analysis, Measurement Techniques
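The three algorithms Morris compares are not described in this snippet; as a generic stand-in, the sketch below implements one common approach, backward elimination of whichever item's removal most increases coefficient alpha. Data are simulated.

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an (examinees x items) score matrix."""
    k = X.shape[1]
    return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

def greedy_alpha_subset(X, n_keep):
    """Repeatedly drop the item whose removal raises alpha the most."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        trial = [cronbach_alpha(X[:, [j for j in keep if j != i]]) for i in keep]
        best = int(np.argmax(trial))
        if trial[best] <= cronbach_alpha(X[:, keep]):
            break                      # no single deletion improves alpha
        keep.pop(best)
    return keep

# Hypothetical data: 200 examinees x 12 items with a common factor plus noise.
rng = np.random.default_rng(2)
theta = rng.normal(size=(200, 1))
X = theta * rng.uniform(0.2, 0.9, size=(1, 12)) + rng.normal(size=(200, 12))
subset = greedy_alpha_subset(X, n_keep=8)
print(subset, round(cronbach_alpha(X[:, subset]), 3))
```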
Peer reviewed
Rossi, Joseph S. – Teaching of Psychology, 1987
Reports a class exercise which requires students to recalculate the chi-squares, t-tests, and one-way ANOVAs found in published psychological research articles. Describes students' reactions to the exercise and provides data on the 13% error rate they discovered. (Author/JDH)
Descriptors: Error Patterns, Higher Education, Learning Activities, Psychology
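The exercise Rossi describes is straightforward to reproduce: take the summary statistics a published article reports and recompute the test statistic. The sketch below does this for an independent-samples t test and a 2x2 chi-square; all summary values are invented, not taken from the article.

```python
import numpy as np
from scipy import stats

# Recompute an independent-samples t from reported means, SDs, and ns
# (hypothetical numbers, as they might appear in a results section).
m1, s1, n1 = 24.3, 5.1, 30
m2, s2, n2 = 21.8, 4.7, 32
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)   # pooled variance
t = (m1 - m2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
p = 2 * stats.t.sf(abs(t), n1 + n2 - 2)
print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}")

# Recompute a chi-square from a reported 2x2 frequency table.
table = np.array([[18, 12], [9, 21]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.4f}")
```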
Peer reviewed
Kirton, Michael – Journal of Applied Psychology, 1976
Describes development of the Kirton Adaption Innovation Inventory (KAI) for rating respondents on a continuum of adaptiveness-innovativeness, discusses tests of the validity and utility of the KAI model, and evaluates the KAI model's characteristics. For availability see EA 507 670. (Author/JG)
Descriptors: Administrators, Behavior Rating Scales, Innovation, Models
Peer reviewed
Warren, Richard D.; And Others – Home Economics Research Journal, 1973
Descriptors: Attitude Measures, Cluster Analysis, Item Analysis, Rating Scales
Peer reviewed
Shoemaker, David M. – Educational and Psychological Measurement, 1972
Descriptors: Difficulty Level, Error of Measurement, Item Sampling, Simulation
Peer reviewed
Caughran, Alex M.; Lindlof, John A. – Journal of Reading, 1972
Descriptors: Data Collection, Job Application, Literacy, National Surveys
Peer reviewed
Asche, F. Marion – Journal of Industrial Teacher Education, 1983
This paper provides a review of two volumes of the "Journal of Industrial Teacher Education" and the "Journal of Vocational Education Research." One of the major findings of this review was that insufficient information was provided to make an informed judgment about internal validity. (SSH)
Descriptors: Educational Assessment, Educational Development, Educational Research, Reliability
Peer reviewed
O'Reilly, Patrick A. – Journal of Industrial Teacher Education, 1983
Research manuscripts recently published in the "Journal of Industrial Teacher Education" (JITE) and the "Journal of Vocational Education Research" (JVER) were reviewed to determine how effectively external validity had been handled. The main issue of concern when discussing external validity is the generalizability of findings. (SSH)
Descriptors: Educational Assessment, Educational Development, Educational Research, Reliability
Peer reviewed
Terwilliger, James S.; Lele, Kaustubh – Journal of Educational Measurement, 1979
Different indices for the internal consistency, reproducibility, or homogeneity of a test are based upon highly similar conceptual frameworks. Illustrations are presented to demonstrate how the maximum and minimum values of KR20 are influenced by test difficulty and the shape of the distribution of test scores. (Author/CTM)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Statistical Analysis
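The point about KR20 and difficulty is easy to demonstrate numerically. The sketch below computes KR20 from a 0/1 response matrix and compares a moderately difficult test with an extreme one; the logistic response model and all parameter values are illustrative assumptions, not the authors' derivation.

```python
import numpy as np

def kr20(X):
    """Kuder-Richardson formula 20 for an (examinees x items) 0/1 matrix."""
    k = X.shape[1]
    p = X.mean(axis=0)                       # item difficulties (proportion correct)
    var_total = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

# Simulated responses: same ability distribution, two item-threshold settings,
# so only test difficulty and the score distribution change between runs.
rng = np.random.default_rng(3)
theta = rng.normal(size=500)
for threshold in (0.0, 1.8):                 # moderate items vs. hard items
    probs = 1 / (1 + np.exp(-(theta[:, None] - threshold)))
    X = (rng.random((500, 20)) < probs).astype(int)
    print(f"threshold {threshold}: mean p = {X.mean():.2f}, KR20 = {kr20(X):.3f}")
```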