Showing 2,191 to 2,205 of 3,124 results
Steiner, Dirk D.; Rain, Jeffrey S. – 1988
Many empirical studies have examined factors that influence ratings of performance. This study examined ratings of the variable performance of a single individual. Serial position of a single poor or good performance in a series of otherwise good or poor performances was manipulated to examine its effects on both ratings and recommended actions toward…
Descriptors: Behavior Patterns, College Students, Higher Education, Interrater Reliability
Peer reviewed
McLeod, P. J. – Journal of Medical Education, 1987
A study of interrater reliability among 17 faculty members assessing medical student case reports revealed marked disparities in the criteria raters felt to be important and an unacceptable spread in the ratings given. A standardized assessment instrument is recommended instead. (MSE)
Descriptors: Higher Education, Interrater Reliability, Medical Case Histories, Medical Education
Peer reviewed
Borich, Gary; Klinzing, Garhard – Journal of Classroom Interaction, 1984
Problems in studying teacher effectiveness through the use of classroom observation are discussed. Four assumptions in the observation of classroom process are offered and ways in which these assumptions can be dealt with in designing an observation study are suggested. (DF)
Descriptors: Classroom Observation Techniques, Error of Measurement, Experimenter Characteristics, Interrater Reliability
Peer reviewed
Northam, Elizabeth; And Others – Merrill-Palmer Quarterly, 1987
Two studies concerned with agreement in ratings of temperament are reported. Ratings by mothers of toddlers and by daycare workers were compared on the Toddler Temperament Scale (Study 1) and on a videotape of a 2-year-old child rated for responses relevant to six dimensions of temperament (Study 2). (Author/BN)
Descriptors: Affective Behavior, Behavior Rating Scales, Interrater Reliability, Mothers
Peer reviewed
Campion, Michael A.; And Others – Personnel Psychology, 1988
Proposes a highly structured six-step employment interviewing technique which includes asking the same questions, consistently administering the process to all candidates, and having an interview panel. Results of a field study of 243 job applicants using this technique demonstrated interrater reliability, predictive validity, test fairness for…
Descriptors: Employment Interviews, Interrater Reliability, Job Applicants, Measures (Individuals)
Peer reviewed
Phelps, Le Adelle; And Others – Journal of Teacher Education, 1986
A performance-based student teacher evaluation process was investigated to see if halo and leniency errors could be eliminated. Results are presented. (MT)
Descriptors: Cooperating Teachers, Evaluation Criteria, Higher Education, Interrater Reliability
Peer reviewed
Collier, Michael – Assessment and Evaluation in Higher Education, 1986
A study revealing wide variation in the grading of electronics engineering test items by different evaluators has implications for evaluator and test item selection, analysis and manipulation of grades, and the use of numerical methods of assessment. (MSE)
Descriptors: Electronics, Engineering Education, Evaluation Methods, Evaluators
Peer reviewed
Wilson, F. Robert; Griswold, Mary Lynn – Measurement and Evaluation in Counseling and Development, 1985
Type and comprehensiveness of training were experimentally manipulated (N=128) to study their effects on the reliability and validity of rated counselor empathy. Implications for observer training are discussed. (Author)
Descriptors: College Students, Counselor Characteristics, Empathy, Interrater Reliability
Hertz, Norman R.; Chinn, Roberta N. – 2002
Nearly all of the research on standard setting focuses on different standard setting methods rather than on the interaction of group members and the instructions given to them. This study explored the effects of deliberation style and the requirement to reach consensus on the passing score, on rater satisfaction, and on postdecision…
Descriptors: Decision Making, Evaluation Methods, Evaluators, Interaction
O'Neill, Thomas R.; Lunz, Mary E. – 1997
This paper illustrates a method to study rater severity across exam administrations. A multi-facet Rasch model defined the ratings as being dominated by four facets: examinee ability, rater severity, project difficulty, and task difficulty. Ten years of data from administrations of a histotechnology performance assessment were pooled and analyzed…
Descriptors: Ability, Comparative Analysis, Equated Scores, Interrater Reliability
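The four-facet formulation mentioned in the O'Neill and Lunz entry above follows the general many-facet Rasch approach. As a hedged sketch (the notation below is illustrative, not the authors' own), the log-odds of an examinee receiving rating category k rather than k-1 are decomposed additively across the facets:

```latex
% Illustrative four-facet Rasch specification (not reproduced from the paper):
%   B_n = ability of examinee n        C_j = severity of rater j
%   D_p = difficulty of project p      T_t = difficulty of task t
%   F_k = threshold for rating category k relative to category k-1
\[
  \ln\!\left(\frac{P_{njptk}}{P_{njpt(k-1)}}\right) = B_n - C_j - D_p - T_t - F_k
\]
```

Because rater severity enters as its own parameter, severity estimates from different administrations can be placed on a common logit scale, which is what makes a cross-administration comparison of raters feasible.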
Taherbhai, Husein; Young, Michael James – 2000
This empirical study used data from the Reading: Basic Understanding section of the New Standards English Language Arts Examination. Data were collected for 3,200 high school students randomly selected from those who took the examination. The resulting sample had 16 raters who scored 200 students each, with each student rated by only one rater. The…
Descriptors: Evaluators, High School Students, High Schools, Interrater Reliability
Wang, Ning; Wiser, Randall F.; Newman, Larry S. – 2001
This paper provides both logical and empirical evidence to justify the use of an item mapping method for establishing passing scores for multiple-choice licensure and certification examinations. After describing the item-mapping standard setting process, the paper discusses the theoretical basis and rationale for this newly developed method and…
Descriptors: Certification, Cutting Scores, Interrater Reliability, Item Response Theory
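The abstract above does not reproduce the computational details of the item-mapping procedure, but the usual idea is to order the multiple-choice items on an IRT (often Rasch) difficulty scale and have judges identify the item region that a minimally competent candidate should answer correctly with a specified response probability (0.67 is a common choice). A minimal Python sketch under those assumptions, with a hypothetical function name and illustrative numbers, inverts the Rasch model to turn the judged boundary item into a cut score on the ability scale:

```python
import math

def rasch_cut_score(boundary_item_difficulty: float,
                    response_probability: float = 0.67) -> float:
    """Invert the dichotomous Rasch model P = 1 / (1 + exp(-(theta - b)))
    to find the ability theta at which a borderline candidate answers the
    judged boundary item with the chosen response probability."""
    return boundary_item_difficulty + math.log(
        response_probability / (1.0 - response_probability))

# Illustrative only: judges locate the borderline region at an item of
# difficulty b = 0.40 logits, giving a cut score of roughly 1.11 logits.
theta_cut = rasch_cut_score(0.40)
```

The cut score in logits would then be mapped back to the raw or scaled score metric actually reported for the examination.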
Peer reviewed
Fleishman, Rachel; And Others – Evaluation Review, 1996
An interjudge reliability test was conducted to evaluate questionnaires used in the surveillance of residential care institutions in Israel. Results from 32 institutions (evaluated by two surveyor teams of one social worker and one nurse each) and the variance in reliability were used to improve the questionnaires and their administration. (SLD)
Descriptors: Evaluators, Foreign Countries, Institutional Characteristics, Interrater Reliability
Peer reviewed
Lombard, Matthew; Snyder-Duch, Jennifer; Bracken, Cheryl Campanella – Human Communication Research, 2002
Reviews the importance of intercoder agreement for content analysis in mass communication research. Describes several indices for calculating this type of reliability (varying in appropriateness, complexity, and apparent prevalence of use). Presents a content analysis of content analyses reported in communication journals to establish how…
Descriptors: Communication Research, Content Analysis, Higher Education, Interrater Reliability
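The Lombard, Snyder-Duch, and Bracken review above surveys several intercoder agreement indices; two of the simplest are percent agreement and Cohen's kappa. A small, self-contained Python sketch with invented example data (not drawn from the article) shows how they differ: kappa discounts the agreement expected by chance under each coder's marginal category distribution.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of units on which the two coders assign the same category."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for the chance agreement
    implied by each coder's marginal category frequencies."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(coder_a) | set(coder_b)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical data: two coders categorizing the same ten news stories.
a = ["pol", "pol", "sport", "pol", "biz", "sport", "pol", "biz", "biz", "pol"]
b = ["pol", "sport", "sport", "pol", "biz", "sport", "pol", "pol", "biz", "pol"]
print(percent_agreement(a, b))  # 0.80
print(cohens_kappa(a, b))       # about 0.68
```

Indices such as Scott's pi and Krippendorff's alpha extend the same chance-correction logic under different assumptions about coder marginals, missing data, and measurement level.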
Peer reviewed
Kolevzon, Michael S.; And Others – Journal of Marital and Family Therapy, 1988
Employed triangulation strategy for assessing family interaction, involving family members, therapist, and coders independently viewing videotapes. Found weak agreement between paired assessments within family triad, and within therapist-coder dyad. Findings suggest that methodological and/or scaling strategies designed to maximize agreement may…
Descriptors: Counselor Attitudes, Evaluation Criteria, Evaluation Methods, Evaluation Problems