Showing all 10 results
Peer reviewed
Lai, Jennifer W. M.; Bower, Matt; De Nobile, John; Breyer, Yvonne – Journal of Computer Assisted Learning, 2022
Background: There is a lack of critical or empirical work interrogating the nature and purpose of evaluating technology use in education. Objectives: In this study, we examine the values underpinning the evaluation of technology use in education through field specialist perceptions. The study also poses critical reflections about the rigour of…
Descriptors: Technology Uses in Education, Educational Technology, Program Evaluation, Content Validity
Peer reviewed
Shaharim, Saidatul Ainoor; Ishak, Nor Asniza; Zaharudin, Rozniza – Journal of Science and Mathematics Education in Southeast Asia, 2021
Purpose: This study aims to assess the validity and reliability of the Psycho-B'GREAT Module, developed according to the ASSURE Module Development Model. Method: The content validity of the Psycho-B'GREAT Module was assessed by ten experts in fields related to teaching and learning (T&L) and biology education. In testing the…
Descriptors: Content Validity, Models, Reliability, Scores
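The expert-panel content validation described in this entry is commonly quantified with a content validity index (CVI). Below is a minimal sketch of the standard Lynn-style calculation, using made-up relevance ratings rather than the module authors' actual data:

```python
import numpy as np

# Hypothetical relevance ratings (1-4 scale) from 10 experts for 5 module items;
# rows are experts, columns are items. Not the Psycho-B'GREAT data.
ratings = np.array([
    [4, 3, 4, 2, 4],
    [4, 4, 3, 3, 4],
    [3, 4, 4, 3, 4],
    [4, 4, 4, 2, 3],
    [4, 3, 4, 3, 4],
    [3, 4, 3, 3, 4],
    [4, 4, 4, 2, 4],
    [4, 3, 4, 3, 4],
    [4, 4, 4, 3, 4],
    [3, 4, 4, 2, 4],
])

# Item-level CVI: proportion of experts rating the item 3 or 4 ("relevant").
i_cvi = (ratings >= 3).mean(axis=0)
# Scale-level CVI (averaging method): mean of the item-level CVIs.
s_cvi_ave = i_cvi.mean()
print("I-CVI per item:", np.round(i_cvi, 2))
print(f"S-CVI/Ave: {s_cvi_ave:.2f}")
```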
Peer reviewed
Kunkle, Wanda M.; Allen, Robert B. – ACM Transactions on Computing Education, 2016
Learning to program, especially in the object-oriented paradigm, is a difficult undertaking for many students. As a result, computing educators have tried a variety of instructional methods to assist beginning programmers. These include developing approaches geared specifically toward novices and experimenting with different introductory…
Descriptors: Teaching Methods, Programming, Programming Languages, Computer Science Education
Peer reviewed
Sriram, Rishi – Journal of Student Affairs Research and Practice, 2014
The study of competencies in student affairs began more than 4 decades ago, but no instrument currently exists to measure competencies broadly. This study builds upon previous research by developing an instrument to measure student affairs competencies. Results not only validate the competencies espoused by NASPA and ACPA, but also suggest adding…
Descriptors: Reliability, Psychometrics, Student Personnel Services, Student Personnel Workers
Peer reviewed
Algozzine, Bob; Horner, Robert H.; Todd, Anne W.; Newton, J. Stephen; Algozzine, Kate; Cusumano, Dale – Journal of Psychoeducational Assessment, 2016
Although there is a strong legislative base and perceived efficacy for multidisciplinary team decision making, limited evidence supports its effectiveness or consistency of implementation in practice. In recent research, we used the Decision Observation, Recording, and Analysis (DORA) tool to document activities and adult behaviors during positive…
Descriptors: Problem Solving, Participative Decision Making, Positive Behavior Supports, Meetings
Peer reviewed
Monbaliu, E.; Ortibus, E.; Roelens, F.; Desloovere, K.; Deklerck, J.; Prinzie, P.; De Cock, P.; Feys, H. – Developmental Medicine & Child Neurology, 2010
Aim: This study investigated the reliability and validity of the Barry-Albright Dystonia Scale (BADS), the Burke-Fahn-Marsden Movement Scale (BFMMS), and the Unified Dystonia Rating Scale (UDRS) in patients with bilateral dystonic cerebral palsy (CP). Method: Three raters independently scored videotapes of 10 patients (five males, five females;…
Descriptors: Content Validity, Cerebral Palsy, Validity, Interrater Reliability
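Multi-rater scale studies like this one typically report interrater reliability as an intraclass correlation. A minimal sketch of ICC(2,1), the two-way random-effects, absolute-agreement form, on hypothetical ratings (three raters, ten patients; not the study's data):

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1).

    scores: n_subjects x n_raters matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters mean square
    sse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                         # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 10 patients scored by 3 raters (made-up values).
ratings = np.array([[30, 28, 33], [55, 60, 58], [12, 15, 10], [80, 78, 85],
                    [45, 44, 47], [66, 70, 65], [22, 20, 25], [90, 88, 92],
                    [38, 40, 36], [50, 52, 49]])
print(f"ICC(2,1) = {icc2_1(ratings):.3f}")
```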
Peer reviewed
Archer, Julian; Norcini, John; Southgate, Lesley; Heard, Shelley; Davies, Helena – Advances in Health Sciences Education, 2008
Purpose: To design, implement and evaluate a multisource feedback instrument to assess Foundation trainees across the UK. Methods: mini-PAT (Peer Assessment Tool) was modified from SPRAT (Sheffield Peer Review Assessment Tool), an established multisource feedback (360°) instrument to assess more senior doctors, as part of a blueprinting…
Descriptors: Feedback (Response), Evaluation Methods, Content Validity, Tests
Peer reviewed
Penfield, Randall D.; Miller, Jeffrey M. – Applied Measurement in Education, 2004
As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…
Descriptors: Student Evaluation, Evaluation Methods, Content Validity, Scoring
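Agreement between automated scores and those of expert human graders is often summarized with quadratic weighted kappa. The sketch below is an illustrative calculation on made-up ordinal scores; it is not necessarily the index or data used in the ARE study:

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, n_levels):
    """Quadratic weighted kappa between two raters on an ordinal 0..n_levels-1 scale."""
    # Observed agreement matrix, normalized to proportions.
    obs = np.zeros((n_levels, n_levels))
    for h, m in zip(human, machine):
        obs[h, m] += 1
    obs /= obs.sum()
    # Expected matrix under independence (outer product of marginals).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Quadratic disagreement weights: larger penalty for more distant scores.
    idx = np.arange(n_levels)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Hypothetical scores on a 0-4 scale for the same ten responses.
human   = [4, 3, 3, 2, 4, 1, 0, 2, 3, 4]
machine = [4, 3, 2, 2, 4, 1, 1, 2, 3, 3]
print(f"QWK = {quadratic_weighted_kappa(human, machine, 5):.3f}")
```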
Peer reviewed
Swanson, David B.; And Others – Academic Medicine, 1990
This study is the National Board of Medical Examiners' exploration of content-based techniques (standard-setting techniques in which pass/fail decisions are based upon the performance of examinees in relation to test content). Two content-based techniques (Angoff and Ebel) and three methods of evaluating examinee performance were studied. (MLW)
Descriptors: Content Validity, Evaluation Methods, Higher Education, Medical Education
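In the Angoff technique named above, each judge estimates the probability that a minimally competent examinee answers each item correctly, and the cut score is the sum of the mean item estimates. A minimal sketch with hypothetical judge ratings, not NBME data:

```python
import numpy as np

# Hypothetical Angoff ratings: 5 judges each estimate, for 8 items, the
# probability that a minimally competent examinee answers correctly.
ratings = np.array([
    [0.70, 0.55, 0.80, 0.40, 0.65, 0.90, 0.50, 0.60],
    [0.75, 0.50, 0.85, 0.45, 0.60, 0.85, 0.55, 0.65],
    [0.65, 0.60, 0.75, 0.35, 0.70, 0.95, 0.45, 0.55],
    [0.70, 0.50, 0.80, 0.50, 0.60, 0.90, 0.50, 0.60],
    [0.80, 0.55, 0.85, 0.40, 0.65, 0.85, 0.55, 0.70],
])

item_estimates = ratings.mean(axis=0)   # mean judge estimate per item
cut_score = item_estimates.sum()        # expected raw score of a borderline examinee
print(f"Angoff cut score: {cut_score:.1f} of {ratings.shape[1]} items")
```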
Bezruczko, Nikolaus – 1992
Internal structure and external validity of 39 multiple-choice visual arts achievement test items were examined. These items were developed to assess grade 3 visual arts achievement for a statewide model of a fine arts curriculum. Item responses were evaluated in terms of: (1) fit to the one-parameter Rasch measurement model; (2) item-total…
Descriptors: Achievement Tests, Art Education, Content Validity, Correlation
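For the one-parameter Rasch model referenced in (1), the probability of a correct response given examinee ability theta and item difficulty b is P = exp(theta - b) / (1 + exp(theta - b)), and item fit is commonly screened with mean-square residual statistics. A minimal sketch on hypothetical responses, not the study's items:

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def outfit_msq(responses, thetas, b):
    """Unweighted mean-square fit (outfit) for one item across examinees.

    Values near 1.0 indicate good fit to the Rasch model.
    """
    p = rasch_prob(thetas, b)
    z2 = (responses - p) ** 2 / (p * (1 - p))   # squared standardized residuals
    return z2.mean()

# Hypothetical: six examinees' abilities and their scored responses to one item.
thetas = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 2.0])
responses = np.array([0, 0, 1, 1, 1, 1])
print(f"P(correct) at theta=0, b=0.3: {rasch_prob(0.0, 0.3):.3f}")
print(f"Outfit mean-square: {outfit_msq(responses, thetas, 0.3):.3f}")
```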