Showing 1 to 15 of 20 results
Peer reviewed
PDF on ERIC (full text available)
Benjawan Plengkham; Sonthaya Rattanasak; Patsawut Sukserm – Journal of Education and Learning, 2025
This article outlines the essential steps for designing an effective English-language questionnaire in social science research, with a focus on ensuring clarity, cultural sensitivity, and ethical integrity. Drawing on key insights from related studies, it describes recommended practices in questionnaire design, item development, and the importance…
Descriptors: Guidelines, Test Construction, Questionnaires, Surveys
Peer reviewed
Direct link
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Peer reviewed
Direct link
Choi, Youn-Jeng; Asilkalkan, Abdullah – Measurement: Interdisciplinary Research and Perspectives, 2019
About 45 R packages for analyzing data with item response theory (IRT) have been developed over the last decade. This article introduces these 45 R packages with their descriptions and features. It also describes advanced IRT models available through R packages, as well as dichotomous and polytomous IRT models, and R packages that contain applications…
Descriptors: Item Response Theory, Data Analysis, Computer Software, Test Bias
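The entry above surveys R software for item response theory; as a language-neutral illustration of the dichotomous two-parameter logistic (2PL) model it refers to (not the API of any particular package), here is a minimal Python sketch with hypothetical item parameters:

    import numpy as np

    def p_correct_2pl(theta, a, b):
        # 2PL item response function: P(correct | theta) = 1 / (1 + exp(-a(theta - b)))
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Hypothetical discrimination (a) and difficulty (b) parameters for three items.
    a = np.array([1.2, 0.8, 1.5])
    b = np.array([-0.5, 0.0, 1.0])

    # Response probabilities for examinees at three ability levels.
    for theta in (-1.0, 0.0, 1.0):
        print(theta, np.round(p_correct_2pl(theta, a, b), 3))

Higher discrimination makes the probability curve steeper near an item's difficulty; a polytomous model would replace the single correct/incorrect probability with one curve per response category.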
Luke W. Miratrix; Jasjeet S. Sekhon; Alexander G. Theodoridis; Luis F. Campos – Grantee Submission, 2018
The popularity of online surveys has increased the prominence of using weights that capture units' probabilities of inclusion for claims of representativeness. Yet, much uncertainty remains regarding how these weights should be employed in analysis of survey experiments: Should they be used or ignored? If they are used, which estimators are…
Descriptors: Online Surveys, Weighted Scores, Data Interpretation, Robustness (Statistics)
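The paper above asks whether inclusion weights should be used in analyzing survey experiments and, if so, with which estimators. As one common option (not necessarily the estimator the authors recommend), here is a minimal Python sketch of a Hajek-style weighted difference in means on synthetic data; the weights, sample size, and true effect are all hypothetical:

    import numpy as np

    def weighted_diff_in_means(y, treat, w):
        # Each arm's mean outcome is weighted by the survey weights, then differenced.
        t, c = treat == 1, treat == 0
        mean_t = np.sum(w[t] * y[t]) / np.sum(w[t])
        mean_c = np.sum(w[c] * y[c]) / np.sum(w[c])
        return mean_t - mean_c

    rng = np.random.default_rng(0)
    n = 1000
    w = rng.uniform(0.5, 3.0, n)            # hypothetical inclusion weights
    treat = rng.integers(0, 2, n)           # random assignment within the survey
    y = 1.0 * treat + rng.normal(0, 1, n)   # outcome with a true effect of 1.0
    print(weighted_diff_in_means(y, treat, w))

The unweighted difference in means targets the sample average effect, while weighting targets the population the weights were built for, usually at some cost in variance.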
Peer reviewed
PDF on ERIC (full text available)
Boller, Kimberly; Kisker, Ellen Eliason – Regional Educational Laboratory, 2014
This guide is designed to help researchers make sure that their research reports include enough information about study measures so that readers can assess the quality of the study's methods and results. The guide also provides examples of write-ups about measures and suggests resources for learning more about these topics. The guide assumes…
Descriptors: Research Reports, Research Methodology, Educational Research, Check Lists
Peer reviewed
Hernon, Peter; McClure, Charles R. – Library and Information Science Research, 1987
Discusses issues relating to the reliability, validity, utility, and information value of unobtrusive testing of library reference services; provides suggestions for practical applications of these criteria; applies study findings to library decision making and planning; and identifies topics for further methodological refinement. (Author/CLB)
Descriptors: Data Analysis, Data Collection, Data Interpretation, Experimenter Characteristics
Santmire, Toni E. – 1984
The purpose of this paper is to discuss ways in which developmental psychology suffers from the lack of an appropriate technology of measurement and statistical analysis. The paper begins by noting that developmental psychology is the study of change; that individuals develop through a succession of "stages" which are separated by…
Descriptors: Data Analysis, Data Collection, Developmental Psychology, Developmental Stages
Linkous, L. W.; And Others – 1986
A reliability study of the Brigance Comprehensive Inventory of Basic Skills (CIBS) is presented. All test-data collectors were provided onsite training in the administration and scoring of the CIBS. The sample included 85 black and 319 white students from grades two through eight, from parochial and private schools, in a southeastern metropolitan…
Descriptors: Basic Skills, Black Students, Data Analysis, Elementary Education
Subkoviak, Michael J. – 1985
Current methods of obtaining reliability coefficients for mastery tests are laborious from a practitioner's perspective. Some methods require two test administrations, while others require access to computer facilities and/or advanced measurement and statistical procedures. This report provides tables from which practitioners can read such…
Descriptors: Estimation (Mathematics), Mastery Tests, Statistical Studies, Tables (Data)
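The report above tabulates single-administration estimates of mastery-test reliability; the quantities being estimated are the agreement coefficient and kappa, which are defined over two administrations. A minimal Python sketch of those two-administration definitions, using synthetic scores and a hypothetical cut score (not the report's table-based shortcut):

    import numpy as np

    def decision_consistency(scores1, scores2, cut):
        # Proportion classified the same way (master / nonmaster) on both
        # administrations, plus Cohen's kappa correcting for chance agreement.
        m1, m2 = scores1 >= cut, scores2 >= cut
        p_o = np.mean(m1 == m2)
        p_c = np.mean(m1) * np.mean(m2) + np.mean(~m1) * np.mean(~m2)
        kappa = (p_o - p_c) / (1.0 - p_c)
        return p_o, kappa

    rng = np.random.default_rng(1)
    true = rng.normal(25, 5, 200)            # hypothetical true scores
    form_a = true + rng.normal(0, 2, 200)    # two parallel administrations
    form_b = true + rng.normal(0, 2, 200)
    print(decision_consistency(form_a, form_b, cut=24))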
Peer reviewed
Mueller, Horst H.; And Others – Alberta Journal of Educational Research, 1984
Because the diagnostic capability of the WISC-R has remained in doubt, its diagnostic suitability was assessed by applying Kelley's method of estimating the proportion of score differences in excess of chance to the original subscales, Bannatyne clusters, and Kaufman's three factor groupings. Caution should be used when applying the WISC-R diagnostically.…
Descriptors: Clinical Diagnosis, Comparative Analysis, Evaluation Criteria, Tables (Data)
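The study above evaluates how often WISC-R subscale differences exceed chance. Without reproducing Kelley's specific procedure, a standard related calculation is the critical value a difference between two scores must exceed before it is treated as reliable; the standard deviation, reliabilities, and confidence level below are hypothetical:

    import math

    def critical_difference(sd, r1, r2, z=1.96):
        # Smallest difference between two scores on the same scale (equal SDs)
        # that exceeds measurement error at the chosen confidence level.
        se_diff = sd * math.sqrt(2.0 - r1 - r2)
        return z * se_diff

    # Hypothetical scaled-score SD of 3 and subtest reliabilities of .85 and .80.
    print(round(critical_difference(sd=3.0, r1=0.85, r2=0.80), 2))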
Peer reviewed
Schwarz, J. Conrad; And Others – Child Development, 1985
Examines the reliability and validity of the scores of diverse informants from the Child's Report of Parental Behavior Inventory (CRPBI). Also considers the utility of aggregating scores of parental behavior derived from multiple observers. CRPBI items were adapted to obtain mother's, father's, sibling's, and subject's ratings of parental behavior…
Descriptors: Child Rearing, Data Collection, Measures (Individuals), Parent Child Relationship
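The study above weighs the utility of aggregating parental-behavior ratings across informants. One standard way to project the gain from aggregation (not necessarily the analysis used in the article) is the Spearman-Brown formula applied to the average inter-informant correlation; the values below are hypothetical:

    def spearman_brown(r_single, k):
        # Projected reliability of the mean of k informants, given the average
        # single-informant correlation r_single.
        return k * r_single / (1.0 + (k - 1) * r_single)

    # A single-informant correlation of .40 aggregated over 4 informants.
    print(round(spearman_brown(0.40, 4), 2))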
Peer reviewed
Simonson, Michael R.; And Others – Journal of Educational Computing Research, 1987
Describes the process used to develop two examinations, an achievement test of computer literacy and a computer anxiety index. Highlights include a definition of computer literacy, determination of the validity and reliability of the tests, and a study to evaluate the final versions of the tests. (Author/LRW)
Descriptors: Achievement Tests, Computer Assisted Instruction, Computer Literacy, Correlation
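The article above reports determining the validity and reliability of the two instruments. A common internal-consistency index for a multi-item scale such as an anxiety index is coefficient alpha; a minimal Python sketch on synthetic item responses (not the authors' actual data or procedure):

    import numpy as np

    def cronbach_alpha(items):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(2)
    trait = rng.normal(0, 1, (150, 1))                # hypothetical anxiety levels
    responses = trait + rng.normal(0, 1, (150, 10))   # 10 correlated items
    print(round(cronbach_alpha(responses), 2))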
Bruno, James E. – Journal of Computer-Based Instruction, 1987
Reports preliminary findings of a study which used a modified Admissible Probability Measurement (APM) test scoring system in the design of computer based instructional management systems. The use of APM for curriculum analysis is discussed, as well as its value in enhancing individualized learning. (Author/LRW)
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Curriculum Evaluation, Design
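Admissible probability measurement scores an examinee's reported probabilities over the answer options with a proper scoring rule, so that honest reporting maximizes the expected score. As a generic illustration (not Bruno's modified APM system), here is a minimal Python sketch of a quadratic, Brier-type rule on a hypothetical three-option item:

    import numpy as np

    def quadratic_score(probs, correct_index):
        # Score = 1 - squared distance between the reported probability vector
        # and the indicator vector of the correct option.
        probs = np.asarray(probs, dtype=float)
        outcome = np.zeros_like(probs)
        outcome[correct_index] = 1.0
        return 1.0 - np.sum((probs - outcome) ** 2)

    print(quadratic_score([0.7, 0.2, 0.1], 0))   # confident and right
    print(quadratic_score([0.1, 0.2, 0.7], 0))   # confident and wrong

Being confidently wrong is penalized more heavily than spreading probability across options, which is what distinguishes this scoring from simple right/wrong marking.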
Peer reviewed
Evans, Julia L.; Craig, Holly K. – Journal of Speech and Hearing Research, 1992
Analysis of spontaneous language samples of 10 children (ages 8-9) with specific language impairments found that interviews were a reliable, valid, and efficient assessment context, eliciting the same profile of behaviors as a freeplay context without altering diagnostic classifications. (Author/JDD)
Descriptors: Data Collection, Discourse Analysis, Educational Diagnosis, Efficiency
Yap, Kueh Chin; Capie, William – 1985
The purpose of this study was to compare the relative magnitude of the variance components and generalizability coefficients derived from the Teacher Performance Assessment Instruments (TPAI) data using two different methods of data collection: (1) occasions when observers were in the classroom for simultaneous observation and (2) occasions when…
Descriptors: Analysis of Variance, Classroom Observation Techniques, Data Collection, Elementary Secondary Education
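The study above compares variance components and generalizability coefficients under two data-collection methods. A minimal Python sketch of the underlying one-facet crossed G-study (persons by observers), run on synthetic ratings rather than the TPAI data and assuming the hypothetical values shown:

    import numpy as np

    def g_study_p_x_o(scores):
        # Variance components and relative G coefficient for a crossed
        # persons-by-observers design; scores has shape (persons, observers).
        x = np.asarray(scores, dtype=float)
        n_p, n_o = x.shape
        grand = x.mean()
        p_means, o_means = x.mean(axis=1), x.mean(axis=0)
        ss_p = n_o * np.sum((p_means - grand) ** 2)
        ss_o = n_p * np.sum((o_means - grand) ** 2)
        ss_res = np.sum((x - grand) ** 2) - ss_p - ss_o
        ms_p = ss_p / (n_p - 1)
        ms_o = ss_o / (n_o - 1)
        ms_res = ss_res / ((n_p - 1) * (n_o - 1))
        var_p = max((ms_p - ms_res) / n_o, 0.0)    # universe-score (person) variance
        var_o = max((ms_o - ms_res) / n_p, 0.0)    # observer variance
        var_res = ms_res                           # interaction + error
        e_rho2 = var_p / (var_p + var_res / n_o)   # G coefficient for n_o observers
        return var_p, var_o, var_res, e_rho2

    rng = np.random.default_rng(3)
    true = rng.normal(50, 10, (30, 1))             # hypothetical teacher scores
    ratings = true + rng.normal(0, 5, (30, 4))     # four observers per teacher
    print(g_study_p_x_o(ratings))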
Previous Page | Next Page »
Pages: 1  |  2