Showing 1 to 15 of 88 results
Peer reviewed
Silva, Thanuci; Santos, Regiane dos; Mallet, Débora – Journal of Education for Business, 2023
Assuring the quality of education is a concern of learning institutions. To do so, it is necessary to have assertive learning management, with consistent data on students' outcomes. This research provides associate deans and researchers with a roadmap for gathering evidence to improve the quality of open-ended assessments. Based on statistical…
Descriptors: Student Evaluation, Evaluation Methods, Business Education, Higher Education
Peer reviewed
Sen, Sedat – Creativity Research Journal, 2022
The purpose of this study was to estimate the overall reliability values for the scores produced by the Runco Ideational Behavior Scale (RIBS) and to explore the variability of RIBS score reliability across studies. To achieve this, a reliability generalization meta-analysis was carried out using the 86 Cronbach's alpha estimates obtained from 77 studies…
Descriptors: Generalization, Creativity, Meta Analysis, Higher Education
Peer reviewed
Cui, Ying; Chen, Fu; Lutsyk, Alina; Leighton, Jacqueline P.; Cutumisu, Maria – Assessment in Education: Principles, Policy & Practice, 2023
With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in work places and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and…
Descriptors: Data, Information Literacy, 21st Century Skills, Competence
Peer reviewed
Thomas, Jason E.; Hornsey, Philip E. – Journal of Instructional Research, 2014
Formative Classroom Assessment Techniques (CAT) have been well-established instructional tools in higher education since their exposition in the late 1980s (Angelo & Cross, 1993). A large body of literature exists surrounding the strengths and weaknesses of formative CATs. Simpson-Beck (2011) suggested insufficient quantitative evidence exists…
Descriptors: Classroom Techniques, Nontraditional Education, Adult Education, Formative Evaluation
Peer reviewed
Morrison, Keith – Educational Research and Evaluation, 2013
This paper reviews the literature on comparing online and paper course evaluations in higher education and provides a case study of a very large randomised trial on the topic. It presents a mixed but generally optimistic picture of online course evaluations with respect to response rates, what they indicate, and how to increase them. The paper…
Descriptors: Literature Reviews, Course Evaluation, Case Studies, Higher Education
Peer reviewed
Kapes, Jerome T.; Vansickle, Timothy R. – Measurement and Evaluation in Counseling and Development, 1992
Examined the equivalence of modes of administration of the Career Decision-Making System, comparing the paper-and-pencil and computer-based versions. Findings from 61 undergraduate students indicated that the computer-based version was significantly more reliable than the paper-and-pencil version and was generally equivalent in other respects.…
Descriptors: Comparative Testing, Computer Assisted Testing, Higher Education, Test Format
Peer reviewed
Tollefson, Nona – Educational and Psychological Measurement, 1987
This study compared the item difficulty, item discrimination, and test reliability of three forms of multiple-choice items: (1) one correct answer; (2) "none of the above" as a foil; and (3) "none of the above" as the correct answer. Twelve items in the three formats were administered in a college statistics examination. (BS)
Descriptors: Difficulty Level, Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Henk, William A. – Journal of Reading Behavior, 1981
Analyzes alternative cloze forms derived from selected deletion strategies, scoring procedures, and blank conditions for respective effects on the cloze test performance of college-level readers. (HOD)
Descriptors: Cloze Procedure, College Students, Higher Education, Reading Research
Schuldberg, David – 1988
Indices were constructed to measure individual differences in the effects of the automated testing format and repeated testing on Minnesota Multiphasic Personality Inventory (MMPI) responses. Two types of instability measures were studied within a data set from the responses of 150 undergraduate students who took a computer-administered and…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Individual Differences
Fishman, Judith – Writing Program Administration, 1984
Examines the CUNY-WAT program and questions many aspects of it, especially the choice and phrasing of topics. (FL)
Descriptors: Essay Tests, Higher Education, Test Format, Test Items
Peer reviewed
Melancon, Janet G.; Thompson, Bruce – Psychology in the Schools, 1989
Investigated the measurement characteristics of both forms of the Finding Embedded Figures Test (FEFT). College students (N=302) completed either both forms of the FEFT or one form of the FEFT and the Group Embedded Figures Test. Results suggest that the FEFT forms provide reasonably reliable and valid data. (Author/NB)
Descriptors: College Students, Field Dependence Independence, Higher Education, Multiple Choice Tests
Peer reviewed
Kumar, V. K.; And Others – Measurement and Evaluation in Counseling and Development, 1986
Disguising scale purpose by using an innocuous skill title and filler items had no effect on the reliability and validity of Rotter's Interpersonal Trust Scale. (Author)
Descriptors: College Students, Higher Education, Response Style (Tests), Student Attitudes
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double as opposed to single multiple-choice questions yielded significant differences in regard to item difficulty, item discrimination, and internal reliability, but not concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed
Kolstad, Rosemarie; And Others – Journal of Dental Education, 1982
Nonrestricted-answer multiple-choice test items are recommended as a way of including more facts and fewer incorrect answers in test items, and they do not cue successful guessing as restricted multiple-choice items can. Examination construction, scoring, and reliability are discussed. (MSE)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Green, Kathy; And Others – Educational and Psychological Measurement, 1982
Achievement test reliability and validity as a function of ability were determined for multiple sections of a large undergraduate French class. Results did not support previous arguments that decreasing the number of options results in a more efficient test for high-level examinees, but less efficient for low-level examinees. (Author/GK)
Descriptors: Academic Ability, Comparative Analysis, Higher Education, Multiple Choice Tests