Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Scoring Formulas | 10 |
| Statistical Analysis | 10 |
| Foreign Countries | 6 |
| Active Learning | 3 |
| Psychometrics | 3 |
| Scores | 3 |
| Teaching Methods | 3 |
| Test Construction | 3 |
| Test Items | 3 |
| Test Reliability | 3 |
| Undergraduate Students | 3 |
Author
| Divan, Aysha | 1 |
| Engell, Sebastian | 1 |
| Frey, Andreas | 1 |
| Gorman, Paul C. | 1 |
| Gräfe, Linda | 1 |
| Hadžibegovic, Zalkida | 1 |
| Holster, Trevor A. | 1 |
| Johnson, Jessica | 1 |
| Kimmel, Ernest W. | 1 |
| Kobrin, Jennifer L. | 1 |
| Lake, J. | 1 |
Publication Type
| Reports - Research | 8 |
| Journal Articles | 7 |
| Collected Works - Proceedings | 1 |
| Reports - Descriptive | 1 |
Education Level
| Higher Education | 10 |
| Postsecondary Education | 8 |
| Adult Education | 1 |
| Elementary Secondary Education | 1 |
| High Schools | 1 |
Location
| United Kingdom | 3 |
| Germany | 2 |
| Japan | 2 |
| Bosnia and Herzegovina… | 1 |
| Costa Rica | 1 |
| Ecuador | 1 |
| Estonia | 1 |
| Italy | 1 |
| Nicaragua | 1 |
| Poland | 1 |
| Portugal | 1 |
Assessments and Surveys
| SAT (College Admission Test) | 2 |
A Generalizable Framework for Multi-Scale Auditing of Digital Learning Provision in Higher Education
Ross, Samuel R. P-J.; Volz, Veronica; Lancaster, Matthew K.; Divan, Aysha – Online Learning, 2018
It is increasingly important that higher education institutions be able to audit and evaluate the scope and efficacy of their digital learning resources across various scales. To date there has been little effort to address this need for a validated, appropriate, and simple-to-execute method that will facilitate such an audit, whether it be at the…
Descriptors: Higher Education, Audits (Verification), Electronic Learning, Educational Resources
Rozell, Timothy G.; Johnson, Jessica; Sexten, Andrea; Rhodes, Ashley E. – Journal of College Science Teaching, 2017
Students in a junior- and senior-level Anatomy and Physiology course have the opportunity to correct missed exam questions ("regrade") and earn up to half of the original points missed. The three objectives of this study were to determine if: (a) performance on the regrade assignment was correlated with scores on subsequent exams, (b)…
Descriptors: Physiology, Scores, Grades (Scholastic), Exit Examinations
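Objective (a) above is a bivariate correlation question. The sketch below is only a hypothetical illustration of that computation; the scores are invented and are not taken from the study.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: regrade-assignment scores and subsequent-exam scores
# for the same five students (values invented for illustration only).
regrade_scores = [78, 85, 62, 90, 71]
next_exam_scores = [74, 88, 65, 93, 70]

r = correlation(regrade_scores, next_exam_scores)  # Pearson's r
print(f"Pearson correlation: {r:.2f}")
```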
Leslie, Laura J.; Gorman, Paul C. – European Journal of Engineering Education, 2017
Student engagement is vital in enhancing the student experience and encouraging deeper learning. Involving students in the design of assessment criteria is one way in which to increase student engagement. In 2011, a marking matrix was used at Aston University (UK) for logbook assessment (Group One) in a project-based learning module. The next…
Descriptors: Undergraduate Students, Evaluation Criteria, Student Participation, Learner Engagement
Holster, Trevor A.; Lake, J. – Language Assessment Quarterly, 2016
Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…
Descriptors: Guessing (Tests), Item Response Theory, Vocabulary, Language Tests
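For readers unfamiliar with the models in dispute, the item response function at issue can be written down directly. The Python sketch below implements the standard 3PL response probability; the lower asymptote c is the "guessing" parameter Stewart argued for, and constraining c = 0 and a = 1 recovers the Rasch model Beglar used. The parameter values in the example are illustrative only.

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Probability of a correct response under the 3PL IRT model.

    theta = examinee ability, a = discrimination, b = difficulty,
    c = lower asymptote (the "guessing" parameter).
    With c = 0 this reduces to the 2PL; fixing a = 1 as well gives the Rasch model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Example: a four-option item where blind guessing succeeds about 25% of the time.
print(p_correct_3pl(theta=0.0, a=1.2, b=0.5, c=0.25))
```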
Taskinen, Päivi H.; Steimel, Jochen; Gräfe, Linda; Engell, Sebastian; Frey, Andreas – Peabody Journal of Education, 2015
This study examined students' competencies in engineering education at the university level. First, we developed a competency model in one specific field of engineering: process dynamics and control. Then, the theoretical model was used as a frame to construct test items to measure students' competencies comprehensively. In the empirical…
Descriptors: Models, Engineering Education, Test Items, Outcome Measures
Hadžibegovic, Zalkida; Sliško, Josip – Center for Educational Policy Studies Journal, 2013
Active learning is individual and group participation in effective activities such as in-class observing, writing, experimenting, discussing, solving problems, and talking about to-be-learned topics. Some instructors believe that active learning is impossible, or at least extremely difficult to achieve in large lecture sessions. Nevertheless, the…
Descriptors: Student Attitudes, Attitude Change, Optics, Active Learning
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
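The "Scoring Formulas" descriptor points at corrections of this kind. As a reference point, the classical correction-for-guessing (formula-scoring) rule for k-option multiple-choice items is sketched below; it is a standard textbook adjustment, not necessarily the specific correction the authors analyse.

```python
def formula_score(num_right: int, num_wrong: int, k: int) -> float:
    """Classical correction-for-guessing ("formula scoring") for k-choice items:
    subtract a fraction of the wrong answers on the assumption that a blind
    guess is right 1 time in k. Omitted items are neither rewarded nor penalised."""
    return num_right - num_wrong / (k - 1)

# Example: 30 right, 8 wrong on four-option items -> 30 - 8/3 ≈ 27.33
print(formula_score(30, 8, 4))
```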
Nunes, Miguel Baptista, Ed.; McPherson, Maggie, Ed. – International Association for Development of the Information Society, 2016
These proceedings contain the papers of the International Conference e-Learning 2016, which was organised by the International Association for Development of the Information Society, 1-3 July, 2016. This conference is part of the Multi Conference on Computer Science and Information Systems 2016, 1-4 July. The e-Learning (EL) 2016 conference aims…
Descriptors: Professional Associations, Conferences (Gatherings), Electronic Learning, Computer Science Education
Lawrence, Ida M.; Schmidt, Amy Elizabeth – College Entrance Examination Board, 2001
The SAT® I: Reasoning Test is administered seven times a year. Primarily for security purposes, several different test forms are given at each administration. How is it possible to compare scores obtained from different test forms and from different test administrations? The purpose of this paper is to provide an overview of the statistical…
Descriptors: Scores, Comparative Analysis, Standardized Tests, College Entrance Examinations
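The comparability question the paper poses is answered by score equating. The sketch below shows linear equating, the simplest such procedure, which places a form-X score on the form-Y scale by matching means and standard deviations; the College Board's operational SAT equating is more elaborate, so this is only an illustrative baseline with invented numbers.

```python
def linear_equate(x: float, mean_x: float, sd_x: float,
                  mean_y: float, sd_y: float) -> float:
    """Map a raw score x earned on form X onto the scale of form Y by
    matching the two forms' means and standard deviations (linear equating)."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Example: form X has mean 48, SD 10; form Y has mean 50, SD 9.
print(linear_equate(55, mean_x=48, sd_x=10, mean_y=50, sd_y=9))  # -> 56.3
```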
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
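"Adds modestly to the prediction" is usually quantified as the increment in R-squared when the writing score joins the other predictors in a regression on a measure of college performance. The sketch below illustrates that comparison on simulated data; all values are invented and do not reproduce the College Board's analyses.

```python
import numpy as np

def r_squared(predictors: list[np.ndarray], y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Simulated (entirely invented) data standing in for SAT section scores
# and first-year college GPA.
rng = np.random.default_rng(0)
n = 500
reading, math_, writing = rng.normal(500.0, 100.0, size=(3, n))
gpa = 0.002 * reading + 0.002 * math_ + 0.001 * writing + rng.normal(0.0, 0.4, size=n)

base = r_squared([reading, math_], gpa)
full = r_squared([reading, math_, writing], gpa)
print(f"R^2 increment from adding the writing score: {full - base:.3f}")
```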

