Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 3 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Error Patterns | 9 |
| Higher Education | 9 |
| Scores | 9 |
| College Students | 3 |
| English (Second Language) | 3 |
| Test Items | 3 |
| Academic Ability | 2 |
| Correlation | 2 |
| Error of Measurement | 2 |
| Evaluation Methods | 2 |
| Grades (Scholastic) | 2 |
Source
| Source | Results |
| --- | --- |
| Higher Education Research and… | 1 |
| International Educational… | 1 |
| Journal of Educational… | 1 |
| Language Learning | 1 |
| Research & Practice in… | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Reports - Research | 8 |
| Journal Articles | 4 |
| Speeches/Meeting Papers | 2 |
| Collected Works - Proceedings | 1 |
| Reports - Evaluative | 1 |
Education Level
| Education Level | Results |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Adult Education | 1 |
Assessments and Surveys
| Assessment or Survey | Results |
| --- | --- |
| College Level Examination… | 1 |
| International English… | 1 |
| New Jersey College Basic… | 1 |
| Stanford Binet Intelligence… | 1 |
Müller, Amanda – Higher Education Research and Development, 2015
This paper attempts to demonstrate the differences in writing between International English Language Testing System (IELTS) bands 6.0, 6.5 and 7.0. An analysis of exemplars provided by the IELTS test makers reveals that IELTS 6.0, 6.5 and 7.0 writers can make a minimum of 206, 96 and 35 errors per 1,000 words, respectively. The following section…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Scores
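The per-1,000-word error counts quoted in the abstract above are easier to compare when inverted to "one error every N words". A quick arithmetic sketch using only the figures quoted above:

```python
# Minimum error counts per 1,000 words reported in the abstract
# for IELTS writing bands 6.0, 6.5 and 7.0.
errors_per_1000 = {"6.0": 206, "6.5": 96, "7.0": 35}

for band, n in errors_per_1000.items():
    # invert the density for easier comparison across bands
    print(f"band {band}: roughly 1 error every {1000 / n:.0f} words")
```

So the step from band 6.0 to 7.0 corresponds to roughly a sixfold drop in error density.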
Hathcoat, John D.; Penn, Jeremy D. – Research & Practice in Assessment, 2012
Critics of standardized testing have recommended replacing standardized tests with more authentic assessment measures, such as classroom assignments, projects, or portfolios rated by a panel of raters using common rubrics. Little research has examined the consistency of scores across multiple authentic assignments or the implications of this…
Descriptors: Generalizability Theory, Performance Based Assessment, Writing Across the Curriculum, Standardized Tests
Boyer, Kristy Elizabeth, Ed.; Yudelson, Michael, Ed. – International Educational Data Mining Society, 2018
The 11th International Conference on Educational Data Mining (EDM 2018) was held under the auspices of the International Educational Data Mining Society at the Templeton Landing in Buffalo, New York. This year's EDM conference was highly competitive, with 145 long and short paper submissions. Of these, 23 were accepted as full papers and 37…
Descriptors: Data Collection, Data Analysis, Computer Science Education, Program Proposals
Choi, Hee-sook – 1991
Twenty-eight protocols of the Stanford-Binet Fourth Edition (SB:IV) obtained from graduate students were examined for scoring and clerical errors that contributed to the inaccuracy of test scores. Scoring of individual items was identified as the most error prone process, as evidenced by the fact that 96% of the protocols contained scoring errors.…
Descriptors: Error Patterns, Graduate Students, Higher Education, Intelligence Tests
D'Costa, Ayres – 1993
The Sato Caution Index takes into account the number and difficulty of items gotten wrong by a student within his or her ability, as well as the number and difficulty of items gotten right beyond his or her ability. Sato subtracts the two components to define a single Caution Index. In this study, the components are kept separate, defining a…
Descriptors: Ability, College Students, Error Patterns, Factor Analysis
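The two-component idea described in the abstract above can be sketched in code. This is a minimal illustration, not Sato's published formula: it assumes item difficulty is given by group p-values and treats a student's "ability band" as the k easiest items, where k is their number-correct score; all names and data below are hypothetical.

```python
# Illustrative sketch of the two components the abstract describes
# (NOT Sato's exact Caution Index formula): easiness-weighted sums of
# items missed *within* the student's ability band and items passed
# *beyond* it. A single index would combine the two, e.g. by subtraction.

def caution_components(responses, p_values):
    """responses: list of 0/1 scores per item; p_values: proportion of
    the group answering each item correctly (higher = easier)."""
    total_right = sum(responses)
    # Rank items from easiest to hardest; under a Guttman-style
    # expectation the student should pass the total_right easiest items.
    order = sorted(range(len(p_values)), key=lambda i: -p_values[i])
    within = set(order[:total_right])   # items the student "should" pass
    beyond = set(order[total_right:])   # items beyond that band

    missed_within = sum(p_values[i] for i in within if responses[i] == 0)
    passed_beyond = sum(p_values[i] for i in beyond if responses[i] == 1)
    return missed_within, passed_beyond

resp = [1, 0, 1, 1, 0, 1]                  # hypothetical response pattern
p    = [0.9, 0.8, 0.7, 0.6, 0.5, 0.3]      # hypothetical item p-values
missed, passed = caution_components(resp, p)
```

Keeping the components separate, as the study does, lets each carry its own diagnostic meaning rather than collapsing both into one number.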
Schroeder, Lee L.
College Level Examination Program (CLEP) Tests were normed on a national basis, administering the test to nationwide samples of subjects. Norms appear in the booklet, CLEP Scores: Interpretation and Use, and consist of the test score means for groups of students receiving grades of A, B, C, D, and F in the relevant course, the proportion of…
Descriptors: Correlation, Decision Making, Equivalency Tests, Error Patterns
Smith, Malbert, III; And Others – Journal of Educational Measurement, 1979 (peer reviewed)
Results of multiple-choice tests in educational psychology were examined to discover the effects on students' scores of changing their original answer choices after reconsideration. Eighty-six percent of the students changed one or more answers, and six out of seven students who made changes improved their scores by doing so. (Author/CTM)
Descriptors: Academic Ability, Difficulty Level, Error Patterns, Guessing (Tests)
Zuengler, Jane – Language Learning, 1993 (peer reviewed)
The influence of interlocutors' relative content knowledge on conversational participation in interactions between native speakers (NSs) and nonnative speakers (NNSs) with limited oral skills is investigated. Results indicate that both NSs and NNSs appeared conversationally active, but there were different patterns of participation that could to…
Descriptors: College Students, English (Second Language), Error Patterns, Higher Education
Smith, Richard M. – 1983
Previous studies of test item bias have investigated how different groups of examinees perform differently on a given set of items. These studies imply that examinees should be treated in a certain way because they are of a particular sex or race rather than as individuals in their own right, but it is unrealistic and unfair to assume such an…
Descriptors: Academic Ability, Error of Measurement, Error Patterns, Higher Education