Showing all 7 results
Peer reviewed
Download full text (PDF on ERIC)
Gansemer-Topf, Ann M.; Downey, Jillian; Genschel, Ulrike – Research & Practice in Assessment, 2017
Effective assessment practice requires clearly defining and operationalizing terminology. We illustrate the importance of this practice by focusing on academic "undermatching"--when students enroll in colleges that are less academically selective than those for which they are academically prepared. Undermatching has been viewed as a…
Descriptors: Differences, Definitions, Vocabulary, Comparative Analysis
Peer reviewed
Direct link
Naglieri, Jack A.; Ford, Donna Y. – Roeper Review, 2015
Black and Hispanic students are undeniably underidentified as gifted and underrepresented in gifted education. The underrepresentation of the two largest groups of "minority" students is long-standing, dating several decades, and is a serious area of contention. Most debates focus on the efficacy of traditional intelligence tests with…
Descriptors: Misconceptions, Nonverbal Ability, Ability, Ability Identification
Peer reviewed
Direct link
Clarkeburn, Henriikka; Kettula, Kirsi – Teaching in Higher Education, 2012
This study looks at the fairness of assessing learning journals, both in terms of creating a valid and robust marking process and in terms of how different student groups may be unfairly disadvantaged in reflective assessment tasks. The fairness of a marking process is discussed through reflecting on the practical process and…
Descriptors: Student Evaluation, Reflection, Summative Evaluation, Formative Evaluation
Peer reviewed
Direct link
Baker, Beverly A. – Assessing Writing, 2010
In high-stakes writing assessments, rater training in the use of a rating scale does not eliminate variability in grade attribution. This realisation has been accompanied by research that explores possible sources of rater variability, such as rater background or rating scale type. However, there has been little consideration thus far of…
Descriptors: Foreign Countries, Writing Evaluation, Writing Tests, Testing
Peer reviewed
Direct link
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Owston, Ronald D.; Dudley-Marling, Curt – Journal of Research on Computing in Education, 1988
Reviews current educational software evaluation methods, highlights problems, and describes the York Educational Software Evaluation Scales (YESES), an alternative criterion-based model. Panel evaluation used by YESES is explained, and YESES results are compared with evaluations from the Educational Products Information Exchange (EPIE) to indicate…
Descriptors: Comparative Analysis, Computer Assisted Instruction, Correlation, Courseware
Hattendorf, Lynn C. – 1996
Since educational statistics, which are relatively easy to obtain, can only attempt to measure "quality," this paper asks how quality in higher education is assessed and how educational rankings, which are defined as benchmarks or attempts to measure, contribute to this process. The paper notes that while attempts to rank institutions of…
Descriptors: Citation Analysis, Comparative Analysis, Data Interpretation, Educational Assessment