| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 13 |

| Descriptor | Count |
| --- | --- |
| Construct Validity | 16 |
| Correlation | 16 |
| Scoring | 16 |
| Factor Analysis | 8 |
| Scores | 7 |
| Computer Assisted Testing | 5 |
| Comparative Analysis | 4 |
| Essays | 4 |
| Interrater Reliability | 4 |
| Undergraduate Students | 4 |
| Writing Tests | 4 |

| Author | Count |
| --- | --- |
| Attali, Yigal | 4 |
| Sinharay, Sandip | 2 |
| Alci, Bülent | 1 |
| Andersson, Marie | 1 |
| Boldt, Robert F. | 1 |
| Briller, Vladimir | 1 |
| Carmichael, Jessica A. | 1 |
| Clariana, Roy B. | 1 |
| Crossley, Scott A. | 1 |
| Deng, Hui | 1 |
| Elbert, Thomas | 1 |

| Publication Type | Count |
| --- | --- |
| Journal Articles | 12 |
| Reports - Research | 11 |
| Reports - Evaluative | 4 |
| Speeches/Meeting Papers | 2 |
| Tests/Questionnaires | 2 |
| Reports - Descriptive | 1 |

| Education Level | Count |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 4 |
| Elementary Secondary Education | 2 |
| High Schools | 2 |
| Secondary Education | 1 |

| Audience | Count |
| --- | --- |
| Practitioners | 1 |

| Location | Count |
| --- | --- |
| New Jersey | 1 |
| Sweden | 1 |
| Turkey | 1 |
| Uganda | 1 |
| Utah | 1 |

Kavgaoglu, Derya; Alci, Bülent – Educational Research and Reviews, 2016
The goal of this research, carried out in reputable dedicated call centres within the Turkish telecommunication sector, is to evaluate competence-based curriculums designed by means of internal funding through Stufflebeam's context, input, process, product (CIPP) model. In the research, a general scanning pattern in the scope of…
Descriptors: Foreign Countries, Evaluation Methods, Models, Curriculum Evaluation
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Carmichael, Jessica A.; Fraccaro, Rebecca L.; Nordstokke, David W. – Canadian Journal of School Psychology, 2014
Oral language skills are important to consider in school psychology practice, as they are directly tied to many areas of academic functioning. For example, research has demonstrated that oral language skills in early elementary school predict reading comprehension in later grades (Kendeou, van den Broek, White, & Lynch, 2009). With a…
Descriptors: Language Tests, Oral Language, Language Skills, School Psychology
Roscoe, Rod D.; Crossley, Scott A.; Snow, Erica L.; Varner, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may be unaligned to higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider how and whether the…
Descriptors: Correlation, Essays, Scoring, Writing Evaluation
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Johnson, Jeffrey Alan – Association for Institutional Research (NJ1), 2011
This paper examines the tension in the process of designing student surveys between the methodological requirements of good survey design and the institutional needs for survey data. Building on the commonly used argumentative approach to construct validity, I develop an interpretive argument for student opinion surveys that allows assessment of the…
Descriptors: Student Surveys, Graduate Surveys, Opinions, Universities
Sandell, Rolf; Kimber, Birgitta; Andersson, Marie; Elg, Mattias; Fharm, Linus; Gustafsson, Niklas; Soderbaum, Wendela – Educational Psychology in Practice, 2012
This is a psychometric analysis of an instrument to assess the socio-emotional development of school students, How I Feel (HIF), developed as a situational judgment test, with scoring based on expert judgments. The HIF test was administered in grades 4-9, 1999-2005. Internal consistency, retest reliability, and year-to-year stability were…
Descriptors: Evaluation Methods, Emotional Development, Psychometrics, Construct Validity
Development and Psychometric Evaluation of the Yale-Brown Obsessive-Compulsive Scale--Second Edition
Storch, Eric A.; Rasmussen, Steven A.; Price, Lawrence H.; Larson, Michael J.; Murphy, Tanya K.; Goodman, Wayne K. – Psychological Assessment, 2010
The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS; Goodman, Price, Rasmussen, Mazure, Delgado, et al., 1989) is acknowledged as the gold standard measure of obsessive-compulsive disorder (OCD) symptom severity. A number of areas where the Y-BOCS may benefit from revision have emerged in past psychometric studies of the Severity Scale and Symptom…
Descriptors: Check Lists, Construct Validity, Validity, Measures (Individuals)
Ertl, Verena; Pfeiffer, Anett; Saile, Regina; Schauer, Elisabeth; Elbert, Thomas; Neuner, Frank – Psychological Assessment, 2010
We studied the validity of the assessment of posttraumatic stress disorder (PTSD) and depression within the context of an epidemiological mental health survey among war-affected adolescents and young adults in northern Uganda. Local language versions of the Posttraumatic Diagnostic Scale (PDS) and the Depression section of the Hopkins Symptom…
Descriptors: African Languages, Posttraumatic Stress Disorder, Mental Health, Construct Validity
Katz, Irvin R.; Elliot, Norbert; Attali, Yigal; Scharf, Davida; Powers, Donald; Huey, Heather; Joshi, Kamal; Briller, Vladimir – ETS Research Report Series, 2008
This study presents an investigation of information literacy as defined by the ETS iSkills™ assessment and by the New Jersey Institute of Technology (NJIT) Information Literacy Scale (ILS). As two related but distinct measures, both iSkills and the ILS were used with undergraduate students at NJIT during the spring 2006 semester. Undergraduate…
Descriptors: Information Literacy, Information Skills, Skill Analysis, Case Studies
Boldt, Robert F.; Oltman, Philip K. – 1993
Administration of the Test of Spoken English (TSE) yields tapes of oral performance on items within six sections of the test. Trained scorers subsequently rate responses using four proficiency scales: pronunciation, grammar, fluency, and overall comprehensibility. This project examined the consistency of statistical relations among TSE scores with…
Descriptors: Audiotape Recordings, Construct Validity, Correlation, English (Second Language)
Kline, Rex B.; And Others – Assessment, 1994
The construct validity of a supplemental scoring system for the Kaufman Assessment Battery for Children (K-ABC) was evaluated with 146 referred school-age children (aged 6 to 12.5 years) and the K-ABC normative sample. Results support the construct validity of only part of the scoring model. (SLD)
Descriptors: Achievement Tests, Construct Validity, Correlation, Elementary Education
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
