| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 31 |
| Since 2022 (last 5 years) | 188 |
| Since 2017 (last 10 years) | 537 |
| Since 2007 (last 20 years) | 1331 |
| Author | Records |
| --- | --- |
| Deane, Paul | 12 |
| Engelhard, George, Jr. | 11 |
| Graham, Steve | 11 |
| Lee, Yong-Won | 11 |
| Attali, Yigal | 9 |
| Bridgeman, Brent | 9 |
| Powers, Donald E. | 9 |
| Kantor, Robert | 8 |
| McMaster, Kristen L. | 8 |
| Thurlow, Martha L. | 8 |
| Wind, Stefanie A. | 8 |
| Audience | Records |
| --- | --- |
| Practitioners | 130 |
| Teachers | 96 |
| Policymakers | 49 |
| Administrators | 22 |
| Students | 13 |
| Researchers | 12 |
| Parents | 4 |
| Counselors | 2 |
| Location | Records |
| --- | --- |
| Canada | 53 |
| Iran | 52 |
| China | 38 |
| California | 34 |
| Texas | 31 |
| Florida | 26 |
| Australia | 25 |
| Georgia | 25 |
| Indonesia | 25 |
| Saudi Arabia | 25 |
| Turkey | 22 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 2 |
| Does not meet standards | 3 |
Thomason, Tommy; York, Carol – 2000
This handbook guides teachers through nine workshops designed to share strategies for success on writing tests. The workshops offer practical ideas that can be implemented in the elementary classroom to set the stage for test success without compromising children's growth as writers. Following a foreword by Michael R. Sampson and…
Descriptors: Achievement Tests, Elementary Education, State Standards, Test Wiseness
Taylor, Catherine S. – 1999
This document contains the technical information for the 1998 Washington Assessment of Student Learning (WASL), Grade 4 Assessment for Reading, Mathematics, Listening, and Writing. It documents the technical quality of the assessment, including the evidence for the reliability and validity of test scores. The manual's chapters are: (1)…
Descriptors: Grade 4, Intermediate Grades, Listening Skills, Mathematics Tests
Peer reviewed: Powers, Donald E.; Fowles, Mary E. – Educational Assessment, 1997
The personal statement as an indicator of writing skill was evaluated by comparing personal statements with standardized essay measures of writing ability for 475 graduate students. Correlations of the essay and the personal statement with nontest indicators of writing skill indicate that the traditional essay was more highly related to nearly all…
Descriptors: Comparative Analysis, Essay Tests, Graduate Students, Graduate Study
Peer reviewed: Powers, Donald E.; Fowles, Mary E. – Applied Measurement in Education, 2002
Studied how performance on a standardized writing assessment might influence graduate admissions decisions if, along with test scores, test takers' essays were made available to admissions committees. Results for 27 test takers (2 essays each) suggest that the availability of examinee essays would have little, if any, influence on admissions…
Descriptors: Case Studies, College Admission, Decision Making, Essay Tests
Peer reviewed: Lee, Young-Ju – Assessing Writing, 2002
Explores plausible differences in composing processes when English as a Second Language (ESL) students write timed essays on paper and on the computer. Examines the way in which the quality of the written products differs across paper and computer modes. Reveals that individual participants are engaged in different ways and to differing degrees by…
Descriptors: Comparative Analysis, Computer Assisted Testing, Construct Validity, English (Second Language)
Peer reviewed: Welch, Catherine J.; Miller, Timothy R. – Journal of Educational Measurement, 1995
Effects of using different combinations of internal and external matching variables were examined using a generalized Mantel-Haenszel statistic, a technique based on meta-analysis, and logistic discriminant function analysis with data from a writing assessment for over 4,200 8th graders. Results did not support use of an external matching…
Descriptors: Educational Assessment, Elementary Education, Elementary School Students, Grade 8
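In the dichotomous case, the generalized Mantel-Haenszel procedure named in the Welch and Miller (1995) entry above reduces to a common odds ratio computed across matched-score strata. The sketch below is a minimal, hypothetical illustration of that computation; the function name, toy counts, and the ETS delta conversion are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of a dichotomous Mantel-Haenszel DIF computation.
# All names and the toy counts are hypothetical illustrations.
import math

def mantel_haenszel_dif(strata):
    """Return the MH common odds ratio and the ETS delta metric (MH D-DIF).

    Each stratum (a matched-score group) supplies counts:
      A = reference group correct, B = reference group incorrect,
      C = focal group correct,     D = focal group incorrect.
    """
    num = sum(s["A"] * s["D"] / (s["A"] + s["B"] + s["C"] + s["D"]) for s in strata)
    den = sum(s["B"] * s["C"] / (s["A"] + s["B"] + s["C"] + s["D"]) for s in strata)
    alpha_mh = num / den                   # common odds ratio across strata
    mh_d_dif = -2.35 * math.log(alpha_mh)  # ETS delta scale; values near 0 indicate little DIF
    return alpha_mh, mh_d_dif

# Toy example: three score strata for a single item
strata = [
    {"A": 40, "B": 10, "C": 35, "D": 15},
    {"A": 30, "B": 20, "C": 25, "D": 25},
    {"A": 20, "B": 30, "C": 15, "D": 35},
]
print(mantel_haenszel_dif(strata))
```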
Peer reviewed: Schmitz, Constance C.; delMas, Robert C. – Applied Measurement in Education, 1990
Using S. J. Messick's theoretical work concerning construct validity as a guide, underlying hypotheses for investigation when validating placement test decisions are assessed. Guidelines on validating placement decisions are offered, and the hypotheses and guidelines are applied in a validation study of the Written English Expression Placement…
Descriptors: College Freshmen, Construct Validity, Guidelines, Higher Education
Peer reviewed: Engelhard, George, Jr. – Journal of Educational Measurement, 1994
Rater errors (rater severity, halo effect, central tendency, and restriction of range) are described, and criteria are presented for evaluating rating quality based on a many-faceted Rasch (FACETS) model. Ratings of 264 compositions from the Eighth Grade Writing Test in Georgia by 15 raters illustrate the discussion. (SLD)
Descriptors: Criteria, Educational Assessment, Elementary Education, Elementary School Students
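The many-facet Rasch (FACETS) model named in the Engelhard (1994) entry above is conventionally written as an adjacent-category logit with separate facets for examinee, task, rater, and rating-scale step. The statement below uses standard textbook notation and is not quoted from the article.

```latex
% Conventional many-facet Rasch (rating scale) formulation; textbook notation,
% not reproduced from Engelhard (1994).
\[
  \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \delta_i - \alpha_j - \tau_k
\]
% \theta_n : writing ability of examinee n
% \delta_i : difficulty of writing task i
% \alpha_j : severity of rater j
% \tau_k   : difficulty of the step from category k-1 to category k
```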
Peer reviewed: Russell, Michael; Haney, Walt – Education Policy Analysis Archives, 2000
Summarizes recent developments in the use of technology in schools and state-level testing programs. Presents results of two studies, one with 114 students and one with about 200 students, that indicate that written tests administered on paper underestimate the achievement of students accustomed to working with computers. (SLD)
Descriptors: Computer Assisted Testing, Educational Technology, High School Students, High Schools
Greenwald, Elissa A.; Persky, Hilary R.; Campbell, Jay R.; Mazzeo, John – Education Statistics Quarterly, 1999
Describes the National Assessment of Educational Progress (NAEP) 1998 writing assessment. Includes average scores and achievement-level results for the United States and the states, for demographic subgroups, and for students in a variety of contexts. Also examines the framework and content of the assessment. (Author/SLD)
Descriptors: Achievement Tests, Demography, Elementary Secondary Education, National Competency Tests
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Tucha, Oliver; Lange, Klaus W. – Journal of Attention Disorders, 2005
Two experiments were performed regarding the effect of conscious control on handwriting fluency in healthy adults and ADHD children. First, 26 healthy students were asked to write a sentence under different conditions. The results indicate that automated handwriting movements are independent from visual feedback. Second, the writing performance of…
Descriptors: Feedback (Response), Handwriting, Hyperactivity, Attention Deficit Disorders
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
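In the "brute-empirical" approach contrasted in the Ben-Simon and Bennett (2007) entry above, features are weighted purely to maximize agreement with human scores. The sketch below is a toy illustration under that assumption; the feature names, data, and plain least-squares fit are hypothetical and do not describe e-rater's actual feature set or estimation method.

```python
# Toy illustration of "brute-empirical" feature weighting: weights are fit solely
# to reproduce human holistic scores, with no substantive constraints.
# Features, data, and the least-squares fit are hypothetical.
import numpy as np

# Rows = essays; columns = hypothetical features
# (word count, error rate, vocabulary sophistication index).
features = np.array([
    [250, 0.04, 0.61],
    [410, 0.02, 0.72],
    [180, 0.08, 0.48],
    [330, 0.03, 0.66],
    [500, 0.01, 0.80],
], dtype=float)
X = np.hstack([np.ones((features.shape[0], 1)), features])  # add intercept column

# Human holistic scores (1-6 scale) for the same essays.
y = np.array([3.0, 4.5, 2.0, 4.0, 5.5])

# Ordinary least squares: weights chosen purely for statistical fit.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(weights, 3))
print(np.round(X @ weights, 2))  # machine-predicted scores
```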
College of the Canyons, Valencia, CA. Office of Institutional Development. – 1996
A study was conducted at California's College of the Canyons to determine whether evidence existed of disproportionate impact by age, gender, ethnicity, or disability in placement recommendations made based on the college's writing sample assessment test. The sample consisted of 617 students who took the writing test and writing sample test for…
Descriptors: Community Colleges, Student Characteristics, Student Placement, Test Bias
Gao, Xiaohong – 1996
The use of the Work Keys Listening and Writing Assessment, part of an assessment system of the generic employability skills of individuals, needs to be accompanied by systematic evaluation of its technical qualities. This study examined sampling variability and generalizability of Listening and Writing scores when multiple forms, raters, and…
Descriptors: Adults, Difficulty Level, Generalization, Job Skills
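A generalizability study of the kind described in the Gao (1996) entry above typically decomposes score variance across persons, raters, and forms and reports a generalizability coefficient. The textbook expression below for a fully crossed persons x raters x forms design is offered only as background, not as a result from the study.

```latex
% Textbook generalizability coefficient for a fully crossed p x r x f design;
% shown as background, not as an estimate from Gao (1996).
\[
  E\rho^{2} =
  \frac{\sigma^{2}_{p}}
       {\sigma^{2}_{p}
        + \frac{\sigma^{2}_{pr}}{n_r}
        + \frac{\sigma^{2}_{pf}}{n_f}
        + \frac{\sigma^{2}_{prf,e}}{n_r n_f}}
\]
% \sigma^2_p       : universe-score (person) variance
% \sigma^2_{pr}, \sigma^2_{pf} : person-by-rater and person-by-form interaction variances
% \sigma^2_{prf,e} : residual variance; n_r, n_f = numbers of raters and forms sampled
```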
