| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 13 |
| Since 2007 (last 20 years) | 22 |

| Descriptor | Results |
| --- | --- |
| Test Validity | 51 |
| Timed Tests | 51 |
| Test Reliability | 20 |
| Scores | 13 |
| Psychometrics | 11 |
| Reading Tests | 10 |
| Reading Comprehension | 9 |
| Standardized Tests | 8 |
| Undergraduate Students | 8 |
| Correlation | 7 |
| Foreign Countries | 7 |

| Author | Results |
| --- | --- |
| Bhola, Dennison S. | 2 |
| Hambleton, Ronald K. | 2 |
| Wise, Steven L. | 2 |
| Alkoby, Moty | 1 |
| Anthony, Jared Judd | 1 |
| Baldwin, Peter | 1 |
| Bliss, Leonard B. | 1 |
| Boori, Ali Akbar | 1 |
| Bucak, Deniz | 1 |
| Burns, Edward | 1 |
| Cahan, Sorel | 1 |

| Audience | Results |
| --- | --- |
| Researchers | 5 |
| Practitioners | 3 |
| Counselors | 1 |

| Location | Results |
| --- | --- |
| Iran | 2 |
| Florida | 1 |
| India | 1 |
| Netherlands | 1 |
| North Carolina | 1 |
| South Africa | 1 |
| Spain | 1 |
| United Kingdom (Belfast) | 1 |
| Virgin Islands | 1 |

| Laws, Policies, & Programs | Results |
| --- | --- |
| Individuals with Disabilities… | 1 |

Furey, William M.; Marcotte, Amanda M.; Hintze, John M.; Shackett, Caroline M. – School Psychology Quarterly, 2016
The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect…
Descriptors: Curriculum Based Assessment, Classification, Accuracy, Test Validity
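
The metrics named in this abstract reduce to simple arithmetic on counts taken from a scored writing sample. The sketch below shows the conventional CBM-W formulas; the function name and the example counts are illustrative assumptions, not values from the study, and the truncated fourth metric is assumed to be Correct Minus Incorrect Writing Sequences (CIWS).

```python
def we_cbm_metrics(total_words: int, correct_ws: int, incorrect_ws: int) -> dict:
    """Compute common written-expression CBM metrics from raw counts.

    total_words  -- Total Words Written (TWW)
    correct_ws   -- Correct Writing Sequences (CWS)
    incorrect_ws -- Incorrect Writing Sequences (IWS)
    """
    total_ws = correct_ws + incorrect_ws
    return {
        "TWW": total_words,
        "CWS": correct_ws,
        # Percent Correct Writing Sequences: CWS as a share of all sequences.
        "%CWS": 100.0 * correct_ws / total_ws if total_ws else 0.0,
        # Correct Minus Incorrect Writing Sequences (CIWS).
        "CIWS": correct_ws - incorrect_ws,
    }

# Hypothetical 3-minute sample: 42 words, 38 correct and 7 incorrect sequences.
print(we_cbm_metrics(42, 38, 7))
```
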
Hampton, David D.; Lembke, Erica S. – Reading & Writing Quarterly, 2016
The purpose of this study was to examine 4 early writing measures used to monitor the early writing progress of 1st-grade students. We administered the measures to 23 1st-grade students biweekly for a total of 16 weeks. We obtained 3-min samples and conducted analyses for each 1-min increment. We scored samples using 2 different methods: correct…
Descriptors: Progress Monitoring, Curriculum Based Assessment, Writing Tests, Outcome Measures
Ganzeveld, Paula – ProQuest LLC, 2015
Curriculum-Based Measures in writing (CBM-W) assess a variety of fluency-based components of writing. While support exists for the use of CBM measures in the area of writing, there is a need to conduct further validation studies to investigate the utility of these measures within elementary and secondary classrooms. Since only countable indices…
Descriptors: Curriculum Based Assessment, Writing Evaluation, Test Validity, Educational Quality
Campbell, Heather; Espin, Christine A.; McMaster, Kristen – Reading and Writing: An Interdisciplinary Journal, 2013
The purpose of this study was to examine the validity and reliability of Curriculum-Based Measures in writing for English learners. Participants were 36 high school English learners with moderate to high levels of English language proficiency. Predictor variables were type of writing prompt (picture, narrative, and expository), time (3, 5, and 7…
Descriptors: Curriculum Based Assessment, Writing Tests, Test Validity, Test Reliability
Lu, Ying; Sireci, Stephen G. – Educational Measurement: Issues and Practice, 2007
Speededness refers to the situation where the time limits on a standardized test do not allow substantial numbers of examinees to fully consider all test items. When tests are not intended to measure speed of responding, speededness introduces a severe threat to the validity of interpretations based on test scores. In this article, we describe…
Descriptors: Test Items, Timed Tests, Standardized Tests, Test Validity
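
Speededness is a property of how examinees use the available time rather than a single formula, but one common screening check is the share of examinees who never reach the final item. The sketch below implements that check; the not-reached coding, the example matrix, and the roughly-20% concern threshold mentioned in the comment are assumptions for illustration, not anything taken from Lu and Sireci's article.

```python
import numpy as np

def not_reached_rate(responses: np.ndarray) -> float:
    """Share of examinees who did not reach the last item.

    responses -- examinee-by-item matrix in which np.nan marks an item
                 the examinee never reached (left blank at the end).
    """
    return float(np.mean(np.isnan(responses[:, -1])))

# Hypothetical 5-examinee, 4-item scored matrix (1/0, nan = not reached).
resp = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, np.nan],
    [0, 1, 1, 1],
    [1, 1, np.nan, np.nan],
    [1, 0, 1, 0],
])
# A common screening heuristic (an assumption here, not the article's rule):
# be concerned about speededness if more than roughly 20% of examinees
# fail to reach the final item.
print(f"not-reached rate for the last item: {not_reached_rate(resp):.0%}")
```
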
Anthony, Jared Judd – Assessing Writing, 2009
The study tests the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories; a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection
Reams, Redmond; And Others – Gifted Child Quarterly, 1990 (peer reviewed)
The study evaluated speed as a factor in Wechsler Intelligence Scale for Children-Revised performance with 66 high scoring and 36 average scoring children (ages 3-15 years). Results cast doubt on the utility of speed bonuses in tests of general intelligence with gifted children. (Author/DB)
Descriptors: Gifted, Intelligence Tests, Scoring Formulas, Talent Identification
Siegler, Robert S. – Educational Researcher, 1989 (peer reviewed)
Discusses the problems of using chronometric analysis, a common cognitive psychological method, for educational assessment. Suggests that cognitive assessment has not reached the precision needed to analyze individual differences. (FMW)
Descriptors: Cognitive Measurement, Elementary Education, Evaluation, Individual Differences
Newkirk, Thomas – 1977
The validity of current standardized competency tests for writing is in doubt, as is the need for such testing at all. Some tests, especially those requiring little writing, may not be testing what they purport to test (content validity). Instructional validity (testing what has actually been taught) raises the issue that what is being tested is…
Descriptors: Basic Skills, Standardized Tests, Test Bias, Test Interpretation
Hambleton, Ronald K. – Educational and Psychological Measurement, 1987 (peer reviewed)
This paper presents an algorithm for determining the number of items to measure each objective in a criterion-referenced test when testing time is fixed and when the objectives vary in their levels of importance, reliability, and validity. Results of four special applications of the algorithm are presented. (BS)
Descriptors: Algorithms, Behavioral Objectives, Criterion Referenced Tests, Test Construction
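
Hambleton's published algorithm is not reproduced here; purely to illustrate the kind of constrained allocation the abstract describes (splitting a fixed item budget across objectives that differ in weight), a generic largest-remainder sketch follows. The objective names and weights are invented.

```python
def allocate_items(total_items: int, weights: dict) -> dict:
    """Split a fixed item budget across objectives in proportion to weights.

    This is a generic largest-remainder allocation, not Hambleton's (1987)
    algorithm; it only illustrates the kind of constrained allocation the
    abstract describes.
    """
    total_w = sum(weights.values())
    raw = {obj: total_items * w / total_w for obj, w in weights.items()}
    alloc = {obj: int(r) for obj, r in raw.items()}
    leftover = total_items - sum(alloc.values())
    # Give any leftover items to the objectives with the largest remainders.
    for obj in sorted(raw, key=lambda o: raw[o] - alloc[o], reverse=True)[:leftover]:
        alloc[obj] += 1
    return alloc

# Hypothetical importance weights for three objectives on a 40-item test.
print(allocate_items(40, {"objective_A": 0.5, "objective_B": 0.3, "objective_C": 0.2}))
```
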
Fishkin, Anne S.; Kampsnider, John J. – 1996
Since the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) was published in 1991, it has been reported that fewer students are qualifying for gifted programs that use the WISC-III as a criterion measure. WISC-III differs from the WISC-Revised (WISC-R) in having a greater emphasis on speed of response, which could…
Descriptors: Ability Identification, Children, Elementary Education, Elementary School Students
Rindler, Susan Ellerin – Educational and Psychological Measurement, 1980 (peer reviewed)
A short verbal aptitude test was administered under varying time limits, with answer sheets specially designed to allow skipped items to be identified. Skipping items appeared advantageous for the more able examinees (based on grade point averages) but disadvantageous for the less able. (Author/RL)
Descriptors: Aptitude Tests, Difficulty Level, Higher Education, Response Style (Tests)
Bliss, Leonard B. – 1984
A model for the validation of standardized tests of academic achievement upon populations not represented in the samples used to standardize the tests is presented, and the results of a field testing of the model are described. The 1973 editions of the Stanford Achievement Test and the Test of Academic Skills were administered to a sample of…
Descriptors: Achievement Tests, Basic Skills, Elementary Secondary Education, Item Analysis
Nelson, Jack K.; Dorociak, Jeff J. – Journal of Physical Education, Recreation & Dance, 1982 (peer reviewed)
Test measurement, reliability, and validity are discussed in relation to methods of physical fitness testing. A successful testing method which involved students testing their peers is described, showing the administration of various test items and the use of test practice procedures. (JN)
Descriptors: Higher Education, Physical Education, Physical Fitness, Student Participation
Kong, Xiaojing J.; Wise, Steven L.; Bhola, Dennison S. – Educational and Psychological Measurement, 2007
This study compared four methods for setting item response time thresholds to differentiate rapid-guessing behavior from solution behavior. Thresholds were either (a) common for all test items, (b) based on item surface features such as the amount of reading required, (c) based on visually inspecting response time frequency distributions, or (d)…
Descriptors: Test Items, Reaction Time, Timed Tests, Item Response Theory
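
Method (a) in this abstract, a single response-time threshold applied to every item, is straightforward to operationalize: any response faster than the cutoff is flagged as a rapid guess. The 3-second cutoff and the sample response times below are illustrative assumptions, not values from the study.

```python
def flag_rapid_guesses(response_times: list[float], threshold: float = 3.0) -> list[bool]:
    """Flag responses faster than a single common time threshold.

    threshold -- cutoff in seconds applied to every item (method (a) in
                 the abstract); the 3.0 s default is an assumed value.
    """
    return [rt < threshold for rt in response_times]

# Hypothetical response times (seconds) for one examinee across six items.
times = [1.2, 14.8, 2.7, 33.0, 41.5, 0.9]
flags = flag_rapid_guesses(times)
print(f"rapid-guessing rate: {sum(flags) / len(flags):.0%}")
```
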
