Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Powers, Sonya; Turhan, Ahmet; Binici, Salih – Pearson, 2012
The population sensitivity of vertical scaling results was evaluated for a state reading assessment spanning grades 3-10 and a state mathematics test spanning grades 3-8. Subpopulations considered included males and females. The 3-parameter logistic model was used to calibrate math and reading items and a common item design was used to construct…
Descriptors: Scaling, Equated Scores, Standardized Tests, Reading Tests
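For orientation, the 3-parameter logistic (3PL) model mentioned above gives the probability that an examinee of ability \theta answers item i correctly; this is the standard form of the model, not values reported in the paper:

\[ P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\!\bigl(-D a_i(\theta - b_i)\bigr)}, \]

where a_i, b_i, and c_i are the discrimination, difficulty, and pseudo-guessing parameters and D is an optional scaling constant (often 1.7). In a common-item design, the items shared across adjacent grade-level forms are what allow the separate grade calibrations to be placed on a single vertical scale.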
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
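As background on the property being exploited (standard IRT linking theory, not a result of this paper): item parameters are invariant only up to the linear indeterminacy of the ability scale, so estimates obtained from two samples should agree after a transformation of the form

\[ \theta^{*} = A\theta + B, \qquad a^{*} = a/A, \qquad b^{*} = Ab + B, \qquad c^{*} = c, \]

with the constants A and B typically estimated from common items (for example, by mean/sigma or Stocking-Lord procedures).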
Rubin, Lois S.; Mott, David E. W. – 1984
The effect of an item's position within a test on its difficulty value was investigated. Using a 60-item operational test consisting of 5 subtests, 60 items were placed as experimental items on a number of spiralled test forms in three different positions (first, middle, last) within the subtest composed of like items.…
Descriptors: Difficulty Level, Item Analysis, Minimum Competency Testing, Reading Tests
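A quick sketch of the kind of comparison such a position study involves: classical item difficulty (the proportion of examinees answering correctly) is computed separately for each placement of the same experimental item. The data and labels below are hypothetical, not taken from the report.

# Hypothetical item-position analysis: compare classical difficulty (p-values)
# for the same experimental item placed first, middle, or last in a subtest.
from statistics import mean

# responses[position] holds 0/1 scores from examinees who saw the item in
# that position (made-up data for illustration only).
responses = {
    "first":  [1, 1, 0, 1, 1, 0, 1, 1],
    "middle": [1, 0, 0, 1, 1, 0, 1, 0],
    "last":   [0, 0, 1, 0, 1, 0, 0, 1],
}

for position, scores in responses.items():
    p_value = mean(scores)  # proportion correct = classical difficulty
    print(f"{position:>6}: p = {p_value:.2f}  (n = {len(scores)})")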
Hsu, Yaowen; Ackerman, Terry A. – 1994
This paper summarizes an investigation of the format used for equating the 1993 Illinois Goal Assessment Program (IGAP) sixth grade reading test. In 1992, each student took only one test, either a narrative test or an expository test. In 1993, there was only one test, which included both formats. Several possible approaches for linking the 1993…
Descriptors: Context Effect, Elementary School Students, Equated Scores, Grade 6
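One of the simplest candidates in any such linking problem is linear equating, stated here only as a generic illustration (the report's actual linking approaches are truncated above): a score x on form X is placed on the scale of form Y by matching means and standard deviations,

\[ l_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X). \]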
Brittain, Mary M.; Brittain, Clay V. – 1981
A behavioral domain is well-defined when it is clear to both test developers and test users which categories of performance should or should not be considered for potential test items. Only those tests that are keyed to well-defined domains meet the definition of criterion-referenced tests. The greatest proliferation of criterion-referenced tests…
Descriptors: Criterion Referenced Tests, Reading Achievement, Reading Tests, Test Construction
Maimon, Lia F. – 1994
Two studies addressed the effects of failure in reading test performance. In experiment 1, 36 students in 3 intact reading and study skills courses at an upstate New York community college completed a questionnaire, were administered an "unsolvable" reading test, were either given no feedback or "failure feedback," an…
Descriptors: Community Colleges, Failure, Reading Research, Reading Tests
Thompson, Tony D.; Davey, Tim – 1999
Methods to control the test construct and the efficiency of a computerized adaptive test (CAT) were studied in the context of a reading comprehension test given as part of a battery of tests for college admission. A goal of the study was to create test scores that were interchangeable with those from a fixed-form, paper-and-pencil test. The first…
Descriptors: Adaptive Testing, College Entrance Examinations, Comparative Analysis, Computer Assisted Testing
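The usual mechanism behind the efficiency manipulated in CAT studies of this kind is maximum-information item selection. The sketch below uses the two-parameter logistic information function and invented item parameters, so it illustrates the general idea rather than the specific procedures in this report.

# Minimal sketch of maximum-information item selection in a CAT,
# using the 2PL information function I(theta) = a^2 * P * (1 - P).
# Item parameters below are invented for illustration.
import math

items = [  # (item_id, a, b)
    ("R01", 1.2, -0.5),
    ("R02", 0.8,  0.0),
    ("R03", 1.5,  0.7),
    ("R04", 1.0,  1.2),
]

def prob_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, administered):
    """Pick the unused item with the largest Fisher information at theta."""
    candidates = [it for it in items if it[0] not in administered]
    return max(candidates, key=lambda it: information(theta, it[1], it[2]))

print(select_next_item(theta=0.3, administered={"R01"}))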
Kobrin, Jennifer L. – 2000
The comparability of computerized and paper-and-pencil tests was examined from a cognitive perspective, using verbal protocols rather than psychometric methods as the primary mode of inquiry. Reading comprehension items from the Graduate Record Examinations were completed by 48 college juniors and seniors, half of whom took the computerized test…
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Higher Education
Huntley, Renee M.; Miller, Sherri – 1994
Whether the shaping of test items can itself result in qualitative differences in examinees' comprehension of reading passages was studied using the Pearson-Johnson item classification system. The specific practice studied incorporated, within an item stem, line references that point the examinee to a specific location within a reading passage.…
Descriptors: Ability, Classification, Difficulty Level, High School Students
Miller, Samuel D.; Smith, Donald E. P. – 1984
To test the assumption that questions measuring literal comprehension and those measuring inferential comprehension are equally valid indices for both oral and silent reading tests at all skill levels, questions from the Analytic Reading Inventory were classified as either literal or inferential. Subjects, 94 children in grades two to five, read…
Descriptors: Differences, Elementary Education, Oral Reading, Reading Ability
Henk, William A. – 1983
The specific performance characteristics of eight alternative cloze test formats were examined at the fourth and sixth grade levels. At each grade, 64 subjects were randomly assigned to one of four basic treatments (every-fifth/standard, every-fifth/cued, total random/standard, and total random/cued) and tested. Responses on each of the cloze…
Descriptors: Cloze Procedure, Comparative Analysis, Grade 4, Grade 6
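For readers unfamiliar with the formats being compared, an every-fifth-word cloze passage simply blanks out each fifth word and scores examinees on the words they supply. The short sketch below constructs one from an arbitrary sample sentence; the text is illustrative, not drawn from the study.

# Build an every-fifth-word cloze passage: delete each 5th word and
# record the deleted words as the answer key. Sample text is illustrative.
def make_cloze(text, nth=5):
    words, key = text.split(), []
    for i in range(nth - 1, len(words), nth):
        key.append(words[i])
        words[i] = "_____"
    return " ".join(words), key

passage = ("The students opened their books and began reading quietly "
           "while the teacher walked slowly around the room")
cloze, answers = make_cloze(passage)
print(cloze)
print(answers)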
Evans, John Andrew; Ackerman, Terry – 1994
The strengths of item response theory (IRT) are used to examine the degree of information individual test items provide, as well as to investigate how the individual item types contribute to the overall measurement accuracy of the Illinois Goal Assessment Program (IGAP) reading test. Using the graded-response model of Samejima (1969), the amount…
Descriptors: Ability, Educational Diagnosis, Elementary Education, Elementary School Students
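In Samejima's (1969) graded-response model, referenced above, the probability of responding in category k or higher to item i follows a 2PL curve, and category probabilities are differences of adjacent curves; the standard form is stated here only for orientation:

\[ P^{*}_{ik}(\theta) = \frac{1}{1 + \exp\!\bigl(-a_i(\theta - b_{ik})\bigr)}, \qquad P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta), \]

with \(P^{*}_{i0}(\theta) = 1\) and \(P^{*}_{i,m_i+1}(\theta) = 0\). Summing information over categories then shows how much each item contributes at each ability level.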
Brown, Scott W.; Hall, Vernon C. – 1982
A 1978 study (Torgesen, Bowen and Ivey) of the structure and modality variables of the Visual-Aural Digit Span (VADS) test was replicated to determine: (1) if the effects generalized across age; (2) if differences between simultaneous and sequential visually presented items were due to mode of presentation or the amount of study time; (3) the…
Descriptors: Cognitive Measurement, Cognitive Processes, Elementary Education, Grade 2
Roid, Gale; And Others – 1980
Using informal, objectives-based, or linguistic methods, three elementary school teachers and three experienced item writers developed criterion-referenced pretests-posttests to accompany a prose passage. Item difficulties were tabulated on the responses of 364 elementary students. The informal-subjective method, used by many achievement test…
Descriptors: Criterion Referenced Tests, Difficulty Level, Elementary Education, Elementary School Teachers