Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 1 |
Descriptor
Test Format | 8 |
Scores | 5 |
Item Response Theory | 4 |
Simulation | 4 |
Test Items | 4 |
Adaptive Testing | 3 |
Comparative Analysis | 3 |
Computer Assisted Testing | 3 |
Estimation (Mathematics) | 3 |
High Schools | 3 |
Ability | 2 |
Author
Pommerich, Mary | 8 |
Nicewander, W. Alan | 3 |
Hanson, Bradley A. | 2 |
Burden, Timothy | 1 |
Harris, Deborah J. | 1 |
Sconing, James A. | 1 |
Thissen, David | 1 |
Williams, Valerie S. L. | 1 |
Publication Type
Reports - Research | 5 |
Journal Articles | 3 |
Speeches/Meeting Papers | 3 |
Reports - Evaluative | 2 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Education Level
Grade 11 | 1 |
Grade 12 | 1 |
High Schools | 1 |
Secondary Education | 1 |
Assessments and Surveys
ACT Assessment | 1 |
North Carolina End of Course… | 1 |
Pommerich, Mary – Journal of Technology, Learning, and Assessment, 2007
Computer-administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that use both computer and paper administrations, mode effects are problematic: they can artificially inflate or deflate examinee scores. As such, researchers…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Scores

Pommerich, Mary; Nicewander, W. Alan; Hanson, Bradley A. – Journal of Educational Measurement, 1999
Studied whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form rather than the entire domain of items. Evaluated six Item Response Theory-based domain score estimation methods through simulation and concluded they performed better than the observed score on the form taken. (SLD)
Descriptors: Estimation (Mathematics), Groups, Item Response Theory, Scores
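
The domain score estimation these studies evaluate can be illustrated with a small sketch. Under an IRT model, an examinee's expected percent correct over the full item domain is the mean of the model-implied response probabilities across all domain items, evaluated at an ability estimate obtained from whatever single form was taken; a group estimate averages this over examinees. The 2PL model, item parameters, and sample sizes below are illustrative assumptions, not the papers' data or the specific six methods compared:

    import numpy as np

    def p_correct_2pl(theta, a, b):
        """2PL probability of a correct response at ability theta."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def irt_domain_score(theta_hat, a, b):
        """Expected percent correct over the full item domain: the mean
        of the 2PL response probabilities across all domain items."""
        return p_correct_2pl(theta_hat, a, b).mean()

    # Hypothetical domain of 200 items (parameters invented for illustration).
    rng = np.random.default_rng(0)
    a = rng.uniform(0.5, 2.0, size=200)   # discriminations
    b = rng.normal(0.0, 1.0, size=200)    # difficulties

    # Group-level estimate: average the expected domain score over the
    # ability estimates from the single form each examinee actually took.
    theta_hats = rng.normal(0.0, 1.0, size=500)
    group_score = np.mean([irt_domain_score(t, a, b) for t in theta_hats])
    print(f"Estimated group percent correct: {100 * group_score:.1f}%")

The appeal of such estimators is that ability can be estimated from any single form, so the domain-level summary does not require administering every item in the domain.
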
Pommerich, Mary; Nicewander, W. Alan – 1998
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT)-based domain score estimation methods were evaluated, under conditions of few items per content area per…
Descriptors: Ability, Estimation (Mathematics), Groups, Item Response Theory

Williams, Valerie S. L.; Pommerich, Mary; Thissen, David – Journal of Educational Measurement, 1998
Created a developmental scale for the North Carolina End-of-Grade Mathematics Tests, using a subset of identical test forms administered to adjacent grade levels, with both Thurstone scaling and Item Response Theory methods. Discusses differences in the patterns the two approaches produced. (Author/SLD)
Descriptors: Achievement Tests, Child Development, Comparative Analysis, Elementary Secondary Education
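
The abstract names Thurstone scaling without detailing the variant. In one common version (an assumption here, not necessarily the authors' exact procedure), the within-grade proportions scoring below each point on the common form are converted to normal deviates, and the linear relation between the two grades' deviates yields the shift in mean and spread that places both grades on one developmental scale. A rough sketch with invented frequency data:

    import numpy as np
    from scipy.stats import norm

    def thurstone_link(freq_lower, freq_upper):
        """Link two adjacent grades that took a common form by converting
        within-grade cumulative proportions to normal deviates and fitting
        the linear relation between the two sets of deviates."""
        def deviates(freq):
            cum = np.cumsum(freq) / freq.sum()
            # Proportion at or below each score boundary; drop the final
            # point (always 1) and clip away 0/1 before the inverse normal.
            return norm.ppf(np.clip(cum[:-1], 1e-6, 1 - 1e-6))

        z_lo, z_hi = deviates(freq_lower), deviates(freq_upper)
        ok = (np.abs(z_lo) < 2.5) & (np.abs(z_hi) < 2.5)  # stable points only
        # The slope estimates the SD ratio between grades; the intercept
        # reflects mean growth on the common scale.
        slope, intercept = np.polyfit(z_hi[ok], z_lo[ok], 1)
        return slope, intercept

    # Hypothetical raw-score frequencies on a 40-item common form.
    rng = np.random.default_rng(1)
    freq_g3 = np.bincount(rng.binomial(40, 0.55, 2000), minlength=41)
    freq_g4 = np.bincount(rng.binomial(40, 0.65, 2000), minlength=41)
    print(thurstone_link(freq_g3, freq_g4))
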
Pommerich, Mary; Hanson, Bradley A.; Harris, Deborah J.; Sconing, James A. – 1999
This paper focuses on methodological issues in applying equipercentile equating methods to pairs of tests that do not meet the assumptions of equating. This situation is referred to as a concordance situation, as opposed to an equating situation, and the end result is a concordance table that gives "comparable" scores between the tests…
Descriptors: College Entrance Examinations, Comparative Analysis, Equated Scores, Error of Measurement
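
Equipercentile linking itself is standard: each score on one test is paired with the score on the other test that has the same percentile rank in a group that took both. When the tests do not measure the same construct under the same conditions, the result is read as a concordance table rather than an equating. The mid-percentile-rank definition below is conventional, but the score scales and data are invented for illustration:

    import numpy as np

    def percentile_ranks(scores, points):
        """Mid-percentile rank (0-100) of each score point."""
        n = len(scores)
        return np.array([
            100.0 * (np.sum(scores < p) + 0.5 * np.sum(scores == p)) / n
            for p in points
        ])

    def concordance_table(scores_x, scores_y, points_x, points_y):
        """Map each test-X score to the test-Y score with the same
        percentile rank, interpolating between observed ranks."""
        pr_x = percentile_ranks(scores_x, points_x)
        pr_y = percentile_ranks(scores_y, points_y)
        return np.interp(pr_x, pr_y, points_y)

    # Invented data: 1,000 examinees who took both tests.
    rng = np.random.default_rng(2)
    ability = rng.normal(size=1000)
    x = np.clip(np.round(20 + 5 * ability + rng.normal(0, 2, 1000)), 0, 36)
    y = np.clip(np.round(500 + 100 * ability + rng.normal(0, 40, 1000)),
                200, 800)

    table = concordance_table(x, y, np.arange(0, 37), np.arange(200, 801))
    print(np.round(table[15:20]))  # Y scores concordant with X = 15..19

When the two tests are not parallel forms, the resulting scores are "comparable" only in a group sense, which is the caution behind the paper's distinction between concordance and equating.
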
Pommerich, Mary – 2002
This paper considers differences in modes of test administration, addressing three questions: (1) Do examinees respond to items in the same way across administration modes and computer interface variations? (2) What are some of the factors that can contribute to modal effects? and (3) Can item parameters calibrated from paper and pencil…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Computer Literacy
Pommerich, Mary; Burden, Timothy – 2000
A small-scale study was conducted to compare test-taking strategies, problem-solving strategies, and general impressions about the test across computer and paper-and-pencil administration modes. Thirty-six examinees (high school students) participated in the study. Each examinee took a test in one of the content areas of English, Mathematics,…
Descriptors: Adaptive Testing, Attitudes, Comparative Analysis, Computer Assisted Testing