Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Pomplun, Mark; Ritchie, Timothy; Custer, Michael – Educational Assessment, 2006
This study investigated factors related to score differences on computerized and paper-and-pencil versions of a series of primary K-3 reading tests. Factors studied included item and student characteristics. The results suggest that the score differences were more related to student than item characteristics. These student characteristics include…
Descriptors: Reading Tests, Student Characteristics, Response Style (Tests), Socioeconomic Status
McGhee, Debbie E.; Lowell, Nana – New Directions for Teaching and Learning, 2003
This study compares mean ratings, inter-rater reliabilities, and the factor structure of items for online and paper student-rating forms from the University of Washington's Instructional Assessment System.
Descriptors: Psychometrics, Factor Structure, Student Evaluation of Teacher Performance, Test Items
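The abstract does not report which reliability coefficient McGhee and Lowell used. A one-way intraclass correlation is one common way to operationalize inter-rater reliability for rating forms; the sketch below is a minimal illustration under that assumption, with all names hypothetical.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1): agreement of k raters across n targets.

    ratings : 2-D array, rows = rated targets (e.g., course sections),
              columns = raters; entries are the numeric ratings.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares from a one-way ANOVA.
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Compare the reliability of online vs. paper forms (illustrative data):
online_ratings = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]])
print(icc_oneway(online_ratings))
```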
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty between computer- and paper-based test delivery media, i.e., differential item functioning (DIF). Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
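The abstract does not name the DIF procedure Gu, Drake, and Wolfe applied. The Mantel-Haenszel statistic is a standard choice for flagging DIF between two groups (here, the two delivery media) after matching examinees on ability; the following is a minimal sketch under that assumption, with illustrative names, stratifying by banded total score.

```python
import numpy as np

def mantel_haenszel_dif(correct, focal, stratum):
    """Mantel-Haenszel common odds ratio for one item, plus the ETS delta scale.

    correct : 1 if the examinee answered the item correctly, else 0
    focal   : 1 for the focal group (e.g., computer-based), 0 for reference (paper)
    stratum : ability stratum per examinee (e.g., banded total score)
    """
    correct, focal, stratum = map(np.asarray, (correct, focal, stratum))
    num = den = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        a = np.sum((focal[m] == 0) & (correct[m] == 1))  # reference, correct
        b = np.sum((focal[m] == 0) & (correct[m] == 0))  # reference, incorrect
        c = np.sum((focal[m] == 1) & (correct[m] == 1))  # focal, correct
        d = np.sum((focal[m] == 1) & (correct[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    odds_ratio = num / den
    delta = -2.35 * np.log(odds_ratio)  # |delta| >= 1.5 is conventionally "large" DIF
    return odds_ratio, delta
```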
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set online and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
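A facility value is simply the proportion of pupils answering a question correctly, so the mode comparison in Johnson and Green's study reduces to differencing per-question proportions; a minimal sketch (data and names illustrative):

```python
import numpy as np

def facility_values(responses):
    """Facility value per question: the proportion of pupils answering correctly.

    responses : 2-D array, rows = pupils, columns = questions (1 = correct, 0 = incorrect)
    """
    return np.asarray(responses).mean(axis=0)

# Matched question sets taken on screen and on paper (illustrative data):
online = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
paper = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 0]])
mode_gap = facility_values(online) - facility_values(paper)  # per-question mode effect
```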
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
