Publication Date
| Publication Date | Results |
| --- | --- |
| In 2026 | 1 |
| Since 2025 | 21 |
| Since 2022 (last 5 years) | 149 |
| Since 2017 (last 10 years) | 410 |
| Since 2007 (last 20 years) | 685 |
Author
| Author | Results |
| --- | --- |
| Wise, Steven L. | 10 |
| Sinharay, Sandip | 7 |
| Sawaki, Yasuyo | 6 |
| Attali, Yigal | 5 |
| Bennett, Randy Elliot | 5 |
| Bridgeman, Brent | 5 |
| Lee, Yong-Won | 5 |
| Ling, Guangming | 5 |
| Luecht, Richard M. | 5 |
| Nese, Joseph F. T. | 5 |
| Liu, Ou Lydia | 4 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 6 |
| Teachers | 6 |
| Researchers | 4 |
| Parents | 1 |
| Students | 1 |
Location
| Location | Results |
| --- | --- |
| China | 20 |
| Florida | 20 |
| Turkey | 20 |
| Iran | 18 |
| Taiwan | 16 |
| Japan | 13 |
| Germany | 12 |
| Canada | 11 |
| Texas | 11 |
| Australia | 10 |
| United Kingdom | 10 |
Laws, Policies, & Programs
| Law, Policy, or Program | Results |
| --- | --- |
| No Child Left Behind Act 2001 | 7 |
| Race to the Top | 2 |
| Elementary and Secondary… | 1 |
| Head Start | 1 |
| Individuals with Disabilities… | 1 |
What Works Clearinghouse Rating
| WWC Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Brown, Richard S.; Villarreal, Julio C. – International Journal of Testing, 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Descriptors: Psychometrics, Test Bias, Testing, Simulation
Jelden, D. L. – 1987
A study of 696 undergraduates at the University of Northern Colorado was undertaken to determine the effects of computerized unit test item feedback on final examination scores. The study, which employed the PHOENIX computer managed instruction system, included students at all undergraduate levels enrolled in an Oceanography course. To determine…
Descriptors: College Students, Computer Assisted Instruction, Computer Assisted Testing, Feedback
Kelly, P. Adam – 2001
The purpose of this research was to establish, within the constraints of the methods presented, whether the computer is capable of scoring essays in much the same way that human experts rate essays. The investigation attempted to establish what was actually going on within the computer and within the mind of the rater and to describe the degree to…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Essays, Higher Education
Capar, Nilufer K.; Thompson, Tony; Davey, Tim – 2000
Information provided for computerized adaptive test (CAT) simulees was compared under two conditions on two moderately correlated trait composites, mathematics and reading comprehension. The first condition used information provided by in-scale items alone, while the second condition used information provided by in- and out-of-scale items together…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
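The comparison in the Capar, Thompson, and Davey study turns on how much statistical information items contribute toward a trait estimate. As an illustrative aside only (not the authors' procedure), the sketch below computes Fisher information under a unidimensional two-parameter logistic (2PL) IRT model with made-up item parameters; because test information is additive across items in that model, the "in-scale only" and "in- plus out-of-scale" conditions can be contrasted by summing over the relevant item sets.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta)), with
    P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical (a, b) parameters for a few "in-scale" math items
# and "out-of-scale" reading items.
math_items = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.8)]
reading_items = [(1.0, -0.2), (1.1, 0.4)]

theta = 0.5  # provisional ability estimate for a simulee

# Test information is additive across items, so the two conditions can be
# compared by summing with and without the out-of-scale items.
info_in_scale = sum(item_information_2pl(theta, a, b) for a, b in math_items)
info_all = info_in_scale + sum(item_information_2pl(theta, a, b)
                               for a, b in reading_items)

print(f"In-scale information at theta={theta}: {info_in_scale:.3f}")
print(f"In- plus out-of-scale information:     {info_all:.3f}")
```

With moderately correlated trait composites, as in the study, the cross-scale contribution would in practice be modeled multidimensionally rather than by the simple unidimensional sum used here.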
Peer reviewed: Bugbee, Alan C., Jr.; Bernt, Frank M. – Journal of Research on Computing in Education, 1990
Discusses the use of computer administered testing by The American College. Student performance on computer administered versus paper-and-pencil tests is examined, student attitudes about exams are described, the effects of time limits on computerized testing are considered, and offline versus online testing is discussed. (31 references) (LRW)
Descriptors: Academic Achievement, Computer Assisted Testing, Intermode Differences, Online Systems
Peer reviewed: Stocking, Martha L.; Jirele, Thomas; Lewis, Charles; Swanson, Len – Journal of Educational Measurement, 1998
Constructed a pool of items from operational tests of mathematics to investigate the feasibility of using automated-test-assembly (ATA) methods to simultaneously moderate possibly irrelevant differences between the performance of women and men and of African-American and White test takers. Discusses the usefulness of ATA. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
Peer reviewed: Luecht, Richard M. – Journal of Educational Measurement, 1998
Comments on the application of a proposed automated-test-assembly (ATA) method to the problem of reducing potential performance differentials among population subgroups and points out some pitfalls. Presents a rejoinder by M. Stocking and others. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
Pomplun, Mark; Custer, Michael – Journal of Educational Computing Research, 2005
This study investigated the equivalence of scores from computerized and paper-and-pencil formats of a series of K-3 reading screening tests. Concerns about score equivalence on the computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary school students, and teacher versus computer…
Descriptors: Screening Tests, Reading Tests, Family Income, Factor Analysis
Peer reviewed: Lumsden, Jill A.; Sampson, James P., Jr.; Reardon, Robert C.; Lenz, Janet G.; Peterson, Gary W. – Measurement and Evaluation in Counseling and Development, 2004
The authors examined the extent to which the Realistic, Investigative, Artistic, Social, Enterprising, and Conventional scales and 3-point codes of the Self-Directed Search may be considered statistically and practically equivalent across 3 different modes of administration: paper-and-pencil, personal computer, and Internet. Student preferences…
Descriptors: Internet, Psychological Testing, Scores, Vocational Interests
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
Edwards, Ethan A. – 1990
Testing 1-2-3 is a general purpose testing system developed at the Computer-Based Education Research Laboratory at the University of Illinois for use on NovaNET computer-based education systems. The testing system can be used for: short, teacher-made quizzes, individualized examinations, computer managed instruction curriculum testing,…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Scores, Teacher Made Tests
Thomasson, Gary L. – 1997
Score comparability is important to those who take tests and those who use them. One important concept related to test score comparability is that of "equity," which is defined as existing when examinees are indifferent as to which of two alternate forms of a test they would prefer to take. By their nature, computerized adaptive tests…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
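The "equity" notion cited in the Thomasson report is usually traced to Lord. One common formalization, stated here for context rather than drawn from the report itself, is that for every ability level the distribution of (equated) scores an examinee would obtain is the same whichever alternate form is administered, so no examinee has a reason to prefer one form:

\[
F_{X \mid \Theta}(s \mid \theta) = F_{Y \mid \Theta}(s \mid \theta) \quad \text{for all scores } s \text{ and abilities } \theta,
\]

where \(X\) and \(Y\) denote the scores on the two alternate forms. Weaker versions require only that the conditional means, or the conditional means and variances, agree across forms.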
Peer reviewed: Bridgeman, Brent; Lennon, Mary Lou; Jackenthal, Altamese – Applied Measurement in Education, 2003
Studied the effects of variations in screen size, resolution, and presentation delay on verbal and mathematics scores on a computerized test for 357 high school juniors. No significant differences were found for mathematics scores, but verbal scores were higher with the higher-resolution display. (SLD)
Descriptors: Computer Assisted Testing, High School Students, High Schools, Mathematics Achievement
Peer reviewed: Luecht, Richard M. – Applied Psychological Measurement, 1996
The example of a medical licensure test is used to demonstrate situations in which complex, integrated content must be balanced at the total test level for validity reasons, but items assigned to reportable subscore categories may be used under a multidimensional item response theory adaptive paradigm to improve subscore reliability. (SLD)
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, Licensing Examinations (Professions)

