Publication Date

| Date range | Results |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 206 |
| Since 2022 (last 5 years) | 1098 |
| Since 2017 (last 10 years) | 2172 |
| Since 2007 (last 20 years) | 3308 |
Location

| Location | Results |
| --- | --- |
| Australia | 111 |
| Turkey | 108 |
| China | 93 |
| United Kingdom | 93 |
| Germany | 87 |
| Iran | 71 |
| Spain | 66 |
| Taiwan | 66 |
| Canada | 65 |
| Indonesia | 57 |
| Netherlands | 54 |
What Works Clearinghouse Rating

| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 2 |
| Meets WWC Standards with or without Reservations | 2 |
| Does not meet standards | 4 |
Peer reviewed: Siew, Peg-Foo – International Journal of Mathematical Education in Science and Technology, 2003
Discusses the advantages to using on-line assessment for both the instructor and the learner. Reports on the use of an online assessment tool that provides interactive feedback to students learning linear algebra. Measures success in terms of improved pass rate and students' satisfaction with the flexible learning opportunities that the tool…
Descriptors: Algebra, Computer Assisted Testing, Computer Uses in Education, Evaluation Methods
Peer reviewed: Ban, Jae-Chun; Hanson, Bradley A.; Yi, Qing; Harris, Deborah J. – Journal of Educational Measurement, 2002
Compared three online pretest calibration scaling methods through simulation: (1) marginal maximum likelihood with one expectation maximization (EM) cycle (OEM) method; (2) marginal maximum likelihood with multiple EM cycles (MEM); and (3) M. Stocking's method B. MEM produced the smallest average total error in parameter estimation; OEM yielded…
Descriptors: Computer Assisted Testing, Error of Measurement, Maximum Likelihood Statistics, Online Systems
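The Ban, Hanson, Yi, and Harris entry above compares EM-based marginal maximum likelihood methods for calibrating online pretest items. As a rough orientation only, the sketch below shows what marginal maximum likelihood with one versus several EM cycles looks like for a single pretest item under a Rasch model; the simulated data, quadrature grid, and Newton update are invented for illustration and are not the OEM, MEM, or Method B procedures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses of N examinees (abilities ~ N(0, 1)) to one pretest item
# with true Rasch difficulty 0.5. All values here are invented for illustration.
N, true_b = 2000, 0.5
theta = rng.normal(0.0, 1.0, N)
responses = (rng.random(N) < 1.0 / (1.0 + np.exp(-(theta - true_b)))).astype(float)

# Fixed quadrature grid approximating the N(0, 1) ability distribution.
nodes = np.linspace(-4.0, 4.0, 41)
weights = np.exp(-0.5 * nodes**2)
weights /= weights.sum()

def em_calibrate(b, n_cycles):
    """Estimate the item difficulty b with n_cycles of EM (marginal ML)."""
    for _ in range(n_cycles):
        # E-step: posterior weight of each quadrature node for each examinee,
        # based only on this single item's response (a deliberate simplification).
        p_nodes = 1.0 / (1.0 + np.exp(-(nodes - b)))
        like = np.where(responses[:, None] == 1.0, p_nodes, 1.0 - p_nodes)
        post = like * weights
        post /= post.sum(axis=1, keepdims=True)

        # Expected counts per node: examinees (r_q) and correct responses (c_q).
        r_q = post.sum(axis=0)
        c_q = post[responses == 1.0].sum(axis=0)

        # M-step: one Newton step on the expected complete-data log-likelihood.
        grad = np.sum(r_q * p_nodes - c_q)            # d(logL)/db
        hess = np.sum(r_q * p_nodes * (1.0 - p_nodes))
        b = b + grad / hess                           # Newton step (hessian of logL is -hess)
    return b

print("after  1 EM cycle :", round(em_calibrate(0.0, 1), 3))
print("after 10 EM cycles:", round(em_calibrate(0.0, 10), 3))
print("true difficulty   :", true_b)
```

Running several cycles typically moves the estimate closer to the difficulty used to generate the data than a single cycle does, which is the kind of difference the study quantifies as average total error.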
Peer reviewed: Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees for 2 tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
Peer reviewed: Stevens, Ronald H.; And Others – Academic Medicine, 1989
Describes a study that examined the feasibility of creating and administering computer-based problem-solving examinations for evaluating second-year medical students in immunology, and how students performed on these tests relative to their performance on concurrently administered objective and essay examinations. (Author/MLW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Medical Education
Dempsey, John V.; Wager, Susan U. – Educational Technology, 1988
Discusses and defines immediate and delayed feedback as they apply to computer-assisted instruction and testing. A research classification matrix is presented to provide a framework for classifying existing feedback studies and to guide future research, and a sample bibliography classified according to the matrix is given. (31 references) (LRW)
Descriptors: Bibliographies, Classification, Computer Assisted Instruction, Computer Assisted Testing
Peer reviewed: Berger, Steven G.; And Others – Assessment, 1994
As part of a neuropsychological assessment, 95 adult patients completed either standard or computerized versions of the Category Test. Subjects who completed the computerized version exhibited more errors than those who completed the standard version, suggesting that the computerized version may be more difficult. (SLD)
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Demography
Peer reviewed: Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test scores of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with effects of using computer calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed: Jones, W. Paul – Measurement and Evaluation in Counseling and Development, 1993
Investigated a model for reducing administration time for the Myers-Briggs Type Indicator (MBTI) using a real-data simulation of Bayesian scaling in computerized adaptive administration. Findings from a simulation study using data from 127 undergraduates strongly support the use of Bayesian scaled computerized adaptive administration of the MBTI.…
Descriptors: Bayesian Statistics, Classification, College Students, Computer Assisted Testing
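The Jones entry above refers to Bayesian scaling in a computerized adaptive administration. The following is a minimal, hypothetical sketch of that general idea: a discretized prior over a latent preference trait is updated item by item, and the session stops early once the posterior is precise enough. The item pool, response model, selection rule, and stopping rule are all invented; none of them come from the study or from the MBTI itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized latent preference scale with a standard-normal prior.
grid = np.linspace(-3.0, 3.0, 121)
prior = np.exp(-0.5 * grid**2)
prior /= prior.sum()

# Invented item pool: (discrimination a, location b) for 30 forced-choice items.
pool = [(rng.uniform(0.8, 2.0), rng.uniform(-2.0, 2.0)) for _ in range(30)]

def p_endorse(a, b, theta):
    """Probability of endorsing the keyed pole under a 2PL-style model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def adaptive_session(true_theta, max_items=30, stop_sd=0.35):
    """Administer items adaptively, updating a posterior over the trait."""
    posterior = prior.copy()
    remaining = list(range(len(pool)))
    administered = 0
    while remaining and administered < max_items:
        # Select the item whose location is closest to the current posterior mean
        # (a crude stand-in for a maximum-information rule).
        mean = float(np.sum(grid * posterior))
        item = min(remaining, key=lambda i: abs(pool[i][1] - mean))
        remaining.remove(item)

        # Simulate the examinee's response, then apply Bayes' rule on the grid.
        a, b = pool[item]
        endorsed = rng.random() < p_endorse(a, b, true_theta)
        likelihood = p_endorse(a, b, grid) if endorsed else 1.0 - p_endorse(a, b, grid)
        posterior = posterior * likelihood
        posterior /= posterior.sum()
        administered += 1

        # Stop early once the posterior standard deviation is small enough.
        mean = float(np.sum(grid * posterior))
        sd = float(np.sqrt(np.sum(posterior * (grid - mean) ** 2)))
        if sd < stop_sd:
            break
    return float(np.sum(grid * posterior)), administered

estimate, used = adaptive_session(true_theta=1.2)
print(f"estimated trait {estimate:.2f} after {used} of {len(pool)} items")
```

The early-stopping rule is what shortens administration time: examinees whose responses quickly pin down the posterior see fewer items than a fixed-length form would require.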
Peer reviewed: Bugbee, Alan C., Jr.; Bernt, Frank M. – Journal of Research on Computing in Education, 1990
Discusses the use of computer administered testing by The American College. Student performance on computer administered versus paper-and-pencil tests is examined, student attitudes about exams are described, the effects of time limits on computerized testing are considered, and offline versus online testing is discussed. (31 references) (LRW)
Descriptors: Academic Achievement, Computer Assisted Testing, Intermode Differences, Online Systems
Peer reviewed: Taylor, Carol; Kirsch, Irwin; Jamieson, Joan; Eignor, Daniel – Language Learning, 1999
Administered a questionnaire focusing on examinees' computer familiarity to 90,000 Test of English as a Foreign Language test takers. A group of 1,200 low-computer-familiar and high-computer-familiar examinees worked through a computer tutorial and a set of TOEFL test items. Concludes that no evidence exists of an adverse relationship between…
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Literacy, Familiarity
Peer reviewed: Hippisley, J.; Houghton, S. – Journal of Computer Assisted Learning, 1999
Describes a study of primary school children in Western Australia that was conducted to assess student attitudes towards a computer-based interactive arithmetic test. Examines whether the simple format of the test would capture the attention of the children long enough to gather the required statistical data. (Author/LRW)
Descriptors: Arithmetic, Attention Span, Computer Assisted Testing, Elementary Education
Peer reviewed: Stocking, Martha L.; Jirele, Thomas; Lewis, Charles; Swanson, Len – Journal of Educational Measurement, 1998
Constructed a pool of items from operational tests of mathematics to investigate the feasibility of using automated-test-assembly (ATA) methods to simultaneously moderate possibly irrelevant differences in performance between women and men and between African-American and White test takers. Discusses the usefulness of ATA. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests
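The Stocking, Jirele, Lewis, and Swanson entry above concerns automated test assembly (ATA) aimed at moderating group differences while meeting test specifications. As a loose illustration of ATA as a constrained selection problem, the sketch below greedily builds a fixed-length test from an invented item pool while balancing a target difficulty against a subgroup score gap; the pool, attributes, weights, and heuristic are all hypothetical and do not reproduce the optimization machinery used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented item pool: each item has a difficulty (proportion correct) and an
# observed score gap between two examinee groups (positive = favors group A).
pool = [{"id": i,
         "difficulty": rng.uniform(0.3, 0.9),
         "group_gap": rng.normal(0.0, 0.08)} for i in range(200)]

def assemble(pool, test_length=40, target_difficulty=0.6,
             w_difficulty=1.0, w_gap=2.0):
    """Greedy ATA heuristic: at each step add the item that keeps the running
    mean difficulty near the target and the running mean group gap near zero."""
    selected, remaining = [], list(pool)
    for _ in range(test_length):
        def penalty(item):
            diffs = [it["difficulty"] for it in selected] + [item["difficulty"]]
            gaps = [it["group_gap"] for it in selected] + [item["group_gap"]]
            return (w_difficulty * abs(float(np.mean(diffs)) - target_difficulty)
                    + w_gap * abs(float(np.mean(gaps))))
        best = min(remaining, key=penalty)
        remaining.remove(best)
        selected.append(best)
    return selected

test = assemble(pool)
print("items selected :", len(test))
print("mean difficulty:", round(float(np.mean([it["difficulty"] for it in test])), 3))
print("mean group gap :", round(float(np.mean([it["group_gap"] for it in test])), 3))
```

Raising the gap weight relative to the difficulty weight trades statistical targets for a smaller aggregate group difference, which is the kind of trade-off ATA methods make explicit.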
Peer reviewed: Brooks, Patricia J.; MacWhinney, Brian – Journal of Child Language, 2000
Two experiments examined phonological priming in children and adults using a cross-modal picture-word interference task. Pictures of familiar objects were presented on a computer screen, while interfering words were presented over headphones. Results indicate that priming effects reach a peak during a time when articulatory information is being…
Descriptors: Articulation (Speech), Computer Assisted Testing, Cues, Error Patterns
Peer reviewed: Haaf, Robert; Duncan, Brent; Skarakis-Doyle, Elizabeth; Carew, Maria; Kapitan, Paula – Language, Speech, and Hearing Services in Schools, 1999
A study involving 72 children (ages 4-8) investigated the effects of computerized presentation of the Peabody Picture Vocabulary Test-Revised Form M that used two computer-based response formats. Results found no difference in performance when students responded using standard presentation--direct pointing, computer presentation--trackball, or…
Descriptors: Computer Assisted Testing, Computer Uses in Education, Educational Technology, Elementary Education
Peer reviewed: Clariana, Roy B.; Lee, Doris – Educational Technology Research and Development, 2001
Focusing on whether computer-based study tasks should use multiple-choice or constructed-response (CR) question formats, this study hypothesized that a CR task with feedback would be superior to multiple-choice study tasks that allowed either single or multiple tries (STF, MTF). As hypothesized, CR scores were larger than MTF and STF scores,…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Feedback, Instructional Design


