Showing all 13 results
Peer reviewed
Neumann, Michelle M.; Neumann, David L. – International Journal of Research & Method in Education, 2019
Touch screen tablets are being increasingly used in schools for learning and assessment. However, the validity and reliability of assessments delivered via tablets are largely unknown. The present study tested the psychometric properties of a tablet-based app designed to measure early literacy skills. Tablet-based tests were also compared with…
Descriptors: Test Validity, Computer Assisted Testing, Handheld Devices, Emergent Literacy
Peer reviewed
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
PDF available on ERIC
Auphan, Pauline; Ecalle, Jean; Magnan, Annie – Canadian Journal of Learning and Technology, 2020
The aim of this study is to demonstrate the advantages that computerized tools provide when assessing reading ability. A new computer-based reading assessment evaluating both word reading and reading comprehension processes was administered to 687 children in primary (N=400) and secondary (N=287) schools. Accuracy (weighted scores) and speed of access…
Descriptors: Computer Assisted Testing, Reading Tests, Reading Achievement, Reading Comprehension
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
Arcia, Emily; And Others – Journal of School Psychology, 1991
Explored the validity of the Neurobehavioral Evaluation System, a set of computerized tests, and examined the validity of reaction time variability as an index of sustained attention. Findings from 105 children showed that the children were able to complete 4 of the tests. Findings from a subsample of 88 children showed test performance significantly associated with teacher ratings of…
Descriptors: Attention, Children, Computer Assisted Testing, Elementary Education
Stricker, Lawrence J.; Alderton, David L. – 1991
The usefulness of response latency data for improving the validity of a biographical inventory was assessed. The focus was on whether weighting item scores on the basis of their latencies improves the predictive validity of the inventory's total score. A total of 120 items from the Armed Services Applicant Profile (ASAP)…
Descriptors: Adults, Biographical Inventories, Computer Assisted Testing, Males
Peer reviewed
Lansman, Marcy; And Others – Intelligence, 1982
Several measures of the speed of information processing were related to ability factors derived from the Cattell-Horn theory of fluid and crystallized intelligence. Correlations among the ability measures, among the information processing measures, and between the two domains were analyzed using confirmatory factor analysis. (Author/PN)
Descriptors: Cognitive Ability, Cognitive Processes, Computer Assisted Testing, Factor Analysis
Peer reviewed
Wise, Steven L. – Applied Measurement in Education, 2006
In low-stakes testing, the motivation levels of examinees are often a matter of concern to test givers because a lack of examinee effort represents a direct threat to the validity of the test data. This study investigated the use of response time to assess the amount of examinee effort received by individual test items. In 2 studies, it was found…
Descriptors: Computer Assisted Testing, Motivation, Test Validity, Item Response Theory
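A minimal sketch of the general response-time idea described in the abstract above: responses faster than an item-specific time threshold are treated as rapid guesses rather than solution behavior, and the proportion of solution-behavior responses serves as an effort index. The threshold values and data below are hypothetical illustrations, not figures or procedures from the study itself.

```python
# Sketch of a response-time effort index: responses faster than an
# item-specific cutoff are flagged as rapid guesses. Cutoffs and response
# times here are hypothetical, chosen only to illustrate the idea.

def response_time_effort(response_times, thresholds):
    """Return the proportion of items answered with solution behavior.

    response_times -- seconds the examinee spent on each item
    thresholds     -- per-item cutoffs (seconds) below which a response
                      is flagged as a rapid guess
    """
    if len(response_times) != len(thresholds):
        raise ValueError("one response time per item is required")
    solution_behavior = [
        rt >= cutoff for rt, cutoff in zip(response_times, thresholds)
    ]
    return sum(solution_behavior) / len(solution_behavior)

# Example: an examinee who rushes two of five items gets an effort index of 0.6.
times = [14.2, 2.1, 22.8, 1.7, 9.5]       # observed response times (s)
cutoffs = [5.0, 5.0, 8.0, 5.0, 4.0]       # hypothetical per-item thresholds
print(response_time_effort(times, cutoffs))  # -> 0.6
```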
Peer reviewed
Rafaeli, Sheizaf; Tractinsky, Noam – Computers in Human Behavior, 1991
Discussion of time-related measures in computerized ability tests focuses on a study of college students that used two intelligence test item types to develop a multitrait-multimethod assessment of response time measures. Convergent and discriminant validation are discussed, correlations between response time and accuracy are examined, and…
Descriptors: Computer Assisted Testing, Correlation, Higher Education, Intelligence Tests
Martin, John T.; And Others – 1983
A conventional verbal ability test and a Bayesian adaptive verbal ability test were compared using a variety of psychometric criteria. Tests were administered to 550 Marine recruits, half of whom received two 30-item alternate forms of a conventional test and half of whom received two 30-item alternate forms of a Bayesian adaptive test. Both types…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Individual Testing
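To illustrate the general idea behind a Bayesian adaptive test of the kind contrasted with a conventional test above, the sketch below keeps a discrete posterior over ability under a Rasch model, updates it after each response, and administers the item whose difficulty is closest to the current posterior mean. The prior, item difficulties, and responses are assumptions for illustration only; the report's actual procedure may differ.

```python
# Illustrative Bayesian adaptive ability update on a discrete ability grid
# under a Rasch model. All parameters below are hypothetical.
import math

GRID = [-3.0 + 0.1 * i for i in range(61)]          # ability grid from -3 to 3

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_posterior(posterior, b, correct):
    """Multiply the posterior by the item likelihood and renormalize."""
    like = [rasch_p(t, b) if correct else 1.0 - rasch_p(t, b) for t in GRID]
    post = [p * l for p, l in zip(posterior, like)]
    total = sum(post)
    return [p / total for p in post]

def posterior_mean(posterior):
    return sum(t * p for t, p in zip(GRID, posterior))

# Roughly standard-normal prior over the grid.
prior = [math.exp(-0.5 * t * t) for t in GRID]
total = sum(prior)
posterior = [p / total for p in prior]

item_bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]  # hypothetical difficulties
responses = [True, True, False, True, False]         # hypothetical answers

for correct in responses:
    theta_hat = posterior_mean(posterior)
    b = min(item_bank, key=lambda d: abs(d - theta_hat))  # nearest-difficulty item
    item_bank.remove(b)
    posterior = update_posterior(posterior, b, correct)

print(round(posterior_mean(posterior), 2))  # final ability estimate
```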
Peer reviewed
Wise, Steven L.; Bhola, Dennison S.; Yang, Sheng-Ta – Educational Measurement: Issues and Practice, 2006
The attractiveness of computer-based tests (CBTs) is due largely to their capability to expand the ways we conduct testing. A relatively unexplored application, however, is actively using the computer to reduce construct-irrelevant variance while a test is being administered. This investigation introduces the effort-monitoring CBT, in which the…
Descriptors: Computer Assisted Testing, Test Validity, Reaction Time, Guessing (Tests)
Weiss, David J., Ed. – 1980
This report is the Proceedings of the third conference of its type. Included are 23 of the 25 papers presented at the conference, discussion of these papers by invited discussants, and symposium papers by a group of leaders in adaptive testing and latent trait test theory research and applications. The papers are organized into the following…
Descriptors: Academic Ability, Academic Achievement, Comparative Testing, Computer Assisted Testing
Wisniewski, Dennis R. – 1986
Three questions concerning the Binary Search Method (BSM) of computerized adaptive testing were studied: (1) whether it provided a reliable and valid estimation of examinee ability; (2) its effect on examinee attitudes toward computerized adaptive testing and conventional paper-and-pencil testing; and (3) the relationship between item response…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Grade 5
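The bisection idea behind a binary-search style adaptive test can be sketched as follows: items are ordered by difficulty, and each response halves the range of difficulty levels still under consideration, much like binary search over a sorted list. This is only an illustration of that general idea; the Binary Search Method examined in the report above may differ in its details.

```python
# Rough sketch of bisection over ordered difficulty levels, in the spirit of
# a binary-search adaptive test. Details of the actual BSM may differ.

def binary_search_test(answer_item, num_levels):
    """Home in on an examinee's level in roughly log2(num_levels) items.

    answer_item -- callback taking a difficulty level (0..num_levels-1) and
                   returning True if the examinee answers that item correctly
    """
    low, high = 0, num_levels - 1
    administered = []
    while low <= high:
        level = (low + high) // 2
        correct = answer_item(level)
        administered.append((level, correct))
        if correct:
            low = level + 1   # move to harder items
        else:
            high = level - 1  # move to easier items
    return high, administered  # highest level answered correctly (estimate)

# Example with a simulated examinee who can handle levels up to 11 of 20.
estimate, log = binary_search_test(lambda level: level <= 11, 20)
print(estimate, log)  # -> 11 and the sequence of administered items
```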