Showing all 13 results
Peer reviewed
Foster, Colin; Woodhead, Simon; Barton, Craig; Clark-Wilson, Alison – Educational Studies in Mathematics, 2022
In this paper, we analyse a large, opportunistic dataset of responses (N = 219,826) to online, diagnostic multiple-choice mathematics questions, provided by 6-16-year-old UK school mathematics students (N = 7302). For each response, students were invited to indicate on a 5-point Likert-type scale how confident they were that their response was…
Descriptors: Foreign Countries, Elementary School Students, Secondary School Students, Multiple Choice Tests
Peer reviewed
Lin, Jing-Wen – Journal of Science Education and Technology, 2016
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
Descriptors: Computer Assisted Testing, Diagnostic Tests, Quasiexperimental Design, Interviews
Peer reviewed
Henderson, Sheila – Journal of Education for Teaching: International Research and Pedagogy, 2012
This paper describes a study conducted with a random sample of 80 student primary teachers drawn from all four years of the Bachelor of Education (BEd) programme at a teacher education institution in Scotland, with a view to determining why there were such differing levels of engagement with an online maths assessment. The assessment was created…
Descriptors: Program Effectiveness, Foreign Countries, College Students, Mathematics Anxiety
Peer reviewed
Nix, Ingrid; Wyllie, Ali – British Journal of Educational Technology, 2011
Many institutions encourage formative computer-based assessment (CBA), yet competing priorities mean that learners are necessarily selective about what they engage in. So how can we motivate them to engage? Can we facilitate learners to take more control of shaping their learning experience? To explore this, the Learning with Interactive…
Descriptors: Feedback (Response), Student Evaluation, Learning Experience, Formative Evaluation
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Chen, Li-Ju; Chou, Kun-Yi; Chen, Yan-Lin – Educational Technology & Society, 2010
The purpose of this study was to examine whether the efficiency, precision, and validity of computerized adaptive testing (CAT) could be improved by assessing confidence differences in knowledge that examinees possessed. We proposed a novel polytomous CAT model called the confidence-weighting computerized adaptive testing (CWCAT), which combined a…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Item Response Theory
Peer reviewed
Huett, Jason Bond; Young, Jon; Huett, Kimberly Cleaves; Moller, Leslie; Bray, Marty – Quarterly Review of Distance Education, 2008
The purpose of this research was to manipulate the confidence component of Keller's ARCS model to enhance the confidence and performance of undergraduate students enrolled in an online course at a Texas university. This experiment used SAM Office 2003 and WebCT for the delivery of the tactics, strategies, confidence-enhancing e-mails…
Descriptors: Control Groups, Undergraduate Students, Online Courses, Instructional Effectiveness
Anderson, Richard Ivan – Journal of Computer-Based Instruction, 1982
Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternatives to conventional computer-based multiple-choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…
Descriptors: Computer Assisted Testing, Confidence Testing, Multiple Choice Tests, Probability
Rippey, Robert M.; Voytovich, Anthony E. – Journal of Computer-Based Instruction, 1983
Describes a computer-based method of confidence testing, available in batch-processing and interactive forms, which improves students' ability to assess probabilities during clinical diagnosis. The methods and results of three experiments are presented. (EAO)
Descriptors: Clinical Diagnosis, Computer Assisted Testing, Confidence Testing, Decision Making
Peer reviewed
Stone, Gregory Ethan; Lunz, Mary E. – Applied Measurement in Education, 1994
Effects of reviewing items and altering responses on examinee ability estimates, test precision, test information, decision confidence, and pass/fail status were studied for 376 examinees taking 2 certification tests. Test precision is only slightly affected by review, and average information loss can be recovered by addition of one item. (SLD)
Descriptors: Ability, Adaptive Testing, Certification, Change
Peer reviewed
Bruno, James E. – Computers in the Schools, 1987
Discussion of how computer technology can be integrated into elementary school instruction focuses on evaluating student learning through test scoring. Highlights include descriptions of a computer-based instructional delivery system, modified confidence weighted admissible probability measurement (MCW-APM), and computer-managed instructional…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Managed Instruction, Confidence Testing
Peer reviewed
Bergstrom, Betty A.; Lunz, Mary E. – Evaluation and the Health Professions, 1992
For 645 medical technology students, the level of confidence in pass/fail decisions was greater when a computerized adaptive test implemented a 90 percent confidence stopping rule than with paper-and-pencil tests of comparable length. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Confidence Testing
Bejar, Isaac I. – 1976
The concept of testing for partial knowledge is considered in conjunction with tailored testing. Following the special usage of latent trait theory, the word validity is used to mean the correlation of a test with the construct the test measures. The concept of a method factor in the test is also considered as part of the validity. The possible…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Confidence Testing
Peer reviewed
Sturges, Persis T. – Journal of Educational Psychology, 1978
Undergraduate students took a multiple choice, computer assisted test and received feedback (items with the correct answers identified) either: (1) immediately, item-by-item; (2) following the entire test; (3) 24 hours later; or (4) no feedback. Retention one to three weeks later was significantly better for delayed feedback, and confidence…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Confidence Testing, Feedback