Showing all 6 results
Peer reviewed
PDF on ERIC
Achmad Rante Suparman; Eli Rohaeti; Sri Wening – Journal on Efficiency and Responsibility in Education and Science, 2024
This study focuses on developing a computer-based five-tier chemical diagnostic test with 11 assessment categories scored from 0 to 10. A total of 20 items produced were validated by education experts, material experts, measurement experts, and media experts, and an average index of the Aiken test > 0.70 was…
Descriptors: Chemistry, Diagnostic Tests, Computer Assisted Testing, Credits
Peer reviewed
PDF on ERIC
Endang Susantini; Yurizka Melia Sari; Prima Vidya Asteria; Muhammad Ilyas Marzuqi – Journal of Education and Learning (EduLearn), 2025
Assessing preservice teachers' higher order thinking skills (HOTS) in science and mathematics is essential. Teachers' HOTS ability is closely related to their ability to create HOTS-type science and mathematics problems. Among various types of HOTS, one is Bloomian HOTS. To facilitate preservice teachers in creating problems in those subjects, an Android…
Descriptors: Content Validity, Mathematics Instruction, Decision Making, Thinking Skills
Peer reviewed
PDF on ERIC
Carrió-Pastor, María Luisa; Martín Marchante, Beatriz – International Journal of English Studies, 2018
The work at hand is part of a wider study whose aim was to determine what kinds of factors influence failures on pragmatic items of an online adaptive placement test taken by a group of 34 Spanish students in their first year at university. A preceding analysis (Carrió & Martín, 2016) showed the type of personal factors, such as lack of…
Descriptors: Pragmatics, Test Items, Language Tests, College Freshmen
Peer reviewed
Direct link
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Peer reviewed
Direct link
Dumas, Helene M. – International Journal of Rehabilitation Research, 2010
The PEDI-CAT is a new computer adaptive test (CAT) version of the Pediatric Evaluation of Disability Inventory (PEDI). Additional PEDI-CAT items specific to postacute pediatric hospital care were recently developed using expert reviews and cognitive interviewing techniques. Expert reviews established face and construct validity, providing positive…
Descriptors: Hospitals, Adaptive Testing, Content Validity, Construct Validity
Peer reviewed
Direct link
Yip, Chi Kwong; Man, David W. K. – International Journal of Rehabilitation Research, 2009
This study investigates the validity of a newly developed computerized cognitive assessment system (CCAS) that is equipped with rich multimedia to generate simulated testing situations and that considers both test item difficulty and the test taker's ability. It is also hypothesized that better predictive validity of the CCAS in self-care of persons…
Descriptors: Test Items, Content Validity, Predictive Validity, Patients