Showing 46 to 60 of 1,052 results
Peer reviewed
PDF on ERIC (full text available)
Baryktabasov, Kasym; Jumabaeva, Chinara; Brimkulov, Ulan – Research in Learning Technology, 2023
Many examinations with thousands of participating students are organized worldwide every year. Usually, this large number of students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books and organize the…
Descriptors: Information Technology, Computer Assisted Testing, Test Items, Adaptive Testing
Peer reviewed
Direct link
Jolanta Kisielewska; Paul Millin; Neil Rice; Jose Miguel Pego; Steven Burr; Michal Nowakowski; Thomas Gale – Education and Information Technologies, 2024
Between 2018 and 2021, eight European medical schools took part in a study to develop a medical knowledge Online Adaptive International Progress Test. Here we discuss participants' self-perceptions to evaluate the acceptability of adaptive vs non-adaptive testing. Study participants, students from across Europe at all stages of undergraduate medical…
Descriptors: Medical Students, Medical Education, Student Attitudes, Self Efficacy
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Peer reviewed
Direct link
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Peer reviewed
Direct link
Zhihui Zhang; Xiaomeng Huang – Education and Information Technologies, 2024
Blended learning combines online and traditional classroom instruction, aiming to optimize educational outcomes. Despite its potential, student engagement with online components remains a significant challenge. Gamification has emerged as a popular solution to bolster engagement, though its effectiveness is contested, with research yielding mixed…
Descriptors: Educational Games, Blended Learning, Learning Motivation, Language Proficiency
Peer reviewed
PDF on ERIC (full text available)
Musa Adekunle Ayanwale; Mdutshekelwa Ndlovu – Journal of Pedagogical Research, 2024
The COVID-19 pandemic has had a significant impact on high-stakes testing, including the national benchmark tests in South Africa. Current linear testing formats have been criticized for their limitations, leading to a shift towards Computerized Adaptive Testing (CAT). Assessments with CAT are more precise and take less time. Evaluation of CAT…
Descriptors: Adaptive Testing, Benchmarking, National Competency Tests, Computer Assisted Testing
Peer reviewed
Direct link
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage adaptive test (M-MST) based on a multidimensional item response theory (MIRT) model is critical to making full use of the advantages of both MST and MIRT in implementing multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
Peer reviewed
Direct link
Cooperman, Allison W.; Weiss, David J.; Wang, Chun – Educational and Psychological Measurement, 2022
Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests--a Z test, likelihood ratio test, and score ratio index--have demonstrated desirable statistical properties in this context, including low false positive rates and high…
Descriptors: Error of Measurement, Psychometrics, Hypothesis Testing, Simulation
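The Z test this abstract mentions compares ability estimates from two testing occasions against their combined standard error. A minimal sketch of that idea, assuming maximum-likelihood theta estimates with known standard errors; function and variable names and the numeric values are my own, not the authors' implementation:

```python
import math

def amc_z_test(theta1, se1, theta2, se2):
    """Z test for intra-individual change between two testing occasions,
    in the spirit of adaptive measurement of change (AMC).
    theta/se are ability estimates and their standard errors."""
    z = (theta2 - theta1) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Two-sided p-value via the standard normal CDF: 2 * (1 - Phi(|z|))
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))
    return z, p

# Illustrative values: apparent growth from theta = -0.2 to theta = 0.9
z, p = amc_z_test(theta1=-0.2, se1=0.30, theta2=0.9, se2=0.28)
# z is about 2.68, so this change would be flagged at alpha = .05
```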
Peer reviewed
Direct link
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Peer reviewed
Direct link
Lin, Yin; Brown, Anna; Williams, Paul – Educational and Psychological Measurement, 2023
Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, although most historically developed items follow dominance response models, research on FC CAT using dominance items is limited. Existing research is heavily dominated by…
Descriptors: Measurement Techniques, Computer Assisted Testing, Adaptive Testing, Industrial Psychology
Peer reviewed
PDF on ERIC (full text available)
Süleyman Demir; Derya Çobanoglu Aktan; Nese Güler – International Journal of Assessment Tools in Education, 2023
This study has two main purposes. Firstly, to compare the different item selection methods and stopping rules used in Computerized Adaptive Testing (CAT) applications with simulative data generated based on the item parameters of the Vocational Maturity Scale. Secondly, to test the validity of CAT application scores. For the first purpose,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Vocational Maturity, Measures (Individuals)
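One widely used stopping rule of the kind compared in such CAT simulation studies halts the test once the standard error of the ability estimate falls below a fixed threshold. A minimal sketch; the threshold and length limits are arbitrary illustrative choices, not values from this study:

```python
def should_stop(se_history, se_threshold=0.30, min_items=5, max_items=30):
    """Fixed-precision stopping rule for a CAT: stop once the standard
    error of the ability estimate drops below se_threshold, subject to
    minimum and maximum test lengths."""
    n = len(se_history)
    if n < min_items:
        return False   # always administer a minimum number of items
    if n >= max_items:
        return True    # hard cap on test length
    return se_history[-1] <= se_threshold

# Example: standard errors shrink as items are administered
ses = [0.90, 0.62, 0.45, 0.34, 0.29]
done = should_stop(ses)  # True: 5 items given and SE has reached 0.29
```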
Peer reviewed
PDF on ERIC (full text available)
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, a Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically items with a 50% probability of being answered correctly. However, examinees may not be satisfied if they answer only 50% of the items correctly. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
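The maximum-information selection rule this abstract describes can be sketched under a two-parameter logistic (2PL) model, where item information is a²P(1−P) and peaks where the probability of a correct response is near 50%. The item bank, parameter values, and function names below are invented for illustration:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """2PL item information: I(theta) = a^2 * P * (1 - P).
    It is largest when P is near 0.5, i.e. when b is near theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, bank, administered):
    """Return the index of the unadministered item with maximum
    information at the current ability estimate theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *bank[i]))

# Hypothetical item bank of (discrimination a, difficulty b) pairs
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.1), (1.0, 1.5)]
best = select_item(0.0, bank, administered={0})  # item 0 already given
```

Note how the winner is the highly discriminating item whose difficulty sits closest to the current theta, which is exactly why examinees end up facing items they get right only about half the time.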
Yu Wang – ProQuest LLC, 2024
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Cognitive Tests, Cognitive Measurement, Educational Diagnosis
Peer reviewed
Direct link
Wang, Shiyu; Xiao, Houping; Cohen, Allan – Journal of Educational and Behavioral Statistics, 2021
An adaptive weight estimation approach is proposed to provide robust latent ability estimation in computerized adaptive testing (CAT) with response revision. This approach assigns different weights to each distinct response to the same item when response revision is allowed in CAT. Two types of weight estimation procedures, nonfunctional and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computation, Robustness (Statistics)
Peer reviewed
PDF on ERIC (full text available)
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring