Showing 1 to 15 of 553 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
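The entry above frames item position effect (IPE) in IRT terms. As a minimal illustration of the general idea (not the authors' model), a Rasch item whose effective difficulty drifts with serial position answers differently early versus late in a test; the drift parameter `d` and all numeric values here are purely hypothetical:

```python
import math

def rasch_p(theta, b):
    """P(correct) under the Rasch model for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical position effect: effective difficulty rises by d per
# position (e.g., fatigue); b and d are illustrative values only.
theta, b, d = 0.5, 0.0, 0.02
p_early = rasch_p(theta, b + d * 1)   # item administered 1st
p_late = rasch_p(theta, b + d * 30)   # same item administered 30th
print(round(p_early, 3), round(p_late, 3))
```

Under linear testing every examinee sees the item at roughly the same position; under adaptive testing positions vary by examinee, which is part of what makes IPE harder to study there.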
Peer reviewed
Yi-Ling Wu; Yao-Hsuan Huang; Chia-Wen Chen; Po-Hsi Chen – Journal of Educational Measurement, 2025
Multistage testing (MST), a variant of computerized adaptive testing (CAT), differs from conventional CAT in that it is adapted at the module level rather than at the individual item level. Typically, all examinees begin the MST with a linear test form in the first stage, commonly known as the routing stage. In 2020, Han introduced an innovative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Measurement
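The module-level adaptation described above can be sketched as a toy routing rule (cutoff scores and module names are invented for illustration, not taken from the paper): after the routing stage, a number-correct score determines the second-stage module.

```python
def route(routing_score, cutoffs=(4, 8)):
    """Assign a second-stage module from the routing-stage
    number-correct score; cutoffs are illustrative."""
    low, high = cutoffs
    if routing_score < low:
        return "easy"
    if routing_score < high:
        return "medium"
    return "hard"

print(route(2), route(5), route(9))  # -> easy medium hard
```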
Peer reviewed
Beyza Aksu Dünya; Stefanie A. Wind; Mehmet Can Demir – SAGE Open, 2025
The purpose of this study was to generate an item bank for assessing faculty members' assessment literacy and to examine the applicability and feasibility of a Computerized Adaptive Test (CAT) approach to monitor assessment literacy among faculty members. In developing this assessment using a sequential mixed-methods research design, our goal was…
Descriptors: Assessment Literacy, Item Banks, College Faculty, Adaptive Testing
Peer reviewed
Cheng, Yiling – Measurement: Interdisciplinary Research and Perspectives, 2023
Computerized adaptive testing (CAT) offers an efficient and highly accurate method for estimating examinees' abilities. In this article, the free version of Concerto Software for CAT was reviewed, dividing our evaluation into three sections: software implementation, the Item Response Theory (IRT) features of CAT, and user experience. Overall,…
Descriptors: Computer Software, Computer Assisted Testing, Adaptive Testing, Item Response Theory
Peer reviewed
Chen, Fu; Lu, Chang; Cui, Ying; Gao, Yizhu – IEEE Transactions on Learning Technologies, 2023
Learning outcome modeling is a technical underpinning for the successful evaluation of learners' learning outcomes through computer-based assessments. In recent years, collaborative filtering approaches have gained popularity as a technique to model learners' item responses. However, how to model the temporal dependencies between item responses…
Descriptors: Outcomes of Education, Models, Computer Assisted Testing, Cooperation
Peer reviewed
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
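A maximum-exposure-rate constraint of the general kind this abstract discusses can be sketched as follows (a simplified stand-in, not Revuelta and Ponsoda's restricted method itself): the most informative item is selected only from items whose empirical exposure rate is still below `r_max`.

```python
def select_item(info, admin_counts, n_examinees, r_max=0.2):
    """Pick the most informative eligible item.

    info         -- item id -> Fisher information at current theta estimate
    admin_counts -- item id -> times administered so far
    """
    eligible = [i for i in info
                if n_examinees == 0
                or admin_counts.get(i, 0) / n_examinees < r_max]
    pool = eligible or list(info)  # fall back if every item is capped
    return max(pool, key=info.__getitem__)

info = {"item1": 1.2, "item2": 0.7}
print(select_item(info, {"item1": 30}, n_examinees=100))  # item1 capped -> item2
```

Using a single `r_max` for all items, as here, is exactly the practice the study revisits.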
Peer reviewed
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
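For dichotomous items, the classical Mantel-Haenszel common odds ratio is one standard DIF screen alongside the IRT-based approaches the abstract mentions; this sketch (with made-up counts) shows the basic comparison of groups within matched ability strata:

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio over ability strata.
    Each stratum is (a, b, c, d): reference correct/incorrect,
    focal correct/incorrect.  A value near 1 suggests no DIF."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two made-up strata with identical group performance -> no DIF
print(mh_odds_ratio([(20, 10, 20, 10), (8, 12, 8, 12)]))  # -> 1.0
```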
Peer reviewed
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
Peer reviewed
Han, Suhwa; Kang, Hyeon-Ah – Journal of Educational Measurement, 2023
The study presents multivariate sequential monitoring procedures for examining test-taking behaviors online. The procedures monitor examinees' responses and response times and signal aberrancy as soon as a significant change is detected in the test-taking behavior. The study in particular proposes three schemes to track different…
Descriptors: Test Wiseness, Student Behavior, Item Response Theory, Computer Assisted Testing
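Sequential monitoring of this general flavor can be illustrated with a univariate CUSUM chart (far simpler than the multivariate procedures the study proposes; the reference value `k` and threshold `h` are illustrative): a change is flagged as soon as a cumulative statistic crosses the threshold.

```python
def cusum_flags(z, k=0.5, h=4.0):
    """One-sided CUSUM over standardized residuals z: signal once the
    cumulative upward-drift statistic exceeds threshold h."""
    s, flags = 0.0, []
    for zi in z:
        s = max(0.0, s + zi - k)
        flags.append(s > h)
    return flags

# A sustained upward shift is flagged once enough evidence accumulates.
print(cusum_flags([1.0] * 10).index(True))  # first signal at step 8 (0-based)
```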
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Jorge Salas; Sophia Mueller – Grantee Submission, 2025
This study incorporates a random forest (RF) approach to probe complex interactions and nonlinearity among predictors into an item response model with the goal of using a hybrid approach to outperform either an RF or explanatory item response model (EIRM) only in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables
Jackson, Kayla – ProQuest LLC, 2023
Prior research highlights the benefits of multimode surveys and best practices for item-by-item (IBI) and matrix-type survey items. Some researchers have explored whether mode differences for online and paper surveys persist for these survey item types. However, no studies discuss measurement invariance when both item types and online modes are…
Descriptors: Test Items, Surveys, Error of Measurement, Item Response Theory
Mark L. Davison; David J. Weiss; Joseph N. DeWeese; Ozge Ersan; Gina Biancarosa; Patrick C. Kennedy – Journal of Educational and Behavioral Statistics, 2023
A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional,…
Descriptors: Cognitive Processes, Diagnostic Tests, Measurement, Accuracy
Peer reviewed
Meijuan Li; Hongyun Liu; Mengfei Cai; Jianlin Yuan – Education and Information Technologies, 2024
In the human-to-human Collaborative Problem Solving (CPS) test, students' problem-solving process reflects the interdependency among partners. The high interdependency in CPS makes it very sensitive to group composition. For example, the group outcome might be driven by a highly competent group member, so it does not reflect all the individual…
Descriptors: Problem Solving, Computer Assisted Testing, Cooperative Learning, Task Analysis
Peer reviewed
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
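One of the simplest indicator-based screens the abstract names is a straight-lining check. A minimal version (purely illustrative, not the paper's approach) measures the longest run of identical consecutive responses in a response vector:

```python
def longest_run(responses):
    """Length of the longest run of identical consecutive responses;
    unusually long runs can signal straight-lining (C/IER)."""
    best = run = 1 if responses else 0
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print(longest_run([3, 3, 3, 3, 1, 2, 2]))  # -> 4
```

As the abstract notes, any single indicator like this one is susceptible to false positives (e.g., a respondent who genuinely holds the same view across items), which motivates combining indicators.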
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment