Showing 1 to 15 of 36 results
Peer reviewed
Ranger, Jochen; Schmidt, Nico; Wolgast, Anett – Educational and Psychological Measurement, 2023
Recent approaches to the detection of cheaters in tests employ detectors from the field of machine learning. Detectors based on supervised learning algorithms achieve high accuracy but require labeled data sets with identified cheaters for training. Labeled data sets are usually not available at an early stage of the assessment period. In this…
Descriptors: Identification, Cheating, Information Retrieval, Tests
Peer reviewed
Sanaz Nazari; Walter L. Leite; A. Corinne Huggins-Manley – Educational and Psychological Measurement, 2024
Social desirability bias (SDB) is a common threat to the validity of conclusions from responses to a scale or survey. There is a wide range of person-fit statistics in the literature that can be employed to detect SDB. In addition, machine learning classifiers, such as logistic regression and random forest, have the potential to distinguish…
Descriptors: Social Desirability, Bias, Artificial Intelligence, Identification
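Illustrative sketch (not the authors' analysis): the abstract names logistic regression and random forest as classifiers with potential to distinguish biased responses. The toy example below shows how either could be fit to a labeled data set, assuming scikit-learn; the feature matrix, labels, and settings are placeholders.

    # Minimal sketch: train the two classifier types named in the abstract on a
    # labeled data set, where each row holds a respondent's features (e.g., item
    # responses or person-fit statistics) and y marks a known SDB condition.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))      # placeholder features
    y = rng.integers(0, 2, size=500)    # placeholder SDB labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for clf in (LogisticRegression(max_iter=1000),
                RandomForestClassifier(n_estimators=200)):
        clf.fit(X_train, y_train)
        auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
        print(type(clf).__name__, round(auc, 3))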
Peer reviewed
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
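Illustrative sketch only: the abstract highlights TabNet, a deep tabular-learning model. The snippet below shows a minimal fit of that model class to a labeled cheating data set, assuming the third-party pytorch-tabnet package; the arrays and training settings are placeholders, not the authors' pipeline.

    # Minimal sketch: TabNet classifier on placeholder examinee features with
    # known cheater labels. Assumes the pytorch-tabnet package is installed.
    import numpy as np
    from pytorch_tabnet.tab_model import TabNetClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype(np.float32)  # placeholder response features
    y = rng.integers(0, 2, size=1000)                   # placeholder cheater labels

    clf = TabNetClassifier()
    clf.fit(X[:800], y[:800], eval_set=[(X[800:], y[800:])],
            max_epochs=20, patience=5)
    pred = clf.predict(X[800:])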
Peer reviewed
Philippe Goldammer; Peter Lucas Stöckli; Yannik Andrea Escher; Hubert Annen; Klaus Jonas – Educational and Psychological Measurement, 2024
Indirect indices for faking detection in questionnaires make use of a respondent's deviant or unlikely response pattern over the course of the questionnaire to identify them as a faker. Compared with established direct faking indices (i.e., lying and social desirability scales), indirect indices have at least two advantages: First, they cannot be…
Descriptors: Identification, Deception, Psychological Testing, Validity
Peer reviewed
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
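Illustrative sketch (not the authors' method): the abstract mentions probing questions such as bogus items as one direct approach to detecting careless responding. The toy example below flags respondents who miss instructed-response items; the item columns, keyed responses, and cutoff are hypothetical.

    # Minimal sketch: screen a response matrix (respondents x items) using
    # instructed-response ("bogus") items and flag respondents who miss them.
    import numpy as np

    responses = np.array([
        [3, 4, 2, 5, 1],
        [3, 3, 3, 3, 3],
    ])
    bogus_items = {2, 4}        # hypothetical columns holding bogus items
    correct_key = {2: 2, 4: 1}  # the response each bogus item instructs

    misses = np.array([
        sum(responses[i, j] != correct_key[j] for j in bogus_items)
        for i in range(responses.shape[0])
    ])
    flagged = misses >= 1       # hypothetical cutoff
    print(flagged)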
Peer reviewed
Ilagan, Michael John; Falk, Carl F. – Educational and Psychological Measurement, 2023
Administering Likert-type questionnaires to online samples risks contamination of the data by malicious computer-generated random responses, also known as bots. Although nonresponsivity indices (NRIs) such as person-total correlations or Mahalanobis distance have shown great promise to detect bots, universal cutoff values are elusive. An initial…
Descriptors: Likert Scales, Questionnaires, Artificial Intelligence, Identification
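Illustrative sketch: the abstract names person-total correlations and Mahalanobis distance as nonresponsivity indices. The snippet below computes both on a toy Likert-type response matrix with NumPy; the data are placeholders, and, as the abstract notes, no universal cutoffs are implied.

    # Minimal sketch of the two NRIs named in the abstract on placeholder data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(1, 6, size=(200, 12)).astype(float)  # 5-point responses

    # Person-total correlation: correlate each respondent's item vector with
    # the vector of item means over all respondents.
    item_means = X.mean(axis=0)
    ptc = np.array([np.corrcoef(row, item_means)[0, 1] for row in X])

    # Mahalanobis distance of each response vector from the sample centroid.
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    md = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

    # Low person-total correlations or large distances suggest bot-like responding.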
Peng, Luyao; Sinharay, Sandip – Educational and Psychological Measurement, 2022
Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of…
Descriptors: Cheating, Identification, Statistical Analysis, Testing
Peer reviewed
Lee, Bitna; Sohn, Wonsook – Educational and Psychological Measurement, 2022
A Monte Carlo study was conducted to compare the performance of a level-specific (LS) fit evaluation with that of a simultaneous (SI) fit evaluation in multilevel confirmatory factor analysis (MCFA) models. We extended previous studies by examining their performance under MCFA models with different factor structures across levels. In addition,…
Descriptors: Goodness of Fit, Factor Structure, Monte Carlo Methods, Factor Analysis
Peer reviewed
Cheng, Ying; Shao, Can – Educational and Psychological Measurement, 2022
Computer-based and web-based testing have become increasingly popular in recent years. Their popularity has dramatically expanded the availability of response time data. Compared to the conventional item response data that are often dichotomous or polytomous, response time has the advantage of being continuous and can be collected in an…
Descriptors: Reaction Time, Test Wiseness, Computer Assisted Testing, Simulation
Peer reviewed
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
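For reference, four of the information criteria named in the abstract have standard textbook forms that depend only on the maximized log-likelihood, the number of free parameters, and the sample size. The sketch below computes them with placeholder values; it is not tied to the simulation reported in the article.

    # Minimal sketch: AIC, AICc, BIC, and CAIC from a fitted model's
    # log-likelihood (loglik), parameter count (k), and sample size (n).
    import math

    def information_criteria(loglik, k, n):
        aic  = -2 * loglik + 2 * k
        aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
        bic  = -2 * loglik + k * math.log(n)
        caic = -2 * loglik + k * (math.log(n) + 1)
        return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

    print(information_criteria(loglik=-1523.7, k=25, n=500))
    # In mixture IRT model selection, the candidate with the smallest
    # criterion value is typically retained.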
Peer reviewed
Lee, Chansoon; Qian, Hong – Educational and Psychological Measurement, 2022
Using classical test theory and item response theory, this study applied sequential procedures to a real operational item pool in a variable-length computerized adaptive testing (CAT) to detect items whose security may be compromised. Moreover, this study proposed a hybrid threshold approach to improve the detection power of the sequential…
Descriptors: Computer Assisted Testing, Adaptive Testing, Licensing Examinations (Professions), Item Response Theory
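Generic illustration only, not the authors' statistic or threshold: the abstract describes sequential procedures for flagging possibly compromised items. The sketch below runs a simple one-sided CUSUM over an item's scored responses, flagging a sustained upward drift in correct-response rates; the baseline, slack, and threshold values are hypothetical.

    # Minimal sketch: one-sided CUSUM over an item's response sequence in a CAT.
    def cusum_flag(responses, p0=0.55, slack=0.05, h=3.0):
        s = 0.0
        for t, x in enumerate(responses, start=1):
            s = max(0.0, s + (x - p0 - slack))
            if s > h:
                return t    # time point at which the item is flagged
        return None

    print(cusum_flag([1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]))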
Peer reviewed
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2019
Machine-learning methods are used frequently across many fields, but relatively few studies have applied them to identifying potential testing fraud. In this study, a technical review of a recently developed state-of-the-art algorithm, Extreme Gradient Boosting (XGBoost), is…
Descriptors: Identification, Test Items, Deception, Cheating
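Illustrative sketch, not the study's analysis: the abstract reviews the XGBoost algorithm. The snippet below fits it to a labeled data set of examinee features with known cheating status, assuming the xgboost package; the arrays and hyperparameters are placeholders.

    # Minimal sketch: XGBoost classifier on placeholder examinee features.
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 15))     # placeholder response/response-time summaries
    y = rng.integers(0, 2, size=1000)   # placeholder cheating labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_tr, y_tr)
    print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))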
Peer reviewed
Audette, Lillian M.; Hammond, Marie S.; Rochester, Natalie K. – Educational and Psychological Measurement, 2020
Longitudinal studies are commonly used in the social and behavioral sciences to answer a wide variety of research questions. Longitudinal researchers often collect data anonymously from participants when studying sensitive topics to ensure that accurate information is provided. One difficulty gathering longitudinal anonymous data is that of…
Descriptors: Research Methodology, Longitudinal Studies, Research Design, Social Science Research
Peer reviewed
Sinharay, Sandip; Johnson, Matthew S. – Educational and Psychological Measurement, 2017
In a pioneering research article, Wollack and colleagues suggested the "erasure detection index" (EDI) to detect test tampering. The EDI can be used with or without a continuity correction and is assumed to follow the standard normal distribution under the null hypothesis of no test tampering. When used without a continuity correction,…
Descriptors: Deception, Identification, Testing Problems, Error of Measurement
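Sketch of the general form described in the abstract, not a reproduction of the authors' exact index: a standardized count of an examinee's wrong-to-right erasures, with an optional 0.5 continuity correction, referred to the standard normal distribution under the null hypothesis of no tampering. In practice the expected count and variance come from a fitted measurement model; the numbers below are placeholders.

    # Minimal sketch: standardized erasure count with optional continuity correction.
    from math import sqrt
    from statistics import NormalDist

    def erasure_index(observed, expected, variance, continuity_correction=True):
        c = 0.5 if continuity_correction else 0.0
        z = (observed - expected - c) / sqrt(variance)
        p_value = 1.0 - NormalDist().cdf(z)  # one-sided test for excess erasures
        return z, p_value

    print(erasure_index(observed=9, expected=3.2, variance=2.9))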
Peer reviewed
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John; Stark, Stephen – Educational and Psychological Measurement, 2019
In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of…
Descriptors: Hierarchical Linear Modeling, Structural Equation Models, Computation, Identification