Publication Date
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 18 |
| Since 2007 (last 20 years) | 18 |
Descriptor
| Computation | 18 |
| Item Response Theory | 9 |
| Sample Size | 8 |
| Test Items | 8 |
| Foreign Countries | 7 |
| Statistical Analysis | 7 |
| Bayesian Statistics | 4 |
| Factor Analysis | 4 |
| Test Length | 4 |
| Accuracy | 3 |
| Achievement Tests | 3 |
Source
| International Journal of… | 18 |
Author
| Kilic, Abdullah Faruk | 3 |
| Lee, Hyung Rock | 2 |
| Lee, Sunbok | 2 |
| Sung, Jaeyun | 2 |
| Ali Orhan | 1 |
| Arslan Namli, Nihan | 1 |
| Atar, Burcu | 1 |
| Baris Pekmezci, Fulya | 1 |
| Brooks, Gordon | 1 |
| Bulut, Okan | 1 |
| Diaz, Emily | 1 |
Publication Type
| Journal Articles | 18 |
| Reports - Research | 18 |
Education Level
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Secondary Education | 3 |
| Elementary Education | 2 |
| Grade 8 | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
Assessments and Surveys
| Program for International… | 1 |
| Trends in International… | 1 |
Ali Orhan; Inan Tekin; Sedat Sen – International Journal of Assessment Tools in Education, 2025
This study aimed to translate and adapt the Computational Thinking Multidimensional Test (CTMT), developed by Kang et al. (2023), into Turkish and to investigate its psychometric qualities with Turkish university students. Following the translation procedures of the CTMT, which comprises 12 multiple-choice questions developed based on real-life…
Descriptors: Cognitive Tests, Thinking Skills, Computation, Test Validity
Hasibe Yahsi Sari; Hulya Kelecioglu – International Journal of Assessment Tools in Education, 2025
This simulation-based study examines the effect of the ratio of polytomous items on ability estimation under different conditions in multistage tests (MST) using mixed-format tests. In the PISA 2018 application, the ability parameters of the individuals and the item pool were created using the item parameters estimated from…
Descriptors: Test Items, Test Format, Accuracy, Test Length
Ucar, Arzu; Dogan, Celal Deha – International Journal of Assessment Tools in Education, 2021
Distance learning became a popular phenomenon across the world during the COVID-19 pandemic, which led to answer-copying behavior among examinees. The cut point of the Kullback-Leibler Divergence (KL) method, one of the copy-detection methods, was calculated using the Youden Index, Cost-Benefit, and Min Score p-value approaches. Using the cut…
Descriptors: Cheating, Identification, Cutting Scores, Statistical Analysis
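The Youden Index step mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration: it does not reproduce the Kullback-Leibler divergence index itself, only the Youden rule of picking the threshold that maximizes sensitivity + specificity − 1 over any real-valued detection statistic.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the cut point maximizing Youden's J = sensitivity + specificity - 1.

    scores: detection-statistic values (higher = more suspicious); this stands
    in for the KL divergence index, which is not reproduced here.
    labels: 1 = known copier, 0 = honest examinee.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best_j, best_cut = -np.inf, None
    for c in np.unique(scores):
        flagged = scores >= c
        sens = flagged[labels == 1].mean()     # true-positive rate
        spec = (~flagged)[labels == 0].mean()  # true-negative rate
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, c
    return best_cut, best_j
```

In practice the labels come from simulated copiers, and the resulting cut point is then applied to real response data.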
Lee, Hyung Rock; Sung, Jaeyun; Lee, Sunbok – International Journal of Assessment Tools in Education, 2021
Conventional estimators for indirect effects using a difference in coefficients and product of coefficients produce the same results for continuous outcomes. However, for binary outcomes, the difference in coefficient estimator systematically underestimates the indirect effects because of a scaling problem. One solution is to standardize…
Descriptors: Statistical Analysis, Computation, Regression (Statistics), Scaling
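The equivalence described in the abstract can be checked numerically. The sketch below is a hypothetical simulation, assuming a simple linear mediation model (x → m → y) with made-up coefficients; for a continuous outcome the product-of-coefficients estimate a·b and the difference-in-coefficients estimate c − c′ coincide exactly in OLS, which is the identity that breaks down for binary outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical linear mediation model; a = 0.5, b = 0.3, c' = 0.2 are made up.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # mediator model (a)
y = 0.3 * m + 0.2 * x + rng.normal(size=n)    # outcome model (b, c')

def ols_slopes(X, y):
    """OLS slope estimates (intercept included, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

a = ols_slopes(x[:, None], m)[0]
b, c_prime = ols_slopes(np.column_stack([m, x]), y)
c = ols_slopes(x[:, None], y)[0]              # total effect of x on y

product = a * b            # product of coefficients
difference = c - c_prime   # difference in coefficients
```

With a logistic outcome model the two quantities would no longer agree, because the residual variance is fixed by the link function and the coefficients are on different scales across models.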
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among reliability coefficients, Cronbach's alpha and McDonald's omega are the most commonly used. Alpha is based on inter-item correlations, while omega is based on a factor analysis result. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
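The two coefficients compared in the study can be written compactly. A minimal sketch, assuming a one-factor model for omega; the factor loadings and uniquenesses would normally come from a factor analysis and are passed in here rather than estimated.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha from an n_persons x k_items score matrix."""
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def mcdonald_omega(loadings, uniquenesses):
    """Omega for a one-factor model:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    s = np.sum(loadings)
    return s**2 / (s**2 + np.sum(uniquenesses))
```

When items are essentially tau-equivalent (equal loadings), alpha and omega coincide; with unequal loadings alpha tends to be lower, which is one of the conditions such simulations vary.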
Karadavut, Tugba – International Journal of Assessment Tools in Education, 2019
Item Response Theory (IRT) models traditionally assume a normal distribution for ability. Although normality is often a reasonable assumption for ability, it is rarely met for observed scores in educational and psychological measurement. Assumptions regarding ability distribution were previously shown to have an effect on IRT parameter estimation.…
Descriptors: Item Response Theory, Computation, Bayesian Statistics, Ability
Baris Pekmezci, Fulya; Sengul Avsar, Asiye – International Journal of Assessment Tools in Education, 2021
There is a great deal of research about item response theory (IRT) conducted by simulations. Item and ability parameters are estimated with varying numbers of replications under different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for the…
Descriptors: Item Response Theory, Computation, Accuracy, Monte Carlo Methods
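The role of the replication count can be seen from the Monte Carlo standard error, which shrinks as 1/√R. A toy sketch, assuming a made-up estimator whose replication-to-replication spread is known; the IRT models in the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake estimator: each replication returns an estimate of a true value of 1.0
# with replication-to-replication standard deviation 0.2 (both made up).
true_val, sd = 1.0, 0.2

mc_se = {}
for R in (25, 100, 400):
    estimates = rng.normal(true_val, sd, size=R)
    # Monte Carlo standard error of the mean estimate across R replications
    mc_se[R] = estimates.std(ddof=1) / np.sqrt(R)
```

Quadrupling R roughly halves the Monte Carlo standard error, which is why replication guidelines trade a target precision against computing cost.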
Diaz, Emily; Brooks, Gordon; Johanson, George – International Journal of Assessment Tools in Education, 2021
This Monte Carlo study assessed Type I error in differential item functioning analyses using Lord's chi-square (LC), Likelihood Ratio Test (LRT), and Mantel-Haenszel (MH) procedure. Two research interests were investigated: item response theory (IRT) model specification in LC and the LRT and continuity correction in the MH procedure. This study…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Comparative Analysis
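The Mantel-Haenszel procedure in the comparison can be sketched directly. A minimal version, assuming one 2x2 table per score stratum with cells (reference correct, reference incorrect, focal correct, focal incorrect); the 0.5 term is the continuity correction whose effect the study examines.

```python
import numpy as np

def mh_chi_square(strata, continuity=True):
    """Mantel-Haenszel DIF chi-square over score strata.

    strata: list of (ref_correct, ref_incorrect, foc_correct, foc_incorrect).
    """
    A = E = V = 0.0
    for rc, ri, fc, fi in strata:
        n_ref, n_foc = rc + ri, fc + fi
        m1, m0 = rc + fc, ri + fi          # correct / incorrect margins
        T = n_ref + n_foc
        A += rc                             # observed reference-correct count
        E += n_ref * m1 / T                 # expected under no DIF
        V += n_ref * n_foc * m1 * m0 / (T**2 * (T - 1))
    diff = abs(A - E) - (0.5 if continuity else 0.0)
    return max(diff, 0.0) ** 2 / V
```

The statistic is referred to a chi-square distribution with one degree of freedom; dropping the continuity correction makes it slightly larger, which is the Type I error trade-off the study investigates.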
Zopluoglu, Cengiz – International Journal of Assessment Tools in Education, 2019
Unusual response similarity among test takers may occur in testing data and be an indicator of potential test fraud (e.g., examinees copy responses from other examinees, send text messages or pre-arranged signals among themselves for the correct response, item pre-knowledge). One index to measure the degree of similarity between two response…
Descriptors: Item Response Theory, Computation, Cheating, Measurement Techniques
Kilic, Abdullah Faruk – International Journal of Assessment Tools in Education, 2019
The purpose of this study is to investigate whether factor scores can be used instead of ability estimation and total score. For this purpose, the relationships among total score, ability estimation, and factor scores were investigated. In the research, Turkish subtest data from the Transition from Primary to Secondary Education (TEOG) exam…
Descriptors: Foreign Countries, Scores, Computation, Item Response Theory
Stanke, Luke; Bulut, Okan – International Journal of Assessment Tools in Education, 2019
Item response theory is a widely used framework for the design, scoring, and scaling of measurement instruments. Item response models are typically used for dichotomously scored questions that have only two score points (e.g., multiple-choice items). However, given the increasing use of instruments that include questions with multiple response…
Descriptors: Item Response Theory, Test Items, Responses, College Freshmen
Lee, Hyung Rock; Lee, Sunbok; Sung, Jaeyun – International Journal of Assessment Tools in Education, 2019
Applying single-level statistical models to multilevel data typically produces underestimated standard errors, which may result in misleading conclusions. This study examined the impact of ignoring multilevel data structure on the estimation of item parameters and their standard errors of the Rasch, two-, and three-parameter logistic models in…
Descriptors: Item Response Theory, Computation, Error of Measurement, Test Bias
Arslan Namli, Nihan; Senkal, Ozan – International Journal of Assessment Tools in Education, 2018
The overall objective of this study is to understand how fuzzy logic theory can be used to measure the programming performance of undergraduate students, and to demonstrate the advantages of using fuzzy logic in evaluating student performance. The sample of this quantitative study consisted of 336 students. The first group was…
Descriptors: Undergraduate Students, Programming, Computation, Student Evaluation
Kilic, Abdullah Faruk; Uysal, Ibrahim; Atar, Burcu – International Journal of Assessment Tools in Education, 2020
This Monte Carlo simulation study aimed to investigate confirmatory factor analysis (CFA) estimation methods under different conditions, such as sample size, distribution of indicators, test length, average factor loading, and factor structure. Binary data were generated to compare the performance of maximum likelihood (ML), mean and variance…
Descriptors: Factor Analysis, Computation, Methods, Sample Size
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics