Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 24
Descriptor
Computation: 25
Item Response Theory: 25
Foreign Countries: 12
Test Items: 12
Models: 10
Grade 8: 8
Mathematics Tests: 8
Measurement: 8
Comparative Analysis: 7
Middle School Students: 7
Difficulty Level: 6
Author
Cho, Sun-Joo: 3
Wang, Wen-Chung: 3
Amanda Goodwin: 2
Bottge, Brian: 2
Cohen, Allan S.: 2
Huang, Hung-Yu: 2
Jeon, Minjeong: 2
Matthew Naveiras: 2
Paul De Boeck: 2
Rabe-Hesketh, Sophia: 2
Sun-Joo Cho: 2
Publication Type
Journal Articles: 21
Reports - Research: 20
Reports - Descriptive: 2
Collected Works - Proceedings: 1
Dissertations/Theses -…: 1
Numerical/Quantitative Data: 1
Reports - Evaluative: 1
Education Level
Middle Schools: 25
Junior High Schools: 21
Secondary Education: 21
Elementary Education: 13
Grade 8: 9
Grade 7: 6
High Schools: 5
Grade 5: 3
Grade 6: 3
Grade 9: 3
Elementary Secondary Education: 2
Location
Germany: 2
South Korea: 2
Taiwan: 2
Australia: 1
Brazil: 1
Canada: 1
Colorado: 1
Florida: 1
Hong Kong: 1
Italy: 1
Netherlands: 1
Assessments and Surveys
Trends in International Mathematics and Science Study: 2
Graduate Record Examinations: 1
National Assessment of Educational Progress: 1
Program for International Student Assessment: 1
Wechsler Adult Intelligence Scale: 1
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
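Both records above describe the same EIRM work. For orientation, a generic EIRM linear predictor combining person covariates Z, item covariates X, and their interactions can be written as follows (illustrative notation, not necessarily the authors' parameterization):

    \operatorname{logit} P(Y_{pi}=1) = \sum_{m}\gamma_m Z_{pm} + \varepsilon_p - \sum_{k}\beta_k X_{ik} + \sum_{k,m}\delta_{km} X_{ik} Z_{pm}, \qquad \varepsilon_p \sim N(0,\sigma^2)

The linearity referred to in the truncated sentence is the assumption that each covariate enters this predictor linearly.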
Stella Eteng-Uket – Numeracy, 2023
This paper describes a study that focused on developing, validating and standardizing a dyscalculia test, henceforth called the Dyscalculia Test. Out of the 4,758,800 students in Nigeria's upper primary and junior secondary schools, I randomly drew a sample of 2,340 students, using a multistage sampling procedure that applied various sampling…
Descriptors: Test Construction, Learning Disabilities, Elementary School Students, Junior High School Students
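A multistage procedure of the kind described draws larger units first and subsamples within them. A minimal Python sketch with a made-up sampling frame (the unit names, stage counts, and sizes here are hypothetical, not the study's):

    import random

    random.seed(1)
    # Made-up frame: 20 schools, 4 classes each, 30 students per class.
    frame = {f"school{s}": {f"class{c}": list(range(30)) for c in range(4)}
             for s in range(20)}

    schools = random.sample(sorted(frame), 5)            # stage 1: schools
    sample = []
    for sch in schools:
        classes = random.sample(sorted(frame[sch]), 2)   # stage 2: classes
        for cls in classes:
            pupils = random.sample(frame[sch][cls], 10)  # stage 3: students
            sample += [(sch, cls, p) for p in pupils]

    print(len(sample))  # 5 schools x 2 classes x 10 students = 100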
Ilhan, Mustafa – International Journal of Assessment Tools in Education, 2019
This study investigated the effectiveness of statistical adjustments applied to rater bias in many-facet Rasch analysis. A dataset that did not include "rater × examinee" bias was first modified so as to introduce such bias. Bias adjustment was then applied to the rater bias included in the data file,…
Descriptors: Statistical Analysis, Item Response Theory, Evaluators, Bias
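For context, a many-facet Rasch model with a rater facet and a rater-by-examinee bias term can be written generically as (a generic statement of the model, not the study's exact specification):

    \operatorname{logit} P(X_{nij}=1) = \beta_n - \delta_i - \lambda_j - \phi_{nj}

where β_n is examinee ability, δ_i item difficulty, λ_j rater severity, and φ_nj the "rater × examinee" interaction that the bias adjustment targets.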
Chen, Guanhua – ProQuest LLC, 2018
This study is part of a larger design study that iteratively improves a robotics programming curriculum as well as a computational thinking (CT) instrument. Its focus was mainly on CT assessment, and in particular on an online CT instrument with logging functionality that can store a student's problem-solving process by recording interactions…
Descriptors: Elementary School Students, Test Construction, Cognitive Tests, Computer Assisted Testing
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2014
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown that the model does not suffer from the curse of…
Descriptors: Item Response Theory, Models, Educational Assessment, Computation
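Schematically, the extension stacks one more latent regression on the familiar second-order structure (illustrative notation):

    \theta^{\text{sub}}_{ps} = \lambda_s\,\theta^{\text{dom}}_{p,d(s)} + \varepsilon_{ps}, \qquad \theta^{\text{dom}}_{pd} = \lambda_d\,\theta^{\text{gen}}_{p} + \varepsilon_{pd}

with each item loading only on its subdomain factor; conditional independence across these layers is what a graphical-model factorization exploits to keep the required integrals low-dimensional.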
Lockwood, J. R.; Castellano, Katherine E. – Grantee Submission, 2015
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Descriptors: Statistical Analysis, Achievement Gains, Academic Achievement, Computation
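A crude empirical version of the direct approach is to compute each student's percentile rank among peers with similar past scores; the sketch below bins past scores rather than modeling the conditional CDFs smoothly, and all data and tuning choices are invented, not the authors':

    import numpy as np

    def growth_percentile(past, current, n_bins=10):
        """Percentile rank of each current score among students with
        similar past scores (a crude empirical conditional CDF)."""
        past = np.asarray(past, dtype=float)
        current = np.asarray(current, dtype=float)
        # Bin students by prior achievement.
        edges = np.quantile(past, np.linspace(0, 1, n_bins + 1))
        bins = np.clip(np.searchsorted(edges, past, side="right") - 1,
                       0, n_bins - 1)
        sgp = np.empty_like(current)
        for b in range(n_bins):
            mask = bins == b
            peers = current[mask]
            # Empirical percentile rank within the prior-score bin.
            sgp[mask] = [100.0 * np.mean(peers <= x) for x in peers]
        return sgp

    rng = np.random.default_rng(0)
    past = rng.normal(500, 100, 1000)
    curr = 0.7 * past + rng.normal(150, 60, 1000)
    print(np.round(growth_percentile(past, curr)[:5], 1))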
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas – Educational and Psychological Measurement, 2015
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions in which the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
Descriptors: Measurement, Computation, Test Format, Test Items
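Both properties can be tabulated mechanically for a candidate design; a toy check on a three-booklet cyclic rotation (a hypothetical design, not one from the paper):

    from itertools import combinations
    from collections import Counter

    # Each booklet lists item IDs in presentation order (toy design).
    booklets = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

    # Position balance: how often each item appears at each position.
    position = Counter((item, pos)
                       for b in booklets for pos, item in enumerate(b))

    # Pair balance: how often each unordered pair shares a booklet.
    pairs = Counter(frozenset(p)
                    for b in booklets for p in combinations(b, 2))

    print(position)  # every (item, position) combination occurs exactly once
    print(pairs)     # every unordered pair co-occurs in all 3 booklets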
Humphry, Stephen; Heldsinger, Sandra; Andrich, David – Applied Measurement in Education, 2014
One of the best-known methods for setting a benchmark standard on a test is that of Angoff and its modifications. When items are scored dichotomously, judges estimate the probability that a benchmark student will answer each item correctly. As in most methods of standard setting, it is assumed implicitly that the unit of the latent scale of the…
Descriptors: Foreign Countries, Standard Setting (Scoring), Judges, Item Response Theory
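The arithmetic of the dichotomous Angoff procedure is a sum of averaged judge ratings: average the judged probabilities per item, then sum across items to obtain the cut score. A toy illustration with invented ratings:

    # Rows = judges, columns = items; each entry is a judge's estimate of
    # the probability that a benchmark student answers the item correctly.
    ratings = [
        [0.6, 0.8, 0.4, 0.7],
        [0.5, 0.9, 0.3, 0.6],
        [0.7, 0.7, 0.5, 0.8],
    ]
    n_judges = len(ratings)
    item_means = [round(sum(col) / n_judges, 2) for col in zip(*ratings)]
    cut_score = sum(item_means)
    print(item_means)           # [0.6, 0.8, 0.4, 0.7]
    print(round(cut_score, 2))  # 2.5 (out of 4 raw-score points)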
Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2015
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling for pre-test scores and other related covariates while estimating an intervention effect at post-test. In many applications in education, using total post-test and pre-test scores ignores measurement error in the…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Pretests Posttests, Scores
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to the wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
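A bi-factor treatment of the wording effect typically gives every item a loading on the target trait and negatively worded items an additional loading on a wording factor; schematically (illustrative notation, not necessarily the authors' exact model):

    \operatorname{logit} P(Y_{pi}=1) = \alpha_i \theta_p + w_i \gamma_i \eta_p - \delta_i

where θ_p is the target trait, η_p an orthogonal wording (method) factor, and w_i = 1 only for negatively worded items; reverse coding alone amounts to forcing γ_i = 0.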
Michaelides, Michalis P.; Haertel, Edward H. – Applied Measurement in Education, 2014
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Descriptors: Equated Scores, Test Items, Sampling, Statistical Inference
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
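The model-based approaches referred to typically measure a latent missing propensity from the response indicators alongside the proficiency model, roughly as follows (a generic two-dimensional form, not necessarily the authors' exact model):

    \operatorname{logit} P(d_{pi}=1) = \xi_p - \gamma_i, \qquad \operatorname{logit} P(y_{pi}=1) = \theta_p - b_i

where d_pi = 1 if person p responds to item i, and proficiency θ_p and missing propensity ξ_p are allowed to correlate, which is how a nonignorable omission process is brought into the measurement model.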
Cui, Ying; Mousavi, Amin – International Journal of Testing, 2015
The current study applied the person-fit statistic, l_z, to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
Descriptors: Measurement, Achievement Tests, Comparative Analysis, Test Items
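For reference, l_z standardizes the log-likelihood l_0 of an examinee's scored response pattern under the fitted IRT model:

    l_z = \frac{l_0 - E(l_0)}{\sqrt{\operatorname{Var}(l_0)}}, \qquad l_0 = \sum_i \left[ u_i \ln P_i(\theta) + (1-u_i)\ln\bigl(1-P_i(\theta)\bigr) \right]

Large negative values flag response patterns that are unlikely under the model, which is how misfitting students are identified before item parameters are re-estimated.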
Murphy, Daniel L.; Beretvas, S. Natasha – Applied Measurement in Education, 2015
This study examines the use of cross-classified random effects models (CCrem) and cross-classified multiple membership random effects models (CCMMrem) to model rater bias and estimate teacher effectiveness. Effect estimates are compared using classical test theory (CTT) versus item response theory (IRT) scaling methods and three models (i.e., conventional multilevel…
Descriptors: Teacher Effectiveness, Comparative Analysis, Hierarchical Linear Modeling, Test Theory
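The cross-classified setup treats teachers and raters as crossed rather than nested random effects. In its simplest form (a generic specification, far simpler than the models compared in the study):

    y_{jk} = \mu + u_j + v_k + e_{jk}, \qquad u_j \sim N(0,\tau_u), \; v_k \sim N(0,\tau_v)

where u_j is the teacher effect of interest and v_k absorbs rater severity; the multiple-membership variant additionally lets each score depend on a weighted combination of several raters.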