Haberman, Shelby J. – ETS Research Report Series, 2019
Cross-validation is a common statistical procedure applied to problems that are otherwise computationally intractable. It is often employed to assess the effectiveness of prediction procedures. In this report, cross-validation is discussed in terms of "U"-statistics. This approach permits consideration of the statistical properties of…
Descriptors: Statistical Analysis, Generalization, Prediction, Computation
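[Editorial note: the report develops cross-validation through "U"-statistics; as general background only, here is a minimal k-fold sketch of cross-validated prediction error for a simple mean predictor. The predictor and function name are illustrative and are not the report's method.]

```python
import random
import statistics

def cv_prediction_error(y, k=5, seed=0):
    """Estimate the mean squared prediction error of a simple mean
    predictor via k-fold cross-validation (illustrative sketch)."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [set(idx[i::k]) for i in range(k)]
    sq_errors = []
    for fold in folds:
        train = [y[i] for i in idx if i not in fold]
        pred = statistics.fmean(train)  # "model" fit on the training folds
        sq_errors.extend((y[i] - pred) ** 2 for i in fold)
    return statistics.fmean(sq_errors)
```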
Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2019
We derive formulas for the differential item functioning (DIF) measures that two routinely used DIF statistics are designed to estimate. The DIF measures that match on observed scores are compared to DIF measures based on an unobserved ability (theta or true score) for items that are described by either the one-parameter logistic (1PL) or…
Descriptors: Scores, Test Bias, Statistical Analysis, Item Response Theory
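[Editorial note: as context for the 1PL case named above, a DIF effect at a fixed ability level can be viewed as the gap between the two groups' item response curves. A minimal sketch, assuming the groups differ only in item difficulty; function names are illustrative, and this is not the paper's derivation.]

```python
import math

def p_correct_1pl(theta, b):
    """1PL (Rasch) probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def dif_at_theta(theta, b_ref, b_focal):
    """DIF at ability theta: difference in correct-response probability
    between reference and focal groups with difficulties b_ref, b_focal."""
    return p_correct_1pl(theta, b_ref) - p_correct_1pl(theta, b_focal)
```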
Qian, Jiahe – ETS Research Report Series, 2020
The finite population correction (FPC) factor is often used to adjust variance estimators for survey data sampled from a finite population without replacement. As a replicated resampling approach, the jackknife approach is usually implemented without the FPC factor incorporated in its variance estimates. A paradigm is proposed to compare the…
Descriptors: Computation, Sampling, Data, Statistical Analysis
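[Editorial note: for readers unfamiliar with the FPC factor mentioned above: under simple random sampling without replacement, the usual variance estimator for the sample mean is scaled by (1 − n/N). A minimal sketch with an illustrative function name; the report's jackknife paradigm is not reproduced here.]

```python
import statistics

def fpc_variance_of_mean(sample, pop_size):
    """Variance estimate of the sample mean under simple random sampling
    without replacement, applying the finite population correction."""
    n = len(sample)
    s2 = statistics.variance(sample)  # unbiased sample variance
    fpc = 1.0 - n / pop_size          # FPC factor (1 - n/N)
    return fpc * s2 / n
```

When the sample exhausts the population (n = N), the factor drives the variance to zero; for n much smaller than N, it is close to the with-replacement estimate s²/n.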
von Davier, Matthias – ETS Research Report Series, 2016
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Descriptors: Psychometrics, Mathematics, Models, Statistical Analysis
Kim, Sooyeon; Robin, Frederic – ETS Research Report Series, 2017
In this study, we examined the potential impact of item misfit on the reported scores of an admission test from the subpopulation invariance perspective. The target population of the test consisted of 3 major subgroups with different geographic regions. We used the logistic regression function to estimate item parameters of the operational items…
Descriptors: Scores, Test Items, Test Bias, International Assessment
Livingston, Samuel A.; Chen, Haiwen H. – ETS Research Report Series, 2015
Quantitative information about test score reliability can be presented in terms of the distribution of equated scores on an alternate form of the test for test takers with a given score on the form taken. In this paper, we describe a procedure for estimating that distribution, for any specified score on the test form taken, by estimating the joint…
Descriptors: Scores, Statistical Distributions, Research Reports, Equated Scores
Qian, Jiahe; Jiang, Yanming; von Davier, Alina A. – ETS Research Report Series, 2013
Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…
Descriptors: Item Response Theory, Test Items, Sampling, True Scores
Haberman, Shelby J. – ETS Research Report Series, 2013
A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…
Descriptors: Predictor Variables, Mathematics, Item Response Theory, Probability
Guo, Hongwen; Oh, Hyeonjoo J. – ETS Research Report Series, 2009
In operational equating, frequency estimation (FE) equipercentile equating is often excluded from consideration when the old and new groups have a large ability difference. This convention may, in some instances, cause the exclusion of one competitive equating method from the set of methods under consideration. In this report, we study the…
Descriptors: Equated Scores, Computation, Statistical Analysis, Test Items
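[Editorial note: as background on the equipercentile idea itself (not the FE variant studied above, which conditions on an anchor and typically smooths the distributions first): a new-form score is mapped to the old-form score holding the same percentile rank. A minimal empirical sketch with illustrative names.]

```python
def equipercentile_equate(x, new_scores, old_scores):
    """Map a score x on the new form to the old-form scale by matching
    empirical percentile ranks (minimal unsmoothed sketch)."""
    new_sorted = sorted(new_scores)
    old_sorted = sorted(old_scores)
    # percentile rank of x in the new-form distribution
    below = sum(1 for s in new_sorted if s < x)
    at = sum(1 for s in new_sorted if s == x)
    p = (below + 0.5 * at) / len(new_sorted)
    # old-form score at that percentile rank
    k = min(int(p * len(old_sorted)), len(old_sorted) - 1)
    return old_sorted[k]
```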
Moses, Tim; Holland, Paul – ETS Research Report Series, 2008
The purpose of this paper is to extend von Davier, Holland, and Thayer's (2004b) framework of kernel equating so that it can incorporate raw data and traditional equipercentile equating methods. One result of this more general framework is that previous equating methodology research can be viewed more comprehensively. Another result is that the…
Descriptors: Equated Scores, Error of Measurement, Statistical Analysis, Computation
Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles – ETS Research Report Series, 2008
This paper describes an empirical evaluation of a Bayesian procedure for equating scores on test forms taken by small numbers of examinees, using collateral information from the equating of other test forms. In this procedure, a separate Bayesian estimate is derived for the equated score at each raw-score level, making it unnecessary to specify a…
Descriptors: Equated Scores, Statistical Analysis, Sample Size, Bayesian Statistics
Moses, Tim; Holland, Paul – ETS Research Report Series, 2007
The purpose of this study was to empirically evaluate the impact of loglinear presmoothing accuracy on equating bias and variability across chained and post-stratification equating methods, kernel and percentile-rank continuization methods, and sample sizes. The results of evaluating presmoothing on equating accuracy generally agreed with those of…
Descriptors: Equated Scores, Statistical Analysis, Accuracy, Sample Size
Braun, Henry; Zhang, Jinming; Vezzu, Sailesh – ETS Research Report Series, 2008
At present, although the percentages of students with disabilities (SDs) and/or students who are English language learners (ELL) excluded from a NAEP administration are reported, no statistical adjustment is made for these excluded students in the calculation of NAEP results. However, the exclusion rates for both SD and ELL students vary…
Descriptors: Research Methodology, Computation, Disabilities, English Language Learners
Haberman, Shelby J. – ETS Research Report Series, 2007
In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…
Descriptors: Item Response Theory, Ability, Information Theory, Computation
Oranje, Andreas – ETS Research Report Series, 2006
Confidence intervals are an important tool for indicating the uncertainty of estimates and for suggesting what values an estimate might take if a different sample were drawn from the population or a different sample of measures were used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
Descriptors: Computation, Statistical Analysis, National Competency Tests, Comparative Analysis
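[Editorial note: the truncated sentence above presumably concerns normal-approximation bounds falling outside [0, 1]. A minimal sketch contrasting the symmetric Wald interval with the Wilson score interval, whose bounds stay in [0, 1]; these are standard textbook formulas, not necessarily the alternatives the report evaluates.]

```python
import math

def wald_interval(p_hat, n, z=1.96):
    """Symmetric normal-approximation (Wald) interval for a proportion;
    can stray outside [0, 1] for extreme p_hat or small n."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

def wilson_interval(p_hat, n, z=1.96):
    """Wilson score interval; bounds always lie inside [0, 1]."""
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half
```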