Showing 1 to 15 of 18 results
Peer reviewed
Wise, Steven; Kuhfeld, Megan – Applied Measurement in Education, 2021
Effort-moderated (E-M) scoring is intended to estimate how well a disengaged test taker would have performed had they been fully engaged. It accomplishes this adjustment by excluding disengaged responses from scoring and estimating performance from the remaining responses. The scoring method, however, assumes that the remaining responses are not…
Descriptors: Scoring, Achievement Tests, Identification, Validity
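A minimal sketch of the effort-moderated scoring idea described in this abstract, assuming a single fixed response-time threshold for flagging rapid guesses and simple proportion-correct scoring of the remaining responses; the threshold value and the scoring rule are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def effort_moderated_score(responses, response_times, rt_threshold=3.0):
        """Score only the responses given with apparent solution behavior.

        responses      : 1 = correct, 0 = incorrect (one entry per item)
        response_times : seconds spent on each item
        rt_threshold   : responses faster than this are treated as rapid guesses
                         (illustrative value; operational thresholds are item-specific)
        """
        responses = np.asarray(responses, dtype=float)
        response_times = np.asarray(response_times, dtype=float)

        engaged = response_times >= rt_threshold       # exclude disengaged (rapid-guess) responses
        if engaged.sum() == 0:
            return np.nan                              # no scorable responses remain
        return responses[engaged].mean()               # estimate performance from engaged responses only

    # example: the two 1-second responses are excluded before scoring
    print(effort_moderated_score([1, 0, 1, 1, 0], [12.0, 1.0, 25.0, 8.0, 1.0]))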
Peer reviewed
Liu, Chunyan; Jurich, Daniel; Morrison, Carol; Grabovsky, Irina – Applied Measurement in Education, 2021
The existence of outliers in the anchor items can be detrimental to the estimation of examinee ability and undermine the validity of score interpretation across forms. In practice, anchor item performance can become distorted for various reasons. This study compares the performance of modified "INFIT" and "OUTFIT"…
Descriptors: Equated Scores, Test Items, Item Response Theory, Difficulty Level
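For context, the conventional (unmodified) INFIT and OUTFIT mean-square statistics under the Rasch model can be computed as below; this sketch uses the standard definitions and does not reproduce the modified versions compared in the article.

    import numpy as np

    def rasch_fit_statistics(x, theta, b):
        """Conventional INFIT/OUTFIT mean squares for one item under the Rasch model.

        x     : observed 0/1 responses to the item, one per examinee
        theta : examinee ability estimates
        b     : item difficulty
        """
        x = np.asarray(x, dtype=float)
        theta = np.asarray(theta, dtype=float)
        p = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch probability of a correct response
        w = p * (1.0 - p)                        # binomial variance of each response
        z2 = (x - p) ** 2 / w                    # squared standardized residuals
        outfit = z2.mean()                       # unweighted mean square
        infit = ((x - p) ** 2).sum() / w.sum()   # information-weighted mean square
        return infit, outfit

    theta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
    x = np.array([0, 0, 1, 1, 1, 1])
    print(rasch_fit_statistics(x, theta, b=0.2))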
Peer reviewed
Wise, Steven L. – Applied Measurement in Education, 2019
The identification of rapid guessing is important to promote the validity of achievement test scores, particularly with low-stakes tests. Effective methods for identifying rapid guesses require reliable threshold methods that are also aligned with test taker behavior. Although several common threshold methods are based on rapid guessing response…
Descriptors: Guessing (Tests), Identification, Reaction Time, Reliability
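One common family of response-time threshold rules, shown here purely as an illustration of the kind of method discussed above, sets a normative threshold at a percentage of each item's mean response time, often with a ceiling; the 10% figure and the 10-second cap below are assumptions, not the article's recommendation.

    import numpy as np

    def normative_thresholds(rt_matrix, pct=0.10, cap=10.0):
        """Per-item rapid-guessing thresholds as a fraction of the mean response time.

        rt_matrix : examinees x items array of response times in seconds
        pct       : fraction of the item mean used as the threshold (assumed 10%)
        cap       : maximum threshold in seconds (assumed 10 s)
        """
        item_means = np.nanmean(rt_matrix, axis=0)
        return np.minimum(pct * item_means, cap)

    rts = np.array([[45.0, 30.0, 60.0],
                    [ 2.0, 28.0, 55.0],
                    [50.0, 31.0,  3.0]])
    thresholds = normative_thresholds(rts)
    rapid_guess = rts < thresholds        # True where a response is flagged as a rapid guess
    print(thresholds)
    print(rapid_guess)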
Peer reviewed
Kim, Stella Y.; Lee, Won-Chan – Applied Measurement in Education, 2019
This study explores classification consistency and accuracy for mixed-format tests using real and simulated data. In particular, the current study compares six methods of estimating classification consistency and accuracy for seven mixed-format tests. The relative performance of the estimation methods is evaluated using simulated data. Study…
Descriptors: Classification, Reliability, Accuracy, Test Format
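The two quantities being estimated can be illustrated with a small simulation: classification accuracy compares the classification from an observed score with the one implied by the true score, while classification consistency compares classifications from two parallel administrations. The classical true-score setup and the single cut score below are assumptions made only for illustration and do not correspond to any of the six estimation methods compared in the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n_examinees, n_items, cut = 5000, 40, 24

    true_prop = rng.beta(6, 4, n_examinees)            # true proportion-correct per examinee
    form1 = rng.binomial(n_items, true_prop)           # observed scores, administration 1
    form2 = rng.binomial(n_items, true_prop)           # parallel administration 2

    true_class = true_prop * n_items >= cut
    class1, class2 = form1 >= cut, form2 >= cut

    accuracy = np.mean(class1 == true_class)    # agreement of observed with true classification
    consistency = np.mean(class1 == class2)     # agreement across two parallel forms
    print(accuracy, consistency)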
Peer reviewed
McClintock, Joseph Clair – Applied Measurement in Education, 2015
Erasure analysis is the study of the pattern or quantity of erasures on multiple-choice paper-and-pencil examinations, to determine whether erasures were made post-testing for the purpose of unfairly increasing students' scores. This study examined the erasure data from over 1.4 million exams, taken by more than 600,000 students. Three…
Descriptors: Multiple Choice Tests, Cheating, Methods, Computation
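A crude screening statistic of the kind used in erasure analysis standardizes each classroom's mean wrong-to-right (WTR) erasure count against the overall distribution and flags extreme values. The normal-theory z-score and the cutoff below are illustrative assumptions only and are not the study's method.

    import numpy as np

    def flag_wtr_erasures(classroom_wtr_means, flag_z=2.0):
        """Crude screen: flag classrooms whose mean wrong-to-right (WTR) erasures per
        student are far above the overall mean (flag_z is an assumed cutoff)."""
        wtr = np.asarray(classroom_wtr_means, dtype=float)
        z = (wtr - wtr.mean()) / wtr.std(ddof=1)
        return z, z > flag_z

    z, flags = flag_wtr_erasures([0.4, 0.5, 3.2, 0.3, 0.6, 0.4, 0.5])
    print(np.round(z, 2), flags)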
Peer reviewed
Kim, Kyung Yong; Lee, Won-Chan – Applied Measurement in Education, 2017
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Methods
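For reference, the three-parameter logistic model and the marginal likelihood that calibration methods of this kind approximate by numerical integration can be written in standard notation (this is generic notation, not a reproduction of the article's five methods):

    P_j(\theta) = c_j + (1 - c_j)\,\frac{1}{1 + \exp\left[-a_j(\theta - b_j)\right]}

    L(\mathbf{x}_i) = \int \prod_{j} P_j(\theta)^{x_{ij}} \left[1 - P_j(\theta)\right]^{1 - x_{ij}} g(\theta)\, d\theta
                    \;\approx\; \sum_{q=1}^{Q} w_q \prod_{j} P_j(\theta_q)^{x_{ij}} \left[1 - P_j(\theta_q)\right]^{1 - x_{ij}}

Here g(theta) is the assumed ability distribution and (theta_q, w_q) are the quadrature points and weights used for numerical integration; the specification of g and of the quadrature scheme correspond to the first two factors examined in the article.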
Peer reviewed
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
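A sketch of the basic quantity behind the odds-ratio approach to differential distractor functioning: among examinees who answered the item incorrectly, compare the odds of selecting a given distractor in the reference and focal groups. Operational DDF methods, including those compared in the article, condition on ability; this unconditional version only illustrates the idea.

    import numpy as np

    def distractor_odds_ratio(choices_ref, choices_focal, distractor):
        """Unconditional odds ratio for choosing a given distractor, computed among
        examinees who answered the item incorrectly in each group."""
        ref = np.asarray(choices_ref)
        foc = np.asarray(choices_focal)
        a = np.sum(ref == distractor)      # reference group: chose this distractor
        b = len(ref) - a                   # reference group: chose another wrong option
        c = np.sum(foc == distractor)      # focal group: chose this distractor
        d = len(foc) - c
        return (a * d) / (b * c)

    # wrong answers only ('B' is the distractor of interest; 'A'/'D' are other distractors)
    ref = ['B', 'A', 'B', 'D', 'A', 'B', 'D', 'A']
    foc = ['B', 'B', 'B', 'B', 'A', 'B', 'B', 'D']
    print(distractor_odds_ratio(ref, foc, 'B'))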
Peer reviewed
Han, Kyung T.; Wells, Craig S.; Sireci, Stephen G. – Applied Measurement in Education, 2012
Item parameter drift (IPD) occurs when item parameter values change from their original value over time. IPD may pose a serious threat to the fairness and validity of test score interpretations, especially when the goal of the assessment is to measure growth or improvement. In this study, we examined the effect of multidirectional IPD (i.e., some…
Descriptors: Item Response Theory, Test Items, Scaling, Methods
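A small sketch of the kind of multidirectional drift manipulated in such simulations, assuming drift is injected into the difficulty parameters of a few anchor items; the magnitudes, the number of drifting items, and the mean/mean linking check are illustrative choices only. It shows how drifts in opposite directions can offset one another in an aggregate linking constant even though individual items have moved.

    import numpy as np

    rng = np.random.default_rng(1)
    b_original = rng.normal(0.0, 1.0, 20)    # generating difficulties for 20 anchor items

    b_drifted = b_original.copy()
    b_drifted[:3] += 0.5                     # some items drift harder ...
    b_drifted[3:6] -= 0.5                    # ... others drift easier (multidirectional IPD)

    # mean/mean linking constant estimated from the anchor set
    linking_shift = b_original.mean() - b_drifted.mean()
    print(round(linking_shift, 3))           # zero here: opposite drifts mask each other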
Peer reviewed
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
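As a point of reference for the item-selection problem, the sketch below computes generalized partial credit model category probabilities and Fisher information, then picks the most informative remaining item at the current ability estimate. Maximum Fisher information at a point is used here only as a simple stand-in; the posterior-weighted and Kullback-Leibler procedures compared in the article weight information over a posterior distribution rather than evaluating it at a single theta.

    import numpy as np

    def gpcm_probs(theta, a, b_steps):
        """Category probabilities under the generalized partial credit model.
        b_steps holds the step difficulties b_1..b_m (category 0 has no step)."""
        steps = np.concatenate(([0.0], a * (theta - np.asarray(b_steps))))
        num = np.exp(np.cumsum(steps))
        return num / num.sum()

    def gpcm_information(theta, a, b_steps):
        """Fisher information of one GPCM item at ability theta (a^2 times the
        variance of the item score)."""
        p = gpcm_probs(theta, a, b_steps)
        k = np.arange(len(p))
        return a ** 2 * (np.sum(k ** 2 * p) - np.sum(k * p) ** 2)

    item_bank = [
        {"a": 1.2, "b": [-0.5, 0.4]},        # 3-category item
        {"a": 0.8, "b": [-1.0, 0.0, 1.0]},   # 4-category item
        {"a": 1.5, "b": [0.2, 0.9]},
    ]
    theta_hat = 0.3
    info = [gpcm_information(theta_hat, it["a"], it["b"]) for it in item_bank]
    print(int(np.argmax(info)))              # index of the most informative item at theta_hat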
Peer reviewed
Bejar, Isaac I. – Applied Measurement in Education, 2010
The foregoing articles constitute what I consider a comprehensive and clear description of the redesign process of a major assessment. The articles serve to illustrate the problems that will need to be addressed by large-scale assessments in the twenty-first century. Primary among them is how to organize the development of such assessments to meet…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Finch, Holmes; Monahan, Patrick – Applied Measurement in Education, 2008
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Descriptors: Monte Carlo Methods, Factor Analysis, Generalization, Methods
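A simplified illustration of the parallel-analysis logic behind MPA: compare the second eigenvalue of the observed inter-item correlation matrix with the distribution of second eigenvalues from data simulated under a unidimensional model. The crude unidimensional generator below (a Rasch model with difficulties matched to observed proportions correct) stands in for the marginal maximum likelihood nonlinear factor analysis used in the article.

    import numpy as np

    rng = np.random.default_rng(2)

    def second_eigenvalue(data):
        return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[-2]

    def simulate_unidimensional(n_persons, difficulties):
        theta = rng.normal(size=(n_persons, 1))
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))   # Rasch probabilities
        return (rng.uniform(size=p.shape) < p).astype(int)

    # "observed" data (here itself simulated, purely for the demonstration)
    observed = simulate_unidimensional(1000, rng.normal(0, 1, 20))

    # match item difficulties to observed proportions correct (crude calibration stand-in)
    p_correct = observed.mean(axis=0).clip(0.01, 0.99)
    b_hat = -np.log(p_correct / (1 - p_correct))

    obs_ev2 = second_eigenvalue(observed)
    boot_ev2 = [second_eigenvalue(simulate_unidimensional(observed.shape[0], b_hat))
                for _ in range(200)]
    p_value = np.mean(np.array(boot_ev2) >= obs_ev2)    # bootstrap p-value for extra dimensionality
    print(round(obs_ev2, 3), round(p_value, 3))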
Peer reviewed
Puhan, Gautam; Sinharay, Sandip; Haberman, Shelby; Larkin, Kevin – Applied Measurement in Education, 2010
Will subscores provide additional information beyond what is provided by the total score? Is there a method that can estimate more trustworthy subscores than observed subscores? To answer the first question, this study evaluated whether the true subscore was more accurately predicted by the observed subscore or the total score. To answer the second…
Descriptors: Licensing Examinations (Professions), Scores, Computation, Methods
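A small simulation illustrating the first question: when the true subscore is known, as it is in a simulation, one can compare how well the observed subscore and the total score each predict it. The correlated two-part true-score model below is an assumption for illustration only; the article's estimators are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 20000

    # two correlated true subscores and noisy observed subscores
    true_sub = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=n)
    obs_sub = true_sub + rng.normal(0, 0.7, size=true_sub.shape)
    total = obs_sub.sum(axis=1)                      # observed total score

    # squared correlations with true subscore 1: which predictor does better?
    r2_from_subscore = np.corrcoef(true_sub[:, 0], obs_sub[:, 0])[0, 1] ** 2
    r2_from_total = np.corrcoef(true_sub[:, 0], total)[0, 1] ** 2
    print(round(r2_from_subscore, 3), round(r2_from_total, 3))

In this configuration the highly correlated subscores mean the total score predicts the true subscore slightly better than the observed subscore does, which is exactly the situation in which reporting the subscore adds no value.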
Peer reviewed
Huff, Kristen; Steinberg, Linda; Matts, Thomas – Applied Measurement in Education, 2010
The cornerstone of evidence-centered assessment design (ECD) is an evidentiary argument that requires that each target of measurement (e.g., learning goal) for an assessment be expressed as a "claim" to be made about an examinee that is relevant to the specific purpose and audience(s) for the assessment. The "observable evidence" required to…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Ewing, Maureen; Packman, Sheryl; Hamen, Cynthia; Thurber, Allison Clark – Applied Measurement in Education, 2010
In the last few years, the Advanced Placement (AP) Program[R] has used evidence-centered assessment design (ECD) to articulate the knowledge, skills, and abilities to be taught in the course and measured on the summative exam for four science courses, three history courses, and six world language courses; its application to calculus and English…
Descriptors: Advanced Placement Programs, Equivalency Tests, Evidence, Test Construction
Peer reviewed
Plake, Barbara S.; Huff, Kristen; Reshetar, Rosemary – Applied Measurement in Education, 2010
In many large-scale assessment programs, achievement level descriptors (ALDs) play a critical role in communicating what scores on the assessment mean and in interpreting what examinees know and are able to do based on their test performance. Examinees are often classified into performance categories. The…
Descriptors: Evidence, Test Construction, Measurement, Standard Setting