Showing 4,906 to 4,920 of 9,547 results
Peer reviewed
Bolt, Daniel M. – Journal of Educational Measurement, 2000
Reviewed aspects of the SIBTEST procedure through three studies. Study 1 examined the effects of item format using 40 mathematics items from the Scholastic Assessment Test. Study 2 considered the effects of a problem type factor and its interaction with item format for eight items, and Study 3 evaluated the degree to which factors varied in the…
Descriptors: Computer Software, Hypothesis Testing, Item Bias, Mathematics
Peer reviewed
Gierl, Mark J.; Leighton, Jacqueline P.; Hunka, Stephen M. – Educational Measurement: Issues and Practice, 2000
Discusses the logic of the rule-space model (K. Tatsuoka, 1983) as it applies to test development and analysis. The rule-space model is a statistical method for classifying examinees' test item responses into a set of attribute-mastery patterns associated with different cognitive skills. Directs readers to a tutorial that may be downloaded. (SLD)
Descriptors: Item Analysis, Item Response Theory, Test Construction, Test Items
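The core idea behind the rule-space approach is to compare an examinee's observed item-response pattern with the ideal patterns implied by a Q-matrix of item-attribute requirements. The Python sketch below illustrates only that pattern-matching idea, with an invented Q-matrix and response vector; the published method classifies examinees in a statistical space of ability and fit coordinates rather than by raw distance between response patterns.

```python
# Toy illustration of attribute-pattern classification against a Q-matrix.
# This is NOT the full rule-space procedure; it only shows the core idea of
# matching observed responses to the ideal patterns implied by attribute mastery.

from itertools import product

# Hypothetical Q-matrix: rows = items, columns = attributes (1 = item requires attribute).
Q = [
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
]

def ideal_response(mastery):
    """An examinee answers an item correctly iff they master every attribute it requires."""
    return [int(all(m for q, m in zip(item, mastery) if q)) for item in Q]

def classify(observed):
    """Return the mastery pattern whose ideal responses are closest (Hamming distance)."""
    return min(product([0, 1], repeat=len(Q[0])),
               key=lambda mastery: sum(o != e for o, e in zip(observed, ideal_response(mastery))))

print(classify([1, 0, 0, 1, 1]))  # -> (1, 0, 1): masters attributes 1 and 3
```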
Peer reviewed
Jackson, Stacy L.; And Others – Journal of Career Assessment, 1996
Factor analysis of 1,030 adults' responses on the Myers-Briggs Type Indicator (MBTI) was used to test 4 alternative models. Results support a four-factor structure similar to the original Jungian structure. Elimination of 12 MBTI items was recommended. (SK)
Descriptors: Construct Validity, Factor Analysis, Models, Personality Measures
Peer reviewed
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2000
Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…
Descriptors: Computer Assisted Testing, Standardized Tests, Test Items, Testing Problems
Peer reviewed
Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang – Applied Psychological Measurement, 2001
Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
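The baseline a-stratified design that the refinement builds on partitions the item bank into strata by discrimination (a), administers low-a strata early and high-a strata late, and within the active stratum selects the item whose difficulty (b) is closest to the provisional ability estimate. The sketch below shows only that baseline idea, with a fabricated item bank and stage schedule; it does not implement the article's refined procedure.

```python
# Minimal sketch of a-stratified item selection: low-a items are used early,
# high-a items late; within a stratum, pick the unused item whose difficulty b
# is closest to the current ability estimate. Item parameters are fabricated.

import random

random.seed(0)
bank = [{"a": random.uniform(0.4, 2.0), "b": random.gauss(0.0, 1.0)} for _ in range(60)]

def stratify(bank, n_strata):
    """Sort by discrimination and split into equal-sized strata (low a first)."""
    ordered = sorted(bank, key=lambda it: it["a"])
    size = len(ordered) // n_strata
    return [ordered[k * size:(k + 1) * size] for k in range(n_strata)]

def select_item(stratum, theta, administered):
    """Within the active stratum, choose the unused item with b closest to theta."""
    candidates = [it for it in stratum if id(it) not in administered]
    return min(candidates, key=lambda it: abs(it["b"] - theta))

strata = stratify(bank, n_strata=3)
theta, administered = 0.0, set()
for stratum in strata:                 # e.g. administer 5 items from each stratum
    for _ in range(5):
        item = select_item(stratum, theta, administered)
        administered.add(id(item))
        # ...administer the item, score the response, and update theta here...
```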
Peer reviewed
Bandalos, Deborah L. – Structural Equation Modeling, 2002
Used simulation to study the effects of the practice of item parceling. Results indicate that certain types of item parceling can obscure a multidimensional factor structure in such a way that acceptable values of fit indexes are obtained for a misspecified solution. Discusses why the use of parceling cannot be recommended when items are…
Descriptors: Estimation (Mathematics), Factor Structure, Goodness of Fit, Test Items
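Item parceling replaces individual items with small aggregates (sums or means of item subsets), which then serve as the observed indicators in the factor model; the concern raised above is that this aggregation can hide multidimensionality. A minimal sketch of forming parcels, with fabricated responses and an arbitrary parceling scheme:

```python
# Forming item parcels: average (or sum) small subsets of items and use the
# parcels, rather than the items, as indicators in the factor model.
# The data and the parceling scheme below are fabricated for illustration.

import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(200, 12))   # 200 examinees x 12 Likert items

# Assign the 12 items to 4 parcels of 3 items each (one common scheme; others exist).
parcel_scheme = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]
parcels = np.column_stack([responses[:, list(idx)].mean(axis=1) for idx in parcel_scheme])

print(parcels.shape)   # (200, 4): the parcels would now be the indicators in the CFA
```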
Peer reviewed
Muraki, Eiji – Journal of Educational Measurement, 1999
Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…
Descriptors: Groups, Item Bias, Item Response Theory, Simulation
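Under the partial credit model, the probability of scoring in category x of a polytomous item depends on the examinee's ability and the item's step difficulties; a DIF analysis then asks whether those step parameters differ across groups at equal ability. A small sketch of the category probabilities, with invented step difficulties:

```python
# Partial credit model category probabilities for one polytomous item.
# P(X = x) is proportional to exp( sum_{j <= x} (theta - delta_j) ),
# with the empty sum for x = 0 equal to 0. Step difficulties are invented.

import math

def pcm_probs(theta, deltas):
    """Return P(X = 0), ..., P(X = m) for an item with step difficulties `deltas`."""
    numerators = [math.exp(sum(theta - d for d in deltas[:x])) for x in range(len(deltas) + 1)]
    total = sum(numerators)
    return [n / total for n in numerators]

print([round(p, 3) for p in pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.2])])
# In a DIF analysis, the step parameters would be estimated per group and compared.
```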
Peer reviewed
Ankenmann, Robert D. – Journal of Educational Measurement, 1996
This book is designed to be an instructional guide rather than a technical manual. For that reason, it provides a comprehensive and integrated overview of the procedures for detecting differential item functioning with citations to more technically detailed references. (SLD)
Descriptors: Evaluation Methods, Identification, Item Bias, Test Construction
Peer reviewed
Zeng, Lingjia – Applied Psychological Measurement, 1997
Proposes a marginal Bayesian estimation procedure to improve item parameter estimates for the three parameter logistic model. Computer simulation suggests that implementing the marginal Bayesian estimation algorithm with four-parameter beta prior distributions and then updating the priors with empirical means of updated intermediate estimates can…
Descriptors: Algorithms, Bayesian Statistics, Estimation (Mathematics), Statistical Distributions
Peer reviewed
Maller, Susan J. – Educational and Psychological Measurement, 2001
Used the national standardization sample (n=2,200) of the Wechsler Intelligence Scale for Children Third Edition (WISC-III) to investigate differential item functioning (DIF) in 6 WISC-III subtests. Detected both uniform DIF and nonuniform DIF, finding DIF for about one third of the items studied. Discusses implications for use of the WISC-III.…
Descriptors: Children, Intelligence Tests, Item Bias, Test Items
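Uniform DIF (a constant group difference at all ability levels) and nonuniform DIF (a group difference that changes with ability) are often illustrated with logistic regression screening, in which group and group-by-score terms are tested after conditioning on a matching score. The sketch below uses that generic approach on simulated data; it is not necessarily the detection method used in the article.

```python
# Logistic regression DIF screening on simulated data: compare nested models
# (matching score; + group for uniform DIF; + score-by-group for nonuniform DIF)
# with likelihood-ratio tests. All quantities below are simulated for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
ability = rng.normal(0.0, 1.0, n)
p = 1 / (1 + np.exp(-(1.2 * ability - 0.3 - 0.5 * group)))   # built-in uniform DIF
y = rng.binomial(1, p)
score = ability                          # stand-in for the matching criterion (e.g. rest score)

X1 = sm.add_constant(np.column_stack([score]))                        # matching score only
X2 = sm.add_constant(np.column_stack([score, group]))                 # + group (uniform DIF)
X3 = sm.add_constant(np.column_stack([score, group, score * group]))  # + interaction (nonuniform)

ll1 = sm.Logit(y, X1).fit(disp=0).llf
ll2 = sm.Logit(y, X2).fit(disp=0).llf
ll3 = sm.Logit(y, X3).fit(disp=0).llf

print("uniform DIF LR chi-square (1 df):   ", 2 * (ll2 - ll1))
print("nonuniform DIF LR chi-square (1 df):", 2 * (ll3 - ll2))
```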
Peer reviewed
Plake, Barbara S.; Impara, James C. – Educational Assessment, 2001
Examined the reliability and accuracy of item performance estimates from an Angoff standard setting application with 29 panelists in 1 year and 30 in the next year. Results provide evidence that item performance estimates were both reasonable and reliable. Discusses factors that might have influenced the results. (SLD)
Descriptors: Estimation (Mathematics), Evaluators, Performance Factors, Reliability
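In an Angoff study, each panelist estimates, item by item, the proportion of minimally competent examinees expected to answer correctly; summing a panelist's estimates gives that panelist's recommended cut score, and the panel's cut score is typically the average of those sums. A self-contained illustration with invented ratings:

```python
# Angoff standard setting in miniature: each panelist rates, per item, the
# probability that a minimally competent examinee answers correctly.
# The ratings below are fabricated for illustration.

panelist_ratings = [
    [0.60, 0.75, 0.40, 0.85, 0.55],   # panelist 1's estimates for 5 items
    [0.65, 0.70, 0.35, 0.90, 0.50],   # panelist 2
    [0.55, 0.80, 0.45, 0.80, 0.60],   # panelist 3
]

panelist_cut_scores = [sum(r) for r in panelist_ratings]          # sum of item estimates
recommended_cut = sum(panelist_cut_scores) / len(panelist_cut_scores)

print([round(s, 2) for s in panelist_cut_scores])   # [3.15, 3.1, 3.2]
print(round(recommended_cut, 2))                    # 3.15 out of 5 raw-score points
```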
Peer reviewed
Direct link
Wang, Wen-Chung; Chen, Po-Hsi; Cheng, Ying-Yao – Psychological Methods, 2004
A conventional way to analyze item responses in multiple tests is to apply unidimensional item response models separately, one test at a time. This unidimensional approach, which ignores the correlations between latent traits, yields imprecise measures when tests are short. To resolve this problem, one can use multidimensional item response models…
Descriptors: Item Response Theory, Test Items, Testing, Test Validity
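The multidimensional alternative replaces the single latent trait in each item's response function with a vector of correlated traits, so that responses on one test can sharpen measurement on another. A minimal sketch of a compensatory multidimensional 2PL item response function, with invented parameters:

```python
# Compensatory multidimensional 2PL item response function:
# P(correct) = logistic(a . theta + d), where theta is a vector of latent traits.
# With a single nonzero loading this collapses to the usual unidimensional 2PL.
# The parameters below are invented for illustration.

import math

def prob_correct(theta, a, d):
    """Probability of a correct response under a compensatory M2PL model."""
    logit = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-logit))

theta = (0.5, -0.2)                                 # examinee's traits on two dimensions
print(prob_correct(theta, a=(1.2, 0.0), d=-0.3))    # loads only on trait 1 (unidimensional case)
print(prob_correct(theta, a=(0.9, 0.7), d=-0.3))    # loads on both traits
```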
Peer reviewed
Direct link
Sijtsma, Klaas; van der Ark, L. Andries – Multivariate Behavioral Research, 2003
This article first discusses a statistical test for investigating whether or not the pattern of missing scores in a respondent-by-item data matrix is random. Since this is an asymptotic test, we investigate whether it is useful in small but realistic sample sizes. Then, we discuss two known simple imputation methods, person mean (PM) and two-way…
Descriptors: Test Items, Questionnaires, Statistical Analysis, Models
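Person-mean imputation replaces a missing score with the respondent's mean over their observed items; two-way imputation additionally adjusts for item difficulty by imputing person mean + item mean - overall mean. A small sketch with a fabricated data matrix (None marks a missing score):

```python
# Person mean (PM) and two-way (TW) imputation for a respondent-by-item matrix.
# TW imputes person mean + item mean - overall mean of the observed scores.
# The data matrix below is fabricated; None marks a missing score.

data = [
    [3, 4, None, 2],
    [5, None, 4, 3],
    [2, 3, 3, None],
    [4, 4, 5, 4],
]

observed = [x for row in data for x in row if x is not None]
overall_mean = sum(observed) / len(observed)

def person_mean(i):
    vals = [x for x in data[i] if x is not None]
    return sum(vals) / len(vals)

def item_mean(j):
    vals = [row[j] for row in data if row[j] is not None]
    return sum(vals) / len(vals)

def impute(i, j, method="TW"):
    if method == "PM":
        return person_mean(i)
    return person_mean(i) + item_mean(j) - overall_mean   # two-way imputation

print(round(impute(0, 2, "PM"), 2))   # person-mean imputation for the first missing cell
print(round(impute(0, 2, "TW"), 2))   # two-way imputation for the same cell
```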
Peer reviewed
Direct link
DeMars, Christine E. – Applied Measurement in Education, 2004
Three methods of detecting item drift were compared: the procedure in BILOG-MG for estimating linear trends in item difficulty, the CUSUM procedure that Veerkamp and Glas (2000) used to detect trends in difficulty or discrimination, and a modification of Kim, Cohen, and Park's (1995) chi-square test for multiple-group differential item functioning (DIF),…
Descriptors: Comparative Analysis, Test Items, Testing, Item Analysis
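A CUSUM chart accumulates deviations of an item's successive parameter estimates from a baseline value and flags drift once the running sum exceeds a threshold. The sketch below is a generic one-sided CUSUM on invented difficulty estimates, not the specific statistic used in the article:

```python
# Generic one-sided CUSUM for monitoring drift in an item's difficulty estimates
# across administrations. The baseline, slack k, threshold h, and the estimates
# are all invented for illustration.

def cusum_flags(estimates, baseline, k=0.1, h=0.5):
    """Return the running upper CUSUM path and whether it ever crosses threshold h."""
    s, path = 0.0, []
    for b in estimates:
        s = max(0.0, s + (b - baseline) - k)   # accumulate positive drift beyond slack k
        path.append(round(s, 3))
    return path, any(v > h for v in path)

difficulty_by_admin = [0.02, 0.05, 0.15, 0.30, 0.45]   # successive b-estimates for one item
path, drifted = cusum_flags(difficulty_by_admin, baseline=0.0)
print(path, drifted)   # the cumulative sum grows once the item starts drifting harder
```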
Peer reviewed
PDF on ERIC
Shapiro, Amy – Journal of the Scholarship of Teaching and Learning, 2009
Student evaluations of a large General Psychology course indicate that students enjoy the class a great deal, yet attendance is low. An experiment was conducted to evaluate a personal response system as a solution. Attendance rose by 30% compared with using extra credit as an inducement, but was equivalent to attendance when pop quizzes were offered. Performance on test…
Descriptors: Test Items, Instructional Effectiveness, Learning Strategies, Classroom Techniques