Showing 6,736 to 6,750 of 9,530 results
Peer reviewed
Plake, Barbara S.; And Others – Journal of Educational Measurement, 1994
The comparability of Angoff-based item ratings on a general education test battery made by judges from within-content and across-content domains was studied. Results with 26 college faculty judges indicate that, at least for some tests, item ratings may be essentially equivalent regardless of the judges' content specialty. (SLD)
Descriptors: College Faculty, Comparative Analysis, General Education, Higher Education
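For readers unfamiliar with the procedure this entry assumes: an Angoff rating is a judge's estimate of the probability that a minimally competent examinee answers an item correctly; a judge's cut score is the sum of those ratings, and the panel standard is typically their mean. A minimal sketch in Python (the ratings and panel size are illustrative, not data from the study):

# Minimal Angoff cut-score computation. The ratings below are
# hypothetical: each row is one judge, each column one item, and each
# value is the judged probability that a minimally competent examinee
# answers that item correctly.
ratings = [
    [0.60, 0.75, 0.40, 0.85],  # judge 1
    [0.55, 0.80, 0.45, 0.90],  # judge 2
    [0.65, 0.70, 0.50, 0.80],  # judge 3
]

# Each judge's recommended cut score is the sum of that judge's ratings.
judge_cuts = [sum(row) for row in ratings]

# The panel's standard is typically the mean of the judges' cut scores.
cut_score = sum(judge_cuts) / len(judge_cuts)
print(round(cut_score, 2))  # 2.65 out of a maximum of 4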
Doty, Robert – Internet World, 1995
Features Internet sites that are sources for lesson plans, materials, group discussion topics, activities, test questions, computer software, and videos for K-12 education. Resources highlighted include CNN Newsroom, KidLink, and AskERIC. (AEF)
Descriptors: Computer Software, Elementary Secondary Education, Group Discussion, Information Sources
Peer reviewed
Boldt, Robert F. – Language Testing, 1992
The assumption called PIRC (proportional item response curve) was tested by using PIRC to predict the item scores of selected examinees on selected items. Findings show approximate prediction accuracies for PIRC, the three-parameter logistic model, and a modified Rasch model. (12 references) (Author/LB)
Descriptors: Comparative Analysis, English (Second Language), Factor Analysis, Item Response Theory
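For context, the three-parameter logistic model named in the abstract gives the probability of a correct response to item i as a function of ability theta (standard IRT notation, not reproduced from the article):

P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]}

where a_i is the item's discrimination, b_i its difficulty, and c_i its lower (pseudo-guessing) asymptote. The PIRC assumption, as its name suggests, instead treats the item response curves as proportional to one another; it is the predictive accuracy of that restriction that the study compares against the fuller models.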
Peer reviewed
Perkins, Kyle – Language Testing, 1992
The effect of five types of topical structure (including initial sentence element, mood subject, and surface subject) on the item difficulty of reading comprehension questions was investigated. Results indicated differences in the item difficulty of questions according to the type of topical structure on which the questions were based. (21…
Descriptors: Difficulty Level, English (Second Language), Language Tests, Models
Kubota, Mel; Connell, Anne – College Board Review, 1992
The processes used in developing the Scholastic Aptitude Test (SAT) to eliminate cultural bias while still measuring skills related to academic success are described, including test item writing, pretesting, and validation. Test items from 1908, 1927, 1947, and 1980 tests illustrate the evolution of the examinations. (MSE)
Descriptors: College Entrance Examinations, Cultural Pluralism, Educational Change, Higher Education
Peer reviewed
Norcini, John J.; Shea, Judy A. – Journal of Educational Measurement, 1992
Scores from four samples each of 250 and 1,000 physicians demonstrated that a linear procedure for equating scores and rescaling judges' standards for a certification test could be applied to individual item data gathered through the Angoff standard-setting method. Equated and rescaled values were close to those actually assigned. (SLD)
Descriptors: Certification, Equated Scores, Estimation (Mathematics), Evaluators
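The linear procedure referred to here is, in standard equating notation (not taken from the article), the transformation that matches the means and standard deviations of scores on two forms:

y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X)

Applying the same transformation to a judge's Angoff standard set on form X rescales that standard onto the metric of form Y, which is how item-level standards can be carried across forms.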
Peer reviewed
Hegarty, Mary; And Others – Journal of Educational Psychology, 1992
Eye-fixation analysis of 38 undergraduates allowed identification of phases in solution of arithmetic word problems and location of students' difficulties with inconsistent problems within the phases. Results indicate that the locus of the inconsistency effect lies outside the execution phase of problem solving. (SLD)
Descriptors: Arithmetic, Eye Movements, Higher Education, Identification
Peer reviewed
Cizek, Gregory J. – Educational and Psychological Measurement, 1994
The performance of a common set of test items was compared across examination forms in which the order of the response options was experimentally manipulated. Results for 759 medical specialty board examinees find that reordering item options produces significant but unpredictable effects on item difficulty. (SLD)
Descriptors: Change, Difficulty Level, Equated Scores, Licensing Examinations (Professions)
Peer reviewed
Arthur, Winfred, Jr.; Day, David V. – Educational and Psychological Measurement, 1994
The development of a short form of the Raven Advanced Progressive Matrices Test is reported. Results from 3 studies with 663 college students indicate that the short form demonstrates psychometric properties similar to the long form yet requires a substantially shorter administration time. (SLD)
Descriptors: Cognitive Ability, College Students, Educational Research, Higher Education
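The reliability cost of shortening a test can be gauged with the classical Spearman-Brown projection (a standard formula, offered here as background rather than as the authors' analysis): a test with reliability \rho shortened to a fraction k of its original length is predicted to have

\rho' = \frac{k\rho}{1 + (k - 1)\rho}

For example, halving (k = 0.5) a test with \rho = 0.90 predicts \rho' \approx 0.82, which is why a carefully constructed short form can retain psychometric properties close to the long form's.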
Peer reviewed
Thomas, John W.; And Others – Higher Education, 1991
Research on college students' study skills and habits and on secondary-level course organization suggests that certain patterns of instructor demand (workload, test difficulty, latitude for self-direction) and compensations (test review practices, test item overlap with instructor handouts, "safety nets") may account for student study deficiencies, and…
Descriptors: College Preparation, College Students, Course Organization, High School Graduates
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
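The core of the 0.50 recommendation is elementary: a dichotomously scored item answered correctly by a proportion p of examinees has score variance

\sigma_i^2 = p(1 - p)

which is maximized at p = 0.50. Holding item intercorrelations fixed, larger item variances raise total-score variance and hence internal-consistency estimates such as coefficient alpha; Feldt's contribution is to extend this standard argument by modeling the 0/1 score as a dichotomized, normally distributed ability variable.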
Peer reviewed
Miller, Timothy R.; Spray, Judith A. – Journal of Educational Measurement, 1993
Presents logistic discriminant analysis as a means of detecting differential item functioning (DIF) in items that are polytomously scored. Provides examples of DIF detection using a 27-item mathematics test with 1,977 examinees. The proposed method is simpler and more practical than polytomous extensions of the logistic regression DIF procedure.…
Descriptors: Discriminant Analysis, Item Bias, Mathematical Models, Mathematics Tests
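As background on the logic (not the authors' exact procedure, which handles polytomously scored items), a binary-scored sketch in Python using scikit-learn: regress group membership on the total score alone, then on the total score plus the studied item; if the item adds predictive information beyond the total score, it is flagged for DIF. The data here are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)          # latent ability
total = theta + rng.normal(0.0, 0.5, n)  # observed total-score proxy
# Simulate an item with built-in DIF: harder for the focal group.
p_item = 1.0 / (1.0 + np.exp(-(theta - 0.8 * group)))
item = rng.binomial(1, p_item)

def loglik(model, X, y):
    # Log-likelihood of a fitted logistic model on (X, y).
    p = model.predict_proba(X)[:, 1]
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Model 1: predict group from total score only. A large C effectively
# disables scikit-learn's default regularization, so the likelihood-ratio
# comparison below is approximately the unpenalized one.
X1 = total.reshape(-1, 1)
m1 = LogisticRegression(C=1e6).fit(X1, group)

# Model 2: add the studied item. If the item score predicts group
# membership beyond the total score, the item shows DIF.
X2 = np.column_stack([total, item])
m2 = LogisticRegression(C=1e6).fit(X2, group)

lr_stat = 2.0 * (loglik(m2, X2, group) - loglik(m1, X1, group))
print(f"likelihood-ratio statistic: {lr_stat:.2f}")  # refer to chi-square, 1 df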
Peer reviewed
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Peer reviewed
Kunnan, Antony John – TESOL Quarterly, 1990
This study shows not only that a placement test can be examined for items that display differential item functioning (DIF) using an item response theory approach, but also that potential sources of these DIF items can be identified and that short- and long-term measures to reduce DIF can then be proposed. (JL)
Descriptors: Cultural Differences, English (Second Language), Higher Education, Item Analysis
Peer reviewed
Seong, Tae-Je – Applied Psychological Measurement, 1990
The sensitivity of marginal maximum likelihood estimation of item and ability (theta) parameters was examined when prior ability distributions were not matched to underlying ability distributions. Thirty sets of 45-item test data were generated. Conditions affecting the accuracy of estimation are discussed. (SLD)
Descriptors: Ability, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
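For context, marginal maximum likelihood estimates the item parameters \xi by integrating ability out of the likelihood against an assumed prior g(\theta) (standard formulation, not quoted from the article):

L(\xi) = \prod_{j=1}^{N} \int P(\mathbf{x}_j \mid \theta, \xi)\, g(\theta)\, d\theta

where \mathbf{x}_j is examinee j's response pattern. When g(\theta) is not matched to the true ability distribution, the marginalization is carried out against the wrong weight function, and it is the resulting bias in the item and \theta estimates that the simulated data sets probe.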