Showing 2,461 to 2,475 of 5,170 results
Peer reviewed
Clopton, James R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Comparative Analysis, Computer Programs, Hypothesis Testing, Item Analysis
Peer reviewed
Bohrnstedt, George W.; Campbell, Richard T. – Educational and Psychological Measurement, 1972
Descriptors: Computer Programs, Data Analysis, Item Analysis, Rating Scales
Peer reviewed
Whitney, Douglas R.; Sabers, Darrell L. – Journal of Experimental Education, 1971
Descriptors: Discriminant Analysis, Essay Tests, Item Analysis, Statistical Analysis
Gunn, Robert L.; Pearman, H. Egar – J Clin Psychol, 1970
A schedule was developed for assessing the future outlook of hospitalized psychiatric patients and administered to samples of patients from two different hospitals. A factor analysis was done for each sample. (CK)
Descriptors: Attitudes, Factor Analysis, Item Analysis, Patients
Simon, George B. – J Educ Meas, 1969
Descriptors: Item Analysis, Measurement Instruments, Test Construction, Test Results
Hunt, Richard A. – Educ Psychol Meas, 1970
Descriptors: Computer Programs, Item Analysis, Psychological Evaluation, Rating Scales
Koppel, Mark A.; Sechrest, Lee – Educ Psychol Meas, 1970
Descriptors: Correlation, Experimental Groups, Humor, Intelligence
Peer reviewed
Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Computation of the RDR is described. Applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
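The abstract does not reproduce Frisbie's formula. As a generic illustration of the kind of chance adjustment the RDR addresses, the standard correction-for-guessing of an item p-value can be sketched as follows (this is not the RDR itself; `n_choices` is a hypothetical parameter):

```python
def chance_corrected_p(p, n_choices):
    """Adjust an item p-value for blind guessing on an n-choice item.

    Standard correction-for-chance formula, shown only to illustrate
    why raw p-values from items with different numbers of choices are
    not directly comparable; it is not Frisbie's RDR.
    """
    chance = 1.0 / n_choices
    return (p - chance) / (1.0 - chance)
```

A p-value of .60 on a four-choice item corrects to about .47, while the same .60 on a two-choice item corrects to .20, so equal raw p-values can mask unequal chance-adjusted difficulty.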
Peer reviewed
Jackson, Paul H. – Psychometrika, 1979
Use of the same term "split-half" for division of an n-item test into two subtests containing equal (Cronbach), and possibly unequal (Guttman), numbers of items sometimes leads to a misunderstanding about the relation between Guttman's maximum split-half bound and Cronbach's coefficient alpha. This distinction is clarified. (Author/JKS)
Descriptors: Item Analysis, Mathematical Formulas, Technical Reports, Test Reliability
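The distinction at issue can be checked numerically: coefficient alpha equals the average of Guttman's lambda-4 over equal splits, so the maximum split-half value is always at least alpha. A sketch with simulated data (hypothetical scores; NumPy assumed):

```python
from itertools import combinations

import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (examinees x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def guttman_max_split_half(items):
    """Maximum of Guttman's lambda-4 over all two-part splits,
    including splits into unequal numbers of items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    best = -np.inf
    for size in range(1, k // 2 + 1):
        for part_a in combinations(range(k), size):
            part_b = [j for j in range(k) if j not in part_a]
            var_a = items[:, list(part_a)].sum(axis=1).var(ddof=1)
            var_b = items[:, part_b].sum(axis=1).var(ddof=1)
            best = max(best, 2.0 * (1.0 - (var_a + var_b) / total_var))
    return best
```

On any data set, `guttman_max_split_half` is at least `cronbach_alpha`; the two coincide only in special cases, which is the source of the confusion the article clears up.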
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
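Of the listed methods, the area between item characteristic curves is straightforward to sketch: fit the item in each group, then integrate the unsigned gap between the two curves over an ability grid. A minimal 2PL illustration (hypothetical parameters, not any specific study's procedure):

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def area_between_iccs(a1, b1, a2, b2, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between two ICCs over [lo, hi] (trapezoid rule).

    A large area suggests the item functions differently in the two
    groups whose parameters are (a1, b1) and (a2, b2).
    """
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc_2pl(theta, a1, b1) - icc_2pl(theta, a2, b2))
    widths = np.diff(theta)
    return float(np.sum((gap[:-1] + gap[1:]) / 2.0 * widths))
```

When the discriminations are equal, the area over the whole ability range reduces to the difference in difficulty, which makes the index easy to interpret.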
Peer reviewed
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001
Item-discrimination indices are numbers calculated from test data that are used in assessing the effectiveness of individual test questions. This article asserts that the indices are so unreliable as to suggest that countless good questions may have been discarded over the years. It considers how the indices, and hence overall test reliability,…
Descriptors: Guessing (Tests), Item Analysis, Test Reliability, Testing Problems
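A common index of the kind Burton examines is the upper-lower discrimination index D: the proportion answering the item correctly among high total scorers minus the proportion among low total scorers. A minimal sketch (the 27% split is conventional, not prescribed by the article):

```python
import numpy as np

def discrimination_index(item_correct, total_scores, frac=0.27):
    """Upper-lower discrimination index D for one item.

    item_correct: 0/1 array, one entry per examinee.
    total_scores: total test score per examinee.
    Returns proportion correct in the top frac of scorers minus the
    proportion correct in the bottom frac.
    """
    order = np.argsort(total_scores)
    n = max(1, int(round(frac * len(total_scores))))
    low, high = order[:n], order[-n:]
    return item_correct[high].mean() - item_correct[low].mean()
```

D ranges from -1 to 1; because it is computed from two small extreme groups, its sampling error is large, which is the unreliability the article discusses.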
Peer reviewed
van der Linden, Wim J. – Journal of Educational Measurement, 2005
In test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize…
Descriptors: Law Schools, Item Analysis, Admission (School), Adaptive Testing
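The sequential strategy described (optimizing an objective function at the examinee's ability estimate) is, in its simplest form, greedy maximum-information item selection. A minimal 2PL sketch with a hypothetical item pool:

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Greedy sequential assembly: return the index of the
    unadministered item most informative at the current estimate."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf  # rule out items already given
    return int(np.argmax(info))
```

Each greedy choice is optimal at the current estimate, but, as the article notes, a sequence of such choices need not satisfy constraints defined over the test as a whole, which is where the contrast with simultaneous assembly arises.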
Peer reviewed
Evans, Sion Wyn – Educational Studies in Mathematics, 2007
This paper draws on data from the development of annual national mathematics assessment materials for 7-year-old pupils in Wales for use during the period 2000-2002. The materials were developed in both English and Welsh and were designed to be matched. The paper reports on item analyses which sought items that exhibited differential performance…
Descriptors: Foreign Countries, Welsh, Test Bias, Educational Testing
Peer reviewed
van der Linden, Wim J.; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2007
Two conditional versions of the exposure-control method with item-ineligibility constraints for adaptive testing in van der Linden and Veldkamp (2004) are presented. The first version is for unconstrained item selection, the second for item selection with content constraints imposed by the shadow-test approach. In both versions, the exposure rates…
Descriptors: Law Schools, Adaptive Testing, Item Analysis, Probability
Peer reviewed
von Davier, Alina A.; Wilson, Christine – Educational and Psychological Measurement, 2007
This article discusses the assumptions required by the item response theory (IRT) true-score equating method (with the Stocking & Lord, 1983, scaling approach), which is commonly used in the nonequivalent groups with an anchor data-collection design. More precisely, this article investigates the assumptions made at each step by the IRT approach to…
Descriptors: Calculus, Item Response Theory, Scores, Data Collection