Showing all 9 results
Peer reviewed
Cahan, Sorel; Gamliel, Eyal – Journal of Educational Measurement, 2006
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
Descriptors: Probability, Bias, Mathematical Concepts, Mathematical Models
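For context on the entry above (a sketch of my own, not part of the ERIC record): writing A for acceptance under the predictor cut score and S for success on the criterion, Thorndike's CR condition, and the converse condition the abstract alludes to, can be stated as:

```latex
% Thorndike's constant-ratio (CR) condition for unbiased selection:
% the ratio of acceptance probability to success probability is the
% same constant c in every group g.
\[
  \frac{P(A \mid g)}{P(S \mid g)} = c \qquad \text{for all groups } g.
\]
% The inconsistency the abstract points to: the analogous condition on
% the converse (rejection/failure) probabilities,
\[
  \frac{P(\bar{A} \mid g)}{P(\bar{S} \mid g)}
    = \frac{1 - P(A \mid g)}{1 - P(S \mid g)} = c' \qquad \text{for all } g,
\]
% cannot in general hold at the same time as the first, except in
% degenerate cases such as c = 1 or groups with identical success rates.
```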
Peer reviewed
Barcikowski, Robert S. – Journal of Educational Measurement, 1972
These results indicate that, in choosing a data-gathering design for obtaining norm information, attention should be given to item characteristics and test length, especially the range of biserial correlations between item response and ability. (Author)
Descriptors: Item Sampling, Mathematical Models, Measurement Techniques, Monte Carlo Methods
Peer reviewed
Levin, Joel R. – Journal of Educational Measurement, 1975
A procedure developed in this study is useful for determining sample size, based on the specification of linear contrasts among treatment means. (Author/DEP)
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Measurement Techniques
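To make the entry above concrete (my own illustration, not the paper's procedure): for a single linear contrast among treatment means, a normal-approximation sample-size rule can be sketched in a few lines of Python. The contrast coefficients, effect size, and error SD below are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_group(psi, sigma, contrast, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a linear contrast
    psi = sum(c_j * mu_j) among treatment means in a one-way ANOVA.
    Uses the normal approximation: the contrast estimate has variance
    sigma^2 * sum(c_j^2) / n with n subjects per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 * sigma ** 2 * sum(c * c for c in contrast) / psi ** 2)

# Example: detect a 5-point gap between group 1 and the average of
# groups 2 and 3 (coefficients 1, -1/2, -1/2) with error SD 10.
print(n_per_group(psi=5.0, sigma=10.0, contrast=(1.0, -0.5, -0.5)))  # -> 48
```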
Peer reviewed
Marsh, Herbert W. – Journal of Educational Measurement, 1987
This investigation tested the ability of an a priori hierarchical structure of self-concept, derived from the Shavelson model, to explain responses to the Self Description Questionnaire (SDQ) III, and demonstrated the application of, and problems with, hierarchical confirmatory factor analysis (HCFA). (Author/JAZ)
Descriptors: Factor Analysis, Factor Structure, Goodness of Fit, Mathematical Models
Peer reviewed
Embretson, Susan E. – Journal of Educational Measurement, 1992
New developments for solving the validation problem are applied to measuring and validating spatial modifiability. Results from 582 Air Force recruits support construct and criterion-related validities for the cognitive modifiability of spatial visualization items. Results also support modifiability as a direct measurement of learning ability.…
Descriptors: Cognitive Ability, Cognitive Measurement, Concurrent Validity, Construct Validity
Peer reviewed
Hambleton, Ronald K.; De Gruijter, Dato N. M. – Journal of Educational Measurement, 1983
Addressing the shortcomings of classical item statistics for selecting criterion-referenced test items, this paper describes an optimal item selection procedure utilizing item response theory (IRT) and offers examples in which random selection and optimal item selection methods are compared. Theoretical advantages of optimal selection based upon…
Descriptors: Criterion Referenced Tests, Cutting Scores, Item Banks, Latent Trait Theory
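As an illustration of the kind of procedure the entry above describes (a minimal sketch under a 2PL IRT model, not the authors' implementation; the item parameters are made up): "optimal" selection for a criterion-referenced test picks the items whose information functions are largest at the cut score.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_optimal_items(item_bank, theta_cut, n_items):
    """Return the n_items from the bank contributing the most
    information at the cut score theta_cut."""
    ranked = sorted(item_bank,
                    key=lambda it: item_information(theta_cut, it["a"], it["b"]),
                    reverse=True)
    return ranked[:n_items]

# Hypothetical bank: discrimination a, difficulty b.
bank = [{"id": i, "a": a, "b": b}
        for i, (a, b) in enumerate([(1.2, -0.5), (0.8, 0.0), (1.5, 0.4),
                                    (0.6, 1.0), (1.1, 0.3), (1.4, -0.2)])]
print([it["id"] for it in select_optimal_items(bank, theta_cut=0.3, n_items=3)])
```

Random selection, by contrast, would draw items from the bank without regard to where their information peaks.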
Peer reviewed
Swaminathan, H.; And Others – Journal of Educational Measurement, 1975
A decision-theoretic procedure is outlined which provides a framework within which Bayesian statistical methods can be employed with criterion-referenced tests to improve the quality of decision making in objectives-based instructional programs. (Author/DEP)
Descriptors: Bayesian Statistics, Computer Assisted Instruction, Criterion Referenced Tests, Decision Making
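A minimal sketch of the general kind of rule the entry above describes (my own illustration under a beta-binomial model with threshold losses, not the authors' procedure; the prior, losses, and cutoff are assumptions):

```python
from scipy.stats import beta

def mastery_decision(x, n, pi0, prior=(1.0, 1.0), loss_fp=1.0, loss_fn=1.0):
    """Bayesian mastery decision for a criterion-referenced test.
    x: items correct; n: items administered; pi0: mastery cutoff on the
    true proportion-correct scale; prior: Beta(a, b) prior; loss_fp /
    loss_fn: losses for false mastery / false non-mastery calls."""
    a, b = prior
    # Beta prior + binomial likelihood -> Beta(a + x, b + n - x) posterior.
    p_master = beta.sf(pi0, a + x, b + (n - x))  # P(pi > pi0 | data)
    # Declare mastery when its expected loss is the smaller one:
    # loss_fp * P(non-master) < loss_fn * P(master).
    return loss_fp * (1.0 - p_master) < loss_fn * p_master, p_master

decision, prob = mastery_decision(x=16, n=20, pi0=0.7)
print(decision, round(prob, 3))
```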
Peer reviewed
Shavelson, Richard J.; And Others – Journal of Educational Measurement, 1993
Evidence is presented on the generalizability and convergent validity of performance assessments using data from six studies of student achievement that sampled a wide range of measurement facets and methods. Results at individual and school levels indicate that task-sampling variability is the major source of measurement error. (SLD)
Descriptors: Academic Achievement, Educational Assessment, Error of Measurement, Generalizability Theory
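For readers unfamiliar with the machinery behind the entry above, here is a minimal sketch (mine, not the study's analysis) of how variance components for a persons x tasks crossed design are estimated from two-way ANOVA mean squares; large task and person-by-task components relative to the person component are what "task-sampling variability" as a source of measurement error refers to.

```python
import numpy as np

def g_study_p_x_t(scores):
    """Variance components for a persons x tasks crossed G-study,
    from the expected mean squares of a two-way random-effects ANOVA
    with one observation per cell. scores: (n_persons, n_tasks) array."""
    n_p, n_t = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    t_means = scores.mean(axis=0)
    ms_p = n_t * np.sum((p_means - grand) ** 2) / (n_p - 1)
    ms_t = n_p * np.sum((t_means - grand) ** 2) / (n_t - 1)
    resid = scores - p_means[:, None] - t_means[None, :] + grand
    ms_pt = np.sum(resid ** 2) / ((n_p - 1) * (n_t - 1))
    return {"person": (ms_p - ms_pt) / n_t,   # sigma^2(p)
            "task": (ms_t - ms_pt) / n_p,     # sigma^2(t)
            "pt,e": ms_pt}                    # sigma^2(pt,e)
```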
Peer reviewed
Koffler, Stephen L. – Journal of Educational Measurement, 1980
Cut-off scores from two approaches to setting standards are examined: standards determined from judgments about groups are compared with standards derived from inspection of test content. Results indicate neither consistency nor a pattern in the cut-off scores set by the two procedures. (Author/RD)
Descriptors: Academic Standards, Cutting Scores, Educational Testing, Elementary Secondary Education