Peer reviewed
Callender, John C.; Osburn, H. G. – Educational and Psychological Measurement, 1977
A FORTRAN program for maximizing and cross-validating split-half reliability coefficients is described. Externally computed arrays of item means and covariances are used as input for each of two samples. The user may select a number of subsets from the complete set of items for analysis in a single run. (Author/JKS)
Descriptors: Computer Programs, Item Analysis, Test Reliability, Test Validity
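The split-half reliability coefficient that the program above maximizes can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the FORTRAN program's method: it uses a simple odd/even item split and the Spearman-Brown correction, whereas the program described works from externally computed item means and covariances.

```python
# Hypothetical sketch of split-half reliability with the Spearman-Brown
# correction. The odd/even split and the toy score matrix are illustrative;
# the program described above instead searches for an optimal split.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one list of 0/1 item scores per examinee."""
    odd = [sum(person[0::2]) for person in item_scores]   # odd-item half
    even = [sum(person[1::2]) for person in item_scores]  # even-item half
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown step-up to full length

scores = [
    [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1],
    [0, 1, 0, 0], [1, 1, 0, 1],
]
print(round(split_half_reliability(scores), 3))
```

The Spearman-Brown step projects the half-test correlation up to the reliability of the full-length test, which is why the choice of split matters and is worth optimizing.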
Ree, Malcolm James – 1976
A method for developing statistically parallel tests based on the analysis of unique item variance was developed. A test population of 907 basic airman trainees was required to estimate the angle at which an object in a photograph was viewed, selecting from eight possibilities. A FORTRAN program known as VARSEL was used to rank all the test items…
Descriptors: Comparative Analysis, Computer Programs, Enlisted Personnel, Item Analysis
Hedl, John J., Jr.; And Others – 1971
The development and evaluation of an interactive computer-based administration and scoring program for the Slosson Intelligence Test (SIT) is discussed. A computer program successfully developed for administering and scoring the Wechsler Adult Intelligence Scale is cited to support the feasibility of developing this proposed SIT program. Testing,…
Descriptors: Attitude Measures, Automation, College Students, Computer Programs
Avant, Glen R.; O'Brien, Michael L. – 1980
The Rasch Model was used to define the word analysis skill variable and to develop, field test, and calibrate a corresponding test for grades 2-12: the Emory Word Analysis Skill Inventory (EWASI). Word analysis objectives focusing on content and hierarchical levels of difficulty were identified and field tested with 78 students, grades 2-12,…
Descriptors: Computer Programs, Diagnostic Tests, Elementary Secondary Education, Field Tests
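The Rasch Model used to calibrate the EWASI can be sketched in a few lines. This is a generic illustration of the dichotomous Rasch model, not the report's calibration procedure; the ability and difficulty values are made up, not EWASI data.

```python
import math

# Dichotomous Rasch model: the probability of a correct response depends
# only on the gap between person ability and item difficulty (both on the
# same logit scale). Values here are illustrative assumptions.

def rasch_prob(ability, difficulty):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals item difficulty, the model gives exactly a 50% chance.
print(rasch_prob(0.0, 0.0))  # 0.5
```

Because only the ability-difficulty gap matters, items calibrated this way can be ordered on a single difficulty scale across grades, which is what makes a hierarchical inventory like the one described feasible.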
Brennan, Robert L. – 1974
The first four chapters of this report primarily provide an extensive, critical review of the literature with regard to selected aspects of the criterion-referenced and mastery testing fields. Major topics treated include: (a) definitions, distinctions, and background, (b) the relevance of classical test theory, (c) validity and procedures for…
Descriptors: Computer Programs, Confidence Testing, Criterion Referenced Tests, Error of Measurement
Pine, Steven M.; Weiss, David J. – 1976
This report examines how selection fairness is influenced by the item characteristics of a selection instrument in terms of its distribution of item difficulties, level of item discrimination, and degree of item bias. Computer simulation was used in the administration of conventional ability tests to a hypothetical target population consisting of…
Descriptors: Aptitude Tests, Bias, Computer Programs, Culture Fair Tests
Civil Service Commission, Washington, DC. Personnel Research and Development Center. – 1976
This pamphlet reprints three papers and an invited discussion of them, read at a Division 5 Symposium at the 1975 American Psychological Association Convention. The first paper describes a Bayesian tailored testing process and shows how it demonstrates the importance of using test items with high discrimination, low guessing probability, and a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Oriented Programs, Computer Programs
Cliff, Norman; And Others – 1977
TAILOR is a computer program that uses the implied orders concept as the basis for computerized adaptive testing. The basic characteristics of TAILOR, which does not involve pretesting, are reviewed here and two studies of it are reported. One is a Monte Carlo simulation based on the four-parameter Birnbaum model and the other uses a matrix of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Difficulty Level
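The four-parameter Birnbaum model underlying the Monte Carlo simulation above can be written as a short function. This is a sketch of the standard four-parameter logistic form, not TAILOR itself or its implied-orders procedure; the parameter values are illustrative assumptions.

```python
import math

# Four-parameter Birnbaum (logistic) item response model:
#   a = discrimination, b = difficulty,
#   c = lower asymptote (guessing floor), d = upper asymptote (ceiling).
# Parameter values below are made up for illustration.

def birnbaum_4pl(theta, a, b, c, d):
    """P(correct) for an examinee of ability theta."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability sits midway between the two asymptotes.
p = birnbaum_4pl(theta=0.0, a=1.0, b=0.0, c=0.2, d=1.0)
print(round(p, 2))  # 0.6
```

The nonzero lower asymptote is what models guessing on multiple-choice items, which is relevant to simulating adaptive test behavior realistically.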
Rippey, Robert M. – 1971
Technical improvements that may be made in the reliability and validity of tests through confidence scoring are discussed. However, studies indicate that subjects do not handle their confidence uniformly. (MS)
Descriptors: Computer Programs, Confidence Testing, Correlation, Difficulty Level
Smith, Richard M.; Mitchell, Virginia P. – 1979
To improve the accuracy of college placement, Rasch scoring and person-fit statistics on the Comparative Guidance and Placement test (CGP) were compared to traditional right-only scoring. Correlations were calculated between English and mathematics course grades and scores of 1,448 entering freshmen on the reading, writing, and mathematics…
Descriptors: Academic Ability, Computer Programs, Difficulty Level, Goodness of Fit