Showing all 8 results
Urry, Vern W. – 1983
In this report, selection theory is used as a theoretical framework from which mathematical algorithms for tailored testing are derived. The process of tailored, or adaptive, testing is presented as analogous to personnel selection and rejection on a series of continuous variables that are related to ability. Proceeding from a single common-factor…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Latent Trait Theory
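The report derives its algorithms from selection theory, but the core mechanic of tailored testing it describes can be illustrated with a standard maximum-information item-selection rule under the three-parameter logistic (3PL) model. This is a minimal sketch of that common rule, not Urry's derivation; the item-bank layout and function names are illustrative assumptions.

```python
import math

def p_3pl(theta, a, b, c):
    # 3PL probability of a correct response at ability theta:
    # c + (1 - c) / (1 + exp(-a * (theta - b)))
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    # Fisher information of a 3PL item at theta:
    # a^2 * ((P - c) / (1 - c))^2 * (Q / P)
    p = p_3pl(theta, a, b, c)
    return a ** 2 * ((p - c) / (1.0 - c)) ** 2 * ((1.0 - p) / p)

def pick_next_item(theta_hat, item_bank, administered):
    # Adaptive rule: among unadministered items, choose the one that is
    # most informative at the current ability estimate theta_hat.
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates,
               key=lambda i: item_information(theta_hat, *item_bank[i]))
```

With a three-item bank of (a, b, c) parameters and a current estimate of 0, the rule picks the item whose difficulty best matches the examinee, which is what makes the test "tailored".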
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance for interpreting test data that include both genuine and nonsense items. One formula is based on the usual…

Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Livingston, Samuel A. – 1986
This paper deals with the fairness of a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Kingsbury, G. Gage – 1985
A procedure for assessing content-area and total-test dimensionality which uses response function discrepancies (RFD) was studied. Three different versions of the RFD procedure were compared to Bejar's principal axis content-area procedure and Indow and Samejima's exploratory factor analytic technique. The procedures were compared in terms of the…
Descriptors: Achievement Tests, Comparative Analysis, Elementary Education, Estimation (Mathematics)
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
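Samejima's simulation rests on the relationship between the normal ogive model (used to generate the data) and the logistic model (used to fit it). A well-known fact, not specific to this study, is that a logistic curve with scaling constant D = 1.702 tracks the normal ogive to within about 0.01 in probability everywhere; the sketch below illustrates that correspondence for a two-parameter item.

```python
import math

def normal_ogive(theta, a, b):
    # Two-parameter normal ogive: Phi(a * (theta - b)),
    # where Phi is the standard normal CDF.
    z = a * (theta - b)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logistic_2pl(theta, a, b, D=1.702):
    # Logistic counterpart; D = 1.702 is the classic scaling constant
    # that makes the two curves agree to within ~0.01.
    return 1.0 / (1.0 + math.exp(-D * a * (theta - b)))
```

Because the two item response functions are so close, data generated under one model can plausibly be fit by the other, which is exactly the situation the abstract describes.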
Huntley, Renee M.; Carlson, James E. – 1986
This study compared student performance on language-usage test items presented in two different formats: as discrete sentences and as items embedded in passages. American College Testing (ACT) Program's Assessment experimental units were constructed that presented 40 items in the two different formats. Results suggest item presentation may not…
Descriptors: College Entrance Examinations, Difficulty Level, Goodness of Fit, Item Analysis
Levine, Michael V.; Drasgow, Fritz – 1984
Some examinees' test-taking behavior may be so idiosyncratic that their scores are not comparable to the scores of more typical examinees. Appropriateness indices, which provide quantitative measures of response-pattern atypicality, can be viewed as statistics for testing a null hypothesis of normal test-taking behavior against an alternative…
Descriptors: Cheating, College Entrance Examinations, Computer Simulation, Estimation (Mathematics)
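A concrete example of an appropriateness index of the kind the abstract describes is the standardized log-likelihood person-fit statistic, commonly written l_z and associated with Drasgow, Levine, and colleagues. This is a generic sketch of that statistic under assumed-known item response probabilities, not a reproduction of the specific indices studied in the report; large negative values flag atypical response patterns.

```python
import math

def lz_statistic(responses, probs):
    # responses: 0/1 item scores for one examinee.
    # probs: model probabilities of a correct response on each item
    #        for that examinee (assumed known here for illustration).
    # l0: observed log-likelihood of the response pattern.
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1.0 - p)
             for u, p in zip(responses, probs))
    # Expectation and variance of l0 under the model.
    mean = sum(p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
               for p in probs)
    var = sum(p * (1.0 - p) * math.log(p / (1.0 - p)) ** 2
              for p in probs)
    # Standardize: negative values indicate an unlikely (atypical) pattern.
    return (l0 - mean) / math.sqrt(var)
```

Answering easy items correctly yields a non-negative index, while missing them all yields a strongly negative one, which is the sense in which the statistic tests "normal test-taking behavior" against an atypical alternative.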
Waller, Michael I. – 1986
This study compares the fit of the 3-parameter model to the Ability Removing Random Guessing (ARRG) model on data from a wide range of tests of cognitive ability in three representative samples. When the guessing parameters under the 3-parameter model are estimated individually for each item, the 3-parameter model yields the better fit to…
Descriptors: Cognitive Tests, Cohort Analysis, Elementary Secondary Education, Equations (Mathematics)