Showing 166 to 180 of 265 results
Peer reviewed
David, Gergely – Language Testing, 2007
Some educational contexts almost mandate the application of multiple-choice (MC) testing techniques, even if they are deplored by many practitioners in the field. In such contexts especially, research into how well these types of item perform and how their performance may be characterised is both appropriate and desirable. The focus of this paper…
Descriptors: Student Evaluation, Grammar, Language Tests, Test Items
Plake, Barbara S.; And Others – 1983
Differential test performance on a quantitative examination as a function of item arrangement was reported for undergraduate males and females enrolled in a developmental educational psychology course (n=167). Males were expected to perform better than females on tests whose items were arranged from easy to hard. Plake and Ansorge (1982) speculated this may…
Descriptors: Difficulty Level, Feedback, Higher Education, Scoring
Peer reviewed
Laffitte, Rondeau G., Jr. – Teaching of Psychology, 1984
A study involving undergraduate college students enrolled in an introductory psychology course showed that test item arrangement by difficulty or by order of content presentation has no effect on total achievement test score. The data also fail to demonstrate any influence of test item order on student perception of test difficulty. (RM)
Descriptors: Difficulty Level, Educational Research, Higher Education, Psychology
Peer reviewed
Knowles, Susan L.; Welch, Cynthia A. – Educational and Psychological Measurement, 1992
A meta-analysis of the difficulty and discrimination of the "none-of-the-above" (NOTA) test option was conducted with 12 articles (20 effect sizes) for difficulty and 7 studies (11 effect sizes) for discrimination. Findings indicate that using the NOTA option does not result in items of lesser quality. (SLD)
Descriptors: Difficulty Level, Effect Size, Meta Analysis, Multiple Choice Tests
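The pooling step in a meta-analysis like this one is typically an inverse-variance weighted average of the study effect sizes. A minimal sketch in Python, using hypothetical effect sizes and variances rather than the 20 effects actually analyzed:

```python
import numpy as np

# Hypothetical effect sizes (standardized differences between NOTA and
# conventional items) and their sampling variances -- illustrative only.
effects = np.array([0.12, -0.05, 0.08, 0.02, -0.10])
variances = np.array([0.04, 0.02, 0.05, 0.03, 0.06])

# Fixed-effect model: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)

# Standard error and 95% confidence interval of the pooled estimate.
se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled effect: {pooled:.3f}, "
      f"95% CI: ({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```

A pooled effect near zero with a confidence interval covering zero is what a finding of "no loss of item quality" would look like in these terms.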
Peer reviewed
Deak, Gedeon O.; Ray, Shanna D.; Pick, Anne D. – Cognitive Development, 2004
To test preschoolers' ability to flexibly switch between abstract rules differing in difficulty, ninety-three 3-, 4-, and 5-year-olds were instructed to switch from an (easier) shape-sorting to a (harder) function-sorting rule, or vice versa. Children learned one rule, sorted four test sets, then learned the other rule, and sorted four more sets.…
Descriptors: Difficulty Level, Preschool Children, Cognitive Tests, Adaptive Testing
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine whether an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
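The mode-by-ability question in this entry is an interaction test. A sketch of how such an interaction is commonly checked with a two-way ANOVA, using simulated scores (the group labels and effect sizes below are invented, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated 2 (mode) x 2 (ability) design -- hypothetical throughout.
rng = np.random.default_rng(42)
n = 80
df = pd.DataFrame({
    "mode": rng.choice(["paper", "computer"], size=n),
    "ability": rng.choice(["low", "high"], size=n),
})
df["score"] = (20
               + 3.0 * (df["ability"] == "high")
               + rng.normal(0, 2, size=n))

# The mode:ability row of the ANOVA table tests the interaction.
model = smf.ols("score ~ mode * ability", data=df).fit()
print(anova_lm(model, typ=2))
```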
Rubin, Lois S.; Mott, David E. W. – 1984
The effect of an item's position within a test on its difficulty value was investigated. Using a 60-item operational test comprising 5 subtests, 60 items were placed as experimental items on a number of spiralled test forms in three different positions (first, middle, last) within the subtest composed of like items…
Descriptors: Difficulty Level, Item Analysis, Minimum Competency Testing, Reading Tests
Ebel, Robert L. – 1981
An alternate-choice test item is a simple declarative sentence, one portion of which is given with two different wordings. For example, "Foundations like Ford and Carnegie tend to be (1) eager (2) hesitant to support innovative solutions to educational problems." The examinee's task is to choose the alternative that makes the sentence…
Descriptors: Comparative Testing, Difficulty Level, Guessing (Tests), Multiple Choice Tests
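Because alternate-choice items have only two options, blind guessing succeeds half the time, so such scores are often read alongside the classical correction for guessing. The function below implements the standard textbook formula, score = R − W/(k − 1); it is a general illustration, not the scoring used in Ebel's paper:

```python
def corrected_score(num_right: int, num_wrong: int, num_choices: int) -> float:
    """Classical correction for guessing: R - W / (k - 1).

    Omitted items are neither rewarded nor penalized.
    """
    return num_right - num_wrong / (num_choices - 1)

# With two-choice items, each wrong answer cancels one right answer:
print(corrected_score(40, 10, num_choices=2))  # 30.0
# The same raw score on four-choice items is penalized less:
print(corrected_score(40, 10, num_choices=4))  # 36.666...
```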
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double as opposed to single multiple-choice questions yielded significant differences in item difficulty, item discrimination, and internal reliability, but not in concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed
Berger, Steven G.; And Others – Assessment, 1994
As part of a neuropsychological assessment, 95 adult patients completed either standard or computerized versions of the Category Test. Subjects who completed the computerized version exhibited more errors than those who completed the standard version, suggesting that it may be more difficult. (SLD)
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Demography
Li, Yuan H.; Griffith, William D.; Tam, Hak P. – 1997
This study explores the relative merits of a potentially useful item response theory (IRT) linking design: using a single set of anchor items with fixed common item parameters (FCIP) during the calibration process. An empirical study was conducted to investigate the appropriateness of this linking design using 6 groups of students taking 6 forms…
Descriptors: Ability, Difficulty Level, Equated Scores, Error of Measurement
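The fixed common item parameter (FCIP) design can be illustrated with a toy Rasch calibration: anchor item difficulties are frozen at their previously calibrated values, abilities are estimated against those anchors, and new items are then calibrated onto the same scale. The sketch below uses hypothetical responses and bare-bones maximum-likelihood steps; operational FCIP runs use full IRT software with marginal estimation, so this shows only the shape of the design, not the authors' procedure:

```python
import numpy as np
from scipy.optimize import brentq

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Anchor difficulties held fixed at their old-form values
# (hypothetical numbers; under FCIP these are NOT re-estimated).
anchor_b = np.array([-1.0, -0.3, 0.2, 0.9])

def ml_theta(responses, b):
    """MLE of ability given fixed item difficulties."""
    score = responses.sum()
    if score == 0:
        return -4.0  # bound extreme scores instead of -infinity
    if score == len(responses):
        return 4.0
    return brentq(lambda t: np.sum(responses - rasch_p(t, b)), -6.0, 6.0)

def ml_difficulty(item_responses, thetas):
    """MLE of a new item's difficulty, abilities held fixed."""
    return brentq(lambda d: np.sum(item_responses - rasch_p(thetas, d)),
                  -6.0, 6.0)

# Hypothetical responses: 5 examinees x 4 anchor items.
anchor_resp = np.array([[1, 1, 0, 0],
                        [1, 1, 1, 0],
                        [1, 0, 0, 0],
                        [1, 1, 1, 1],
                        [0, 1, 1, 0]])
thetas = np.array([ml_theta(r, anchor_b) for r in anchor_resp])

# A new (non-anchor) item is calibrated onto the anchor-defined scale.
new_item = np.array([1, 1, 0, 1, 0])
print("New-item difficulty estimate:", round(ml_difficulty(new_item, thetas), 3))
```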
Peer reviewed
Straton, Ralph G.; Catts, Ralph M. – Educational and Psychological Measurement, 1980
Multiple-choice tests composed entirely of two-, three-, or four-choice items were investigated. Results indicated that the number of alternatives per item was inversely related to item difficulty, but directly related to item discrimination. The reliability and standard error of measurement of three-choice item tests were equivalent or superior…
Descriptors: Difficulty Level, Error of Measurement, Foreign Countries, Higher Education
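The quantities compared in this entry are all classical test theory statistics computable directly from a scored response matrix: difficulty as proportion correct, discrimination as the item-total point-biserial, KR-20 reliability, and the standard error of measurement. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items.
X = np.array([[1, 1, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [1, 1, 1, 1, 0],
              [0, 1, 0, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 0, 1, 0]])
total = X.sum(axis=1)

# Item difficulty: proportion answering correctly (higher = easier).
p = X.mean(axis=0)

# Item discrimination: point-biserial correlation with the total score.
r_pb = np.array([np.corrcoef(X[:, j], total)[0, 1] for j in range(X.shape[1])])

# KR-20 reliability and standard error of measurement (SEM) -- the two
# statistics the study compared across 2-, 3-, and 4-choice tests.
k = X.shape[1]
item_var = X.var(axis=0, ddof=1)
total_var = total.var(ddof=1)
kr20 = (k / (k - 1)) * (1.0 - item_var.sum() / total_var)
sem = np.sqrt(total_var * (1.0 - kr20))

print("difficulty p:", p.round(2))
print("discrimination r_pb:", r_pb.round(2))
print(f"KR-20: {kr20:.2f}  SEM: {sem:.2f}")
```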
Peer reviewed
Wainer, Howard; And Others – Journal of Educational Measurement, 1994
The comparability of scores on test forms that are constructed through examinee item choice is examined in an item response theory framework. The approach is illustrated with data from the College Board's Advanced Placement Test in Chemistry taken by over 18,000 examinees. (SLD)
Descriptors: Advanced Placement, Chemistry, Comparative Analysis, Constructed Response
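The comparability problem with examinee-chosen items is that two choices can have different measurement properties. In an IRT framework this shows up in the test characteristic curves: the expected score on each choice as a function of ability. A sketch with hypothetical 2PL parameters (not the Advanced Placement Chemistry estimates):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical parameters for two choosable item sets, A and B.
a_A, b_A = np.array([1.2, 0.8]), np.array([0.0, 0.5])
a_B, b_B = np.array([0.9, 1.1]), np.array([-0.2, 0.9])

# Test characteristic curves: expected score at each ability level.
# If the curves differ, raw scores on the two choices are not directly
# comparable and must be adjusted on the common theta scale.
for theta in np.linspace(-2, 2, 5):
    e_a = p_2pl(theta, a_A, b_A).sum()
    e_b = p_2pl(theta, a_B, b_B).sum()
    print(f"theta={theta:+.1f}  E[score|A]={e_a:.2f}  E[score|B]={e_b:.2f}")
```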
Freedle, Roy; Kostin, Irene – 1993
Prediction of the difficulty (equated delta) of a large sample (n=213) of reading comprehension items from the Test of English as a Foreign Language (TOEFL) was studied using main idea, inference, and supporting statement items. A related purpose was to examine whether text and text-related variables play a significant role in predicting item…
Descriptors: Construct Validity, Difficulty Level, Multiple Choice Tests, Prediction
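Stripped to its core, predicting equated delta from text variables is a regression problem: each item contributes a feature vector and an observed difficulty. The sketch below fits ordinary least squares to invented features; the variable names are placeholders, not Freedle and Kostin's actual predictor set:

```python
import numpy as np

# Hypothetical predictors per item: passage length (words), mean
# sentence length, and a 0/1 flag for an inference-type stem.
X = np.array([[250, 14.2, 0],
              [310, 18.5, 1],
              [180, 11.0, 0],
              [400, 21.3, 1],
              [275, 16.8, 0],
              [350, 19.9, 1]])
delta = np.array([10.2, 12.8, 9.1, 13.9, 11.0, 13.1])  # equated deltas

# Ordinary least squares via least-squares solve (with an intercept).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, delta, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((delta - pred) ** 2) / np.sum((delta - delta.mean()) ** 2)
print("coefficients:", coef.round(3), f" R^2: {r2:.2f}")
```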
Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
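One traditional standard-setting methodology of the kind referenced here is the Angoff procedure: each panelist judges, item by item, the probability that a minimally competent candidate would answer correctly, and the recommended passing score aggregates those judgments. A minimal sketch with hypothetical judgments (the abstract does not name the study's exact method):

```python
import numpy as np

# Hypothetical Angoff judgments: rows = panelists, columns = items.
# Each entry is the judged probability that a minimally competent
# examinee answers the item correctly.
judgments = np.array([
    [0.60, 0.75, 0.40, 0.85, 0.55],
    [0.65, 0.70, 0.45, 0.80, 0.50],
    [0.55, 0.80, 0.35, 0.90, 0.60],
])

# Each panelist's cut score is the sum of their judged probabilities;
# the panel recommendation is the mean of those sums.
per_panelist = judgments.sum(axis=1)
cut_score = per_panelist.mean()

print("Per-panelist cut scores:", per_panelist.round(2))
print(f"Recommended passing score: {cut_score:.2f} of {judgments.shape[1]} items")
```

A mode effect of the kind the study probed would appear as systematically different judged probabilities, and hence different recommended cut scores, for paper versus on-screen presentation of the same items.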