Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
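For context on the framework this study assumes: a unidimensional IRT model (here the two-parameter logistic, one common choice) gives the probability of a correct response as a function of a single ability θ, and a CAT typically selects the pool item with maximum Fisher information at the current ability estimate. A minimal sketch — function names and the 2PL choice are illustrative, not taken from the paper:

```python
import math

def prob_correct(theta, a, b):
    """2PL item response function: probability of a correct answer
    for ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta; a CAT
    commonly administers the pool item maximizing this quantity."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# An item is most informative near its own difficulty: information
# peaks at theta == b and falls off as |theta - b| grows.
```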
van Roosmalen, Willem M. M. – 1983
The construction of objective tests for native language reading comprehension is described. The tests were designed for the early secondary school years in several kinds of schools, vocational and non-vocational. The description focuses on the use of the Rasch model in test development, to develop a large pool of homogeneous items and establish…
Descriptors: Ability Grouping, Difficulty Level, Foreign Countries, Item Banks
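The Rasch calibration mentioned above can be illustrated with a crude one-step difficulty estimate: under the model, an item's difficulty is (up to a centering constant) the negative log-odds of a correct answer in a calibration sample. A hedged sketch — real Rasch estimation iterates jointly over persons and items, which is omitted here:

```python
import math

def rasch_difficulty(p_correct):
    """Crude Rasch item difficulty on the logit scale: negative
    log-odds of the proportion correct in a calibration sample
    (person-ability adjustment and centering are omitted)."""
    return -math.log(p_correct / (1.0 - p_correct))

# Harder items (lower proportion correct) land higher on the
# difficulty scale: rasch_difficulty(0.3) > rasch_difficulty(0.7).
```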
Huntley, Renee M.; Carlson, James E. – 1986
This study compared student performance on language-usage test items presented in two different formats: as discrete sentences and as items embedded in passages. American College Testing (ACT) Program's Assessment experimental units were constructed that presented 40 items in the two different formats. Results suggest item presentation may not…
Descriptors: College Entrance Examinations, Difficulty Level, Goodness of Fit, Item Analysis
Huntley, Renee M.; Welch, Catherine – 1988
A study compared student performance on language-usage test items embedded in a passage when the location of the answer was varied. American College Testing (ACT) Assessment experimental units were constructed that presented 35 items whose sequence of foils was varied so that each foil appeared once as the "no change" option embedded in…
Descriptors: College Entrance Examinations, Difficulty Level, Distractors (Tests), Evaluation Methods
Huntley, Renee M.; Plake, Barbara S. – 1988
The combinational-format item (CFI)--multiple-choice item with combinations of alternatives presented as response choices--was studied to determine whether CFIs were different from regular multiple-choice items in item characteristics or in cognitive processing demands. Three undergraduate Foundations of Education classes (consisting of a total of…
Descriptors: Cognitive Processes, Computer Assisted Testing, Difficulty Level, Educational Psychology
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing PAT performance to those of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing
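The EAP strategy compared above scores an examinee as the posterior mean of ability given the response pattern. A minimal sketch of EAP estimation by numerical quadrature under a 2PL model with a standard-normal prior — the function name, grid, and parameterization are illustrative assumptions, not taken from the study:

```python
import math

def eap_estimate(responses, items, grid=None):
    """Expected a posteriori (EAP) ability estimate: posterior mean
    of theta over a quadrature grid, standard-normal prior.
    responses: list of 0/1 item scores
    items: matching list of (a, b) 2PL parameters"""
    if grid is None:
        grid = [i / 10.0 for i in range(-40, 41)]  # theta in [-4, 4]
    num = den = 0.0
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)  # unnormalized N(0, 1)
        like = 1.0
        for x, (a, b) in zip(responses, items):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            like *= p if x else (1.0 - p)
        w = prior * like
        num += theta * w
        den += w
    return num / den

# Correct responses pull the estimate above the prior mean of zero;
# incorrect responses pull it below.
```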
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Miller, George A. – 1986
In assessing the quality of science teaching for an effort such as the National Assessment of Educational Progress (NAEP), it is important to understand what is meant by scientific thinking--the search for explanations. Instruction should involve higher-order cognitive skill development, but it is difficult to measure reasoning and understanding…
Descriptors: Cognitive Processes, Difficulty Level, Educational Assessment, Educational Testing
Wainer, Howard; Kiely, Gerard L. – 1986
Recent experience with the Computerized Adaptive Test (CAT) has raised a number of concerns about its practical applications. The concerns are principally involved with the concept of having the computer construct the test from a precalibrated item pool, and substituting statistical characteristics for the test developer's skills. Problems with…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Construct Validity
Ward, William C.; And Others – 1986
The keylist format (rather than the conventional multiple-choice format) for item presentation provides a machine-scorable surrogate for a truly free-response test. In this format, the examinee is required to think of an answer, look it up in a long ordered list, and enter its number on an answer sheet. The introduction of keylist items into…
Descriptors: Analogy, Aptitude Tests, Construct Validity, Correlation
Oaster, T. R. F.; And Others – 1986
This study hypothesized that items in the one-question-per-passage format would be less easily answered when administered without their associated contexts than conventional reading comprehension items. A total of 256 seventh and eighth grade students were administered both Forms 3A and 3B of the Sequential Tests of Educational Progress (STEP II).…
Descriptors: Context Effect, Difficulty Level, Grade 7, Grade 8
Legg, Sue M.; Algina, James – 1986
This paper focuses on the questions which arise as test practitioners monitor score scales derived from latent trait theory. Large scale assessment programs are dynamic and constantly challenge the assumptions and limits of latent trait models. Even though testing programs evolve, test scores must remain reliable indicators of progress.…
Descriptors: Difficulty Level, Educational Assessment, Elementary Secondary Education, Equated Scores
Plake, Barbara S.; Wise, Steven L. – 1986
One question regarding the utility of adaptive testing is the effect of individualized item arrangements on examinee test scores. The purpose of this study was to analyze the item difficulty choices by examinees as a function of previous item performance. The examination was a 25-item test of basic algebra skills given to 36 students in an…
Descriptors: Adaptive Testing, Algebra, College Students, Computer Assisted Testing
Wang, Lih Shing; Stansfield, Charles W. – 1988
The manual for administration of the Chinese Proficiency Test contains an overview of the program, including: (1) its history, content, and format; (2) its primary focus and uses; (3) administration procedures, including registration, ordering the test, reporting scores, and billing; (4) the interpretation of test scores based on normative data…
Descriptors: Chinese, Difficulty Level, Item Analysis, Language Proficiency
Maihoff, N. A.; Mehrens, Wm. A. – 1985
A comparison is presented of alternate-choice and true-false item forms used in an undergraduate natural science course. The alternate-choice item is a modified two-choice multiple-choice item in which the two responses are included within the question stem. This study (1) compared the difficulty level, discrimination level, reliability, and…
Descriptors: Classroom Environment, College Freshmen, Comparative Analysis, Comparative Testing