Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Count |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Lord, Frederic M. – 1971
A flexilevel test is found to be inferior to a peaked conventional test for measuring examinees in the middle of the ability range, but superior for examinees at the extremes. Across the entire range of ability, a flexilevel test is far superior to any conventional test that attempts to provide accurate measurement at both extremes. See also ED…
Descriptors: Ability, Comparative Analysis, Difficulty Level, Guessing (Tests)
Campbell, Jeff H. – College Board Review, 1978
A professor's 10-year experience on the development committee for the CLEP Humanities Test provides the basis for this supportive assessment of the College Level Examination Program. Changes in tests, question selection, the value of pre-testing, and other responsibilities of development committees are discussed. (LBH)
Descriptors: College Bound Students, College Credits, College Entrance Examinations, Equivalency Tests
Morse, David T. – Florida Vocational Journal, 1978
Presents guidelines for constructing tests which accurately measure a student's cognitive skills and performance in a particular course. The advantages and disadvantages of two types of test items are listed (selected response and constructed response items). Both poor and good examples are given and general rules for test item writing are…
Descriptors: Cognitive Development, Criterion Referenced Tests, Essay Tests, Multiple Choice Tests
Peer reviewed: Fuchs, Douglas; And Others – Exceptional Children, 1987
This study analyzed user manuals and technical supplements of 27 aptitude and achievement tests to determine whether disabled children were included in development of the tests' norms, items, reliability indices, and validity indices. Most test developers provided scant evidence that their tests were valid for use with disabled students. (JDD)
Descriptors: Achievement Tests, Aptitude Tests, Disabilities, Elementary Secondary Education
Peer reviewed: Erickson, Gerald – Classical Outlook, 1987
Describes grading and scoring procedures for advanced placement examinations in two Latin courses: Vergil and Catullus-Horace. Explanations for cited test items are offered. (CB)
Descriptors: Achievement Tests, Advanced Placement, Classical Literature, Grading
Peer reviewed: Linn, Robert L.; Drasgow, Fritz – Educational Measurement: Issues and Practice, 1987
This article discusses the application of the Golden Rule procedure to items of the Scholastic Aptitude Test. Using item response theory, the analyses indicate that the Golden Rule procedures are ineffective in detecting biased items and may undermine the reliability and validity of tests. (Author/JAZ)
Descriptors: College Entrance Examinations, Difficulty Level, Item Analysis, Latent Trait Theory
Peer reviewed: Okeafor, Karen R.; And Others – Journal of Experimental Education, 1987
An effort to operationalize the logic-of-confidence construct through item development, factor analysis, and reliability and validity testing is described. Data from public school and higher education teachers indicate correlations among teacher logic of confidence, status obeisance, professional zone of acceptance, and autonomy. (TJH)
Descriptors: Beliefs, Factor Analysis, Higher Education, Item Analysis
Peer reviewed: Ducroquet, Lucile – British Journal of Language Teaching, 1986
Meaningful and relevant tests of oral competence in foreign languages must address problems that impede communicative competence, such as lack of student motivation, unimaginative questions, inhibiting personal questions, and pictorial tests. Examples of test questions are presented in French. (CB)
Descriptors: Communicative Competence (Languages), Evaluation Criteria, French, Language Tests
Peer reviewed: Poggio, John P.; And Others – Educational and Psychological Measurement, 1987
College faculty served as judges to rate the instructional validity of items on the National Teacher Examinations Core Battery. The ratings were examined in relation to actual test performance, as well as panelists' ratings of item difficulty and relevance. (Author/GDC)
Descriptors: Beginning Teachers, Content Validity, Difficulty Level, Education Majors
Peer reviewed: Aesche, Darryl W.; Parslow, Graham R. – Biochemical Education, 1988
Discusses the use of a bank of about 9,000 test items in a computer-assisted instructional program at Adelaide University (South Australia). Describes the program and outlines the steps in producing an instructional program. (TW)
Descriptors: Biochemistry, College Science, Computer Assisted Instruction, Computer Assisted Testing
Peer reviewed: Harrison, David A. – Journal of Educational Statistics, 1986
Multidimensional item response data were created. The strength of a general factor, the number of common factors, the distribution of items loading on common factors, and the number of items in simulated tests were manipulated. LOGIST effectively recovered both item and trait parameters in nearly all of the experimental conditions. (Author/JAZ)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Correlation
Peer reviewed: Smith, Pauline – British Journal of Educational Psychology, 1986
Examines traditional psychometric approaches to measuring intelligence and recent work by cognitive psychologists to develop a rationale for a non-verbal reasoning test for 10- to 11-year-olds. Recent studies providing basis for analyzing structure of test items are outlined, and benefits of analyzing items at a sub-type level are discussed.…
Descriptors: Classification, Cognitive Ability, Cognitive Measurement, Cognitive Processes
Peer reviewed: Chansarkar, B. A. – Assessment and Evaluation in Higher Education, 1985
A British polytechnic's first experience with making one segment of a major examination known to students ahead of time is discussed, and an empirical comparison of this format with the traditional test type is presented. (MSE)
Descriptors: Business Administration Education, Comparative Analysis, Evaluation Methods, Foreign Countries
Peer reviewed: Klimko, Ivan P. – Journal of Experimental Education, 1984
The influence of item arrangement on students' total test performance was investigated. Two hierarchical multiple regression analyses were used to analyze the data. The main finding within the context of this study was that item arrangements based on item difficulties did not influence achievement examination performance. (Author/DWH)
Descriptors: Achievement Tests, Cognitive Style, College Students, Difficulty Level
Peer reviewed: Cleary, Vincent J. – Classical Outlook, 1986
Analyzes several questions, student answers, and graders' evaluations of student responses on each of two advanced placement examinations: the one that tests Vergil and the one that tests Catullus and Horace. The percentages of participants scoring at each grade level of the exam are also presented. (SED)
Descriptors: Advanced Placement Programs, Grading, Language Tests, Latin


