Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 18 |
| Since 2022 (last 5 years) | 120 |
| Since 2017 (last 10 years) | 262 |
| Since 2007 (last 20 years) | 435 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Test Format | 956 |
| Test Items | 956 |
| Test Construction | 363 |
| Multiple Choice Tests | 260 |
| Foreign Countries | 227 |
| Difficulty Level | 199 |
| Higher Education | 179 |
| Computer Assisted Testing | 160 |
| Item Response Theory | 151 |
| Item Analysis | 149 |
| Scores | 146 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 62 |
| Teachers | 47 |
| Researchers | 32 |
| Students | 15 |
| Administrators | 13 |
| Parents | 6 |
| Policymakers | 5 |
| Community | 1 |
| Counselors | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 27 |
| Canada | 15 |
| Germany | 15 |
| Australia | 13 |
| Israel | 13 |
| Japan | 12 |
| Netherlands | 10 |
| United Kingdom | 10 |
| United States | 9 |
| Arizona | 6 |
| Iran | 6 |
Laws, Policies, & Programs
| Law, policy, or program | Records |
| --- | --- |
| Individuals with Disabilities… | 2 |
| No Child Left Behind Act 2001 | 2 |
| Elementary and Secondary… | 1 |
| Head Start | 1 |
| Job Training Partnership Act… | 1 |
| Perkins Loan Program | 1 |
Peer reviewed: Brindley, Geoff – Annual Review of Applied Linguistics, 1998
This review of research on assessment of second-language listening abilities looks at some testing issues and challenges (assessing higher-level skills, confounding of skills, assessing listening in oral interaction, authenticity), discusses assessment methods and techniques (test administration, item formats), and considers potential applications…
Descriptors: Computer Assisted Testing, Educational Technology, Language Research, Language Tests
Huntley, Renee M.; Welch, Catherine J. – 1993
Writers of mathematics test items, especially those who write for standardized tests, are often advised to arrange the answer options in logical order, usually ascending or descending numerical order. In this study, 32 mathematics items were selected for inclusion in four experimental pretest units, each consisting of 16 items. Two versions…
Descriptors: Ability, College Entrance Examinations, Comparative Testing, Distractors (Tests)
Stansfield, Charles W.; Kahl, Stuart R. – 1998
The Massachusetts Comprehensive Assessment System (MCAS) is the new Massachusetts state assessment program that is being implemented in response to state education reform legislation. The paper describes the early efforts of the state Department of Education (MDOE), its prime contractor for development of the MCAS (Advanced Systems in Measurement…
Descriptors: Educational Change, Elementary School Students, Elementary School Teachers, Elementary Secondary Education
Owen, K. – 1989
Sources of item bias located in characteristics of the test item were studied in a reasoning test developed in South Africa. Subjects were 1,056 White, 1,063 Indian, and 1,093 Black students from standard 7 in Afrikaans and English schools. Format and content of the 85-item Reasoning Test were manipulated to obtain information about bias or…
Descriptors: Afrikaans, Black Students, Cognitive Tests, Comparative Testing
Cohen, Paul, Comp. – 1985
One in a series of reading, writing, and mathematics subject booklets offered to acquaint teachers with the skills tested in each content area on the New Jersey High School Proficiency Test, this booklet on reading presents material that can be used in lesson planning and preparation of materials to ensure coverage of all skills throughout the…
Descriptors: Competency Based Education, Critical Reading, Evaluation Criteria, Grade 9
Marso, Ronald N. – 1985
A questionnaire concerning the use of teacher made tests was completed by 123 public school teachers in Ohio. Five testing practices were examined: (1) number of teacher made tests given during a course or school year; (2) types of test items most commonly used; (3) sources used in obtaining test items; (4) information used in assigning grades;…
Descriptors: Achievement Tests, Constructed Response, Educational Testing, Elementary Secondary Education
Lunz, Mary E.; And Others – 1989
A method for understanding and controlling the multiple facets of an oral examination (OE) or other judge-intermediated examination is presented and illustrated. This study focused on determining the extent to which the facets model (FM) analysis constructs meaningful variables for each facet of an OE involving protocols, examiners, and…
Descriptors: Computer Software, Difficulty Level, Evaluators, Examiners
Siskind, Teri G.; Rose, Janet S. – 1986
The Charleston County School District (CCSD) has recently begun development of criterion-referenced tests (CRT) in different subject areas and for different grade levels. This paper outlines the process that CCSD followed in the development of math and language arts tests for grades one through eight and area exams for required high school…
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Educational Objectives, Educational Testing
Velanoff, John – 1987
This report describes courseware for comprehensive computer-assisted testing and instruction. With this program, a personal computer can be used to: (1) generate multiple test versions to meet test objectives; (2) create study guides for self-directed learning; and (3) evaluate student and teacher performance. Numerous multiple-choice examples,…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Uses in Education, Courseware
Eignor, Daniel R. – 1985
The feasibility of pre-equating, or establishing conversions from raw to scaled scores through the use of pretest data before operationally administering a test, was investigated for the Scholastic Aptitude Test (SAT). Item-response theory based equating methods were used to estimate item parameters on SAT pretest data, instead of using final form…
Descriptors: College Entrance Examinations, Equated Scores, Estimation (Mathematics), Feasibility Studies
Peer reviewed: Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT), in which students choose the difficulty level of their test items, was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Read, John; Nation, Paul – 1986
A review of the literature on a variety of issues related to testing vocabulary knowledge in a second language addresses these topics: problems in estimating vocabulary size, including the related questions of what constitutes a word, how a sample should be selected, and what are the criteria for knowing a word; sampling the basic and specialized…
Descriptors: Achievement Tests, Check Lists, Classification, Comparative Analysis
Vocational Technical Education Consortium of States, Atlanta, GA. – 1984
A project was conducted to develop vocational education tests for use in Georgia secondary schools, specifically for welding, machine shop, and sheet metal courses. The project team developed an outline of an assessment model that included the following components: (1) select a program for use in developing test items; (2) verify duties, tasks,…
Descriptors: Item Analysis, Job Skills, Machine Tool Operators, Machine Tools
Arter, Judith A.; Estes, Gary D. – 1985
This handbook is intended for persons who might develop or use an item bank to support their testing program. An item bank is defined as a "large collection of distinguishable test items," with "large" explained as meaning that the number of items available is greater than the number to be used in any one test. The first section of the handbook…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Software, Curriculum