Publication Date
| In 2026 | 0 |
| Since 2025 | 389 |
| Since 2022 (last 5 years) | 1887 |
| Since 2017 (last 10 years) | 4031 |
| Since 2007 (last 20 years) | 6737 |
Audience
| Practitioners | 644 |
| Teachers | 455 |
| Researchers | 440 |
| Administrators | 126 |
| Policymakers | 68 |
| Students | 68 |
| Counselors | 26 |
| Parents | 24 |
| Community | 10 |
| Support Staff | 5 |
| Media Staff | 3 |
Location
| Turkey | 603 |
| Australia | 339 |
| Canada | 254 |
| China | 180 |
| Indonesia | 147 |
| United States | 143 |
| United Kingdom | 130 |
| Germany | 116 |
| Taiwan | 111 |
| California | 109 |
| Spain | 107 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 3 |
| Does not meet standards | 2 |
Leung, Chi K.; Chang, Hua H.; Hau, Kit T. – 1999
An a-stratified design (H. Chang and Z. Ying, 1997) is a new concept proposed to address the issues of item security and pool utilization in testing. It has been demonstrated to be effective in lowering the test overlap rate and improving the use of the entire pool when content constraints are not main concerns. However, it cannot really solve the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
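The Leung, Chang, and Hau entry above concerns a-stratified adaptive testing, in which the item pool is cut into strata by ascending discrimination (a) and early stages draw from the low-a strata while matching item difficulty (b) to the running ability estimate. The sketch below is only an illustration of that idea under stated assumptions; the item fields, stratum count, and crude ability update are not the authors' procedure.

```python
# Illustrative sketch of a-stratified item selection in the spirit of
# Chang and Ying (1997). The Item fields, stratum count, and the crude
# ability update are assumptions for illustration only.
import math
import random
from dataclasses import dataclass

@dataclass
class Item:
    item_id: int
    a: float  # discrimination parameter
    b: float  # difficulty parameter

def stratify(pool, n_strata):
    """Sort the pool by ascending a and cut it into n_strata slices."""
    ordered = sorted(pool, key=lambda it: it.a)
    size = len(ordered) // n_strata
    return [ordered[k * size:(k + 1) * size] for k in range(n_strata)]

def a_stratified_test(pool, true_theta, n_strata=4, items_per_stage=5):
    """Administer items_per_stage items from each stratum, low-a strata first."""
    theta = 0.0          # provisional ability estimate
    administered = []
    for stratum in stratify(pool, n_strata):
        available = list(stratum)
        for _ in range(items_per_stage):
            # pick the unused item in this stratum whose b is closest to theta
            item = min(available, key=lambda it: abs(it.b - theta))
            available.remove(item)
            administered.append(item)
            # simulate a response with a 1PL-style model and nudge theta;
            # a real implementation would re-estimate theta (e.g., MLE or EAP)
            p_correct = 1.0 / (1.0 + math.exp(-(true_theta - item.b)))
            theta += 0.3 if random.random() < p_correct else -0.3
    return administered

random.seed(0)
pool = [Item(i, a=random.uniform(0.4, 2.0), b=random.gauss(0.0, 1.0)) for i in range(200)]
test = a_stratified_test(pool, true_theta=0.5)
print([round(it.a, 2) for it in test])  # discriminations rise across stages
```

Because high-discrimination items are held back until later stages, no single subset of "best" items is overexposed, which is the pool-utilization point the abstract raises.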
Mislevy, Robert J.; Steinberg, Linda S.; Almond, Russell G. – 1999
Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in an assessment design…
Descriptors: Educational Assessment, Evaluation Methods, Higher Education, Models
Thomas, Susan J. – 1999
Creating a survey that asks the right questions at a level appropriate for the intended audience is a difficult task. This guide is designed to support educators who want to be confident that the data they gather will be useful. The guide is organized according to the developmental steps in creating a survey. Individual chapters correspond to the…
Descriptors: Data Collection, Planning, Questionnaires, Research Design
Luecht, Richard M. – 2000
Computerized testing has created new challenges for the production and administration of test forms. This paper describes a multi-stage, testlet-based framework for test design, assembly, and administration called computer-adaptive sequential testing (CAST). CAST is a structured testing approach that is amenable to both adaptive and mastery…
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Test Construction
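The Luecht entry above describes computer-adaptive sequential testing (CAST), a multistage design built from preassembled testlets. The routing sketch below is a minimal illustration under stated assumptions; the panel layout, module lengths, cut scores, and percent-correct scoring are placeholders, not the framework's actual specification.

```python
# Minimal multistage routing sketch in the spirit of a testlet-based CAST
# panel: each stage maps a difficulty level to a preassembled module, and the
# examinee is routed by cumulative score. Cut scores and module names are
# illustrative assumptions.
PANEL = [
    {"M": ("routing", 10)},                                             # stage 1
    {"E": ("easy-2", 10), "M": ("medium-2", 10), "H": ("hard-2", 10)},  # stage 2
    {"E": ("easy-3", 10), "M": ("medium-3", 10), "H": ("hard-3", 10)},  # stage 3
]

def route(cumulative_pct):
    """Choose the next module level from assumed cumulative cut scores."""
    if cumulative_pct < 0.45:
        return "E"
    if cumulative_pct < 0.70:
        return "M"
    return "H"

def administer(score_module):
    """Walk one examinee through the panel; score_module returns items correct."""
    path, correct, attempted = [], 0, 0
    level = "M"                       # everyone starts in the routing module
    for stage in PANEL:
        name, length = stage[level]
        path.append(name)
        correct += score_module(name, length)
        attempted += length
        level = route(correct / attempted)
    return path

# Example: an examinee who gets about 80% of any module correct is routed
# to the harder modules at stages 2 and 3.
print(administer(lambda name, length: int(0.8 * length)))
```

Routing whole testlets rather than single items is what makes the design amenable to both adaptive and mastery testing, since each module can be reviewed and assembled to content specifications in advance.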
Peer reviewed: Dudycha, Arthur L.; Carpenter, James B. – Journal of Applied Psychology, 1973
In this study, three structural characteristics--stem format, inclusive versus specific distracters, and stem orientation--were selected for experimental manipulation, while the number of alternatives, the number of correct answers, and the order of items were experimentally controlled. (Author)
Descriptors: Discriminant Analysis, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed: Kruger, Irwin – Journal of Educational and Psychological Measurement, 1974
Descriptors: Computer Programs, Item Banks, Multiple Choice Tests, Test Construction
Peer reviewed: Smith, A. G. – Australian Science Teachers Journal, 1972
Presents the theoretical advantages of banks of test items from which tests with pre-determined characteristics can be constructed, with particular emphasis on the possibility of providing comparable achievement data concerning students from different schools without forcing all to take exactly the same test. Reviews some related literature. (AL)
Descriptors: Achievement Tests, Evaluation, Secondary School Science, Test Construction
Peer reviewed: Layton, Frances – Alberta Journal of Educational Research, 1973
The purpose of this study was to test a short form of the Stanford-Binet, Form L-M, with a group covering a wide range of ages and ability levels, in an attempt to reduce the time involved in administering some of the S-B tests without sacrificing the reported accuracy. (Author/CB)
Descriptors: Intelligence Tests, Scoring Formulas, Tables (Data), Test Construction
Clarke, Gertrude M. – NJEA Review, 1972
Some students study, and some learn, in preparation for exams; to increase their number, the author (a high school teacher) presents a restructured exam program. (SP)
Descriptors: Educational Testing, Objective Tests, Study, Test Construction
Deterline, William A. – NSPI Journal, 1971
A discussion of the need to utilize a training approach based on job performance requirements and criterion tests. (Author/AK)
Descriptors: Criterion Referenced Tests, Job Training, Test Construction, Testing Problems
Peer reviewed: Roberts, Don – English in Australia, 1971
The most reliable test is the student-made, or "question," test. Suggestions for administering and grading this type of test are presented. (AF)
Descriptors: Educational Testing, Questioning Techniques, Test Construction, Test Validity
Exceptional Parent, 1972
Reviewed are a definition of intelligence level, the origin of intelligence tests, technical points concerning intelligence test construction, and individual intelligence tests. (CB)
Descriptors: Intelligence, Intelligence Tests, Literature Reviews, Psychological Testing
Peer reviewed: DeBlassie, Richard R. – Clearing House, 1972
A de-emphasis on the use of tests, in and of themselves, will minimize a great deal of the anxiety that typically accompanies the test-taking situation for school children. (Author)
Descriptors: Anxiety, Child Psychology, Elementary School Students, Test Construction
Rogers, Eric M. – American Journal of Physics, 1969
Discusses the sociology of examinations, their interaction with teachers, with aims and philosophy of teaching programs, and with students. Emphasizes the need to (1) construct valid tests, (2) test for understanding instead of memory, and (3) use meaningful grading procedures. (LC)
Descriptors: Educational Testing, Evaluation, Grading, Physics
Peer reviewed: Berdie, Frances S. – Journal of Educational Measurement, 1971
Descriptors: Community Attitudes, Evaluation, Public Opinion, Test Construction


