Publication Date

| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 81 |
| Since 2022 (last 5 years) | 449 |
| Since 2017 (last 10 years) | 1237 |
| Since 2007 (last 20 years) | 2511 |
Audience

| Audience | Records |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location

| Location | Records |
| --- | --- |
| Canada | 134 |
| Turkey | 130 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| United Kingdom | 51 |
| Germany | 50 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 34 |
What Works Clearinghouse Rating

| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Mady, Callie – Canadian Journal of Applied Linguistics / Revue canadienne de linguistique appliquée, 2007
Recently arrived English as a second language (ESL) students were compared to their unilingual and multilingual Canadian-born peers on measures of French proficiency. All of the participants were enrolled in secondary core French (CF)--the ESL students were studying introductory French, whereas the Canadian-born students were in Grade 9 CF, their…
Descriptors: Verbal Communication, Speech Communication, Listening Comprehension Tests, Multilingualism
Sykes, Robert C.; And Others – 1996
The presence of multiple readings of a student response to a constructed-response item in a large-scale assessment requires a procedure for combining the ratings to obtain an item score. An alternative to the averaged item ratings that are usually used is the summing of ratings for each item. This study evaluated the effect of summing as opposed…
Descriptors: Constructed Response, High Schools, Item Response Theory, Mathematics Education
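The summing-versus-averaging contrast the study evaluates can be sketched in a few lines; the rubric, rater counts, and scores below are hypothetical, not taken from the study:

```python
# Two ways of combining multiple rater scores per constructed-response item:
# averaging vs. summing. All data here are hypothetical.

def average_item_score(ratings):
    """Combine ratings for one item by averaging them."""
    return sum(ratings) / len(ratings)

def summed_item_score(ratings):
    """Combine ratings for one item by summing them, which keeps each
    separate reading's information and widens the score scale."""
    return sum(ratings)

# Two raters score each of three items for one examinee (0-4 rubric).
ratings_by_item = [[3, 4], [2, 2], [4, 3]]

avg_total = sum(average_item_score(r) for r in ratings_by_item)  # 9.0
sum_total = sum(summed_item_score(r) for r in ratings_by_item)   # 18
```

The two totals rank examinees identically when every item gets the same number of readings; the study's question is what happens to score properties under summing when they do not.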
Kehoe, Jerard – 1995
This digest describes some basics of the construction of multiple-choice tests. As a rule, the test maker should strive for test item stems (introductory questions or incomplete statements at the beginning of each item that are followed by the options) that are clear and parsimonious, answers that are unequivocal and chosen by the students who do…
Descriptors: Culture Fair Tests, Distractors (Tests), Educational Assessment, Item Bias
Adams, Raymond J.; Khoo, Siek-Toon – 1993
The Quest program offers a comprehensive test and questionnaire analysis environment by providing a data analyst (a computer program) with access to the most recent developments in Rasch measurement theory, as well as a range of traditional analysis procedures. This manual helps the user use Quest to construct and validate variables based on…
Descriptors: Computer Assisted Testing, Computer Software, Estimation (Mathematics), Foreign Countries
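The Rasch model underlying Quest gives the probability of a correct response as a logistic function of the gap between person ability and item difficulty; a minimal sketch (the symbols `theta` and `b` are the conventional ones, not taken from the manual):

```python
import math

def rasch_probability(theta, b):
    """Rasch model probability of a correct response:
    P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5.
p_at_difficulty = rasch_probability(theta=0.0, b=0.0)
```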
Garcia-Perez, Miguel A.; Frary, Robert B. – 1991
A new approach to the development of the item characteristic curve (ICC), which expresses the functional relationship between the level of performance on a given task and an independent variable that is relevant to the task, is presented. The approach focuses on knowledge states, decision processes, and other circumstances underlying responses to…
Descriptors: Decision Making, Equations (Mathematics), Graphs, Guessing (Tests)
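For reference, the conventional three-parameter logistic ICC that such approaches take as a starting point (this is the standard 3PL form, not the authors' new knowledge-state model):

```python
import math

def three_pl_icc(theta, a, b, c):
    """Standard three-parameter logistic ICC:
    P = c + (1 - c) / (1 + exp(-a * (theta - b))),
    with a = discrimination, b = difficulty, c = lower asymptote
    (often interpreted as a guessing floor)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# With a nonzero guessing parameter, the curve never drops below c,
# even for very low ability.
p_low_ability = three_pl_icc(theta=-10.0, a=1.2, b=0.0, c=0.25)
```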
Greenwood, John C. – 1991
Tests are intended to assess performance of students. However, tests can also be used as an educational tool. Current patterns in education have produced a group of students with weak learning skills, limited confidence in their own abilities, an underlying hostility or distrust of the educational system, and an inhibited attitude towards…
Descriptors: Answer Keys, Convergent Thinking, Error Correction, Feedback
Roberts, David C. – 1993
The differences between multiple-choice, simulated, and concurrent tests of software-skills proficiency are discussed. For three basic human-resource functions, the advantages of concurrent tests (i.e., those that use the actual application software) include true performance-based assessment, unconstrained response alternatives, and increased job…
Descriptors: Competence, Computer Literacy, Computer Oriented Programs, Computer Software
Hanson, Bradley A. – 1990
Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
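Of the three estimators, the kernel method is the simplest to illustrate: jagged observed frequencies are replaced by a weighted mixture of Gaussian bumps centered at each score point. A minimal sketch over integer scores (the bandwidth and frequencies below are made up):

```python
import math

def kernel_smooth(freqs, bandwidth=1.0):
    """Gaussian-kernel smoothing of an observed test-score frequency
    distribution over integer score points 0..len(freqs)-1; returns
    smoothed relative frequencies that sum to 1."""
    scores = range(len(freqs))
    smoothed = []
    for x in scores:
        weight = sum(f * math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                     for s, f in zip(scores, freqs))
        smoothed.append(weight)
    total = sum(smoothed)
    return [v / total for v in smoothed]

observed = [0, 2, 9, 1, 8, 0, 5]   # jagged observed frequencies
smooth = kernel_smooth(observed)   # smoother estimate of the population shape
```

The polynomial and beta-binomial methods instead impose smoothness through a fitted functional form rather than local weighting.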
Withers, Graeme – 1991
This book is directed toward all persons taking tests and examinations at school, for a new job, for college or university entrance, or for promotion. Topics discussed include: things students should ask themselves before taking a test, the difference between tests and examinations, ways of feeling good about a test or examination, tactics and…
Descriptors: Achievement Tests, Books, College Entrance Examinations, Guidelines
Lord, Frederic M. – Psychometrika, 1974
Omitted items cannot properly be treated as wrong when estimating ability and item parameters. A convenient method for utilizing the information provided by omissions is presented. Theoretical and empirical justifications are presented for the estimates obtained by the new method. (Author)
Descriptors: Academic Ability, Guessing (Tests), Item Analysis, Latent Trait Theory
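One common convention for using the information in omits, rather than scoring them wrong, is to credit an omitted k-choice item at the chance rate 1/k; the sketch below illustrates that idea (it is a standard convention, not necessarily Lord's exact estimator):

```python
def scored_responses(responses, n_choices):
    """Score a response string where '1' = right, '0' = wrong, 'O' = omitted.
    Omits receive fractional credit 1/k for a k-choice item instead of 0,
    so omitting is not conflated with answering wrong."""
    credit = {'1': 1.0, '0': 0.0, 'O': 1.0 / n_choices}
    return [credit[r] for r in responses]

# Four-choice items: two right, two wrong, two omitted.
score = sum(scored_responses("110O0O", n_choices=4))  # 2 + 2 * 0.25 = 2.5
```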
von Davier, Alina A.; Wilson, Christine – ETS Research Report Series, 2005
This paper discusses the assumptions required by the item response theory (IRT) true-score equating method (with Stocking & Lord, 1983; scaling approach), which is commonly used in the nonequivalent groups with an anchor data-collection design. More precisely, this paper investigates the assumptions made at each step by the IRT approach to…
Descriptors: Item Response Theory, True Scores, Equated Scores, Test Items
Holland, Paul W.; von Davier, Alina A.; Sinharay, Sandip; Han, Ning – ETS Research Report Series, 2006
This paper focuses on the Non-Equivalent Groups with Anchor Test (NEAT) design for test equating and on two classes of observed--score equating (OSE) methods--chain equating (CE) and poststratification equating (PSE). These two classes of methods reflect two distinctly different ways of using the information provided by the anchor test for…
Descriptors: Equated Scores, Test Items, Statistical Analysis, Comparative Analysis
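The chain-equating idea is easiest to see in its linear form: link X to the anchor A in the group that took X, link A to Y in the group that took Y, and compose the two links. A sketch with hypothetical group data (the linear, mean/standard-deviation version of CE, not the full equipercentile method):

```python
import statistics

def linear_link(x, from_scores, to_scores):
    """Linear linking function between two score scales, matching the
    mean and standard deviation observed in a single group."""
    m_from, s_from = statistics.mean(from_scores), statistics.pstdev(from_scores)
    m_to, s_to = statistics.mean(to_scores), statistics.pstdev(to_scores)
    return m_to + (s_to / s_from) * (x - m_from)

def chain_equate(x, x_g1, a_g1, a_g2, y_g2):
    """Chained linear equating for a NEAT design: link X to the anchor A
    in group 1, link A to Y in group 2, and compose the two links."""
    a = linear_link(x, x_g1, a_g1)   # group 1: X -> A
    return linear_link(a, a_g2, y_g2)  # group 2: A -> Y

# Hypothetical data: (X, anchor) in group 1; (anchor, Y) in group 2.
x_g1 = [10, 12, 14, 16, 18]
a_g1 = [5, 6, 7, 8, 9]
a_g2 = [6, 7, 8, 9, 10]
y_g2 = [20, 24, 28, 32, 36]

equated = chain_equate(14, x_g1, a_g1, a_g2, y_g2)
```

Poststratification equating instead uses the anchor to reweight each group's score distribution toward a common synthetic population before equating X to Y directly.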
Seong, Tae-Je; Subkoviak, Michael J. – 1987
The purpose of this research was to reinvestigate the accuracy of three item bias detection procedures: (1) Linn and Harnisch's pseudo-IRT(Z) method; (2) Camilli's chi-square technique; and (3) Angoff's revised transformed item difficulty method. These methods are applied when the minority group sample size is too small to obtain stable estimates…
Descriptors: Blacks, Difficulty Level, Higher Education, Item Analysis
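Angoff's method maps each item's proportion correct onto an inverse-normal "delta" scale, conventionally centered at 13 with unit 4, so that group-by-group deltas can be plotted and outlying items flagged; a sketch of the transform itself (the p-values below are hypothetical):

```python
from statistics import NormalDist

def delta(p):
    """Angoff's transformed item difficulty: map proportion correct p to
    the delta scale, delta = 13 - 4 * z(p), where z is the standard
    normal quantile. p = 0.5 gives 13; harder items get larger deltas."""
    return 13.0 - 4.0 * NormalDist().inv_cdf(p)

# Deltas for three hypothetical items in one group; in the full method,
# items far from the two groups' delta-plot trend line are flagged.
group_deltas = [delta(p) for p in (0.8, 0.5, 0.3)]
```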
Jannarone, Robert J. – 1986
A variety of locally dependent models are introduced having individual difference parameters that may be interpreted as reflecting effective learning abilities. One version is a univariate extension of the Rasch model with a Markov property: the probability that a given individual will pass an item depends on previous items only through the…
Descriptors: Academic Aptitude, Bayesian Statistics, Cognitive Ability, Estimation (Mathematics)
Wilhite, Stephen C. – 1984
This experiment examined the effects of headings and adjunct questions embedded in expository text on the delayed multiple-choice test performance of college students. Subjects in the headings-present group performed significantly better on the retention test than did the subjects in the headings-absent group. The main effect of adjunct questions…
Descriptors: Advance Organizers, Cognitive Processes, Cognitive Style, Higher Education