Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Peer reviewed: Drasgow, Fritz; And Others – Applied Measurement in Education, 1996
A general approach to the identification of individuals mismeasured by a standardized psychological test is reviewed. The method, originated by M. V. Levine and F. Drasgow (1988), has the advantage of statistical optimality. Use of optimal methods requires a psychometric model for normal responding and one for aberrant responding. (SLD)
Descriptors: Identification, Item Response Theory, Measurement Techniques, Models
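The approach above contrasts a psychometric model for normal responding with one for aberrant responding. As a simpler illustration in the same spirit (the widely used standardized log-likelihood person-fit index l_z, not the optimal statistic the article reviews), mismeasured examinees can be flagged once model-implied response probabilities are available; the function name and example probabilities below are illustrative:

```python
import math

def lz_person_fit(responses, p):
    """Standardized log-likelihood person-fit index (l_z).

    responses : 0/1 item scores for one examinee
    p         : model-implied probabilities of a correct response
                (e.g., from a fitted IRT model)

    Values near 0 are consistent with the model; large negative
    values flag aberrant (potentially mismeasured) patterns.
    """
    # Log-likelihood of the observed response pattern.
    l0 = sum(u * math.log(pi) + (1 - u) * math.log(1 - pi)
             for u, pi in zip(responses, p))
    # Mean and variance of the log-likelihood under the model.
    mean = sum(pi * math.log(pi) + (1 - pi) * math.log(1 - pi) for pi in p)
    var = sum(pi * (1 - pi) * math.log(pi / (1 - pi)) ** 2 for pi in p)
    return (l0 - mean) / math.sqrt(var)

# Illustrative probabilities: items ordered from easy to hard.
p = [0.9, 0.8, 0.7, 0.3, 0.2]
consistent = lz_person_fit([1, 1, 1, 0, 0], p)  # easy right, hard wrong
aberrant = lz_person_fit([0, 0, 0, 1, 1], p)    # easy wrong, hard right
```

The aberrant pattern (missing easy items while answering hard ones) yields a strongly negative l_z, which is the signature such indices are designed to detect.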
Peer reviewed: Tatsuoka, Kikumi – Applied Measurement in Education, 1996
Application of person-fit statistics to cognitive diagnosis requires special efforts to detect normal and usual response patterns resulting from sources of misconception that are frequently observed among students. This study shows a solution for the problem by introducing an extension of a person-fit statistic developed by K. Tatsuoka (1985).…
Descriptors: Classification, Cognitive Tests, Diagnostic Tests, Item Response Theory
Peer reviewed: Hiltner, Arthur A.; Loyland, Mary O. – Journal of Education for Business, 1998
Accounting faculty (n=180) rated items for effectiveness in assessing research, teaching, and service. They perceived a strong role for department chairs in all three areas. Their ratings of their institutions' assessment programs were not high. (SK)
Descriptors: Accounting, College Faculty, Department Heads, Evaluation Utilization
Peer reviewed: Rogers, W. Todd; Ndalichako, Joyce – Educational and Psychological Measurement, 2000
Determined the robustness of several types of scoring (number-right; one-, two-, and three-parameter item response; finite-state; and partial-credit) with respect to violation of the assumptions of equally classifiable options and option independence made in finite-state scoring, using analysis of the test responses of 1,232 high school seniors. (SLD)
Descriptors: Classification, High School Seniors, High Schools, Item Response Theory
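As a sketch of how two of the scoring types above relate, the following computes a maximum-likelihood ability estimate under the one-parameter (Rasch) model via Newton-Raphson; under this model the number-right score is a sufficient statistic, so any two patterns with the same raw score receive the same estimate. The function name and item difficulties are hypothetical, not from the study:

```python
import math

def rasch_theta_mle(responses, difficulties, iters=50):
    """ML ability estimate under the one-parameter (Rasch) model.

    responses    : 0/1 item scores for one examinee
    difficulties : Rasch difficulty parameter for each item

    Assumes a mixed (not all-right / all-wrong) pattern so the
    maximum-likelihood estimate is finite.
    """
    theta = 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(u - pi for u, pi in zip(responses, p))  # score residual
        info = sum(pi * (1.0 - pi) for pi in p)            # test information
        theta += grad / info                               # Newton step
    return theta

# Illustrative difficulties; two patterns with the same raw score (3).
b = [-1.0, -0.5, 0.0, 0.5, 1.0]
t1 = rasch_theta_mle([1, 1, 1, 0, 0], b)
t2 = rasch_theta_mle([0, 1, 1, 1, 0], b)  # same estimate as t1
```

Because the gradient depends on the responses only through the raw score, number-right ordering and Rasch ML ordering coincide, which is one reason the two scoring methods can behave similarly in robustness comparisons.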
Peer reviewed: Friedman, Stephen J. – Journal of Educational Measurement, 1999
This volume describes the characteristics and functions of test items, presents editorial guidelines for writing them, offers methods for determining their quality, and compiles a compendium of important issues about test items. (SLD)
Descriptors: Constructed Response, Criteria, Evaluation Methods, Multiple Choice Tests
Hierarchical Classes Models for Three-Way Three-Mode Binary Data: Interrelations and Model Selection
Ceulemans, Eva; Van Mechelen, Iven – Psychometrika, 2005
Several hierarchical classes models can be considered for the modeling of three-way three-mode binary data, including the INDCLAS model (Leenen, Van Mechelen, De Boeck, and Rosenberg, 1999), the Tucker3-HICLAS model (Ceulemans, Van Mechelen, and Leenen, 2003), the Tucker2-HICLAS model (Ceulemans and Van Mechelen, 2004), and the Tucker1-HICLAS model…
Descriptors: Test Items, Models, Vertical Organization, Emotional Response
Hessen, David J. – Psychometrika, 2005
In the present paper, a new family of item response theory (IRT) models for dichotomous item scores is proposed. Two basic assumptions define the most general model of this family. The first assumption is local independence of the item scores given a unidimensional latent trait. The second assumption is that the odds-ratios for all item-pairs are…
Descriptors: Item Response Theory, Scores, Test Items, Models
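The odds-ratio assumption above concerns the 2x2 cross-tables of item pairs. A minimal sketch of the sample odds ratio for a pair of dichotomous items (function name and data are illustrative; it assumes all four cells of the table are nonzero):

```python
def item_pair_odds_ratio(x, y):
    """Sample odds ratio for a pair of dichotomous items.

    x, y : parallel sequences of 0/1 scores across examinees.
    Assumes every cell of the 2x2 cross-table is nonzero.
    """
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)  # both correct
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)  # both incorrect
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    return (n11 * n00) / (n10 * n01)

# Illustrative scores for two items over six examinees.
x = [1, 1, 0, 0, 1, 0]
y = [1, 0, 0, 1, 1, 0]
ratio = item_pair_odds_ratio(x, y)  # (2 * 2) / (1 * 1) = 4.0
```

An odds ratio above 1 indicates the positive item-pair association that unidimensional models typically imply.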
Burt, Gordon – Assessment and Evaluation in Higher Education, 2005
A number of recent articles have claimed strong relationships--i.e., very high "proportions of shared variance"--between pairs of teaching and learning questionnaires. These claims have been the subject of debate and it has emerged that the proportion of shared variance is defined as the complement of Wilks' lambda. The present article argues that…
Descriptors: Questionnaires, Evaluation Methods, Measurement Techniques, Test Items
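If the "proportion of shared variance" is taken to be the complement of Wilks' lambda, it can be computed from the canonical correlations between the two questionnaires; a minimal sketch with illustrative values:

```python
def wilks_lambda(canonical_correlations):
    """Wilks' lambda from the canonical correlations between two
    variable sets: the product of (1 - r_i^2) over all pairs."""
    lam = 1.0
    for r in canonical_correlations:
        lam *= 1.0 - r * r
    return lam

# Illustrative canonical correlations between two questionnaires.
lam = wilks_lambda([0.9, 0.6])  # 0.19 * 0.64 = 0.1216
shared = 1.0 - lam              # "shared variance" = 0.8784
```

Note that 1 - lambda (0.8784) exceeds either squared canonical correlation (0.81 or 0.36), which illustrates how this definition can yield the very high shared-variance figures the article scrutinizes.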
Peer reviewed: Taylor, Annette Kujawski – College Student Journal, 2005
This research examined 2 elements of multiple-choice test construction: balancing the key and the optimal number of options. In Experiment 1 the 3 conditions included a balanced key, overrepresentation of a and b responses, and overrepresentation of c and d responses. The results showed that error-patterns were independent of the key, reflecting…
Descriptors: Comparative Analysis, Test Items, Multiple Choice Tests, Test Construction
Benjamin, Aaron S.; Bird, Randy D. – Journal of Memory and Language, 2006
Rememberers play an active role in learning, not only by committing material more or less faithfully to memory, but also by selecting judicious study strategies (or not). In three experiments, subjects chose whether to mass or space the second presentation of to-be-learned paired-associate terms that were either normatively difficult or easy to…
Descriptors: Metacognition, Memory, Difficulty Level, Test Items
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational Measurement, 2004
A statistical test for the detection of answer copying on multiple-choice tests is presented. The test is based on the idea that the answers of examinees to test items may be the result of three possible processes: (1) knowing, (2) guessing, and (3) copying, but that examinees who do not have access to the answers of other examinees can arrive at…
Descriptors: Multiple Choice Tests, Test Items, Hypothesis Testing, Statistical Distributions
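Statistics of this kind compare the observed number of identical answers between a suspected copier and a source with its distribution for an examinee who responds without copying. A generic sketch (not the authors' exact statistic), assuming per-item match probabilities have been obtained from some response model, computes the exact Poisson-binomial tail probability by dynamic programming:

```python
def match_count_pvalue(match_probs, observed_matches):
    """P(M >= observed_matches), where M is the number of item-level
    answer matches between two examinees and the items match
    independently with the given probabilities (a Poisson-binomial
    null distribution, built by dynamic programming over items).
    """
    dist = [1.0]  # dist[k] = P(exactly k matches so far)
    for q in match_probs:
        new = [0.0] * (len(dist) + 1)
        for k, pk in enumerate(dist):
            new[k] += pk * (1.0 - q)      # this item does not match
            new[k + 1] += pk * q          # this item matches
        dist = new
    return sum(dist[observed_matches:])   # upper tail

# Illustrative: 4 items, each matching by chance with probability 0.5;
# all 4 answers identical is unlikely under the no-copying null.
p_value = match_count_pvalue([0.5] * 4, 4)  # 0.0625
```

A small tail probability indicates more matches than chance responding would plausibly produce, which is the evidence such detection procedures formalize.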
Reckase, Mark D. – Educational Measurement: Issues and Practice, 2006
Schulz (2006) provides a different perspective on standard setting than that provided in Reckase (2006). He also suggests a modification to the bookmark procedure and some models for errors in panelists' judgments that are alternatives to those provided by Reckase. This article provides a response to some of the points made by Schulz and reports some…
Descriptors: Evaluation Methods, Standard Setting, Reader Response, Regression (Statistics)
Spaan, Mary – Language Assessment Quarterly, 2006
This article provides practical advice on the development of test and item specifications. Because my experience has been with English as a second or foreign language, most examples are taken from this field, although they can be applied to other language tests. The article first discusses steps in the process of test development, including…
Descriptors: Feasibility Studies, Language Tests, Language Skills, Test Construction
Nairne, James S.; Kelley, Matthew R. – Journal of Memory and Language, 2004
In the present paper, we develop and apply a technique, based on the logic of process dissociation, for obtaining numerical estimates of item and order information. Certain variables, such as phonological similarity, are widely believed to produce dissociative effects on item and order retention. However, such beliefs rest on the questionable…
Descriptors: Memory, Phonology, Language Processing, Cognitive Tests
Vigneau, Francois; Caissie, Andre F.; Bors, Douglas A. – Intelligence, 2006
Taking into account various models and findings pertaining to the nature of analogical reasoning, this study explored quantitative and qualitative individual differences in intelligence using latency and eye-movement data. Fifty-five university students were administered 14 selected items of the Raven's Advanced Progressive Matrices test. Results…
Descriptors: Eye Movements, Intelligence, Logical Thinking, Individual Differences

