Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 389 |
| Since 2022 (last 5 years) | 1887 |
| Since 2017 (last 10 years) | 4031 |
| Since 2007 (last 20 years) | 6737 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 644 |
| Teachers | 455 |
| Researchers | 440 |
| Administrators | 126 |
| Policymakers | 68 |
| Students | 68 |
| Counselors | 26 |
| Parents | 24 |
| Community | 10 |
| Support Staff | 5 |
| Media Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Turkey | 603 |
| Australia | 339 |
| Canada | 254 |
| China | 180 |
| Indonesia | 147 |
| United States | 143 |
| United Kingdom | 130 |
| Germany | 116 |
| Taiwan | 111 |
| California | 109 |
| Spain | 107 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 3 |
| Does not meet standards | 2 |
Peer reviewed: Velozzo, Craig A.; Lai, Jin-Shei; Mallinson, Trudy; Hauselman, Ellyn – Journal of Outcome Measurement, 2001
Studied how Rasch analysis could be used to reduce the number of items in an instrument while maintaining credible psychometric properties. Applied the approach to the Visual Function-14 developed to measure the need for and outcomes of cataract surgery. Results show how Rasch analysis can be useful in designing modifications of instruments. (SLD)
Descriptors: Item Response Theory, Psychometrics, Test Construction, Test Items
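The item-reduction step described in the Velozzo et al. entry above rests on Rasch fit statistics. The following is a minimal sketch of that kind of screening, not the authors' procedure: it computes the infit mean-square for dichotomous items with numpy, assuming person measures and item difficulties have already been estimated, and all data are simulated.

```python
import numpy as np

def rasch_prob(theta, beta):
    """P(correct) under the dichotomous Rasch model, persons x items."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))

def infit_mnsq(responses, theta, beta):
    """Information-weighted (infit) mean-square fit statistic per item.

    responses: (n_persons, n_items) 0/1 matrix
    theta:     (n_persons,) estimated person measures
    beta:      (n_items,) estimated item difficulties
    """
    p = rasch_prob(theta, beta)
    w = p * (1.0 - p)                      # model variance of each response
    resid_sq = (responses - p) ** 2
    return resid_sq.sum(axis=0) / w.sum(axis=0)

# Toy illustration with simulated data (values are arbitrary).
rng = np.random.default_rng(0)
theta = rng.normal(0, 1, size=200)
beta = np.linspace(-2, 2, 14)              # e.g. a 14-item instrument
x = (rng.random((200, 14)) < rasch_prob(theta, beta)).astype(int)

fit = infit_mnsq(x, theta, beta)
# A common screening rule keeps items with infit roughly in 0.7-1.3;
# misfitting or redundant, off-target items become candidates for removal.
keep = np.where((fit > 0.7) & (fit < 1.3))[0]
print(fit.round(2), keep)
```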
Peer reviewed: Kolen, Michael J. – Educational Measurement: Issues and Practice, 2001
Discusses some practical issues in linking educational assessments, focusing on the importance of clarity of purpose when assessments are linked. Also stresses the importance of the design used to collect data for linking. Uses linking studies from a variety of situations to illustrate these points. (SLD)
Descriptors: Data Collection, Educational Assessment, Equated Scores, Research Design
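Kolen's emphasis on data-collection design can be made concrete with the simplest case. The sketch below is a generic mean-sigma linear linking of Form X scores onto the Form Y scale under a hypothetical single-group design; the function name, the design, and the score data are all assumptions for illustration, not material from the article.

```python
import numpy as np

def linear_link(x_scores, y_scores):
    """Mean-sigma linear linking: map Form X scores onto the Form Y scale.

    Assumes both score vectors come from a design that supports linking,
    e.g. a single group taking both forms.
    """
    mx, sx = x_scores.mean(), x_scores.std(ddof=1)
    my, sy = y_scores.mean(), y_scores.std(ddof=1)
    slope = sy / sx
    intercept = my - slope * mx
    return lambda x: slope * np.asarray(x) + intercept

# Hypothetical score data for illustration only.
rng = np.random.default_rng(1)
form_x = rng.normal(50, 10, size=500)
form_y = rng.normal(55, 12, size=500)

to_y_scale = linear_link(form_x, form_y)
print(to_y_scale([40, 50, 60]).round(1))   # Form X scores expressed on Form Y's scale
```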
Peer reviewed: De Meuse, Kenneth P.; Hostager, Todd J. – Human Resource Development Quarterly, 2001
Three studies were conducted to (1) identify emotional, cognitive, and behavioral responses to diversity; (2) develop an instrument; and (3) test the instrument with 110 students and 66 workers. The Reaction-to-Diversity Inventory was deemed useful for assessing attitudes and perceptions prior to diversity training. (Contains 24 references.) (SK)
Descriptors: Attitudes, Diversity (Institutional), Emotional Response, Measures (Individuals)
Peer reviewed: Pershing, James A.; Pershing, Jana L. – Human Resource Development Quarterly, 2001
Question dimensions, construction, and response formats of 50 reactionnaire forms completed by participants in medical school programs were analyzed. Numerous problems in 30 forms and shortcomings in 20 others were identified. Ways to improve layout, appearance, anonymity protection, and questions were suggested. (Contains 53 references.) (SK)
Descriptors: Attitude Measures, Evaluation Problems, Privacy, Surveys
Peer reviewed: Read, John; Chapelle, Carol A. – Language Testing, 2001
Presents a framework that takes as its starting point an analysis of test purpose, and then shows how purpose can be systematically related to test design. Argues that the way forward for vocabulary assessment is to take account of test purposes in the design and validation of tests, as well as considering an interactionalist approach to construct…
Descriptors: Guidelines, Language Tests, Test Construction, Test Validity
Peer reviewed: Frary, Robert B. – Applied Measurement in Education, 2000
Characterizes the circumstances under which validity changes may occur as a result of the deletion of a predictor test segment. Equations show that, for a positive outcome, one should seek a relatively large correlation between the scores from the deleted segment and the remaining items, with a relatively low correlation between scores from the…
Descriptors: Equations (Mathematics), Prediction, Reliability, Scores
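The effect Frary derives algebraically can be illustrated numerically. The sketch below uses ordinary covariance algebra for a sum of two score components (not Frary's own equations) to compare the criterion validity of the full predictor with the validity of the part that remains after a segment is deleted; every correlation and standard deviation is invented for illustration.

```python
import numpy as np

def validity_full_vs_remaining(sd_r, sd_d, r_rc, r_dc, r_rd):
    """Criterion correlations of the full score (R + D) and the remaining score R.

    R = retained segment, D = deleted segment, C = criterion.
    Uses standard covariance algebra for a sum of two components.
    """
    cov_tc = sd_r * r_rc + sd_d * r_dc            # cov(R + D, C) / sd_C
    sd_t = np.sqrt(sd_r**2 + sd_d**2 + 2 * sd_r * sd_d * r_rd)
    return cov_tc / sd_t, r_rc                    # (validity of full, validity of R alone)

# Hypothetical values: the deleted segment correlates highly with the retained
# items (r_rd = .80) but only weakly with the criterion (r_dc = .10).
full, remaining = validity_full_vs_remaining(sd_r=8.0, sd_d=3.0,
                                             r_rc=0.55, r_dc=0.10, r_rd=0.80)
print(round(full, 3), round(remaining, 3))
```

With these particular figures the shortened predictor turns out to be the more valid one.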
Holburn, Steve; Jacobson, John W.; Vietze, Peter M.; Schwartz, Allen A.; Sersen, Eugene – American Journal on Mental Retardation, 2000
This paper considers the measurement of person-centered planning with developmentally disabled individuals. It reports on the development of three instruments to assess person-centered planning and both a process and an outcome index. Psychometric evaluation indicated test-retest reliability and measures of internal consistency were adequate.…
Descriptors: Developmental Disabilities, Evaluation Methods, Individualized Programs, Planning
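The reliability evidence mentioned in the Holburn et al. entry (test-retest stability and internal consistency) corresponds to two routine computations. A generic numpy sketch with simulated ratings follows; it is not drawn from that study, and the sample sizes and noise levels are arbitrary.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def test_retest(time1_totals, time2_totals):
    """Test-retest reliability as the Pearson correlation of total scores."""
    return np.corrcoef(time1_totals, time2_totals)[0, 1]

# Simulated ratings for illustration: 60 respondents, 10 items, plus a retest.
rng = np.random.default_rng(2)
true_score = rng.normal(0, 1, size=(60, 1))
time1 = true_score + rng.normal(0, 0.6, size=(60, 10))
time2 = true_score + rng.normal(0, 0.6, size=(60, 10))

print(round(cronbach_alpha(time1), 2),
      round(test_retest(time1.sum(axis=1), time2.sum(axis=1)), 2))
```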
Peer reviewed: Wolfe, Edward W.; Dozier, Hallie – Journal of Applied Measurement, 2000
Developed an instrument to measure invasive plant environmentalism (knowledge and attitudes concerning non-native plant invasions). Scaled responses of 237 plant nursery customers to a 17-item standardized interview using the partial credit model. Results indicate that the instrument measured the construct of invasive plant environmentalism…
Descriptors: Adults, Attitude Measures, Environment, Knowledge Level
Peer reviewed: Lee, Guemin; Dunbar, Stephen B.; Frisbie, David A. – Educational and Psychological Measurement, 2001
Conceptualized eight different types of measurement models for a test composed of testlets and studied the goodness of fit of those models to data using data from the Iowa Tests of Basic Skills and simulated data. The essentially tau-equivalent model and the congeneric model provided worse model fit than the other measurement models. (SLD)
Descriptors: Goodness of Fit, Measurement Techniques, Models, Scores
Peer reviewed: Oakland, Thomas; Poortinga, Ype H.; Schlegel, Justin; Hambleton, Ronald K. – International Journal of Testing, 2001
Traces the history of the International Test Commission (ITC), reviewing the context in which it was formed, its goals, and major milestones in its development. Suggests ways the ITC may continue to impact test development positively, and introduces this inaugural journal issue. (SLD)
Descriptors: Educational History, Educational Testing, International Education, Test Construction
Peer reviewed: Campbell, David – Journal of Career Assessment, 2002
This overview of the development of the Campbell Interest and Skill Survey includes a series of questions answered in the construction process related to domains assessed, item content, response format, scale construction, length, item bias, scoring scales, and interpretation. The addition of skill items to the interest items is described. (SK)
Descriptors: Job Skills, Measures (Individuals), Test Construction, Test Items
Peer reviewed: Fox, Christine – Journal of Nursing Education, 1999
Demonstrates how the partial credit model, a variation of the Rasch Measurement Model, can be used to develop performance-based assessments for nursing education. Applies the model using the Practical Knowledge Inventory for Nurses. (SK)
Descriptors: Higher Education, Nursing Education, Performance Based Assessment, Psychometrics
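The partial credit model that Fox applies has a compact closed form. The sketch below computes category probabilities for a single polytomous item from that formula using numpy alone; the step difficulties and person measures are arbitrary values chosen for illustration.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities under Masters' partial credit model.

    theta:  person measure (scalar)
    deltas: step difficulties delta_1..delta_K for a (K+1)-category item
    Returns P(X = 0), ..., P(X = K).
    """
    steps = theta - np.asarray(deltas)
    # Cumulative sums of (theta - delta_j); category 0 contributes an empty sum.
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return numerators / numerators.sum()

# Hypothetical 4-category performance item with step difficulties -1.0, 0.2, 1.5.
for theta in (-2.0, 0.0, 2.0):
    print(theta, pcm_probs(theta, [-1.0, 0.2, 1.5]).round(2))
```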
Peer reviewed: Gierl, Mark J.; Henderson, Diane; Jodoin, Michael; Klinger, Don – Journal of Experimental Education, 2001
Examined the influence of item parameter estimation errors across three item selection methods using the two- and three-parameter logistic item response theory (IRT) model. Tests created with the maximum no target and maximum target item selection procedures consistently overestimated the test information function. Tests created using the theta…
Descriptors: Estimation (Mathematics), Item Response Theory, Selection, Test Construction
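The overestimation finding in the Gierl et al. entry concerns what happens when items are selected on estimated 3PL parameters. The sketch below is a simplified, generic illustration of that capitalization-on-error effect, not the study's item-selection methods: it computes 3PL item information, perturbs the parameters with arbitrary noise to stand in for calibration error, selects the items that look best on the estimates, and compares the information they appear to give with what the true parameters deliver.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at one theta for arrays of item parameters."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """3PL item information: a^2 * (Q/P) * ((P - c) / (1 - c))^2 (logistic metric, D omitted)."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

rng = np.random.default_rng(3)
a = rng.uniform(0.8, 2.0, size=100)              # hypothetical "true" item pool
b = rng.normal(0.0, 1.0, size=100)
c = rng.uniform(0.1, 0.25, size=100)

# Calibration error is simulated as noise added to the true parameters (arbitrary sizes).
a_hat = a + rng.normal(0, 0.15, size=100)
b_hat = b + rng.normal(0, 0.20, size=100)
c_hat = np.clip(c + rng.normal(0, 0.03, size=100), 0.0, 0.5)

# Pick the 20 items that *look* most informative at theta = 0 from the estimates,
# then compare their apparent information with what the true parameters give.
info_est = item_information(0.0, a_hat, b_hat, c_hat)
picked = np.argsort(info_est)[-20:]
apparent = info_est[picked].sum()
actual = item_information(0.0, a[picked], b[picked], c[picked]).sum()
print(round(apparent, 1), round(actual, 1))      # apparent typically exceeds actual
```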
Peer reviewed: O'Connor, S. E.; Pearce, J.; Smith, R. L.; Voegeli, D.; Walton, P. – Nurse Education Today, 2001
Senior nurses' (n=139) expectations of 36 beginning nurses were compared with the beginners' competence ratings by their clinical preceptors. Senior nurses' expectations were lower than the actual competence demonstrated by the graduates, suggesting that assessment instruments should not be derived solely from supervisor expectations. (SK)
Descriptors: Clinical Experience, Entry Workers, Expectation, Job Performance
Rodriguez, Michael C. – Educational Measurement: Issues and Practice, 2005
Multiple-choice items are a mainstay of achievement testing. The need to adequately cover the content domain to certify achievement proficiency by producing meaningful precise scores requires many high-quality items. More 3-option items can be administered than 4- or 5-option items per testing time while improving content coverage, without…
Descriptors: Psychometrics, Testing, Scores, Test Construction
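Part of Rodriguez's argument is arithmetic: fewer options per item means more items fit in the same testing time, and the Spearman-Brown formula projects the reliability of the longer form. The sketch below works through that trade-off with invented timing and reliability figures; none of the numbers come from the article, and it simplifies by assuming per-item quality is unchanged when options are dropped.

```python
def spearman_brown(rel, length_factor):
    """Projected reliability when test length is multiplied by length_factor."""
    return (length_factor * rel) / (1 + (length_factor - 1) * rel)

# Hypothetical figures: a fixed testing period holds 50 five-option items or 65
# three-option items, and the 50-item five-option form has reliability .85.
base_rel = 0.85
length_factor = 65 / 50
# Simplification: assumes per-item measurement quality is unchanged with 3 options.
print(round(spearman_brown(base_rel, length_factor), 3))   # projected reliability of the 3-option form
```

Whether the three-option form actually gains in practice depends on how much, if anything, each item loses when its weakest distractors are removed.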
