Showing all 7 results
Peer reviewed
Secolsky, Charles – Journal of Educational Measurement, 1987
For measuring the face validity of a test, Nevo suggested that test takers and nonprofessional users rate items on a five-point scale. This article questions the ability of those raters and the credibility of the aggregated judgment as evidence of the validity of the test. (JAZ)
Descriptors: Content Validity, Measurement Techniques, Rating Scales, Test Items
Peer reviewed
Schoenfeld, Alan H. – Measurement: Interdisciplinary Research and Perspectives, 2007
The authors of this volume's stimulus papers have taken on the challenge of developing measures of teachers' mathematical knowledge for teaching (MKT). This task involves multiple decisions and considerations, including: (1) How does one specify the body of knowledge being assessed? What warrants are offered for those choices?; (2) How does one…
Descriptors: Test Validity, Psychometrics, Test Construction, Evaluation Research
Peer reviewed
Hill, Heather C. – Measurement: Interdisciplinary Research and Perspectives, 2007
The author offers some thoughts on commentators' reactions to the substance of the measures, particularly those about measuring teacher learning and change, based on the major uses of the measures and because this is a significant challenge facing test development as an enterprise. If teacher learning results in more integrated knowledge or…
Descriptors: Educational Testing, Tests, Measurement, Faculty Development
Peer reviewed
Schilling, Stephen – Measurement: Interdisciplinary Research and Perspectives, 2007
In this article, the author echoes his co-author and colleague's pleasure (Hill, this issue) at the thoughtfulness and far-ranging nature of the comments on their initial attempts at test validation for the mathematical knowledge for teaching (MKT) measures using the validity argument approach. Because of the large number of commentaries they…
Descriptors: Generalizability Theory, Persuasive Discourse, Educational Testing, Measurement
Herman, Joan L. – 1986
Issues in designing valid tests for the National Assessment of Educational Progress (NAEP) are discussed. Test scores are often provided without any information on the nature of the tasks represented. Because test domains are defined by individual item writers, the generalizability between tests and items is suspect. While typical content…
Descriptors: Achievement Tests, Content Validity, Criterion Referenced Tests, Educational Assessment
Medina, Noe; Neill, D. Monty – 1990
Standardized tests often produce results that are inaccurate, inconsistent, and biased against minority, female, and low-income students. Such tests shift control and authority into the hands of the unregulated testing industry and can undermine school achievement by narrowing the curriculum, frustrating teachers, and driving students out of…
Descriptors: Academic Achievement, Administrators, Construct Validity, Content Validity
Carlson, Ken – 1986
This paper discusses the content of the social studies tests of the 1981-82 National Assessment of Educational Progress (NAEP). Selected social studies and citizenship items were administered to 3,200 students aged 9, 13, and 17 and to adults aged 26-35. Twelve sources describing the social studies tests were reviewed, particularly the Citizenship…
Descriptors: Achievement Tests, Citizenship Education, Content Analysis, Content Validity