| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 135 |
| Since 2022 (last 5 years) | 650 |
| Since 2017 (last 10 years) | 1629 |
| Since 2007 (last 20 years) | 7521 |
| Audience | Records |
| --- | --- |
| Practitioners | 1535 |
| Policymakers | 1030 |
| Teachers | 796 |
| Administrators | 571 |
| Researchers | 505 |
| Community | 207 |
| Students | 120 |
| Parents | 74 |
| Counselors | 18 |
| Support Staff | 12 |
| Media Staff | 10 |
| Location | Records |
| --- | --- |
| Canada | 580 |
| Australia | 504 |
| California | 497 |
| United States | 405 |
| United Kingdom | 363 |
| Texas | 304 |
| United Kingdom (England) | 293 |
| Florida | 270 |
| Illinois | 231 |
| New York | 220 |
| Pennsylvania | 218 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 10 |
| Meets WWC Standards with or without Reservations | 17 |
| Does not meet standards | 13 |
Feldhusen, John F.; And Others – Educational Technology, 1976
Instructional validity, a new concept, implies that a test is valid if it can be demonstrated that instruction of sufficient quality, behaviorally matched to the performance demands of the test items, was offered. (Author/LS)
Descriptors: Achievement Tests, Educational Assessment, Standardized Tests, Test Validity
Peer reviewed
Brookfield, Stephen D. – New Directions for Adult and Continuing Education, 1997
Critical thinking is a socially constructed, contextual process not well measured by standardized tests. Better, locally grounded methods are pre/posttest (scenario building), experiential (critical practice audit), behavioral (critical debate), and conversational (storytellers and detectives). (SK)
Descriptors: Critical Thinking, Educational Assessment, Evaluation Methods, Standardized Tests
Peer reviewed
Schmitt, Neal – Psychological Assessment, 1996
Some concerns about the use and reporting of coefficient alpha are addressed. It is also shown that alpha is not a measure of homogeneity or unidimensionality. Four ways in which researchers' use of alpha can convey inaccurate information or a lack of understanding are reviewed. (SLD)
Descriptors: Correlation, Educational Assessment, Reliability, Research Methodology
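For reference alongside the Schmitt (1996) abstract, coefficient alpha is conventionally computed from the item variances and the total-score variance; the expression below is the standard classical form, with symbols chosen here for illustration (k items, item scores Y_i, total score X), not notation taken from the article itself.

```latex
% Coefficient (Cronbach's) alpha for a k-item scale:
% \sigma^{2}_{Y_i} are the item-score variances, \sigma^{2}_{X} the total-score variance.
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

A high alpha indicates internal consistency of the total score, not that the items measure a single dimension, which is the distinction the abstract stresses.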
Peer reviewed
Patz, Richard J.; Junker, Brian W.; Johnson, Matthew S.; Mariano, Louis T. – Journal of Educational and Behavioral Statistics, 2002
Discusses the hierarchical rater model (HRM) of R. Patz (1996) and shows how it can be used to scale examinees and items, model aspects of consensus among raters, and model individual rater severity and consistency effects. Also shows how the HRM fits into the generalizability theory framework. Compares the HRM to the conventional item response…
Descriptors: Educational Assessment, Generalizability Theory, Item Response Theory, Scaling
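As a rough sketch of the structure the Patz et al. abstract describes, a hierarchical rater model layers a rater process on top of an ideal, IRT-scaled rating. The parameterization below is an illustrative form only (ξ the ideal rating, φ_r a rater severity shift, ψ_r a rater consistency spread), not necessarily the exact specification in the article.

```latex
% Rater layer of a hierarchical rater model (illustrative form):
% the ideal rating \xi_{ij} follows an IRT model (e.g., partial credit),
% and rater r reports category k with probability
P\bigl(x_{ijr} = k \mid \xi_{ij}\bigr) \;\propto\;
\exp\!\left\{ -\frac{1}{2\psi_r^{2}} \bigl[\, k - (\xi_{ij} + \phi_r) \,\bigr]^{2} \right\}
```

Under this form, φ_r > 0 would describe a lenient rater, φ_r < 0 a severe one, and a larger ψ_r a less consistent one.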
Peer reviewed
Buck, George H.; Osborne, John – Journal of Educational Thought/Revue de la Pensee Educative, 1990
Using the deconstructionist theories of Michel Foucault, argues that there are several perennial myths in educational thought (e.g., all change is progressive and what is promoted as change is novel). Finds support for this method of criticism in the work of Quintilian, Bloom, Elkind, and Popper. (DMM)
Descriptors: Educational Assessment, Educational Change, Educational Sociology, Educational Theories
Peer reviewed
Mason, Robert C. – New Directions for Adult and Continuing Education, 1993
Methods of preparing for and conducting an adult education program evaluation are discussed: using standards and quality indicators, involving stakeholders, collecting program information, conducting a self-evaluation, and participating in the exit interview. (SK)
Descriptors: Adult Education, Educational Assessment, Evaluation Methods, Program Evaluation
Prime, Glenda – Journal of Technology Studies, 1998
Discusses factors to consider in assessing the knowledge, skills, and affective components of technological literacy. Raises issues about the validity of different assessment methods. (SK)
Descriptors: Educational Assessment, Evaluation Methods, Technological Literacy, Test Validity
Peer reviewed
Tosey, Paul; Smith, Peter A. C. – Learning Organization, 1999
Presents two approaches to assessing learning organizations: (1) Focus, Will, Capability, Performance System and (2) organizations as energies. Describes ways in which behavior change is measured in each approach. (SK)
Descriptors: Behavior Change, Educational Assessment, Evaluation Methods, Institutional Evaluation
Peer reviewed
Hambleton, Ronald K.; Jaeger, Richard M.; Plake, Barbara S.; Mills, Craig – Applied Psychological Measurement, 2000
Reviews a number of promising methods for setting performance standards and discusses their strengths and weaknesses. Outlines some areas for future research that address the role of feedback to panelists and validation efforts for performance standards among other topics. (SLD)
Descriptors: Educational Assessment, Performance Based Assessment, Scoring, Standards
Peer reviewed
Summers, Anita A. – Education Next, 2002
Advocates the use of value-added assessment. Argues that value-added assessment is not too complicated for teachers to use, that proposed alternatives such as subjective evaluation methods are untenable, and that low-achieving students are the major beneficiaries. (PKP)
Descriptors: Accountability, Educational Assessment, Elementary Secondary Education, Low Achievement
Mertler, Craig A. – American Secondary Education, 2004
Assessing student performance is one of the most critical aspects of the job of a classroom teacher, but many teachers do not feel adequately prepared to assess their students' performance. In order to measure and compare secondary preservice and inservice teachers' "assessment literacy", both groups were surveyed using the Classroom…
Descriptors: Methods, Educational Assessment, Preservice Teachers, Grading
Kane, Michael; Case, Susan M. – Applied Measurement in Education, 2004
The scores on 2 distinct tests (e.g., essay and objective) are often combined to create a composite score, which is used to make decisions. The validity of the observed composite can sometimes be evaluated relative to an external criterion. However, in cases where no criterion is available, the observed composite has generally been evaluated in…
Descriptors: Validity, Weighted Scores, Reliability, Student Evaluation
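The Kane and Case abstract concerns observed composites formed by weighting two component scores. As background, classical test theory gives the reliability of a weighted composite C = w_1 X_1 + w_2 X_2 in terms of the component reliabilities, variances, and covariance; the notation below is supplied here for illustration and is not drawn from the article itself.

```latex
% Reliability of the weighted composite C = w_1 X_1 + w_2 X_2
% (\rho_1, \rho_2: component reliabilities; \sigma_{12}: covariance of X_1 and X_2):
\rho_{CC'} \;=\;
\frac{w_1^{2}\sigma_1^{2}\rho_1 + w_2^{2}\sigma_2^{2}\rho_2 + 2\,w_1 w_2\,\sigma_{12}}
     {w_1^{2}\sigma_1^{2} + w_2^{2}\sigma_2^{2} + 2\,w_1 w_2\,\sigma_{12}}
```

The choice of weights thus shapes the measurement properties of the composite, which is part of what makes its evaluation difficult when no external criterion is available.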
Ferrera, Robert J. – Leadership, 2005
It would be hard to find an educator who didn't believe that public schools should be held accountable to the taxpayers for their work. Furthermore, there is wide agreement that assessment goes hand-in-hand with accountability, and is therefore necessary. Disagreements tend to center on the political decisions connected with what to assess and…
Descriptors: Accountability, Educational Assessment, Educational Quality, Professional Development
Testimony Pertaining to the Science Framework for the 2009 National Assessment of Educational Progress
Peer reviewed
Starkweather, Kendall N. – Technology Teacher, 2005
In this brief article, the International Technology Education Association (ITEA) responds to a request for input pertaining to the coming science assessments that will be conducted for the first time in 2009. The testimony presented outlines a position not taken by ITEA previously, in that a request is made to keep technological design in the…
Descriptors: Science Education, Educational Assessment, Technology Education, Technological Literacy
Colorado Department of Education, 2009
In January 2009, the Colorado Department of Education (CDE) and the Colorado Education Association (CEA) initiated a 13-city teacher tour to engage teachers in a statewide discussion about CAP4K, its relevance to practice, its impact on teaching and learning and the kind of help that teachers would find useful for classroom implementation. Between…
Descriptors: Educational Change, Alignment (Education), Educational Policy, Work Environment
