Showing 1 to 15 of 23 results
Salmani Nodoushan, Mohammad Ali – Online Submission, 2021
This paper follows a line of logical argumentation to claim that what Samuel Messick conceptualized about construct validation has probably been misunderstood by some educational policy makers, practicing educators, and classroom teachers. It argues that, while Messick's unified theory of test validation aimed at (a) warning educational…
Descriptors: Construct Validity, Test Theory, Test Use, Affordances
Peer reviewed
PDF on ERIC
Lim Hooi Lian; Wun Thiam Yew – International Journal of Assessment Tools in Education, 2023
The majority of students from elementary to tertiary levels have misunderstandings and challenges acquiring various statistical concepts and skills. However, existing statistics assessment frameworks are difficult to put into practice in a classroom setting. The purpose of this research is to develop and validate a statistical thinking assessment tool…
Descriptors: Psychometrics, Grade 7, Middle School Mathematics, Statistics Education
Peer reviewed
Direct link
Newton, Paul E. – Measurement: Interdisciplinary Research and Perspectives, 2012
The 1999 "Standards for Educational and Psychological Testing" defines validity as the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests. Although quite explicit, there are ways in which this definition lacks precision, consistency, and clarity. The history of validity has taught us…
Descriptors: Evidence, Validity, Educational Testing, Risk
Peer reviewed
Direct link
Evers, Arne; Sijtsma, Klaas; Lucassen, Wouter; Meijer, Rob R. – International Journal of Testing, 2010
This article describes the 2009 revision of the Dutch Rating System for Test Quality and presents the results of test ratings from almost 30 years. The rating system evaluates the quality of a test on seven criteria: theoretical basis, quality of the testing materials, comprehensiveness of the manual, norms, reliability, construct validity, and…
Descriptors: Rating Scales, Documentation, Educational Quality, Educational Testing
Peer reviewed
Direct link
Coe, Robert – Research Papers in Education, 2010
Much of the argument about comparability of examination standards is at cross-purposes; contradictory positions are in fact often both defensible, but they are using the same words to mean different things. To clarify this, two broad conceptualisations of standards can be identified. One sees the standard in the observed phenomena of performance…
Descriptors: Foreign Countries, Tests, Evaluation Methods, Standards
Camara, Wayne – College Board, 2011
This presentation was presented at the 2011 National Conference on Student Assessment (CCSSO). The focus of this presentation is how to validate the common core state standards (CCSS) in math and ELA and the subsequent assessments that will be developed by state consortia. The CCSS specify the skills students need to be ready for post-secondary…
Descriptors: College Readiness, Career Readiness, Benchmarking, Student Evaluation
Peer reviewed
Direct link
Newton, Paul E. – Research Papers in Education, 2010
Robert Coe has claimed that three broad conceptions of comparability can be identified from the literature: performance, statistical and conventional. Each of these he rejected, in favour of a single, integrated conception which relies upon the notion of a "linking construct" and which he termed "construct comparability".…
Descriptors: Psychometrics, Measurement Techniques, Foreign Countries, Tests
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Direct link
Baumert, Jürgen; Lüdtke, Oliver; Trautwein, Ulrich; Brunner, Martin – Educational Research Review, 2009
Given the relatively high intercorrelations observed between mathematics achievement, reading achievement, and cognitive ability, it has recently been claimed that student assessment studies (e.g., TIMSS, PISA) and intelligence tests measure a single cognitive ability that is practically identical to general intelligence. The present article uses…
Descriptors: Intelligence, Reading Achievement, Mathematics Achievement, Outcomes of Education
Peer reviewed
Direct link
Lissitz, Robert W.; Samuelsen, Karen – Educational Researcher, 2007
This article raises a number of questions about the current unified theory of test validity that has construct validity at its center. The authors suggest a different way of conceptualizing the problem of establishing validity by considering whether the focus of the investigation of a test is internal to the test itself or focuses on constructs…
Descriptors: Vocabulary, Evaluation Research, Construct Validity, Test Validity
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2001
Provides a brief historical review of construct validity and discusses the current state of validity theory, emphasizing the role of arguments in validation. Examines the application of an argument-based approach with regard to the distinction between performance-based and theory-based interpretations and the role of consequences in validation.…
Descriptors: Construct Validity, Educational Testing, Performance Based Assessment, Theories
Peer reviewed
Embretson, Susan; Gorin, Joanna – Journal of Educational Measurement, 2001
Examines testing practices in: (1) the past, in which the traditional paradigm left little room for cognitive psychology principles; (2) the present, in which testing research is enhanced by principles of cognitive psychology; and (3) the future, in which the potential of cognitive psychology should be fully realized through item design.…
Descriptors: Cognitive Psychology, Construct Validity, Educational Research, Educational Testing
Peer reviewed
Sandals, Lauran H. – Canadian Journal of Educational Communication, 1992
Presents an overview of the applications of microcomputer-based assessment and diagnosis for both educational and psychological placement and interventions. Advantages of computer-based assessment (CBA) over paper-based testing practices are described, the history of computer testing is reviewed, and the construct validity of computer-based tests…
Descriptors: Comparative Analysis, Computer Assisted Testing, Construct Validity, Educational Testing
Peer reviewed
Maguire, Thomas; And Others – Alberta Journal of Educational Research, 1994
Criticizes an article by Messick (1989) that emphasizes consequential validity (the potential and actual social consequences of test score interpretation and use) as a component of construct validity. Shows the profitability of separating the construct-indicator link from the indicator-score link, and the greater importance of the former.…
Descriptors: Academic Achievement, Cognitive Psychology, Construct Validity, Educational Testing
Kennedy, Cathleen A. – 2000
This paper discusses the measurement of unobservable or latent variables of students and how they contribute to learning in an online environment. It also examines the construct validity of two questionnaires: the College Experience Survey and the Computer Experience Study, which both measure different aspects of student attitudes and behavior…
Descriptors: Construct Validity, Distance Education, Educational Technology, Educational Testing