Showing 1 to 15 of 18 results
Salmani Nodoushan, Mohammad Ali – Online Submission, 2021
This paper follows a line of logical argumentation to claim that what Samuel Messick conceptualized about construct validation has probably been misunderstood by some educational policy makers, practicing educators, and classroom teachers. It argues that, while Messick's unified theory of test validation aimed at (a) warning educational…
Descriptors: Construct Validity, Test Theory, Test Use, Affordances
Peer reviewed
PDF on ERIC: Download full text
Lim Hooi Lian; Wun Thiam Yew – International Journal of Assessment Tools in Education, 2023
The majority of students from elementary to tertiary levels have misunderstandings of, and difficulties acquiring, various statistical concepts and skills. However, existing statistics assessment frameworks are difficult to put into practice in a classroom setting. The purpose of this research is to develop and validate a statistical thinking assessment tool…
Descriptors: Psychometrics, Grade 7, Middle School Mathematics, Statistics Education
Peer reviewed
Direct link
Newton, Paul E. – Measurement: Interdisciplinary Research and Perspectives, 2012
The 1999 "Standards for Educational and Psychological Testing" defines validity as the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests. Although quite explicit, there are ways in which this definition lacks precision, consistency, and clarity. The history of validity has taught us…
Descriptors: Evidence, Validity, Educational Testing, Risk
Peer reviewed
Direct link
Evers, Arne; Sijtsma, Klaas; Lucassen, Wouter; Meijer, Rob R. – International Journal of Testing, 2010
This article describes the 2009 revision of the Dutch Rating System for Test Quality and presents the results of test ratings from almost 30 years. The rating system evaluates the quality of a test on seven criteria: theoretical basis, quality of the testing materials, comprehensiveness of the manual, norms, reliability, construct validity, and…
Descriptors: Rating Scales, Documentation, Educational Quality, Educational Testing
Peer reviewed
Direct link
Coe, Robert – Research Papers in Education, 2010
Much of the argument about comparability of examination standards is at cross-purposes; contradictory positions are in fact often both defensible, but they are using the same words to mean different things. To clarify this, two broad conceptualisations of standards can be identified. One sees the standard in the observed phenomena of performance…
Descriptors: Foreign Countries, Tests, Evaluation Methods, Standards
Peer reviewed
Direct link
Newton, Paul E. – Research Papers in Education, 2010
Robert Coe has claimed that three broad conceptions of comparability can be identified from the literature: performance, statistical and conventional. Each of these he rejected, in favour of a single, integrated conception which relies upon the notion of a "linking construct" and which he termed "construct comparability".…
Descriptors: Psychometrics, Measurement Techniques, Foreign Countries, Tests
Peer reviewed
Direct link
Baumert, Jürgen; Lüdtke, Oliver; Trautwein, Ulrich; Brunner, Martin – Educational Research Review, 2009
Given the relatively high intercorrelations observed between mathematics achievement, reading achievement, and cognitive ability, it has recently been claimed that student assessment studies (e.g., TIMSS, PISA) and intelligence tests measure a single cognitive ability that is practically identical to general intelligence. The present article uses…
Descriptors: Intelligence, Reading Achievement, Mathematics Achievement, Outcomes of Education
Peer reviewed
Direct link
Lissitz, Robert W.; Samuelsen, Karen – Educational Researcher, 2007
This article raises a number of questions about the current unified theory of test validity that has construct validity at its center. The authors suggest a different way of conceptualizing the problem of establishing validity by considering whether the focus of the investigation of a test is internal to the test itself or focuses on constructs…
Descriptors: Vocabulary, Evaluation Research, Construct Validity, Test Validity
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2001
Provides a brief historical review of construct validity and discusses the current state of validity theory, emphasizing the role of arguments in validation. Examines the application of an argument-based approach with regard to the distinction between performance-based and theory-based interpretations and the role of consequences in validation.…
Descriptors: Construct Validity, Educational Testing, Performance Based Assessment, Theories
Peer reviewed
Embretson, Susan; Gorin, Joanna – Journal of Educational Measurement, 2001
Examines testing practices in: (1) the past, in which the traditional paradigm left little room for cognitive psychology principles; (2) the present, in which testing research is enhanced by principles of cognitive psychology; and (3) the future, in which the potential of cognitive psychology should be fully realized through item design.…
Descriptors: Cognitive Psychology, Construct Validity, Educational Research, Educational Testing
Peer reviewed
Sandals, Lauran H. – Canadian Journal of Educational Communication, 1992
Presents an overview of the applications of microcomputer-based assessment and diagnosis for both educational and psychological placement and interventions. Advantages of computer-based assessment (CBA) over paper-based testing practices are described, the history of computer testing is reviewed, and the construct validity of computer-based tests…
Descriptors: Comparative Analysis, Computer Assisted Testing, Construct Validity, Educational Testing
Peer reviewed
Maguire, Thomas; And Others – Alberta Journal of Educational Research, 1994
Criticizes an article by Messick (1989) that emphasizes consequential validity (the potential and actual social consequences of test score interpretation and use) as a component of construct validity. Shows the profitability of separating the construct-indicator link from the indicator-score link, and the greater importance of the former.…
Descriptors: Academic Achievement, Cognitive Psychology, Construct Validity, Educational Testing
Peer reviewed
Camara, Wayne J.; Brown, Dianne C. – Educational Measurement: Issues and Practice, 1995
The implications for educational and employment testing of recent measurement, social, and policy issues are compared and contrasted. Changes are proposed relative to technical conceptualization of assessment, an increased focus on performance-based evaluation, and expanded expectations and uses of assessment. (SLD)
Descriptors: Construct Validity, Educational Assessment, Educational Change, Educational Policy
Peer reviewed
Behrman, Edward H. – Journal of Developmental Education, 2000
Discusses validity issues associated with three popular content-general reading college placement tests and then presents a theoretical argument to support an alternative placement procedure using content-specific testing. Proposes that content-specific tests would have improved content-related and construct-related validity. States that more…
Descriptors: Construct Validity, Content Validity, Educational Testing, Equivalency Tests
Peer reviewed
Direct link
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills