Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 5 |
| Since 2022 (last 5 years) | 68 |
| Since 2017 (last 10 years) | 169 |
| Since 2007 (last 20 years) | 391 |
Author
| Author | Results |
| --- | --- |
| Sireci, Stephen G. | 9 |
| Kitao, Kenji | 4 |
| Kitao, S. Kathleen | 4 |
| Papageorgiou, Spiros | 4 |
| Thurlow, Martha L. | 4 |
| Winnick, Joseph P. | 4 |
| van der Linden, Wim J. | 4 |
| Chang, Hua-Hua | 3 |
| Donovan, Jenny | 3 |
| Ewing, Maureen | 3 |
| Hau, Kit-Tai | 3 |
Audience
| Audience | Results |
| --- | --- |
| Teachers | 68 |
| Practitioners | 59 |
| Administrators | 20 |
| Students | 15 |
| Policymakers | 9 |
| Researchers | 7 |
| Parents | 6 |
| Counselors | 3 |
| Community | 2 |
| Support Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Australia | 18 |
| California | 15 |
| Canada | 14 |
| China | 13 |
| United States | 12 |
| Massachusetts | 9 |
| United Kingdom | 9 |
| Europe | 8 |
| Georgia | 8 |
| Japan | 8 |
| Rhode Island | 8 |
National Assessment Governing Board, 2010
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. The NAEP assessment in mathematics has two components that differ in…
Descriptors: Mathematics Achievement, Academic Achievement, Audiences, National Competency Tests
Rodeck, Elaine M.; Chin, Tzu-Yun; Davis, Susan L.; Plake, Barbara S. – Journal of Applied Testing Technology, 2008
This study examined the relationships between the evaluations obtained from standard setting panelists and changes in ratings between different rounds of a standard setting study that involved setting standards on different language versions of an exam. We investigated panelists' evaluations to determine if their perceptions of the standard…
Descriptors: Mathematics Tests, Standard Setting (Scoring), French, Evaluation Research
Sireci, Stephen G. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) propose a new framework for conceptualizing test validity that separates analysis of test properties from analysis of the construct measured. In response, the author of this article reviews fundamental characteristics of test validity, drawing largely from seminal writings as well as from the accepted standards. He…
Descriptors: Test Content, Test Validity, Guidelines, Test Items
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Sawaki, Yasuyo; Kim, Hae-Jin; Gentile, Claudia – Language Assessment Quarterly, 2009
In cognitive diagnosis a Q-matrix (Tatsuoka, 1983, 1990), which is an incidence matrix that defines the relationships between test items and constructs of interest, has great impact on the nature of performance feedback that can be provided to score users. The purpose of the present study was to identify meaningful skill coding categories that…
Descriptors: Feedback (Response), Test Items, Test Content, Identification
Ryan, Gina J.; Nykamp, Diane – American Journal of Pharmaceutical Education, 2000
Surveyed pharmacy department chairs at 77 schools of pharmacy about their current use of cumulative exams. Found that more than 80 percent do not administer cumulative exams and that the primary rationale for such exams is to encourage students to review material prior to advancement; they are rarely used to determine advancement. (EV)
Descriptors: Pharmaceutical Education, School Surveys, Test Content, Tests
Oakland, Thomas; Lane, Holly B. – International Journal of Testing, 2004
Issues pertaining to language and reading while developing and adapting tests are examined. Strengths and limitations associated with the use of readability formulas are discussed. Their use should be confined to paragraphs and longer passages, not items. Readability methods that consider both quantitative and qualitative variables and are…
Descriptors: Test Content, Readability, Readability Formulas, Test Construction
Breithaupt, Krista; Hare, Donovan R. – Educational and Psychological Measurement, 2007
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
Descriptors: Testing Programs, Psychometrics, Certification, Accounting
Tanguma, Jesus – 2000
This paper addresses four steps in test construction specification: (1) the purpose of the test; (2) the content of the test; (3) the format of the test; and (4) the pool of items. If followed, such steps not only will assist the test constructor but will also enhance the students' learning. Within the "Content of the Test" section, two…
Descriptors: Test Construction, Test Content, Test Format, Test Items
Turner, Ronna C.; Carlson, Laurie – International Journal of Testing, 2003
Item-objective congruence as developed by R. Rovinelli and R. Hambleton is used in test development for evaluating content validity at the item development stage. Provides a mathematical extension to the Rovinelli and Hambleton index that is applicable for the multidimensional case. (SLD)
Descriptors: Content Validity, Test Construction, Test Content, Test Items
Ferne, Tracy; Rupp, Andre A. – Language Assessment Quarterly, 2007
This article reviews research on differential item functioning (DIF) in language testing conducted primarily between 1990 and 2005 with an eye toward providing methodological guidelines for developing, conducting, and disseminating research in this area. The article contains a synthesis of 27 studies with respect to five essential sets of…
Descriptors: Test Bias, Evaluation Research, Testing, Language Tests
Hager, Karen D.; Slocum, Timothy A. – Education and Training in Developmental Disabilities, 2008
Alternate assessments are the means through which students with significant cognitive disabilities participate in accountability testing, thus measurement validity of alternate assessments is a critical aspect of state educational accountability systems. When evaluating the validity of assessment systems, it is important to take a broad view of…
Descriptors: Test Content, Student Evaluation, Alternative Assessment, Test Validity
Shelton, Alison R.; Brown, Richard S. – Online Submission, 2008
More than 60% of all community college students are placed into remedial, non-credit-bearing courses. Concerns over the lack of articulation between the K-12 and postsecondary educational systems have raised questions about whether students have had the opportunity to learn and demonstrate the skills required for success in college-level classes. To…
Descriptors: Community Colleges, College Students, Student Placement, Remedial Instruction
Liang, Ling L.; Yuan, Haiquan – International Journal of Science Education, 2008
This study reports findings from an analysis of the 2002 Chinese National Physics Curriculum Guidelines and the alignment between the curriculum guidelines and the two most recent provincial-level 12th-grade exit examinations in China. Both curriculum guidelines and test content were represented using two-dimensional matrices (i.e., topic by level of…
Descriptors: Test Content, Exit Examinations, Physics, Guidelines
Ferrara, Steve; Perie, Marianne; Johnson, Eugene – Journal of Applied Testing Technology, 2008
Psychometricians continue to introduce new approaches to setting cut scores for educational assessments in an attempt to improve on current methods. In this paper we describe the Item-Descriptor (ID) Matching method, a method based on IRT item mapping. In ID Matching, test content area experts match items (i.e., their judgments about the knowledge…
Descriptors: Test Results, Test Content, Testing Programs, Educational Testing