Showing 346 to 360 of 830 results
New York State Education Department, 2014
This technical report provides an overview of the New York State Alternate Assessment (NYSAA), including a description of the purpose of the NYSAA, the processes used to develop and implement the NYSAA program, and stakeholder involvement in those processes. The purpose of this report is to document the technical aspects of the 2013-14 NYSAA.…
Descriptors: Alternative Assessment, Educational Assessment, State Departments of Education, Student Evaluation
Peer reviewed
Direct link
Broekkamp, H.; Van Hout-Wolters, B. H. A. M.; Van den Bergh, H.; Rijlaarsdam, G. – British Journal of Educational Psychology, 2004
Background: Previous studies on instructional importance show that individual students and their teachers differ in the topics that they consider important in the context of an upcoming teacher-made test. Aims: This study aimed to examine whether such differences between students' test expectations and teachers' intended task demands can be…
Descriptors: Student Attitudes, Probability, Test Content
Peer reviewed
PDF on ERIC Download full text
Tourkin, Steven; Thomas, Teresa; Swaim, Nancy; Cox, Shawna; Parmer, Randall; Jackson, Betty; Cole, Cornette; Zhang, Bei – National Center for Education Statistics, 2010
The Schools and Staffing Survey (SASS) is conducted by the National Center for Education Statistics (NCES) on behalf of the United States Department of Education in order to collect extensive data on American public and private elementary and secondary schools. SASS provides data on the characteristics and qualifications of teachers and…
Descriptors: Elementary Secondary Education, National Surveys, Public Schools, Private Schools
ACT, Inc., 2009
As part of its College Readiness System, ACT offers the PLAN[R] program as a way for tenth-grade students to review their progress toward college readiness while there is still time to make necessary interventions. PLAN contains four tests--English, Mathematics, Reading, and Science. These tests are designed to measure students' curriculum-related…
Descriptors: Grade 10, Correlation, Alignment (Education), Predictor Variables
National Assessment Governing Board, 2010
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. The NAEP assessment in mathematics has two components that differ in…
Descriptors: Mathematics Achievement, Academic Achievement, Audiences, National Competency Tests
Peer reviewed
Direct link
Rodeck, Elaine M.; Chin, Tzu-Yun; Davis, Susan L.; Plake, Barbara S. – Journal of Applied Testing Technology, 2008
This study examined the relationships between the evaluations obtained from standard setting panelists and changes in ratings between different rounds of a standard setting study that involved setting standards on different language versions of an exam. We investigated panelists' evaluations to determine if their perceptions of the standard…
Descriptors: Mathematics Tests, Standard Setting (Scoring), French, Evaluation Research
Peer reviewed
Direct link
Sireci, Stephen G. – Educational Researcher, 2007
Lissitz and Samuelsen (2007) propose a new framework for conceptualizing test validity that separates analysis of test properties from analysis of the construct measured. In response, the author of this article reviews fundamental characteristics of test validity, drawing largely from seminal writings as well as from the accepted standards. He…
Descriptors: Test Content, Test Validity, Guidelines, Test Items
Peer reviewed
Direct link
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Peer reviewed
Direct link
Sawaki, Yasuyo; Kim, Hae-Jin; Gentile, Claudia – Language Assessment Quarterly, 2009
In cognitive diagnosis a Q-matrix (Tatsuoka, 1983, 1990), which is an incidence matrix that defines the relationships between test items and constructs of interest, has great impact on the nature of performance feedback that can be provided to score users. The purpose of the present study was to identify meaningful skill coding categories that…
Descriptors: Feedback (Response), Test Items, Test Content, Identification
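The Q-matrix that Sawaki, Kim, and Gentile build on is simply a binary incidence matrix: rows are test items, columns are skills, and a 1 marks that an item requires that skill. A minimal sketch (the item and skill names here are illustrative, not from the study):

```python
# Q-matrix sketch: rows are test items, columns are skills.
# A 1 means the item is hypothesized to require that skill.
# Skill names are hypothetical examples, not the study's categories.
skills = ["vocabulary", "inference", "syntax"]
q_matrix = [
    [1, 0, 1],  # item 1 requires vocabulary and syntax
    [0, 1, 0],  # item 2 requires inference
    [1, 1, 0],  # item 3 requires vocabulary and inference
]

def items_measuring(skill_name):
    """Return 1-based indices of items that require the given skill."""
    col = skills.index(skill_name)
    return [i + 1 for i, row in enumerate(q_matrix) if row[col] == 1]

print(items_measuring("vocabulary"))  # -> [1, 3]
```

Diagnostic feedback then reports mastery per column (skill) rather than a single total score, which is why the choice of coding categories matters so much.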
Peer reviewed
Ryan, Gina J.; Nykamp, Diane – American Journal of Pharmaceutical Education, 2000
Surveyed department of pharmacy chairs at 77 schools of pharmacy about current use of cumulative exams. Found that more than 80 percent do not administer cumulative exams and that the primary rationale for such exams is to encourage students to review material prior to advancement; they are rarely used to determine advancement. (EV)
Descriptors: Pharmaceutical Education, School Surveys, Test Content, Tests
Peer reviewed
Direct link
Oakland, Thomas; Lane, Holly B. – International Journal of Testing, 2004
Issues pertaining to language and reading while developing and adapting tests are examined. Strengths and limitations associated with the use of readability formulas are discussed. Their use should be confined to paragraphs and longer passages, not items. Readability methods that consider both quantitative and qualitative variables and are…
Descriptors: Test Content, Readability, Readability Formulas, Test Construction
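A standard quantitative formula of the kind Oakland and Lane discuss is the Flesch Reading Ease index. The sketch below takes the word, sentence, and syllable counts as inputs rather than computing them, since automated syllable counting is unreliable on short spans, which is consistent with the authors' caution against applying such formulas to individual items:

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: higher scores indicate easier text.
    Standard constants: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    """
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a 100-word passage in 8 sentences with 140 syllables.
print(round(flesch_reading_ease(100, 8, 140), 1))  # -> 75.7
```

On a single short test item the sentence and syllable counts are too few for the ratios to be stable, which is one concrete reason to confine such formulas to paragraphs and longer passages.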
Peer reviewed
Direct link
Breithaupt, Krista; Hare, Donovan R. – Educational and Psychological Measurement, 2007
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
Descriptors: Testing Programs, Psychometrics, Certification, Accounting
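Breithaupt and Hare solve automated assembly with mixed-integer programming; the core idea (select a fixed-length form from an item pool subject to a content blueprint, maximizing measurement information) can be shown with a brute-force sketch over a tiny hypothetical pool. This is an illustration of the constraint structure, not their actual MIP model, and the item data are invented:

```python
from itertools import combinations

# Hypothetical item pool: (item_id, content_area, information_value).
pool = [
    (1, "audit", 0.9), (2, "audit", 0.7), (3, "tax", 0.8),
    (4, "tax", 0.6), (5, "law", 0.5), (6, "law", 0.4),
]
blueprint = {"audit": 1, "tax": 1, "law": 1}  # required items per content area
form_length = 3

def meets_blueprint(form):
    """Check that the form has exactly the blueprint's count per content area."""
    counts = {}
    for _, area, _ in form:
        counts[area] = counts.get(area, 0) + 1
    return counts == blueprint

# Enumerate candidate forms; keep the one maximizing total information.
best = max(
    (f for f in combinations(pool, form_length) if meets_blueprint(f)),
    key=lambda f: sum(info for _, _, info in f),
)
print(sorted(item_id for item_id, _, _ in best))  # -> [1, 3, 5]
```

Real pools are far too large to enumerate, which is exactly why the problem is cast as a MIP and handed to a solver; the solver also makes it practical to add the uniformity and item-exposure constraints the abstract mentions.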
Tanguma, Jesus – 2000
This paper addresses four steps in test construction specification: (1) the purpose of the test; (2) the content of the test; (3) the format of the test; and (4) the pool of items. If followed, such steps not only will assist the test constructor but will also enhance the students' learning. Within the "Content of the Test" section, two…
Descriptors: Test Construction, Test Content, Test Format, Test Items
Peer reviewed
Turner, Ronna C.; Carlson, Laurie – International Journal of Testing, 2003
Item-objective congruence as developed by R. Rovinelli and R. Hambleton is used in test development for evaluating content validity at the item development stage. Provides a mathematical extension to the Rovinelli and Hambleton index that is applicable for the multidimensional case. (SLD)
Descriptors: Content Validity, Test Construction, Test Content, Test Items
Peer reviewed
Direct link
Ferne, Tracy; Rupp, Andre A. – Language Assessment Quarterly, 2007
This article reviews research on differential item functioning (DIF) in language testing conducted primarily between 1990 and 2005 with an eye toward providing methodological guidelines for developing, conducting, and disseminating research in this area. The article contains a synthesis of 27 studies with respect to five essential sets of…
Descriptors: Test Bias, Evaluation Research, Testing, Language Tests