van der Linden, Wim J. – Journal of Educational Measurement, 2011
A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
Descriptors: Test Format, Reaction Time, Test Construction
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
National Assessment Governing Board, 2017
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. They inform citizens about the nature of students' comprehension of the…
Descriptors: Mathematics Tests, Mathematics Achievement, Mathematics Instruction, Grade 4
GED Testing Service, 2016
This guide is designed to help adult educators and administrators better understand the content of the GED® test. This guide is tailored to each test subject and highlights the test's item types, assessment targets, and guidelines for how items will be scored. This 2016 edition has been updated to include the most recent information about the…
Descriptors: Guidelines, Teaching Guides, High School Equivalency Programs, Test Items
Young, Arthur; Shawl, Stephen J. – Astronomy Education Review, 2013
Professors who teach introductory astronomy to students not majoring in science desire them to comprehend the concepts and theories that form the basis of the science. They are usually less concerned about the myriad of detailed facts and information that accompanies the science. As such, professors prefer to test the students for such…
Descriptors: Multiple Choice Tests, Classification, Astronomy, Introductory Courses
National Assessment Governing Board, 2014
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. They inform citizens about the nature of students' comprehension of the…
Descriptors: National Competency Tests, Mathematics Achievement, Mathematics Skills, Grade 4
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
Salend, Spencer J. – Educational Leadership, 2011
Creating a fair, reliable, teacher-made test is a challenge. Every year poorly designed tests fail to accurately measure many students' learning--and negatively affect their academic futures. Salend, a well-known writer on assessment for at-risk students who consults with schools on assessment procedures, offers guidelines for creating tests that…
Descriptors: At Risk Students, Test Construction, Student Evaluation, Evaluation Methods
Holme, Thomas; Murphy, Kristen – Journal of Chemical Education, 2011
In 2005, the ACS Examinations Institute released an exam for first-term general chemistry in which items are intentionally paired with one conceptual and one traditional item. A second-term, paired-questions exam was released in 2007. This paper presents an empirical study of student performances on these two exams based on national samples of…
Descriptors: Chemistry, Science Tests, College Science, Undergraduate Students
van der Westhuizen, Duan – Commonwealth of Learning, 2016
This work starts with a brief overview of education in developing countries, to contextualise the use of the guidelines. Although this document is intended to be a practical tool, it is necessary to include some theoretical analysis of the concept of online assessment. This is given in Sections 3 and 4, together with the identification and…
Descriptors: Guidelines, Student Evaluation, Computer Assisted Testing, Evaluation Methods
Kolen, Michael J.; Lee, Won-Chan – Educational Measurement: Issues and Practice, 2011
This paper illustrates that the psychometric properties of scores and scales that are used with mixed-format educational tests can impact the use and interpretation of the scores that are reported to examinees. Psychometric properties that include reliability and conditional standard errors of measurement are considered in this paper. The focus is…
Descriptors: Test Use, Test Format, Error of Measurement, Raw Scores
Carr, Nathan T.; Xi, Xiaoming – Language Assessment Quarterly, 2010
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Descriptors: Scoring, Automation, Reading Tests, Test Format
Al-Amri, Majid N. – English Language Teaching, 2010
This paper introduces and discusses issues related to the challenge of obtaining more valid and reliable assessment and positive backwash of direct spoken English language performance of students in real-life situations. For this purpose, the paper is divided into four sections. The first section is the introduction to the article. The second part…
Descriptors: English (Second Language), Second Language Instruction, Second Language Learning, Language Tests
Maxwell, Alexander – History Teacher, 2010
The in-class essay is not an effective means to assess student ability in a history exam. History teachers should instead ask short-answer questions in order to test what the American Historical Association calls "objective" knowledge: the ability to identify concepts, historical actors, organizations, events, and so forth. Such questions,…
Descriptors: History Instruction, Student Evaluation, Essay Tests, Questioning Techniques
Olinghouse, Natalie G.; Colwell, Ryan P. – Intervention in School and Clinic, 2013
This article provides recommendations for teachers to better prepare 3rd through 12th grade students with learning disabilities for large-scale writing assessments. The variation across large-scale writing assessments and the multiple needs of struggling writers indicate the need for test preparation to be embedded within a comprehensive,…
Descriptors: Learning Disabilities, Elementary Secondary Education, Writing Evaluation, Test Wiseness