Publication Date

| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 7 |
| Since 2017 (last 10 years) | 14 |
| Since 2007 (last 20 years) | 30 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Test Format | 87 |
| Test Items | 87 |
| Test Construction | 53 |
| Computer Assisted Testing | 21 |
| Higher Education | 16 |
| Scoring | 13 |
| Test Content | 13 |
| Elementary Secondary Education | 12 |
| Foreign Countries | 12 |
| Test Validity | 12 |
| Testing | 12 |
Education Level

| Education Level | Count |
| --- | --- |
| Higher Education | 9 |
| Elementary Education | 7 |
| Elementary Secondary Education | 7 |
| Postsecondary Education | 7 |
| Secondary Education | 7 |
| Junior High Schools | 5 |
| Middle Schools | 5 |
| Grade 12 | 4 |
| Grade 4 | 4 |
| Grade 8 | 4 |
| High Schools | 4 |
Location

| Location | Count |
| --- | --- |
| Canada | 3 |
| Australia | 2 |
| Israel | 2 |
| Florida | 1 |
| Georgia | 1 |
| Japan | 1 |
| Kuwait | 1 |
| Nebraska | 1 |
| New Jersey | 1 |
| Sweden | 1 |
| Thailand | 1 |
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic model (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Khagendra Raj Dhakal; Richard Watson Todd; Natjiree Jaturapitakkul – rEFLections, 2024
Test input has often been taken as a given in test design practice. Nearly all guides for test designers provide extensive coverage of how to design test items but pay little attention to test input. This paper presents the case that test input plays a crucial role in designing tests of soft skills that have rarely been assessed in existing tests.…
Descriptors: Critical Thinking, Perspective Taking, Social Media, Computer Mediated Communication
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
What follows is a practical guide for establishing the validity of a survey for research purposes. The motivation for providing this guide is our observation that researchers who are not survey researchers per se, but want to use a survey method, lack a concise resource on validity. There is far more to know about surveys and survey…
Descriptors: Surveys, Test Validity, Test Construction, Test Items
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items in a formatted form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
NWEA, 2022
This technical report documents the processes and procedures employed by NWEA® to build and support the English MAP® Reading Fluency™ assessments administered during the 2020-2021 school year. It is written for measurement professionals and administrators to help evaluate the quality of MAP Reading Fluency. The seven sections of this report: (1)…
Descriptors: Achievement Tests, Reading Tests, Reading Achievement, Reading Fluency
Item Order and Speededness: Implications for Test Fairness in Higher Educational High-Stakes Testing
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
National Assessment Governing Board, 2019
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The NAEP assessment in mathematics has two components that differ in purpose. One assessment measures long-term trends in achievement among 9-, 13-, and 17-year-old students by using the same basic design each time.…
Descriptors: National Competency Tests, Mathematics Achievement, Grade 4, Grade 8
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and is thus one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
Harlacher, Jason – Regional Educational Laboratory Central, 2016
Educators have many decisions to make, and it is important that they have the right data to inform those decisions and access to questionnaires that can gather those data. This guide, developed by REL Central and based on work done through separate projects with the Wyoming Office of Public Instruction and the Nebraska Department of Education,…
Descriptors: Questionnaires, Test Construction, Student Surveys, Teacher Surveys
Haladyna, Thomas M. – IDEA Center, Inc., 2018
Writing multiple-choice test items to measure student learning in higher education is a challenge. Based on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Higher Education
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed.; von Davier, Matthias, Ed. – International Association for the Evaluation of Educational Achievement, 2021
TIMSS (Trends in International Mathematics and Science Study) is a long-standing international assessment of mathematics and science at the fourth and eighth grades that has been collecting trend data every four years since 1995. About 70 countries use TIMSS trend data for monitoring the effectiveness of their education systems in a global…
Descriptors: Achievement Tests, International Assessment, Science Achievement, Mathematics Achievement
Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a group of states working together to develop a modern assessment that replaces previous state standardized tests. It provides better information for teachers and parents to identify where a student needs help, or is excelling, so they are able to enhance instruction to…
Descriptors: Literacy, Language Arts, Scoring Formulas, Scoring