Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Khagendra Raj Dhakal; Richard Watson Todd; Natjiree Jaturapitakkul – rEFLections, 2024
Test input has often been taken as a given in test design practice. Nearly all guides for test designers provide extensive coverage of how to design test items but pay little attention to test input. This paper presents the case that test input plays a crucial role in designing tests of soft skills that have rarely been assessed in existing tests.…
Descriptors: Critical Thinking, Perspective Taking, Social Media, Computer Mediated Communication
Read, John – Language Testing, 2023
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Format
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
What follows is a practical guide for establishing the validity of a survey for research purposes. The motivation for providing this guide is our observation that researchers, not necessarily being survey researchers per se, but wanting to use a survey method, lack a concise resource on validity. There is far more to know about surveys and survey…
Descriptors: Surveys, Test Validity, Test Construction, Test Items
Davis-Berg, Elizabeth C.; Minbiole, Julie – School Science Review, 2020
The completion rates were compared for long-form questions where a large blank answer space is provided and for long-form questions where the answer space has bullet-points prompts corresponding to the parts of the question. It was found that students were more likely to complete a question when bullet points were provided in the answer space.…
Descriptors: Test Format, Test Construction, Academic Achievement, Educational Testing
NWEA, 2022
This technical report documents the processes and procedures employed by NWEA® to build and support the English MAP® Reading Fluency™ assessments administered during the 2020-2021 school year. It is written for measurement professionals and administrators to help evaluate the quality of MAP Reading Fluency. The seven sections of this report: (1)…
Descriptors: Achievement Tests, Reading Tests, Reading Achievement, Reading Fluency
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Nirode, Wayne – Mathematics Teacher, 2019
This article describes how using discussion questions, rather than simply going over the test, can address student misconceptions and promote student learning.
Descriptors: Mathematics Instruction, Misconceptions, Mathematics Teachers, Grading
Crowther, Gregory J.; Wiggins, Benjamin L.; Jenkins, Lekelia D. – HAPS Educator, 2020
Many undergraduate biology instructors incorporate active learning exercises into their lessons while continuing to assess students with traditional exams. To better align practice and exams, we present an approach to question-asking that emphasizes templates instead of specific questions. Students and instructors can use these Test Question…
Descriptors: Science Tests, Active Learning, Biology, Undergraduate Students
National Assessment Governing Board, 2019
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The NAEP assessment in mathematics has two components that differ in purpose. One assessment measures long-term trends in achievement among 9-, 13-, and 17-year-old students by using the same basic design each time.…
Descriptors: National Competency Tests, Mathematics Achievement, Grade 4, Grade 8
Harlacher, Jason – Regional Educational Laboratory Central, 2016
Educators have many decisions to make, and it is important that they have the right data to inform those decisions, as well as access to questionnaires that can gather those data. This guide, developed by REL Central and based on work done through separate projects with the Wyoming Office of Public Instruction and the Nebraska Department of Education,…
Descriptors: Questionnaires, Test Construction, Student Surveys, Teacher Surveys
Haladyna, Thomas M. – IDEA Center, Inc., 2018
Writing multiple-choice test items to measure student learning in higher education is a challenge. Based on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Higher Education
Boone, William J. – CBE - Life Sciences Education, 2016
This essay describes Rasch analysis psychometric techniques and how such techniques can be used by life sciences education researchers to guide the development and use of surveys and tests. Specifically, Rasch techniques can be used to document and evaluate the measurement functioning of such instruments. Rasch techniques also allow researchers to…
Descriptors: Item Response Theory, Psychometrics, Science Education, Educational Research
Fink, Arlene – SAGE Publications Ltd (CA), 2016
Packed with new topics that reflect today's challenges, the Sixth Edition of the bestselling "How to Conduct Surveys" guides readers through the process of developing their own rigorous surveys and evaluating the credibility and transparency of surveys created by others. Offering practical, step-by-step advice and written in the same…
Descriptors: Surveys, Guides, Research Methodology, Test Construction