| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 18 |
| Since 2022 (last 5 years) | 120 |
| Since 2017 (last 10 years) | 262 |
| Since 2007 (last 20 years) | 435 |
| Descriptor | Results |
| --- | --- |
| Test Format | 956 |
| Test Items | 956 |
| Test Construction | 363 |
| Multiple Choice Tests | 260 |
| Foreign Countries | 227 |
| Difficulty Level | 199 |
| Higher Education | 179 |
| Computer Assisted Testing | 160 |
| Item Response Theory | 151 |
| Item Analysis | 149 |
| Scores | 146 |
| Audience | Results |
| --- | --- |
| Practitioners | 62 |
| Teachers | 47 |
| Researchers | 32 |
| Students | 15 |
| Administrators | 13 |
| Parents | 6 |
| Policymakers | 5 |
| Community | 1 |
| Counselors | 1 |
| Location | Results |
| --- | --- |
| Turkey | 27 |
| Canada | 15 |
| Germany | 15 |
| Australia | 13 |
| Israel | 13 |
| Japan | 12 |
| Netherlands | 10 |
| United Kingdom | 10 |
| United States | 9 |
| Arizona | 6 |
| Iran | 6 |
| Laws, Policies, & Programs | Results |
| --- | --- |
| Individuals with Disabilities… | 2 |
| No Child Left Behind Act 2001 | 2 |
| Elementary and Secondary… | 1 |
| Head Start | 1 |
| Job Training Partnership Act… | 1 |
| Perkins Loan Program | 1 |
Lesnov, Roman Olegovich – International Journal of Computer-Assisted Language Learning and Teaching, 2018
This article compares second language test-takers' performance on an academic listening test in an audio-only mode versus an audio-video mode. A new method of classifying video-based visuals was developed and piloted, which used L2 expert opinions to place the video on a continuum from being content-deficient (not helpful for answering…
Descriptors: Second Language Learning, Second Language Instruction, Video Technology, Classification
Levi-Keren, Michal – Cogent Education, 2016
This study explains the mathematical difficulties of students who immigrated from the Former Soviet Union (FSU) vis-à-vis Israeli students by identifying the existing bias factors in achievement tests. These factors are irrelevant to the mathematical knowledge being measured and therefore threaten the test results. The bias factors were identified…
Descriptors: Mathematics Achievement, Mathematics Tests, Immigrants, Interviews
Kremmel, Benjamin; Schmitt, Norbert – Language Assessment Quarterly, 2016
The scores from vocabulary size tests have typically been interpreted as demonstrating that the target words are "known" or "learned." But "knowing" a word should entail the ability to use it in real language communication in one or more of the four skills. It should also entail deeper knowledge, such as knowing the…
Descriptors: Vocabulary Development, Language Tests, Scores, Test Items
Christ, Tanya; Chiu, Ming Ming; Currie, Ashelin; Cipielewski, James – Reading Psychology, 2014
This study tested how 53 kindergarteners' expressions of depth of vocabulary knowledge and use in novel contexts were related to in-context and out-of-context test formats for 16 target words. Applying multilevel, multi-categorical logit models to all 1,696 test item responses, the authors found that kindergarteners were more likely to express deep…
Descriptors: Correlation, Test Format, Kindergarten, Vocabulary Development
McIntyre, Joe; Gehlbach, Hunter – Society for Research on Educational Effectiveness, 2014
Of all the approaches to collecting data in the social sciences, the administration of questionnaires to respondents is among the most prevalent. Despite their popularity, there is broad consensus among survey design experts that using these items introduces excessive error into respondents' ratings. The authors attempt to answer the following…
Descriptors: Questionnaires, Surveys, Likert Scales, Test Items
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to the wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
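
The reverse recoding mentioned in the Wang, Chen, and Jin abstract is simple arithmetic; the study's point is that it may not fully remove wording-related variance. A minimal sketch of the recoding step in Python (the 5-point scale range and the choice of negatively worded items are assumptions for illustration, not details from the study):

```python
import numpy as np

# Hypothetical 5-point Likert responses: rows are respondents, columns are items.
responses = np.array([
    [5, 4, 1, 2],
    [4, 4, 2, 1],
    [2, 3, 4, 5],
])

# Assume items 2 and 3 (0-indexed) are negatively worded.
# Reverse-recode with x -> (min + max) - x on a 1..5 scale.
negative_items = [2, 3]
recoded = responses.copy()
recoded[:, negative_items] = (1 + 5) - recoded[:, negative_items]
print(recoded)
```

A bi-factor IRT model goes further: every item loads on the substantive trait, while negatively worded items also load on an orthogonal wording factor, so the method variance is modeled rather than recoded away.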
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N. – Applied Measurement in Education, 2013
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included PARSCALE's G²,…
Descriptors: Test Format, Test Items, Item Analysis, Goodness of Fit
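
The G² statistic named in the Chon, Lee, and Ansley abstract is a likelihood-ratio comparison of observed and model-expected response counts across ability groups. A rough sketch for a single dichotomous 2PL item (this is the textbook form of the statistic, not PARSCALE's exact implementation; the grouping scheme and known item parameters are simplifying assumptions):

```python
import numpy as np

def g_squared(theta, responses, a, b, n_groups=10):
    """Likelihood-ratio G^2 fit statistic for one dichotomous 2PL item."""
    order = np.argsort(theta)                  # sort examinees by ability estimate
    groups = np.array_split(order, n_groups)   # roughly equal-size ability strata
    g2 = 0.0
    for g in groups:
        n = len(g)
        observed = responses[g].sum()                   # observed correct count
        p = 1.0 / (1.0 + np.exp(-a * (theta[g] - b)))   # 2PL success probabilities
        expected = p.sum()                              # model-expected correct count
        # Accumulate 2 * O * ln(O / E) over the correct and incorrect cells.
        for o, e in ((observed, expected), (n - observed, n - expected)):
            if o > 0:
                g2 += 2.0 * o * np.log(o / e)
    return g2  # referred to a chi-square distribution to judge misfit
```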
Shilo, Gila – Educational Research Quarterly, 2015
The purpose of the study was to examine the quality of open-ended test questions administered to high school and college students. One thousand five hundred examination questions from various fields of study were examined using criteria based on writing centers' directions and guidelines. The 273 questions that did not fulfill the criteria were analyzed…
Descriptors: Questioning Techniques, Questionnaires, Test Construction, High School Students
Lakin, Joni M. – Educational Assessment, 2014
The purpose of test directions is to familiarize examinees with a test so that they respond to items in the manner intended. However, changes in educational measurement, as well as in the U.S. student population, present new challenges to test directions and increase the impact that differential familiarity could have on the validity of test score…
Descriptors: Test Content, Test Construction, Best Practices, Familiarity
Edwards, Michael C.; Flora, David B.; Thissen, David – Applied Measurement in Education, 2012
This article describes a computerized adaptive test (CAT) based on the uniform item exposure multi-form structure (uMFS). The uMFS is a specialization of the multi-form structure (MFS) idea described by Armstrong, Jones, Berliner, and Pashley (1998). In an MFS CAT, the examinee first responds to a small fixed block of items. The items comprising…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Test Items
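
In the design Edwards, Flora, and Thissen describe, a fixed first block replaces the usual opening adaptive steps; after that, CAT item selection typically maximizes Fisher information at the current ability estimate. A generic sketch of that selection rule for 2PL items (this is the standard CAT rule, not the uMFS-specific routing from the article; the item pool below is hypothetical):

```python
import numpy as np

def pick_next_item(theta_hat, a, b, administered):
    """Return the index of the unused 2PL item most informative at theta_hat."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))  # success probability per item
    info = a**2 * p * (1.0 - p)                     # 2PL Fisher information
    info[list(administered)] = -np.inf              # exclude items already given
    return int(np.argmax(info))

# Example with a small hypothetical pool.
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.3, 1.1])
print(pick_next_item(theta_hat=0.2, a=a, b=b, administered={0, 2}))
```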
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, and its importance has grown under the accountability systems mandated by Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
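
The equating the Keller and Keller abstract refers to makes scores from different forms comparable; the simplest classical version is linear equating, which matches the mean and standard deviation of the two score distributions. A minimal single-group sketch (an illustration of the general idea with made-up data, not the method studied in the article):

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Map Form X raw scores onto the Form Y scale by matching mean and SD."""
    mx, sx = np.mean(x_scores), np.std(x_scores)
    my, sy = np.mean(y_scores), np.std(y_scores)
    return lambda x: my + (sy / sx) * (np.asarray(x) - mx)

# Hypothetical score samples from two forms of the same test.
rng = np.random.default_rng(0)
form_x = rng.normal(28, 5, size=500)
form_y = rng.normal(30, 6, size=500)
equate = linear_equate(form_x, form_y)
print(equate(28.0))  # a raw 28 on Form X lands near the Form Y mean
```

Operational equating designs (and the IRT-based methods suggested by the descriptors) are considerably more involved, but they serve the same purpose: a fixed mapping between form-specific score scales.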
Schwichow, Martin; Christoph, Simon; Boone, William J.; Härtig, Hendrik – International Journal of Science Education, 2016
The so-called control-of-variables strategy (CVS) incorporates the important scientific reasoning skills of designing controlled experiments and interpreting experimental outcomes. As CVS is a prominent component of science standards, appropriate assessment instruments are required to measure these scientific reasoning skills and to evaluate the…
Descriptors: Thinking Skills, Science Instruction, Science Experiments, Science Tests
Wang, Xinrui – ProQuest LLC, 2013
Computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT) and has been increasingly adopted in large-scale assessments. Current research and practice focus only on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…
Descriptors: Computer Assisted Testing, Adaptive Testing, Diagnostic Tests, Comparative Analysis
GED Testing Service, 2016
This guide is designed to help adult educators and administrators better understand the content of the GED® test. This guide is tailored to each test subject and highlights the test's item types, assessment targets, and guidelines for how items will be scored. This 2016 edition has been updated to include the most recent information about the…
Descriptors: Guidelines, Teaching Guides, High School Equivalency Programs, Test Items
