Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Construct Validity | 7 |
| Difficulty Level | 7 |
| Test Format | 7 |
| Test Items | 7 |
| Test Construction | 4 |
| Comparative Analysis | 3 |
| Multiple Choice Tests | 3 |
| Computer Assisted Testing | 2 |
| Correlation | 2 |
| Higher Education | 2 |
| Item Analysis | 2 |
Author
| Author | Count |
| --- | --- |
| Allen, Nancy L. | 1 |
| Borowski, Andreas | 1 |
| Chapman, Mark | 1 |
| Fischer, Hans E. | 1 |
| Freedle, Roy | 1 |
| Gess-Newsome, Julie | 1 |
| Katz, Irvin R. | 1 |
| Kiely, Gerard L. | 1 |
| Kim, Ahyoung Alicia | 1 |
| Kirschner, Sophie | 1 |
| Kostin, Irene | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 6 |
| Journal Articles | 2 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Location
| Location | Count |
| --- | --- |
| Germany | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of Educational Progress | 1 |
| Test of English as a Foreign Language | 1 |
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs) do. The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Kirschner, Sophie; Borowski, Andreas; Fischer, Hans E.; Gess-Newsome, Julie; von Aufschnaiter, Claudia – International Journal of Science Education, 2016
Teachers' professional knowledge is assumed to be a key variable for effective teaching. Because teacher education aims to enhance the professional knowledge of current and future teachers, this knowledge should be described and assessed. Nevertheless, only a limited number of studies quantitatively measure physics teachers' professional…
Descriptors: Evaluation Methods, Tests, Test Format, Science Instruction
Freedle, Roy; Kostin, Irene – 1993
Prediction of the difficulty (equated delta) of a large sample (n=213) of reading comprehension items from the Test of English as a Foreign Language (TOEFL) was studied using main idea, inference, and supporting statement items. A related purpose was to examine whether text and text-related variables play a significant role in predicting item…
Descriptors: Construct Validity, Difficulty Level, Multiple Choice Tests, Prediction
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Wainer, Howard; Kiely, Gerard L. – 1986
Recent experience with the Computerized Adaptive Test (CAT) has raised a number of concerns about its practical applications. These concerns principally involve having the computer construct the test from a precalibrated item pool, substituting statistical characteristics for the test developer's skills. Problems with…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Construct Validity
Ward, William C.; And Others – 1986
The keylist format (rather than the conventional multiple-choice format) for item presentation provides a machine-scorable surrogate for a truly free-response test. In this format, the examinee is required to think of an answer, look it up in a long ordered list, and enter its number on an answer sheet. The introduction of keylist items into…
Descriptors: Analogy, Aptitude Tests, Construct Validity, Correlation
Park, Chung; Allen, Nancy L. – 1994
This study is part of continuing research into the meaning of future National Assessment of Educational Progress (NAEP) science scales. In this study, the test framework, as examined by NAEP's consensus process, and attributes of the items, identified by science experts, cognitive scientists, and measurement specialists, are analyzed. Preliminary…
Descriptors: Communication (Thought Transfer), Comparative Analysis, Construct Validity, Content Validity