Showing 91 to 105 of 956 results
Kim, Dong-In; Julian, Marc; Hermann, Pam – Online Submission, 2022
In test equating, one critical equating property is group invariance, which requires that the equating function used to convert performance on each alternate form to the reporting scale be the same across subgroups. To mitigate the impact of disrupted learning on the item parameters during the COVID-19 pandemic, a…
Descriptors: COVID-19, Pandemics, Test Format, Equated Scores
Peer reviewed
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, the methods of Lord's chi-square and Raju's unsigned area with the 3PL model were used, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
Peer reviewed
David Bell; Vikki O'Neill; Vivienne Crawford – Practitioner Research in Higher Education, 2023
We compared the influence of an open-book, extended-duration format versus a closed-book, time-limited format on the reliability and validity of written assessments of pharmacology learning outcomes within our medical and dental courses. Our dental cohort undertakes a mid-year test (30 free-response short-answer questions, SAQ) and an end-of-year paper (4 SAQ,…
Descriptors: Undergraduate Students, Pharmacology, Pharmaceutical Education, Test Format
Peer reviewed
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Peer reviewed
Sharareh Sadat Sarsarabi; Zeinab Sazegar – International Journal of Language Testing, 2023
The stem of a multiple-choice question can be written using two types of sentences: interruptive (periodic) and cumulative (loose). This study deals with different kinds of stems in designing multiple-choice (MC) items. To fill the existing gap in the literature, two groups of student teachers taking general English courses…
Descriptors: Language Tests, Test Format, Multiple Choice Tests, Student Placement
Peer reviewed
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Yihua Shen – ProQuest LLC, 2024
This research examined the Chinese National College Entrance Examination (NCEE) in mathematics before and after the Great Proletarian Cultural Revolution, specifically covering the periods 1952-1965 and 1977-1984. The central focus was on the organization, structure, and content of the examinations, as well as their influence on and interaction…
Descriptors: Mathematics Achievement, Mathematics Tests, Foreign Countries, College Entrance Examinations
Peer reviewed
Scanlon, Paul J. – Field Methods, 2019
Web, or online, probing has the potential to supplement existing questionnaire design processes by providing structured cognitive data on a wider sample than typical qualitative-only question evaluation methods can achieve. One of the practical impediments to the further integration of web probing is the concern of survey managers about how the…
Descriptors: Online Surveys, Questionnaires, Response Style (Tests), Test Items
Peer reviewed
Merzougui, Wassim H.; Myers, Matthew A.; Hall, Samuel; Elmansouri, Ahmad; Parker, Rob; Robson, Alistair D.; Kurn, Octavia; Parrott, Rachel; Geoghegan, Kate; Harrison, Charlotte H.; Anbu, Deepika; Dean, Oliver; Border, Scott – Anatomical Sciences Education, 2021
Methods of assessment in anatomy vary across medical schools in the United Kingdom (UK) and beyond; common methods include written, spotter, and oral assessment. However, there is limited research evaluating these methods with regard to student performance and perception. The National Undergraduate Neuroanatomy Competition (NUNC) is held annually…
Descriptors: Multiple Choice Tests, Test Format, Medical Students, Foreign Countries
Peer reviewed
Liao, Ray J. T. – Language Testing, 2023
Among the variety of selected-response formats used in L2 reading assessment, multiple-choice (MC) is the most commonly adopted, primarily due to its efficiency and objectivity. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners' L2 reading…
Descriptors: Reading Tests, Language Tests, Second Language Learning, Second Language Instruction
Peer reviewed
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
Computerized adaptive tests (CAT) apply an adaptive process in which items are tailored to individuals' ability scores. Multidimensional CAT (MCAT) designs differ in the item selection, ability estimation, and termination methods used. This study aims to investigate the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency
Peer reviewed
Delican, Burak – International Journal of Curriculum and Instruction, 2022
In this research, the questions in the Turkish Course (2, 3, 4) Worksheets were examined in terms of various classification systems. To this end, the questions in the worksheets were evaluated with the document-material analysis technique, in line with a qualitative research design. During the research process, Turkish Course…
Descriptors: Worksheets, Elementary School Students, Turkish, Classification
Peer reviewed
Wang, Lin – ETS Research Report Series, 2019
Rearranging the response options across different versions of a multiple-choice test can be an effective strategy against cheating. This study investigated whether rearranging response options would affect item performance and test score comparability. A study test was assembled as the base version from which 3 variant versions were…
Descriptors: Multiple Choice Tests, Test Items, Test Format, Scores