Showing 61 to 75 of 426 results
Peer reviewed
PDF on ERIC: Download full text
Ait Bentaleb, Khalid; Dachraoui, Saddik; Hassouni, Taoufik; Alibrahmi, El Mehdi; Chakir, Elmahjoub; Belboukhari, Aimad – European Journal of Educational Research, 2022
We developed a Quantum Mechanics Conceptual Understanding Survey (QMCUS) in this study. The survey was conducted using a quantitative methodology. A multiple-choice survey of 35 questions was administered to 338 undergraduate students. Three experienced quantum mechanics instructors examined the validity of the survey. The reliability of our…
Descriptors: Scientific Concepts, Concept Formation, Physics, Undergraduate Students
Peer reviewed
Direct link
Roman O. Lesnov – International Journal of Listening, 2024
Whether visual information belongs in second language (L2) listening tests has long been a subject for scholarly debate, with L2 learners' performance on and perceptions of video-based tests being the primary sources of evidence. The research into L2 teachers' perceptions, however, is scarce, as is the research into stakeholders' views of content…
Descriptors: Listening Comprehension Tests, Language Tests, Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Atalmis, Erkan Hasan; Kingston, Neal Martin – SAGE Open, 2018
This study explored the impact of homogeneity of answer choices on item difficulty and discrimination. Twenty-two matched pairs of elementary and secondary mathematics items were administered to randomly equivalent samples of students. Each item pair comparison was treated as a separate study with the set of effect sizes analyzed using…
Descriptors: Test Items, Difficulty Level, Multiple Choice Tests, Mathematics Tests
Peer reviewed
PDF on ERIC: Download full text
Sunbul, Onder; Yormaz, Seha – International Journal of Evaluation and Research in Education, 2018
In this study, Type I error and power rates of the omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test-level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Descriptors: Difficulty Level, Cheating, Duplication, Test Length
Peer reviewed
PDF on ERIC: Download full text
Sunbul, Onder; Yormaz, Seha – Eurasian Journal of Educational Research, 2018
Purpose: Several studies can be found in the literature that investigate the performance of ω under various conditions. However, no study on the effects of item difficulty, item discrimination, and ability restrictions on the performance of ω could be found. The current study aims to investigate the performance of ω for the conditions given below.…
Descriptors: Test Items, Difficulty Level, Ability, Cheating
Peer reviewed
PDF on ERIC: Download full text
Oguguo, Basil; Lotobi, Regina Awele – European Journal of Educational Sciences, 2019
This study determined the psychometric properties of the examination items in 2011 Basic Education Certificate Examination for Basic Science. The design adopted was survey research design. The instrument for data collection was the 2011 Delta State Basic Education Certificate Examination (BECE) in Basic Science Multiple Choice Test Items. The IRT…
Descriptors: Foreign Countries, Science Tests, Item Response Theory, Psychometrics
Peer reviewed
PDF on ERIC: Download full text
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2019
The "Next Generation Science Standards" calls for new assessments that measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments utilize a combination of item formats including constructed-response and multiple-choice. In this study, students were randomly assigned…
Descriptors: Science Tests, Multiple Choice Tests, Test Format, Test Items
Peer reviewed
PDF on ERIC: Download full text
Mahroof, Ameema; Saeed, Muhammad – Bulletin of Education and Research, 2021
This small-scale study aims to analyze the question papers of the Board of Intermediate and Secondary Education in the subject of computer science with reference to item analysis and Bloom's taxonomy. Data were collected from 100 students of Grades 9 and 10 from schools in Lahore city using a convenience sampling technique. Data collected on the…
Descriptors: Foreign Countries, Secondary Education, Computer Science Education, Item Analysis
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature in instruction because they can be answered more rapidly than open-response questions and they are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
PDF on ERIC: Download full text
Guven Demir, Elif; Öksuz, Yücel – Participatory Educational Research, 2022
This research aimed to investigate animation-based achievement tests according to the item format, psychometric features, students' performance, and gender. The study sample consisted of 52 fifth-grade students in Samsun/Turkey in 2017-2018. Measures of the research were open-ended (OE), animation-based open-ended (AOE), multiple-choice (MC), and…
Descriptors: Animation, Achievement Tests, Test Items, Psychometrics
Peer reviewed
Direct link
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large-sized samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Peer reviewed
PDF on ERIC: Download full text
Rafi, Ibnu; Retnawati, Heri; Apino, Ezi; Hadiana, Deni; Lydiati, Ida; Rosyada, Munaya Nikma – Pedagogical Research, 2023
This study describes the characteristics of the test and its items used in the national-standardized school examination by applying classical test theory and focusing on item difficulty, item discrimination, test reliability, and distractor analysis. We analyzed response data of 191 12th graders from one of the public senior high schools in…
Descriptors: Foreign Countries, National Competency Tests, Standardized Tests, Mathematics Tests
Peer reviewed
PDF on ERIC: Download full text
Rintayati, Peduk; Lukitasari, Hafizhah; Syawaludin, Ahmad – International Journal of Instruction, 2021
Assessment of higher-order thinking skills (HOTS) provides few opportunities for students to develop more in-depth knowledge, serving students' ability to identify and solve their problems. One type of instrument for measuring HOTS objectively is the two-tier multiple-choice test (TTMCT). This research is part of the research and development…
Descriptors: Foreign Countries, Elementary School Students, Thinking Skills, Multiple Choice Tests
Peer reviewed
PDF on ERIC: Download full text
Klender, Sara; Ferriby, Andrew; Notebaert, Andrew – HAPS Educator, 2019
Multiple-choice questions (MCQs) are commonly used on histology examinations. There are many guidelines for how to properly write MCQs, and many of them recommend avoiding negatively worded stems. The current study aims to investigate differences between positively and negatively worded stems in a medical histology course by comparing the item…
Descriptors: Multiple Choice Tests, Science Tests, Biology, Test Construction
Peer reviewed
Direct link
Becker, Anthony; Nekrasova-Beker, Tatiana – Educational Assessment, 2018
While previous research has identified numerous factors that contribute to item difficulty, studies involving large-scale reading tests have provided mixed results. This study examined five selected-response item types used to measure reading comprehension in the Pearson Test of English Academic: a) multiple-choice (choose one answer), b)…
Descriptors: Reading Comprehension, Test Items, Reading Tests, Test Format