Showing 1 to 15 of 47 results
Peer reviewed
Tasçi, Güntay – Science Insights Education Frontiers, 2024
The present study aimed to develop and validate a protein concept inventory (PCI) consisting of 25 multiple-choice (MC) questions to assess students' understanding of protein, a fundamental concept across the biology disciplines. The development process of the PCI involved a literature review to identify protein-related content,…
Descriptors: Science Instruction, Science Tests, Multiple Choice Tests, Biology
Peer reviewed
Soeharto, Soeharto – Journal of Turkish Science Education, 2021
This study aims to evaluate the psychometric properties of the developed diagnostic assessment test and to identify student misconceptions in science across school grades. A random sample of 153 students was drawn from Grades 10 to 12 in senior high schools. The 32 items of the two-tier multiple-choice diagnostic test were…
Descriptors: Grade 12, High School Students, Scientific Attitudes, Misconceptions
Peer reviewed
Coniam, David; Lee, Tony; Milanovic, Michael; Pike, Nigel; Zhao, Wen – Language Education & Assessment, 2022
The calibration of test materials generally involves the interaction between empirical analysis and expert judgement. This paper explores the extent to which scale familiarity might affect expert judgement as a component of test validation in the calibration process. It forms part of a larger study that investigates the alignment of the…
Descriptors: Specialists, Language Tests, Test Validity, College Faculty
Peer reviewed
Frith, Vera; Prince, Robert N. – Numeracy, 2018
The National Benchmark Test Project (NBTP) was commissioned by Higher Education South Africa in 2005 to assess the academic proficiency of prospective students. The competencies assessed include quantitative literacy using the NBTP QL test. This instrument is a criterion-referenced multiple-choice test developed collaboratively by South African…
Descriptors: National Competency Tests, Numeracy, Mathematics Tests, Foreign Countries
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
Peer reviewed
Slepkov, Aaron D.; Shiell, Ralph C. – Physical Review Special Topics - Physics Education Research, 2014
Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of the time, cost, and scoring reliability constraints associated with this format, CR questions are being increasingly replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently developed…
Descriptors: Science Tests, Physics, Responses, Multiple Choice Tests
Peer reviewed
Frey, Bruce B.; Ellis, James D.; Bulgreen, Janis A.; Hare, Jana Craig; Ault, Marilyn – Electronic Journal of Science Education, 2015
"Scientific argumentation," defined as the ability to develop and analyze scientific claims, support claims with evidence from investigations of the natural world, and explain and evaluate the reasoning that connects the evidence to the claim, is a critical component of current science standards and is consistent with "Common Core…
Descriptors: Test Construction, Science Tests, Persuasive Discourse, Science Process Skills
Zahner, Doris; Steedle, Jeffrey T. – Council for Aid to Education, 2014
The Organisation for Economic Co-operation and Development (OECD) launched the Assessment of Higher Education Learning Outcomes (AHELO) in an effort to measure learning in international postsecondary education. This paper presents a study of scoring equivalence across nine countries for two translated and adapted performance tasks. Results reveal…
Descriptors: International Assessment, Performance Based Assessment, Postsecondary Education, Scoring
Peer reviewed
Chung, Siuman; Espin, Christine A. – Assessment for Effective Intervention, 2013
The reliability and validity of three curriculum-based measures as indicators of learning English as a foreign language were examined. Participants were 260 Dutch students in Grades 8 and 9 who were receiving English-language instruction. Predictor measures were maze-selection, Dutch-to-English word translation, and English-to-Dutch word…
Descriptors: Curriculum Based Assessment, Progress Monitoring, Secondary School Students, Second Language Learning
Reshetar, Rosemary; Melican, Gerald J. – College Board, 2010
This paper discusses issues related to the design and psychometric work for mixed-format tests, that is, tests containing both multiple-choice (MC) and constructed-response (CR) items. The issues of validity, fairness, reliability, and score consistency can be addressed, but for mixed-format tests there are many decisions to be made and no examination or…
Descriptors: Psychometrics, Test Construction, Multiple Choice Tests, Test Items
Collet, LeVerne S. – 1970
A critical review of systems for scoring multiple-choice tests is presented, and the superiority of a system based upon the elimination method over one based upon the best-answer mode is hypothesized. This is discussed in terms of the capacity of the mode to reveal the relationships among decoy options and the effects of partial information,…
Descriptors: Multiple Choice Tests, Scoring, Test Reliability, Test Validity
Peer reviewed
Hanna, Gerald S. – Journal of Educational Measurement, 1975
An alternative to the conventional right-wrong scoring method used on multiple-choice tests was presented. In the experiment, the examinee continued to respond to a multiple-choice item until feedback signified a correct answer. Findings showed that experimental scores were more reliable but less valid than inferred conventional scores.…
Descriptors: Feedback, Higher Education, Multiple Choice Tests, Scoring
Peer reviewed
Webster, G. D.; And Others – Evaluation and the Health Professions, 1988
Whether alternative scoring strategies result in improved measurement properties of patient management problems (PMPs) was studied. Nine scoring systems (proficiency, efficiency, select, omit, data gathering, therapy, absolute, goal-oriented, and empiric expert score) were applied to 16 PMPs used in a certifying examination taken by 4,590…
Descriptors: Certification, Licensing Examinations (Professions), Multiple Choice Tests, Physicians
Frary, Robert B. – Educational and Psychological Measurement, 1969
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Scoring