Showing 2,446 to 2,460 of 4,794 results
Peer reviewed
Nishisato, Shizuhiko; Sheu, Wen-Jenn – Psychometrika, 1980
A modification of the method of reciprocal averages for scaling multiple choice data is proposed. The proposed method handles the data in a piecewise fashion and allows for faster convergence to a solution. (Author/JKS)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Test Reliability
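The piecewise modification Nishisato and Sheu propose is not spelled out in the abstract, but the classical method of reciprocal averages that it modifies can be sketched. The function below is a minimal, hypothetical illustration of that baseline iteration on a 0/1 indicator matrix (subjects x response options): subject scores are averages of the weights of the options they chose, option weights are averages of the scores of the subjects who chose them, repeated until the weights stabilize.

```python
import numpy as np

def reciprocal_averages(X, tol=1e-8, max_iter=1000):
    """Classical one-dimensional reciprocal averaging on an
    indicator matrix X (subjects x options, entries 0/1).

    Alternates subject scores (row means of option weights) and
    option weights (column means of subject scores), centering and
    normalizing each cycle to suppress the trivial constant solution."""
    X = np.asarray(X, dtype=float)
    w = np.random.default_rng(0).standard_normal(X.shape[1])
    row_n = X.sum(axis=1)              # choices per subject
    col_n = X.sum(axis=0)              # endorsements per option
    for _ in range(max_iter):
        y = X @ w / row_n              # subject score = mean weight of chosen options
        w_new = X.T @ y / col_n        # option weight = mean score of its endorsers
        w_new -= w_new.mean()          # remove the trivial constant component
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return y, w
```

The abstract's contribution is precisely a faster-converging, piecewise variant of this loop; the sketch shows only the standard procedure being improved upon.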
Trieber, J. Marshall – Training and Development Journal, 1980
Aims to help instructors make more valid test questions, particularly multiple-choice ones. Emphasis is placed on multiple-choice questions to show the wealth of opportunities they offer for testing because of their uses, objectivity, and ease of grading. Discusses test scheduling, construction, and evaluation and follow-up. (CT)
Descriptors: Multiple Choice Tests, Test Construction, Test Reliability, Test Validity
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1980
In scoring multiple-choice tests, a score of 1 is given to right answers, 0 to unanswered questions, and some negative score to wrong answers. This paper discusses the relation of this negative score to the assumption made about the partial knowledge that the subjects may have. (Author/GDC)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scoring Formulas
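The 1/0/negative scoring rule Hutchinson discusses can be made concrete with the standard correction-for-guessing penalty, where a wrong answer on a k-option item costs 1/(k-1) points so that blind guessing has an expected score of zero. The function below is a minimal sketch of that conventional rule (the function name and the use of None for omits are illustrative choices, not from the paper):

```python
def formula_score(responses, key, n_options=4):
    """Score a multiple-choice test under the classic
    correction-for-guessing rule: +1 for a right answer,
    0 for an omit (None), and -1/(k-1) for a wrong answer,
    where k = n_options per item."""
    penalty = 1.0 / (n_options - 1)
    score = 0.0
    for given, correct in zip(responses, key):
        if given is None:
            continue                      # omitted item scores 0
        score += 1.0 if given == correct else -penalty
    return score
```

For example, two right answers, one omit, and one wrong answer on 4-option items score 2 - 1/3. The paper's point is that the "right" size of the penalty depends on what one assumes about examinees' partial knowledge; -1/(k-1) is only the no-partial-knowledge case.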
Peer reviewed
Houston, John P. – Journal of Educational Psychology, 1977
When correlations between actual university classroom cheating behavior (multiple-choice answer copying) and pretest estimates of success, confidence, and test importance were studied, answer copying correlated positively with estimates of success. Confidence in these estimates of success also correlated positively with answer copying, while…
Descriptors: Cheating, Failure, Multiple Choice Tests, Risk
Peer reviewed
Henning, Grant – Journal of Educational Measurement, 1989
Divgi's contention that the Rasch model should not be applied to multiple-choice test items is refuted. As with any measurement model, the burden is on the model user to establish that assumptions have been met and that generalizations are warranted for any application, including applications involving multiple-choice tests. (TJH)
Descriptors: Goodness of Fit, Latent Trait Theory, Multiple Choice Tests
Peer reviewed
Hasan, Saleem; Bagayoko, Diola; Kelley, Ella L. – Physics Education, 1999
Describes a simple and novel method for identifying misconceptions. Utilizes the Certainty of Response Index (CRI) in conjunction with answers to multiple-choice questions. (Author/CCM)
Descriptors: Higher Education, Misconceptions, Multiple Choice Tests, Physics
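The CRI method of Hasan, Bagayoko, and Kelley pairs each multiple-choice answer with a self-reported certainty rating and crosses correctness with certainty to separate misconceptions from simple lack of knowledge. The sketch below assumes the commonly described form of the scheme (a 0-5 certainty scale with a decision threshold of 2.5); the function name and labels are illustrative:

```python
def classify_response(correct, cri, threshold=2.5):
    """Classify one answer under the Certainty of Response Index scheme.

    correct: bool, whether the answer was right
    cri: self-reported certainty, 0 (total guess) to 5 (certain)

    Decision table:
        correct + high CRI -> sound knowledge
        correct + low CRI  -> lucky guess
        wrong   + high CRI -> misconception
        wrong   + low CRI  -> lack of knowledge
    """
    if cri >= threshold:
        return "sound knowledge" if correct else "misconception"
    return "lucky guess" if correct else "lack of knowledge"
```

The key diagnostic cell is the confidently wrong answer: high certainty plus an incorrect choice is what flags a misconception rather than a guess.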
Peer reviewed
Haynie, William J., III – Journal of Technology Education, 1994
Undergraduates (n=187) were given information booklets; experimentals took either a multiple-choice (n=64) or short-answer (n=64) test afterward. Both groups and 60 controls were tested 3 weeks later on new and previously tested information. Both multiple-choice and short-answer tests were better than no test in improving retention. Multiple…
Descriptors: Higher Education, Multiple Choice Tests, Retention (Psychology), Technology Education
Peer reviewed
Buckles, Stephen; Siegfried, John J. – Journal of Economic Education, 2006
Multiple-choice questions are the basis of a significant portion of assessment in introductory economics courses. However, these questions, as found in course assessments, test banks, and textbooks, often fail to evaluate students' abilities to use and apply economic analysis. The authors conclude that multiple-choice questions can be used to…
Descriptors: Multiple Choice Tests, Economics Education, Academic Ability, Student Evaluation
Peer reviewed
Brannick, Michael T.; Miles, Donald E.; Kisamore, Jennifer L. – Studies in Higher Education, 2005
Content mastery and self-efficacy were measured at the beginning and at the end of two classes in Research Methods in Psychology. A multiple-choice test measured content mastery and a questionnaire measured self-efficacy. Self-efficacy reports improved significantly over the course of instruction, as did examination performance. The correlation…
Descriptors: Research Methodology, Self Efficacy, Multiple Choice Tests, Performance Factors
Sinharay, Sandip; Holland, Paul W. – Educational Testing Service, 2008
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three popular equating methods that can be used with a NEAT design are the poststratification equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. These three methods each…
Descriptors: Equated Scores, Test Items, Item Response Theory, Data
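The abstract only names the three NEAT-design equating methods, but two of them (poststratification and chained equipercentile) build on the same core step: mapping a score on one form to the score with the same percentile rank on the other. The function below is a minimal, unsmoothed sketch of that shared percentile-rank mapping, not an implementation of any of the three full methods:

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, x):
    """Map a score x on form X to the form-Y score with the same
    percentile rank. Minimal sketch: empirical distributions only,
    no presmoothing and no anchor-test conditioning."""
    x_scores = np.asarray(x_scores, dtype=float)
    y_scores = np.asarray(y_scores, dtype=float)
    p = np.mean(x_scores <= x)           # percentile rank of x on form X
    return float(np.quantile(y_scores, p))  # matching quantile on form Y
```

In a real NEAT application the two score distributions come from nonequivalent groups, so the methods the paper compares differ precisely in how they use the anchor test to adjust these distributions before this mapping is applied.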
Peer reviewed
Hodges, Charles B.; Stackpole-Hodges, Christene L.; Cox, Kenneth M. – Journal of Educational Computing Research, 2008
The purpose of this study was to investigate possible factors that may affect academic achievement when instruction is delivered via podcast. Seventeen female Communication Sciences and Disorders students participated in this exploratory study conducted in the fall of 2006. Measurements of participants' individual differences on four variables…
Descriptors: Cognitive Style, Self Efficacy, Academic Achievement, Multiple Choice Tests
Peer reviewed
Park, Jooyong; Choi, Byung-Chul – British Journal of Educational Technology, 2008
A new computerised testing system was used at home to promote learning and also to save classroom instruction time. The testing system combined the features of short-answer and multiple-choice formats. The questions of the multiple-choice problems were presented without the options so that students had to generate answers for themselves; they…
Descriptors: Experimental Groups, Control Groups, Computer Assisted Testing, Instructional Effectiveness
Peer reviewed
Shapiro, Amy – Journal of the Scholarship of Teaching and Learning, 2009
Student evaluations of a large General Psychology course indicate that students enjoy the class a great deal, yet attendance is low. An experiment was conducted to evaluate a personal response system as a solution. Attendance rose by 30% as compared to extra credit as an inducement, but was equivalent to offering pop quizzes. Performance on test…
Descriptors: Test Items, Instructional Effectiveness, Learning Strategies, Classroom Techniques
Peer reviewed
Burgin, John; Hughes, Gail D. – Assessing Writing, 2009
The authors explored the credibility of using informal reading inventories and writing samples for 138 students (K-4) to evaluate the effectiveness of a summer literacy program. Running Records (a measure of a child's reading level) and teacher experience during daily reading instruction were used to estimate the reliability of the more formal…
Descriptors: Informal Reading Inventories, Multiple Choice Tests, Program Effectiveness, Scoring
Peer reviewed
Callender, Aimee A.; McDaniel, Mark A. – Journal of Educational Psychology, 2007
The effectiveness of two types of adjunct questions, standard embedded questions and "why" questions (elaborative interrogation), was investigated for readers differing in structure-building ability (Gernsbacher, 1990). Participants read a textbook chapter either with or without the adjunct questions. Learning was assessed with typical…
Descriptors: Reading Comprehension, Questioning Techniques, Instructional Effectiveness, Performance Based Assessment