Showing 1,276 to 1,290 of 3,126 results
Peer reviewed
Anakwe, Bridget – Journal of Education for Business, 2008
The author investigated the impact of assessment methods on student performance on accounting tests. Specifically, the author used analysis of variance to determine whether the use of computer-based tests instead of paper-based tests affects students' traditional test scores in accounting examinations. The author included 2 independent variables,…
Descriptors: Student Evaluation, Testing, Statistical Analysis, Methods
Peer reviewed
Glenda C. Rakes – Journal of Interactive Online Learning, 2008
One continuing concern associated with online courses is assessment of student performance. One option for online assessment is the use of open book tests. This study investigated the impact of training in open book test-taking strategies on student test performance in online, timed, unproctored, open book tests. When the tutorial was required…
Descriptors: Online Courses, Electronic Learning, Test Format, Test Wiseness
Peer reviewed
Holland, Paul W.; Hoskens, Machteld – Psychometrika, 2003
Gives an account of classical test theory that shows how it can be viewed as a mean and variance approximation to a general version of item response theory and then shows how this approach can give insight into predicting the true score of a test and the true scores of tests not necessarily parallel to the given test. (SLD)
Descriptors: Prediction, Test Format, Test Theory, True Scores
Peer reviewed
Costenbader, V. K.; Adams, John W. – Journal of School Psychology, 1991
Reviews the Peabody Individual Achievement Test-Revised (PIAT-R), a revision of the 1970 PIAT. Describes administration and scoring criteria, psychometric properties, and new features of test design. Emphasizes implications for the practitioner. Concludes that the PIAT-R may be a useful screening tool when general information concerning student achievement…
Descriptors: Achievement Tests, Psychometrics, Scores, Test Format
Peer reviewed
Herrmann, Robert O.; Sterngold, Arthur; Warland, Rex H. – Journal of Consumer Affairs, 1998
Three forms of consumer questionnaires were compared: (1) asking how concerned people are about an issue; (2) using concern filters; and (3) using awareness filters. The forms with filters (2 and 3) produced lower percentages of "very concerned" and higher percentages of "unaware" responses. Awareness filters improved the…
Descriptors: Consumer Education, Information Utilization, Questionnaires, Test Format
Peer reviewed
Ducasse, Ana Maria; Brown, Annie – Language Testing, 2009
Speaking tasks involving peer-to-peer candidate interaction are increasingly being incorporated into language proficiency assessments, both in large-scale international testing contexts and in smaller-scale (e.g., course-related) ones. This growth in the popularity and use of paired and group orals has stimulated research, particularly into…
Descriptors: Oral Language, Interpersonal Communication, Second Language Learning, Language Tests
Peer reviewed
Schumacker, Randall E.; Smith, Everett V., Jr. – Educational and Psychological Measurement, 2007
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Descriptors: Measurement Techniques, Error of Measurement, Item Sampling, Item Response Theory
Peer reviewed
Balch, William R. – Teaching of Psychology, 2007
Undergraduates studied the definitions of 16 psychology terms, expecting either a multiple-choice (n = 132) or short-answer (n = 122) test. All students then received the same multiple-choice test, requiring them to recognize the definitions as well as novel examples of the terms. Compared to students expecting a multiple-choice test, those…
Descriptors: Expectation, Definitions, Multiple Choice Tests, Undergraduate Students
Peer reviewed
Allalouf, Avi; Abramzon, Andrea – Language Assessment Quarterly, 2008
Differential item functioning (DIF) analysis can be used to great advantage in second language (L2) assessments. This study examined the differences in performance on L2 test items between groups from different first language backgrounds and suggested ways of improving L2 assessments. The study examined DIF on L2 (Hebrew) test items for two…
Descriptors: Test Items, Test Format, Second Language Learning, Test Construction
Peer reviewed
Liu, Yuming; Schulz, E. Matthew; Yu, Lei – Journal of Educational and Behavioral Statistics, 2008
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Descriptors: Reading Comprehension, Test Format, Markov Processes, Educational Testing
Peer reviewed
Nehm, Ross H.; Schonfeld, Irvin Sam – Journal of Research in Science Teaching, 2008
Growing recognition of the central importance of fostering an in-depth understanding of natural selection has, surprisingly, failed to stimulate work on the development and rigorous evaluation of instruments that measure knowledge of it. We used three different methodological tools, the Conceptual Inventory of Natural Selection (CINS), a modified…
Descriptors: Evolution, Science Education, Interviews, Measures (Individuals)
Peer reviewed
Wang, Tzu-Hua – Computers & Education, 2008
This research aims to develop a multiple-choice Web-based quiz-game-like formative assessment system, named GAM-WATA. The unique design of the "Ask-Hint Strategy" turns the Web-based formative assessment into an online quiz game. The "Ask-Hint Strategy" is composed of the "Prune Strategy" and the "Call-in Strategy"…
Descriptors: Formative Evaluation, Foreign Countries, Grade 5, Internet
Hanson, Bradley A.; Feinstein, Zachary S. – 1995
This paper discusses loglinear models for assessing differential item functioning (DIF). Loglinear and logit models that have been suggested for studying DIF are reviewed, and loglinear formulations of the logit models are given. A polynomial loglinear model for assessing DIF is introduced. Two examples using the polynomial loglinear model for…
Descriptors: Equated Scores, Item Bias, Test Format, Test Items
Stansfield, Charles W. – 1990
A discussion of the simulated oral proficiency interview (SOPI), a type of semi-direct speaking test that models the format of the oral proficiency interview (OPI), describes its development and research and examines its usefulness. The test used for discussion is a tape-recorded test consisting of six parts, scored by a trained rater using the…
Descriptors: Interviews, Language Proficiency, Language Tests, Simulation
Roe, Andrew G. – Graduating Engineer, 1985
Presents the case for taking the Engineer in Training examination (EIT), also called the Fundamentals of Engineering Examination, and the Graduate Record Examinations (GRE), indicating that they can affect future employment opportunities, career advancement, and post-graduate studies. Includes subject areas tested, test format, and how to prepare…
Descriptors: Engineering, Engineering Education, Higher Education, Test Format