Showing 841 to 855 of 3,093 results
Peer reviewed
Direct link
Buonviri, Nathan – International Journal of Music Education, 2015
The purpose of this study was to investigate effects of music notation reinforcement on aural memory for melodies. Participants were 41 undergraduate and graduate music majors in a within-subjects design. Experimental trials tested melodic memory through a sequence of target melodies, distraction melodies, and matched and unmatched answer choices.…
Descriptors: Music Education, Musical Composition, Reinforcement, Aural Learning
Peer reviewed
Direct link
Betts, Lucy; Hartley, James – British Educational Research Journal, 2012
Research with adults has shown that variations in verbal labels and numerical scale values on rating scales can affect the responses given. However, few studies have been conducted with children. The study aimed to examine potential differences in children's responses to Likert-type rating scales according to their anchor points and scale…
Descriptors: Likert Scales, Children, Test Format, Scores
Peer reviewed
PDF on ERIC: Download full text
Becker, Anthony; Nekrasova-Beker, Tatiana; Petrashova, Tamara – TESL-EJ, 2017
This study was conducted at a large technical university in Russia, which offers English language courses to students majoring in nine different degree programs. Each degree program develops and delivers its own English language curriculum. While all degree programs followed the same curriculum development model to design language courses, each…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Tests
Peer reviewed
Direct link
Kolomuç, Ali – Asia-Pacific Forum on Science Learning and Teaching, 2017
This study aimed to discover subject-specific science teachers' views of alternative assessment. The questionnaire by Okur (2008) was adapted and deployed for data collection. The sample consisted of 80 subject-specific science teachers drawn from the cities of Trabzon, Rize and Erzurum in Turkey. In analyzing data, descriptive analysis was…
Descriptors: Science Teachers, Teacher Attitudes, Alternative Assessment, Foreign Countries
Peer reviewed
PDF on ERIC: Download full text
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Peer reviewed
Direct link
O'Connor, Kevin J. – Teaching of Psychology, 2014
Two studies measured the impact on student exam performance and exam completion time of strategies aimed at reducing the amount of paper used for printing multiple-choice course exams. Study 1 compared single-sided to double-sided printed exams. Study 2 compared a single-column arrangement of multiple-choice answer options to a space (and paper)…
Descriptors: Paper (Material), Multiple Choice Tests, Conservation (Environment), Time on Task
Sriram, Rishi – NASPA - Student Affairs Administrators in Higher Education, 2014
When student affairs professionals assess their work, they often employ some type of survey. The use of surveys stems from a desire to objectively measure outcomes, a demand from someone else (e.g., supervisor, accreditation committee) for data, or the feeling that numbers can provide an aura of competence. Although surveys are effective tools for…
Descriptors: Surveys, Test Construction, Student Personnel Services, Test Use
Peer reviewed
Direct link
Kim, Sooyeon; Moses, Tim – International Journal of Testing, 2013
The major purpose of this study is to assess the conditions under which single scoring for constructed-response (CR) items is as effective as double scoring in the licensure testing context. We used both empirical datasets of five mixed-format licensure tests collected in actual operational settings and simulated datasets that allowed for the…
Descriptors: Scoring, Test Format, Licensing Examinations (Professions), Test Items
Peer reviewed
Direct link
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
Peer reviewed
Direct link
Liu, Jinghua; Dorans, Neil J. – Educational Measurement: Issues and Practice, 2013
We make a distinction between two types of test changes: inevitable deviations from specifications versus planned modifications of specifications. We describe how score equity assessment (SEA) can be used as a tool to assess a critical aspect of construct continuity, the equivalence of scores, whenever planned changes are introduced to testing…
Descriptors: Tests, Test Construction, Test Format, Change
Peer reviewed
Direct link
Quaid, Ethan Douglas – International Journal of Computer-Assisted Language Learning and Teaching, 2018
The present trend of developing and using semi-direct speaking tests has been supported by test developers' and researchers' claims of their increased practicality, higher reliability, and concurrent validity with test scores from direct oral proficiency interviews. However, it is universally agreed within the language testing and assessment community…
Descriptors: Case Studies, Speech Communication, Language Tests, Comparative Analysis
Peer reviewed
Direct link
Moshinsky, Avital; Ziegler, David; Gafni, Naomi – International Journal of Testing, 2017
Many medical schools have adopted multiple mini-interviews (MMI) as an advanced selection tool. MMIs are expensive and used to test only a few dozen candidates per day, making it infeasible to develop a different test version for each test administration. Therefore, some items are reused both within and across years. This study investigated the…
Descriptors: Interviews, Medical Schools, Test Validity, Test Reliability
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Peer reviewed
PDF on ERIC: Download full text
Bokyoung Park – English Teaching, 2017
This study investigated Korean college students' performance as measured by two different vocabulary assessment tools (the Productive Vocabulary Levels Test (PVLT) and the Productive Vocabulary Use Task (PVUT)) and the relationship these assessments have with students' writing proficiency. A total of 72 students participated in the study. The…
Descriptors: Foreign Countries, Vocabulary Development, Language Tests, Second Language Learning
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis