Showing 256 to 270 of 567 results
Peer reviewed
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
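The item-selection step that Diao and van der Linden describe can be illustrated with a toy sketch. The item parameters below are hypothetical, and for brevity the 0/1 selection problem is solved by brute-force enumeration rather than by a dedicated mixed integer programming solver, as an operational system would use:

```python
from itertools import combinations

# Hypothetical item bank: (item_id, information at the target ability, length in minutes)
bank = [("i1", 0.8, 2), ("i2", 0.5, 1), ("i3", 0.9, 3),
        ("i4", 0.4, 1), ("i5", 0.7, 2)]

N_ITEMS = 3       # form must contain exactly 3 items
MAX_MINUTES = 6   # total testing-time constraint

def assemble(bank, n_items, max_minutes):
    """Solve the toy 0/1 selection problem: maximize total item
    information subject to the item-count and time constraints."""
    best, best_info = None, -1.0
    for combo in combinations(bank, n_items):
        if sum(length for _, _, length in combo) <= max_minutes:
            info = sum(i for _, i, _ in combo)
            if info > best_info:
                best, best_info = combo, info
    return best, best_info

form, info = assemble(bank, N_ITEMS, MAX_MINUTES)
print([item_id for item_id, _, _ in form])  # selected test form
```

With real banks of thousands of items, the same objective and constraints are handed to a MIP solver; enumeration is shown here only to make the optimization explicit.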
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Peer reviewed
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
Thompson, Gregory L.; Cox, Troy L.; Knapp, Nieves – Foreign Language Annals, 2016
While studies have been done to rate the validity and reliability of the Oral Proficiency Interview (OPI) and Oral Proficiency Interview-Computer (OPIc) independently, a limited amount of research has analyzed the interexam reliability of these tests, and studies have yet to be conducted comparing the results of Spanish language learners who take…
Descriptors: Comparative Analysis, Oral Language, Language Proficiency, Spanish
Peer reviewed
PDF on ERIC
Öz, Hüseyin; Özturan, Tuba – Journal of Language and Linguistic Studies, 2018
This article reports the findings of a study that sought to investigate whether computer-based vs. paper-based test-delivery mode has an impact on the reliability and validity of an achievement test for a pedagogical content knowledge course in an English teacher education program. A total of 97 university students enrolled in the English as a…
Descriptors: Computer Assisted Testing, Testing, Test Format, Teaching Methods
Peer reviewed
PDF on ERIC
Thompson, Robyn; Johnston, Susan S. – Journal of the American Academy of Special Education Professionals, 2017
The purpose of this investigation was to explore whether a difference existed between the effectiveness of paper-based format and tablet computer-based format Social Story interventions on frequency of undesired behaviors. An adapted alternating treatment design was implemented with four children with autism spectrum disorder (ASD). Data regarding…
Descriptors: Intervention, Behavior Problems, Pervasive Developmental Disorders, Autism
Peer reviewed
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Peer reviewed
PDF on ERIC
Brallier, Sara; Palm, Linda – International Journal of Teaching and Learning in Higher Education, 2015
This study examined test performance as a function of test format (proctored versus unproctored) and course type (traditional versus distance). The participants were 246 undergraduate students who completed introductory sociology courses during four semesters at a southeastern university. During each semester, the same instructor taught a…
Descriptors: Undergraduate Students, Introductory Courses, Sociology, Conventional Instruction
Peters, Joshua A. – ProQuest LLC, 2016
Little is known about whether results differ between paper-and-pencil and computer-based high-stakes assessments when considering race and/or free and reduced lunch status. The purpose of this study was to add new knowledge to this field of study by determining whether there is a…
Descriptors: Comparative Analysis, Computer Assisted Testing, Lunch Programs, High Stakes Tests
Peer reviewed
Jaeger, Martin; Adair, Desmond – European Journal of Engineering Education, 2017
Online quizzes have been shown to be effective learning and assessment approaches. However, if scenario-based online construction safety quizzes do not include time pressure similar to real-world situations, they reflect situations too ideally. The purpose of this paper is to compare engineering students' performance when carrying out an online…
Descriptors: Engineering Education, Quasiexperimental Design, Tests, Academic Achievement
Peer reviewed
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Ghaderi, Marzieh; Mogholi, Marzieh; Soori, Afshin – International Journal of Education and Literacy Studies, 2014
The subject of testing has many facets and connections. One important issue is how to assess or measure students or learners: what tools should be used, in what style, and toward what goal. This paper therefore addresses the style of testing in schools and other educational settings. Since the purposes of the educational system…
Descriptors: Testing, Testing Programs, Intermode Differences, Computer Assisted Testing
Peer reviewed
Wibowo, Santoso; Grandhi, Srimannarayana; Chugh, Ritesh; Sawir, Erlenawati – Journal of Educational Technology Systems, 2016
This study sought academic staff's and students' views of an electronic examination (e-exam) system and of the benefits and challenges of e-exams in general. The respondents provided useful feedback for future adoption of e-exams at an Australian university and elsewhere. The key findings show that students and academic staff are optimistic about the…
Descriptors: Pilot Projects, Computer Assisted Testing, Student Attitudes, College Faculty
Peer reviewed
PDF on ERIC
Ashraf, Hamid; Motallebzadeh, Khalil; Ghazizadeh, Faezeh – International Journal of Language Testing, 2016
This study investigated the impact of electronic-based dynamic assessment on the listening skill of Iranian EFL learners. To achieve this goal, a group of 40 female upper-intermediate EFL students (aged 26 to 38) from two language institutes was selected as the participants of the study after the administration of a Quick Placement Test…
Descriptors: Language Tests, Computer Assisted Testing, Second Language Learning, Second Language Instruction
Peer reviewed
Edwards, Michael C.; Flora, David B.; Thissen, David – Applied Measurement in Education, 2012
This article describes a computerized adaptive test (CAT) based on the uniform item exposure multi-form structure (uMFS). The uMFS is a specialization of the multi-form structure (MFS) idea described by Armstrong, Jones, Berliner, and Pashley (1998). In an MFS CAT, the examinee first responds to a small fixed block of items. The items comprising…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Test Items