Showing 181 to 195 of 3,081 results
Peer reviewed
PDF on ERIC Download full text
Araneda, Sergio; Lee, Dukjae; Lewis, Jennifer; Sireci, Stephen G.; Moon, Jung Aa; Lehman, Blair; Arslan, Burcu; Keehner, Madeleine – Education Sciences, 2022
Students exhibit many behaviors when responding to items on a computer-based test, but only some of these behaviors are relevant to estimating their proficiencies. In this study, we analyzed data from computer-based math achievement tests administered to elementary school students in grades 3 (ages 8-9) and 4 (ages 9-10). We investigated students'…
Descriptors: Student Behavior, Academic Achievement, Computer Assisted Testing, Mathematics Achievement
Peer reviewed
Direct link
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
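The partial-credit question raised in this entry is easy to illustrate. The sketch below is not the article's actual methods; it is a generic Python example, with a hypothetical five-option item, contrasting an all-or-nothing rule with a simple per-option partial-credit rule for a multiple-response item.

```python
# Illustrative only: two generic raw-score rules often discussed for
# multiple-response items (not necessarily those compared in the article).

def dichotomous_score(selected: set, keyed: set) -> int:
    """All-or-nothing: 1 point only if the selections match the key exactly."""
    return int(selected == keyed)

def partial_credit_score(selected: set, keyed: set, options: set) -> float:
    """Per-option scoring: each correctly classified option (selected if keyed,
    left unselected if not) earns an equal fraction of the 1-point maximum."""
    correct = sum(1 for opt in options
                  if (opt in keyed) == (opt in selected))
    return correct / len(options)

# Hypothetical item with options A-E, key = {A, C}
options = {"A", "B", "C", "D", "E"}
keyed = {"A", "C"}
response = {"A", "C", "D"}  # two hits, one false positive

print(dichotomous_score(response, keyed))                        # 0
print(round(partial_credit_score(response, keyed, options), 2))  # 0.8
```

Under the per-option rule, the false positive on option D costs one fifth of the maximum score rather than all of it, which is the essential difference between the two conventions.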
Peer reviewed
PDF on ERIC Download full text
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of the SR item format are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Direct link
Tam, Angela Choi Fung – Assessment & Evaluation in Higher Education, 2022
Students' perceptions of and learning practices for online timed take-home examinations, and the factors affecting those practices during COVID-19, have largely been unexplored. Nine students of arts, business and science sub-degree programmes participated in this study. Semi-structured interviews and reflective journals were…
Descriptors: Foreign Countries, Two Year College Students, Student Attitudes, COVID-19
Peer reviewed
Direct link
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Andrea Cedeno – ProQuest LLC, 2022
The relationship between performance on computer-based and paper-based testing may vary, and students with special needs may or may not perform better on one mode than the other. Over the past few decades, the presence of computers and technology in society and in classrooms has grown. The use of technology has increased during the era of the…
Descriptors: Computer Assisted Testing, Test Format, Special Needs Students, Reading Comprehension
Edwin Ambrosio – ProQuest LLC, 2022
Assessments are some of the most common tools used to evaluate student learning. While exams have always been a part of evaluating how well students learn and retain information, the most effective way to administer them has long been debated. However, remarkably few studies have compared online and paper testing, and even fewer have examined…
Descriptors: Computer Science Education, Computer Assisted Testing, Test Format, Performance
Peer reviewed
PDF on ERIC Download full text
Shahid A. Choudhry; Timothy J. Muckle; Christopher J. Gill; Rajat Chadha; Magnus Urosev; Matt Ferris; John C. Preston – Practical Assessment, Research & Evaluation, 2024
The National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA) conducted a one-year research study comparing performance on the traditional continued professional certification assessment, administered at a test center or online with remote proctoring, to a longitudinal assessment that required answering quarterly…
Descriptors: Nurses, Certification, Licensing Examinations (Professions), Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Abdullah Albalawi – Vocabulary Learning and Instruction, 2024
Despite the substantial expansion in vocabulary research since the 1980s, we still know very little about how vocabulary develops over time and what factors influence this development. This methodological overview discusses key issues and considerations in vocabulary breadth growth assessment to help advance research in this area. The report…
Descriptors: Vocabulary Development, Teaching Methods, Language Tests, Test Use
Peer reviewed
Direct link
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
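For readers unfamiliar with response-time thresholding, the sketch below shows one widely used convention: flagging a response as rapid when its time falls below a fixed fraction of the item's median response time. The data and the 10% fraction are assumptions for illustration, not the thresholds developed in the article.

```python
import numpy as np

# Minimal sketch of a normative-threshold rule for flagging rapid responses:
# an item-level threshold set at a fixed fraction of that item's median
# response time (one common convention; not necessarily the authors' method).

def rapid_response_flags(rt_matrix: np.ndarray, fraction: float = 0.10) -> np.ndarray:
    """rt_matrix: examinees x items matrix of response times in seconds.
    Returns a boolean matrix marking responses faster than the item threshold."""
    thresholds = fraction * np.nanmedian(rt_matrix, axis=0)  # one per item
    return rt_matrix < thresholds

# Hypothetical response times for 4 examinees on 3 items
rts = np.array([[42.0, 55.0, 30.0],
                [38.0, 60.0, 28.0],
                [ 3.0, 58.0, 31.0],    # very fast on item 1
                [40.0,  4.0, 29.0]])   # very fast on item 2

print(rapid_response_flags(rts))
```

Only the two unusually fast responses fall below their item thresholds; aggregating such flags over a test is one way to quantify disengagement or speededness at the examinee level.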
Peer reviewed
Direct link
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
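The paper's specific SDT-IRT linkage is not reproduced here, but the standard two-parameter logistic (2PL) response function, the kind of IRT model typically applied to the open-ended items, is shown below as a point of reference; the item parameters are hypothetical.

```python
import math

# Generic illustration (not the paper's model): the two-parameter logistic
# (2PL) IRT response function, P(correct | theta), for an open-ended item.

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """Probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination 1.2, difficulty 0.5
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(p_correct_2pl(theta, a=1.2, b=0.5), 3))
```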
Steven R. Hiner – ProQuest LLC, 2023
The purpose of this study was to determine if there were significant statistical differences between scores on constructed response and computer-scorable questions on an accelerated middle school math placement test in a large urban school district in Ohio, and to ensure that all students have an opportunity to take the test. Five questions on a…
Descriptors: Scores, Middle Schools, Mathematics Tests, Placement Tests
Peer reviewed
Direct link
Herzing, Jessica M. E. – International Journal of Social Research Methodology, 2020
This study aims to address the questionnaire design challenges in cases wherein questions involve a large number of response options. Traditionally, these long-list questions are asked in open-ended or closed-ended formats. However, alternative interface design options are emerging in computer-assisted surveys that combine both interface designs.…
Descriptors: Foreign Countries, Questionnaires, Online Surveys, Test Format
Peer reviewed
PDF on ERIC Download full text
Gurdil Ege, Hatice; Demir, Ergul – Eurasian Journal of Educational Research, 2020
Purpose: The present study aims to evaluate how the reliabilities computed using α, Stratified α, Angoff-Feldt, and Feldt-Raju estimators may differ when the sample size (500, 1000, and 2000) and the item-type ratio of dichotomous to polytomous items (2:1, 1:1, 1:2) included in the scale are varied. Research Methods: In this study, Cronbach's α,…
Descriptors: Test Format, Simulation, Test Reliability, Sample Size
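Two of the estimators named in this study, Cronbach's α and stratified α, can be computed directly from an examinee-by-item score matrix. The sketch below uses simulated data with a mix of dichotomous and polytomous items purely for illustration; the Angoff-Feldt and Feldt-Raju estimators are omitted.

```python
import numpy as np

# Minimal sketch (assumed data, illustrative only) of two of the estimators
# named in the study: Cronbach's alpha and stratified alpha.

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: examinees x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def stratified_alpha(scores: np.ndarray, strata: list[list[int]]) -> float:
    """strata: lists of column indices, e.g. dichotomous vs. polytomous items.
    Stratified alpha = 1 - sum_j var(subscore_j) * (1 - alpha_j) / var(total)."""
    total_var = scores.sum(axis=1).var(ddof=1)
    penalty = sum(
        scores[:, idx].sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(scores[:, idx]))
        for idx in strata
    )
    return 1 - penalty / total_var

# Simulated mixed-format data: a latent trait drives 6 dichotomous and
# 3 polytomous (0-4) items for 200 examinees (2:1 item-type ratio).
rng = np.random.default_rng(0)
theta = rng.standard_normal((200, 1))
dich = (rng.random((200, 6)) < 1 / (1 + np.exp(-theta))).astype(float)
poly = np.clip(np.round(2 + theta + rng.standard_normal((200, 3))), 0, 4)
data = np.hstack([dich, poly])

print(round(cronbach_alpha(data), 3))
print(round(stratified_alpha(data, [list(range(6)), list(range(6, 9))]), 3))
```

Stratifying by item type weights each stratum's internal consistency by its subscore variance, which is why the two coefficients can diverge as the dichotomous-to-polytomous ratio changes.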
Peer reviewed
Direct link
Hiller, Sara; Rumann, Stefan; Berthold, Kirsten; Roelle, Julian – Instructional Science: An International Journal of the Learning Sciences, 2020
In learning from examples, students are often first provided with basic instructional explanations of new principles and concepts and second with examples thereof. In this sequence, it is important that learners self-explain by generating links between the basic instructional explanations' content and the examples. Therefore, it is well…
Descriptors: Problem Solving, Test Format, Prompting, Learning Strategies