Showing 151 to 165 of 3,089 results
Peer reviewed
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Peer reviewed
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach desired measurement precision with fewer items. However, fewer items mean that each item has a more significant effect on ability estimation and therefore those tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
Choi, Heeseon; Lee, Hee Seung – Educational Psychology Review, 2020
Recent studies suggest that testing on prior material enhances subsequent learning of new material. Although this forward testing effect has received extensive empirical support, it is not yet clear how testing facilitates subsequent learning. One possible explanation suggests that interim testing informs learners about the format of an upcoming…
Descriptors: Testing, Test Format, Test Wiseness, Learning Strategies
Peer reviewed
Pengelley, James; Whipp, Peter R.; Rovis-Hermann, Nina – Educational Psychology Review, 2023
The aim of the present study is to reconcile previous findings (a) that testing mode has no effect on test outcomes or cognitive load (Comput Hum Behav 77:1-10, 2017) and (b) that younger learners' working memory processes are more sensitive to computer-based test formats (J Psychoeduc Assess 37(3):382-394, 2019). We addressed key methodological…
Descriptors: Scores, Cognitive Processes, Difficulty Level, Secondary School Students
Peer reviewed
Cui, Ying; Chen, Fu; Lutsyk, Alina; Leighton, Jacqueline P.; Cutumisu, Maria – Assessment in Education: Principles, Policy & Practice, 2023
With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in work places and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and…
Descriptors: Data, Information Literacy, 21st Century Skills, Competence
Peer reviewed
Kimpo, Rhea R.; Puder, Barb – Anatomical Sciences Education, 2023
The traditional format for neuroanatomy lab practical exams involves stations with a time limit for each station and no opportunity to revisit stations. Timed exams have been associated with anxiety, which can lead to poor performance. In alignment with the universal design for learning (UDL), "Timed Image Question" and "Untimed Image…
Descriptors: Anatomy, Neurosciences, Comparative Analysis, Laboratory Experiments
Peer reviewed
Chan, Cecilia Ka Yuk – Assessment & Evaluation in Higher Education, 2023
With the advance of technology, possessing digital and information literacy is crucial for the selection of candidates by employers in this digital AI era. For most students, receiving and outputting electronic text has become the norm, and thus examinations with writing components done by hand may not accurately reflect their abilities. It…
Descriptors: Test Format, Handwriting, Stakeholders, Feedback (Response)
Peer reviewed
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Peer reviewed
Ivan D. Mardini G.; Christian G. Quintero M.; César A. Viloria N.; Winston S. Percybrooks B.; Heydy S. Robles N.; Karen Villalba R. – Education and Information Technologies, 2024
Today, reading comprehension is considered an essential skill in modern life; therefore, higher education students require more specific skills to understand, interpret, and evaluate texts effectively. Short answer questions (SAQs) are one of the relevant and proper tools for assessing reading comprehension skills. Unlike multiple-choice questions,…
Descriptors: Reading Comprehension, Reading Tests, Learning Strategies, Grading
Peer reviewed
Celeste Combrinck – SAGE Open, 2024
We have less time and focus than ever before, while the demand for our attention is increasing. It is therefore no surprise that when answering questionnaires, we often choose to strongly agree or remain neutral, producing problematic and unusable data. The current study investigated the forced-choice (ipsative) format compared to the same questions on a…
Descriptors: Likert Scales, Test Format, Surveys, Design
Peer reviewed
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Guo, Wenjing; Wind, Stefanie A. – Journal of Educational Measurement, 2021
The use of mixed-format tests made up of multiple-choice (MC) items and constructed response (CR) items is popular in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district- and state-level assessments in the United States. Rater effects, or raters' scoring tendencies that result in…
Descriptors: Test Format, Multiple Choice Tests, Scoring, Test Items
Peer reviewed
Agarwal, Pooja K.; Nunes, Ludmila D.; Blunt, Janell R. – Educational Psychology Review, 2021
Given the growing interest in retrieval practice among educators, it is valuable to know when retrieval practice does and does not improve student learning--particularly for educators who have limited classroom time and resources. In this literature review, we developed a narrow operational definition for "classroom research" compared to…
Descriptors: Educational Research, Information Retrieval, Effect Size, Instructional Program Divisions
Peer reviewed
Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021
Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…
Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
Peer reviewed
Kang, Hyeon-Ah; Han, Suhwa; Kim, Doyoung; Kao, Shu-Chuan – Educational and Psychological Measurement, 2022
The development of technology-enhanced innovative items calls for practical models that can describe polytomous testlet items. In this study, we evaluate four measurement models that can characterize polytomous items administered in testlets: (a) generalized partial credit model (GPCM), (b) testlet-as-a-polytomous-item model (TPIM), (c)…
Descriptors: Goodness of Fit, Item Response Theory, Test Items, Scoring