Showing all 8 results
Peer reviewed
Direct link
Wilson, Joseph; Pollard, Benjamin; Aiken, John M.; Lewandowski, H. J. – Physical Review Physics Education Research, 2022
Surveys have long been used in physics education research to understand student reasoning and inform course improvements. However, to make analysis of large sets of responses practical, most surveys use a closed-response format with a small set of potential responses. Open-ended formats, such as written free response, can provide deeper insights…
Descriptors: Natural Language Processing, Science Education, Physics, Artificial Intelligence
Peer reviewed
Direct link
Chen, Qingwei; Zhu, Guangtian; Liu, Qiaoyi; Han, Jing; Fu, Zhao; Bao, Lei – Physical Review Physics Education Research, 2020
Problem-solving categorization tasks have been well studied and used as an effective tool for assessment of student knowledge structure. In this study, a traditional free-response categorization test has been modified into a multiple-choice format, and the effectiveness of this new assessment is evaluated. Through randomized testing with Chinese…
Descriptors: Foreign Countries, Test Construction, Multiple Choice Tests, Problem Solving
Peer reviewed
PDF on ERIC
Zhang, Yuan; Shah, Rajat; Chi, Min – International Educational Data Mining Society, 2016
In this work we tackled the task of Automatic Short Answer Grading (ASAG). While conventional ASAG research makes predictions based mainly on student answers (referred to as Answer-based), we also took information about questions and student models into consideration. More specifically, we explore the Answer-based, Question, and Student models…
Descriptors: Automation, Grading, Artificial Intelligence, Test Format
Peer reviewed
PDF on ERIC
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format
Peer reviewed
Direct link
Menold, Natalja; Tausch, Anja – Sociological Methods & Research, 2016
Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and number of categories. The participants were 800 students at two German universities. In contrast to previous research, a reliability assessment method was used,…
Descriptors: Rating Scales, Test Reliability, Measurement, Classification
Peer reviewed
Direct link
Lesnov, Roman Olegovich – International Journal of Computer-Assisted Language Learning and Teaching, 2018
This article compares second language test-takers' performance on an academic listening test in an audio-only mode versus an audio-video mode. A new method of classifying video-based visuals was developed and piloted, which used L2 expert opinions to place the video on a continuum from being content-deficient (not helpful for answering…
Descriptors: Second Language Learning, Second Language Instruction, Video Technology, Classification
Hensley, Wayne E. – 1992
Two studies among U.S. college students (n=88 and n=329) examined the relationships between the order in which responses are offered on a questionnaire and the ranked importance of those responses. Study 1 included 36 males and 52 females, and Study 2 included 127 males and 202 females. Both studies found that approximately one-third (32 percent…
Descriptors: Classification, College Students, Higher Education, Questionnaires
Peer reviewed
Schriesheim, Chester A.; And Others – Educational and Psychological Measurement, 1989
Three studies explored the effects of grouped versus randomized questionnaire items on internal consistency and test-retest reliability, with samples of 80, 80, and 100 university students and undergraduates, respectively. The two correlational studies and one experimental study were reasonably consistent in demonstrating that neither format was…
Descriptors: Classification, College Students, Evaluation Methods, Higher Education