Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 9 |
Author
| Bulut, Okan | 9 |
| Butterfuss, Reese | 2 |
| Kendeou, Panayiota | 2 |
| Kim, Jasmine | 2 |
| McMaster, Kristen L. | 2 |
| Slater, Susan | 2 |
| Arce-Ferrer, Alvaro J. | 1 |
| Barbosa, Denilson | 1 |
| Cui, Ying | 1 |
| Daniels, Lia M. | 1 |
| Demmans Epp, Carrie | 1 |
Publication Type
| Reports - Research | 9 |
| Journal Articles | 8 |
| Speeches/Meeting Papers | 1 |
Education Level
| Elementary Education | 3 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| High Schools | 1 |
| Secondary Education | 1 |
Assessments and Surveys
| Gates MacGinitie Reading Tests | 2 |
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
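The entry above compares word embedding techniques for AES. As a minimal, hypothetical sketch of the general embedding-then-regress pipeline (toy three-dimensional vectors and fabricated scores, not the authors' system, which would use pretrained embeddings such as GloVe or contextual models):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy word embeddings (hypothetical 3-d vectors); a real AES system would
# load pretrained embeddings rather than hand-written ones.
EMB = {
    "the": [0.1, 0.0, 0.2], "essay": [0.4, 0.3, 0.1],
    "argues": [0.5, 0.2, 0.6], "clearly": [0.7, 0.1, 0.3],
    "weakly": [0.1, 0.6, 0.0],
}

def embed(text):
    """Average the word vectors of known tokens (zero vector if none match)."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

# Tiny illustrative training set: essays paired with fabricated human scores.
essays = ["the essay argues clearly", "the essay argues weakly", "the essay"]
scores = [5.0, 2.0, 1.0]

X = np.vstack([embed(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, scores)
pred = model.predict(embed("the essay argues clearly").reshape(1, -1))
```

Swapping the `EMB` lookup for a different embedding technique while holding the regressor fixed is one way such comparisons are typically framed.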
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is influenced not only by their ability level but also by their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
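The entry above concerns disengaged responses such as rapid guesses. A common heuristic for flagging them is a response-time threshold; the sketch below is illustrative (the 3-second cutoff is assumed, not taken from the study):

```python
# Flag rapid guesses by a response-time threshold, a simple heuristic for
# detecting disengaged responses in low-stakes CATs.
THRESHOLD_SEC = 3.0  # assumed cutoff for illustration

def flag_rapid_guesses(response_times):
    """Return True for each response faster than the threshold."""
    return [t < THRESHOLD_SEC for t in response_times]

times = [1.2, 14.5, 2.8, 30.1, 0.9]          # seconds per item
flags = flag_rapid_guesses(times)            # [True, False, True, False, True]
engaged_times = [t for t, f in zip(times, flags) if not f]
```

Responses flagged this way are typically excluded from (or down-weighted in) ability estimation so that item selection is driven by engaged responding.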
Yildirim-Erbasli, Seyma N.; Bulut, Okan; Demmans Epp, Carrie; Cui, Ying – Journal of Educational Technology Systems, 2023
Conversational agents have been widely used in education to support student learning. There have been recent attempts to design and use conversational agents to conduct assessments (i.e., conversation-based assessments: CBA). In this study, we developed CBA with constructed-response and selected-response tests using Rasa--an artificial intelligence-based…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Computer Mediated Communication, Formative Evaluation
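The entry above describes conversation-based assessment with constructed- and selected-response items. The study used Rasa; the toy sketch below replaces its NLU with simple keyword matching just to illustrate one assessment turn (all prompts, keywords, and the partial-credit threshold are assumptions):

```python
# Minimal conversation-based assessment turn: present an item, classify the
# learner's reply, and record correctness. Keyword matching stands in for
# the NLU component a framework like Rasa would provide.
ITEMS = [
    {"prompt": "What does CAT stand for?",
     "type": "constructed", "keywords": {"computerized", "adaptive", "testing"}},
    {"prompt": "Is the Rasch model an IRT model? (yes/no)",
     "type": "selected", "answer": "yes"},
]

def score_response(item, reply):
    """Score a selected-response reply exactly; a constructed one by keywords."""
    reply = reply.lower()
    if item["type"] == "selected":
        return reply.strip() == item["answer"]
    hits = sum(1 for k in item["keywords"] if k in reply)
    return hits >= 2  # partial-credit threshold (assumed)

results = [score_response(ITEMS[0], "computerized adaptive testing"),
           score_response(ITEMS[1], "no")]                 # [True, False]
```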
Daniels, Lia M.; Bulut, Okan – Journal of Computer Assisted Learning, 2020
In computer-based testing (CBT) environments, instructors can provide students with immediate feedback. Commonly, instructors give students their percentage correct without additional descriptive feedback. Our objectives were (a) to compare students' perceived usefulness of a percentage-only score report vs. a descriptive feedback report in a CBT…
Descriptors: Computer Assisted Testing, Feedback (Response), Value Judgment, Student Attitudes
Kendeou, Panayiota; McMaster, Kristen L.; Butterfuss, Reese; Kim, Jasmine; Slater, Susan; Bulut, Okan – Assessment for Effective Intervention, 2021
The overall aim of the current investigation was to develop and validate the initial version of the Minnesota Inference Assessment (MIA). MIA is a web-based measure of inference processes in Grades K-2. MIA leverages the affordances of different media to evaluate inference processes in a nonreading context, using age-appropriate fiction and…
Descriptors: Test Construction, Test Validity, Inferences, Computer Assisted Testing
Kendeou, Panayiota; McMaster, Kristen L.; Butterfuss, Reese; Kim, Jasmine; Slater, Susan; Bulut, Okan – Grantee Submission, 2020
The overall aim of the current investigation was to develop and validate the initial version of the Minnesota Inference Assessment (MIA). MIA is a web-based measure of inference processes in K-2. MIA leverages the affordances of different media to evaluate inference processes in a nonreading context, using age-appropriate fiction and nonfiction…
Descriptors: Test Construction, Test Validity, Inferences, Computer Assisted Testing
Arce-Ferrer, Alvaro J.; Bulut, Okan – Journal of Experimental Education, 2019
This study investigated the performance of four widely used data-collection designs in detecting test-mode effects (i.e., computer-based versus paper-based testing). The experimental conditions included four data-collection designs, two test-administration modes, and the availability of an anchor assessment. The test-level and item-level results…
Descriptors: Data Collection, Test Construction, Test Format, Computer Assisted Testing
Bulut, Okan; Lei, Ming; Guo, Qi – International Journal of Research & Method in Education, 2018
Item positions in educational assessments are often randomized across students to prevent cheating. However, if altering item positions results in any significant impact on students' performance, it may threaten the validity of test scores. Two widely used approaches for detecting position effects -- logistic regression and hierarchical…
Descriptors: Alternative Assessment, Disabilities, Computer Assisted Testing, Structural Equation Models
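The entry above names logistic regression as one approach for detecting item position effects. A hedged sketch of the idea on simulated data (the effect size, item bank, and seed are fabricated for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Simulate 2000 administrations of one item whose position (1..40) is
# randomized across students; a negative coefficient on position makes
# late presentations harder (the assumed "true" position effect).
n = 2000
position = rng.integers(1, 41, size=n)
logit = 1.0 - 0.05 * position
p_correct = 1 / (1 + np.exp(-logit))
correct = rng.binomial(1, p_correct)

# Regress correctness on position: a significantly non-zero slope
# signals a position effect that could threaten score validity.
model = LogisticRegression().fit(position.reshape(-1, 1), correct)
slope = model.coef_[0][0]  # recovers a negative position effect
```

In practice the model would also condition on ability (or use the hierarchical alternative the abstract mentions) so that a position effect is not confounded with examinee proficiency.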
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
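The entry above explains that a CAT selects each item from a bank based on the examinee's responses, matching difficulty to ability. A minimal sketch of maximum-information selection under the Rasch model (the item bank and ability value are invented for illustration):

```python
import math

# Under the Rasch model, item information at ability theta is p * (1 - p),
# where p is the probability of a correct response. A CAT administers the
# unadministered item with maximum information at the current estimate.
def p_correct(theta, b):
    return 1 / (1 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1 - p)

BANK = {"item1": -1.5, "item2": 0.0, "item3": 0.4, "item4": 2.0}  # difficulties

def select_item(theta, administered):
    """Pick the most informative remaining item for the current theta."""
    candidates = {i: item_information(theta, b)
                  for i, b in BANK.items() if i not in administered}
    return max(candidates, key=candidates.get)

first = select_item(0.3, administered=set())   # item closest in difficulty
```

Information peaks where difficulty matches ability, which is why the test adapts: after each response, theta is re-estimated and the selection repeats.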

