Showing all 14 results
Peer reviewed
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Olney, Andrew M. – Grantee Submission, 2021
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and…
Descriptors: Feedback (Response), Error Patterns, Artificial Intelligence, Test Format
Peer reviewed
Phuong, Do Thi Vu – Pegem Journal of Education and Instruction, 2022
This study investigated the common written errors that language learners frequently commit and their causes. A descriptive qualitative approach was conducted with the participation of 57 eleventh-graders at a high school in Hung Yen province during the second term of the academic year 2021-2022. The respondents were requested to participate in five…
Descriptors: Writing Evaluation, Error Analysis (Language), English (Second Language), Second Language Learning
Peer reviewed
Wang, Ling – Journal of Educational Multimedia and Hypermedia, 2021
Running records is an important reading assessment for diagnosing early readers' needs in diverse instructional settings across grade levels. This study develops an innovative app to help teachers administer running records assessment and investigates teachers' perceptions of its functionality and usability in practical classrooms. The app offers…
Descriptors: Miscue Analysis, Reading Comprehension, Reading Tests, Computer Software
Sinharay, Sandip – Grantee Submission, 2018
Tatsuoka (1984) suggested several extended caution indices and their standardized versions that have been used as person-fit statistics by researchers such as Drasgow, Levine, and McLaughlin (1987), Glas and Meijer (2003), and Molenaar and Hoijtink (1990). However, these indices are only defined for tests with dichotomous items. This paper extends…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Error Patterns
Peer reviewed
Hubbard, Joanna K.; Potts, Macy A.; Couch, Brian A. – CBE - Life Sciences Education, 2017
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover…
Descriptors: Test Format, Questioning Techniques, Undergraduate Students, Objective Tests
Peer reviewed
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist in the item parameters before and after item review. The purpose of this study was to examine the answer-changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Peer reviewed
Bokyoung Park – English Teaching, 2017
This study investigated Korean college students' performance as measured by two different vocabulary assessment tools (the Productive Vocabulary Levels Test (PVLT) and the Productive Vocabulary Use Task (PVUT)) and the relationship these assessments have with students' writing proficiency. A total of 72 students participated in the study. The…
Descriptors: Foreign Countries, Vocabulary Development, Language Tests, Second Language Learning
Peer reviewed
Shilo, Gila – Educational Research Quarterly, 2015
The purpose of the study was to examine the quality of open test questions directed to high school and college students. One thousand five hundred examination questions from various fields of study were examined using criteria based on writing centers' directions and guidelines. The 273 questions that did not fulfill the criteria were analyzed…
Descriptors: Questioning Techniques, Questionnaires, Test Construction, High School Students
Peer reviewed
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, with increased importance with the accountability systems in existence through the mandate of Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
Peer reviewed
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
Peer reviewed
Benson, Philip G.; Dickinson, Terry L. – Educational and Psychological Measurement, 1983
The mixed standard scale is a rating format that allows researchers to count internally inconsistent response patterns. This study investigated the meaning of these counts, using 943 accountants as raters. The counts of internally inconsistent response patterns were not related to reliability as measured by Cronbach's alpha. (Author/BW)
Descriptors: Accountants, Adults, Error Patterns, Rating Scales
Peer reviewed
Barnett-Foster, Debora; Nagy, Philip – Higher Education, 1996
A study compared response strategies and error patterns of 272 college freshmen on chemistry test items in multiple choice and constructed response formats. Analysis of test data indicated no significant difference in solution strategies used or types of errors committed across test formats. However, interviews with 21 participants revealed…
Descriptors: Chemistry, College Freshmen, Comparative Analysis, Error Patterns
Gradman, Harry L.; Hanania, Edith – 1988
A study investigated the variability of language performance on different types of testing tasks, global versus discrete-focus. Three tests (cloze, multiple-choice, and fill-in-the-blank) were developed to measure learners' knowledge of five verb forms. The tests, containing corresponding items designed to elicit equivalent structures, were…
Descriptors: Cloze Procedure, College Students, Comparative Analysis, English (Second Language)