Showing all 13 results
Peer reviewed
PDF on ERIC
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
PDF on ERIC
Phuong, Do Thi Vu – Pegem Journal of Education and Instruction, 2022
This study investigated common written errors that language learners frequently commit and their causes. A descriptive qualitative approach was conducted with the participation of 57 eleventh-graders at a high school in Hung Yen province during the second term of the academic year 2021-2022. The respondents were requested to participate in five…
Descriptors: Writing Evaluation, Error Analysis (Language), English (Second Language), Second Language Learning
Peer reviewed
Direct link
Wang, Ling – Journal of Educational Multimedia and Hypermedia, 2021
Running records is an important reading assessment for diagnosing early readers' needs in diverse instructional settings across grade levels. This study develops an innovative app to help teachers administer running records assessment and investigates teachers' perceptions of its functionality and usability in practical classrooms. The app offers…
Descriptors: Miscue Analysis, Reading Comprehension, Reading Tests, Computer Software
Peer reviewed
Direct link
Hubbard, Joanna K.; Potts, Macy A.; Couch, Brian A. – CBE-Life Sciences Education, 2017
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover…
Descriptors: Test Format, Questioning Techniques, Undergraduate Students, Objective Tests
Peer reviewed
PDF on ERIC
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist on the item parameters before and after item review. The purpose of this study was to examine the answer changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Peer reviewed
PDF on ERIC
Bokyoung Park – English Teaching, 2017
This study investigated Korean college students' performance as measured by two different vocabulary assessment tools (the Productive Vocabulary Levels Test (PVLT) and the Productive Vocabulary Use Task (PVUT)) and the relationship these assessments have with students' writing proficiency. A total of 72 students participated in the study. The…
Descriptors: Foreign Countries, Vocabulary Development, Language Tests, Second Language Learning
Peer reviewed
PDF on ERIC
Shilo, Gila – Educational Research Quarterly, 2015
The purpose of the study was to examine the quality of open test questions directed to high school and college students. One thousand five hundred examination questions from various fields of study were examined using criteria based on the writing center's directions and guidelines. The 273 questions that did not fulfill the criteria were analyzed…
Descriptors: Questioning Techniques, Questionnaires, Test Construction, High School Students
Peer reviewed
Direct link
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, one that has taken on increased importance under the accountability systems established through the mandate of Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
Peer reviewed
Direct link
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
Peer reviewed
Benson, Philip G.; Dickinson, Terry L. – Educational and Psychological Measurement, 1983
The mixed standard scale is a rating format that allows researchers to count internally inconsistent response patterns. This study investigated the meaning of these counts, using 943 accountants as raters. The counts of internally inconsistent response patterns were not related to reliability as measured by Cronbach's alpha. (Author/BW)
Descriptors: Accountants, Adults, Error Patterns, Rating Scales
Peer reviewed
Twigg, Helen Parramore – Teaching English in the Two-Year College, 1981
Examines students' amusing responses to essay test questions, while maintaining that such responses can still give a teacher a better indication of what students are learning in the classroom than can objective tests. (HTH)
Descriptors: Error Patterns, Essay Tests, Higher Education, Objective Tests
Peer reviewed
Aiken, Lewis R. – Research in Higher Education, 1991
Research and practice in detecting and controlling for cheating on objective tests are reviewed and a small survey of attitudes and practices is reported. Potential of two computer programs to detect error similarities and use of multiple answer-sheet forms to control cheating are discussed. Teacher cheating is also addressed. (Author/MSE)
Descriptors: Answer Sheets, Cheating, Computer Software, Error Patterns
Peer reviewed
Barnett-Foster, Debora; Nagy, Philip – Higher Education, 1996
A study compared response strategies and error patterns of 272 college freshmen on chemistry test items in multiple choice and constructed response formats. Analysis of test data indicated no significant difference in solution strategies used or types of errors committed across test formats. However, interviews with 21 participants revealed…
Descriptors: Chemistry, College Freshmen, Comparative Analysis, Error Patterns