Showing all 9 results
Peer reviewed
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
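The standard error of measurement and score reliability the abstract mentions are linked, in classical test theory, by the identity SEM = SD·√(1 − reliability). A minimal sketch of that relationship (the values below are illustrative, not taken from the article):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Classical test theory standard error of measurement:
    SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Illustrative values only (not from the article):
print(round(sem(sd=10.0, reliability=0.91), 2))  # → 3.0
```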
Peer reviewed
ALKursheh, Taha Okleh; Al-zboon, Habis Saad; AlNasraween, Mo'en Salman – International Journal of Instruction, 2022
This study aimed at comparing the effect of two test item formats (multiple-choice and completion) on estimating person ability, item parameters, and the test information function (TIF). To achieve this aim, two formats of a mathematics (1) test were created, multiple-choice and completion; in its final form, each consisted of 31 items. The…
Descriptors: Comparative Analysis, Test Items, Item Response Theory, Test Format
Peer reviewed
Sheybani, Elias; Zeraatpishe, Mitra – International Journal of Language Testing, 2018
Test method is deemed to affect test scores along with examinee ability (Bachman, 1996). This research studies the role of the method facet in reading comprehension tests. Bachman divided the method facet into five categories, one of which is the nature of the input and the nature of the expected response. This study examined the role of method effect in…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Test Format
Wang, Wei – ProQuest LLC, 2013
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests are often considered superior to tests containing only MC items, although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Descriptors: Equated Scores, Test Format, Test Items, Test Length
Mbella, Kinge Keka – ProQuest LLC, 2012
Mixed-format assessments are increasingly being used in large scale standardized assessments to measure a continuum of skills ranging from basic recall to higher order thinking skills. These assessments are usually comprised of a combination of (a) multiple-choice items which can be efficiently scored, have stable psychometric properties, and…
Descriptors: Educational Assessment, Test Format, Evaluation Methods, Multiple Choice Tests
Peer reviewed
Straton, Ralph G.; Catts, Ralph M. – Educational and Psychological Measurement, 1980
Multiple-choice tests composed entirely of two-, three-, or four-choice items were investigated. Results indicated that the number of alternatives per item was inversely related to item difficulty but directly related to item discrimination. The reliability and standard error of measurement of three-choice item tests were equivalent or superior.…
Descriptors: Difficulty Level, Error of Measurement, Foreign Countries, Higher Education
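The item difficulty and discrimination indices compared in studies like this are classical item statistics: difficulty as the proportion correct, and discrimination as the point-biserial correlation between item score and total score. A minimal sketch of both, assuming a 0/1 response matrix (rows = examinees, columns = items):

```python
from statistics import mean, pstdev

def item_stats(responses):
    """For each item, return (difficulty p, point-biserial discrimination).
    responses: list of per-examinee lists of 0/1 item scores."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    sd_total = pstdev(totals)
    stats = []
    for j in range(n_items):
        col = [row[j] for row in responses]
        p = mean(col)  # classical difficulty: proportion correct
        # point-biserial = corr(item score, total score)
        cov = mean(x * t for x, t in zip(col, totals)) - p * mean(totals)
        sd_item = pstdev(col)
        rpb = cov / (sd_item * sd_total) if sd_item and sd_total else 0.0
        stats.append((p, rpb))
    return stats
```

With more alternatives per item, guessing contributes less to `p`, which is the mechanism behind the inverse difficulty relationship the abstract reports.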
Murchan, Damian P. – 1989
The reliability, content validity, and construct validity were compared for two test formats in a public examination used to assess a secondary school geography course. The 11-item geography portion of the Intermediate Certificate Examination (essay examination) was administered in June 1987 to 400 secondary school students in Ireland who also…
Descriptors: Achievement Tests, Comparative Testing, Construct Validity, Content Validity
Macpherson, Colin R.; Rowley, Glenn L. – 1986
Teacher-made mastery tests were administered to a classroom-sized sample to study their decision consistency. The decision consistency of criterion-referenced tests is usually defined in terms of the proportion of examinees who are classified in the same way after two test administrations. Single-administration estimates of decision consistency were…
Descriptors: Classroom Research, Comparative Testing, Criterion Referenced Tests, Cutting Scores
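The two-administration definition of decision consistency given in the abstract (the proportion of examinees classified the same way, master or non-master, on both occasions) can be sketched directly; the scores and cut score below are hypothetical:

```python
def decision_consistency(scores1, scores2, cutoff):
    """Proportion of examinees classified the same way (master vs.
    non-master) on two administrations, given a cut score."""
    same = sum(
        (s1 >= cutoff) == (s2 >= cutoff)
        for s1, s2 in zip(scores1, scores2)
    )
    return same / len(scores1)

# Hypothetical scores on two administrations, cut score = 60:
first = [72, 55, 61, 48, 90]
second = [68, 59, 57, 52, 88]
print(decision_consistency(first, second, 60))  # → 0.8
```

Single-administration estimates, the subject of the study, approximate this quantity without requiring a retest.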
Peer reviewed
Brantmeier, Cindy – Forum on Public Policy Online, 2006
Bernhardt (2003) claims that half of the variance in second language (L2) reading is accounted for by first language literacy (20%) and second language knowledge (30%), and that one of the central goals of current L2 reading research should be to investigate the 50% of variance that remains unexplained. Part of this variance consists of…
Descriptors: Second Language Learning, Reading Research, Gender Differences, Test Format