Showing all 11 results
Peer reviewed
Alpizar, David; Li, Tongyun; Norris, John M.; Gu, Lixiong – Language Testing, 2023
The C-test is a type of gap-filling test designed to efficiently measure second language proficiency. The typical C-test consists of several short paragraphs with the second half of every second word deleted. The words with deleted parts are considered items nested within the corresponding paragraph. Given this testlet structure, it is commonly…
Descriptors: Psychometrics, Language Tests, Second Language Learning, Test Items
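As a rough illustration of the deletion rule described in the abstract above, the Python sketch below mutilates every second word of a sentence by removing the second half of its letters. The helper is hypothetical and ignores refinements of real C-tests (such as leaving the first and last sentences of a paragraph intact).

```python
# Hypothetical sketch of the basic C-test deletion rule:
# delete the second half of every second word (words 2, 4, 6, ...).
def make_c_test(text: str) -> str:
    words = text.split()
    out = []
    for i, word in enumerate(words):
        if i % 2 == 1 and len(word) > 1:
            keep = (len(word) + 1) // 2  # keep the first half, rounding up
            out.append(word[:keep] + "_" * (len(word) - keep))
        else:
            out.append(word)
    return " ".join(out)

print(make_c_test("The weather was unusually warm for the time of year"))
# -> "The weat___ was unusu____ warm fo_ the ti__ of ye__"
```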
Peer reviewed
Choi, Ikkyu – Language Testing, 2017
Language proficiency constitutes a crucial barrier for prospective international teaching assistants (ITAs). Many US universities administer screening tests to ensure that ITAs possess the required academic oral English proficiency for their TA duties. Such ITA screening tests often elicit a sample of spoken English, which is evaluated in terms of…
Descriptors: Oral English, Academic Discourse, Language Proficiency, Screening Tests
Peer reviewed
Min, Shangchao; He, Lianzhen – Language Testing, 2014
This study examined the relative effectiveness of the multidimensional bi-factor model and multidimensional testlet response theory (TRT) model in accommodating local dependence in testlet-based reading assessment with both dichotomously and polytomously scored items. The data used were 14,089 test-takers' item-level responses to the testlet-based…
Descriptors: Foreign Countries, Item Response Theory, Reading Tests, Test Items
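For readers unfamiliar with the two models compared in the abstract above, a standard way of writing them (generic IRT notation assumed here, not taken from the article) is:

```latex
% Bi-factor model: each item i loads on a general factor \theta_g and on
% exactly one group (testlet) factor \theta_{d(i)}.
P(y_{pi}=1) = \operatorname{logit}^{-1}\!\bigl(a_{i}\theta_{pg} + a_{i}^{*}\theta_{p,d(i)} - b_{i}\bigr)

% Testlet response theory (TRT) model: a random testlet effect \gamma_{p,d(i)}
% shares the item's single discrimination, i.e., a constrained bi-factor model.
P(y_{pi}=1) = \operatorname{logit}^{-1}\!\bigl(a_{i}\bigl(\theta_{p} + \gamma_{p,d(i)}\bigr) - b_{i}\bigr)
```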
Peer reviewed
Elder, Catherine; McNamara, Tim – Language Testing, 2016
Gaining insights from domain experts into how they view communication in real-world settings is recognized as an important authenticity consideration in the development of criteria to assess language proficiency for specific academic or occupational purposes. These "indigenous" criteria represent an articulation of the test construct and…
Descriptors: Feedback (Response), Language Proficiency, Language Tests, Models
Peer reviewed
Hsieh, Mingchuan – Language Testing, 2013
When implementing standard setting procedures, there are two major concerns: variance between panelists and efficiency in conducting multiple rounds of judgments. With regard to the former, the concern is the consistency of the cutoff scores set by different panelists. If the cut scores show an inordinately wide range, then further rounds…
Descriptors: Item Response Theory, Standard Setting (Scoring), Language Tests, English (Second Language)
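The two concerns raised in the abstract above, between-panelist variance and the cost of extra rounds, are typically monitored with simple dispersion statistics on the panelists' recommended cut scores. A minimal sketch with invented data (not from the study) is:

```python
import statistics

# Hypothetical round-1 cut scores (on the test score scale) from eight panelists.
cut_scores = [54, 61, 58, 49, 63, 57, 60, 52]

spread = max(cut_scores) - min(cut_scores)   # range across panelists
sd = statistics.stdev(cut_scores)            # between-panelist standard deviation
sem = sd / len(cut_scores) ** 0.5            # standard error of the mean cut score

print(f"mean cut = {statistics.mean(cut_scores):.1f}, "
      f"range = {spread}, SD = {sd:.1f}, SE = {sem:.1f}")
# A wide range or large SE is the usual trigger for further rounds of judgment.
```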
Peer reviewed
Pill, John – Language Testing, 2016
The "indigenous assessment practices" (Jacoby & McNamara, 1999) in selected health professions were investigated to inform a review of the scope of assessment in the speaking sub-test of a specific-purpose English language test for health professionals, the Occupational English Test (OET). The assessment criteria in current use on…
Descriptors: Health Personnel, Grammar, Language Usage, Patients
Peer reviewed
Zhang, Bo – Language Testing, 2010
This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction to four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…
Descriptors: Language Tests, Classification, Item Response Theory, Statistical Analysis
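As a hedged illustration of what "accuracy of proficiency classification" means in the abstract above, the sketch below simulates responses under a simple two-parameter logistic IRT model (one of the model families the abstract names) and checks how often a pass/fail decision at a fixed cut agrees with the true ability. All parameter values and cuts are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 40

theta = rng.normal(0, 1, n_persons)       # true abilities
a = rng.uniform(0.8, 2.0, n_items)        # item discriminations (invented)
b = rng.normal(0, 1, n_items)             # item difficulties (invented)

# 2PL response probabilities and simulated item scores
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
x = rng.binomial(1, p)

theta_cut = 0.0                           # cut on the ability scale
raw_cut = n_items * 0.5                   # crude observed-score cut (20 of 40)

true_master = theta >= theta_cut
observed_master = x.sum(axis=1) >= raw_cut

# Classification accuracy: proportion of test takers whose observed decision
# agrees with the decision implied by their true ability.
accuracy = np.mean(true_master == observed_master)
print(f"classification accuracy approx. {accuracy:.3f}")
```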
Peer reviewed
Wilson, Mark; Moore, Stephen – Language Testing, 2011
This paper provides a summary of a novel and integrated way to think about item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…
Descriptors: Reading Comprehension, Testing, Social Sciences, Item Response Theory
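The link described in the abstract above can be stated compactly: the Rasch model is a generalized linear mixed model with a Bernoulli response, a logit link, item fixed effects, and a person random effect. Standard notation is assumed here, not quoted from the paper.

```latex
% Rasch model written as a GLMM
y_{pi} \mid \theta_p \sim \mathrm{Bernoulli}(\pi_{pi}), \qquad
\operatorname{logit}(\pi_{pi}) = \theta_p - \beta_i, \qquad
\theta_p \sim N(0, \sigma^2_\theta)
% Adding item discriminations, \operatorname{logit}(\pi_{pi}) = a_i(\theta_p - \beta_i),
% makes the predictor nonlinear in the random effect, hence a nonlinear mixed model.
```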
Peer reviewed
Bae, Jungok; Bachman, Lyle F. – Language Testing, 2010
This study investigated the validity of four theoretically motivated traits of writing ability across English and Korean, based on elementary school students' responses to letter- and story-writing tasks. Their responses were scored analytically and analyzed using confirmatory factor analysis. The findings include the following. A model of writing…
Descriptors: Elementary School Students, Validity, Korean, English (Second Language)
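The confirmatory factor analysis mentioned in the abstract above follows the usual linear measurement model; in generic notation (not the article's), each analytic score loads on its hypothesized writing trait:

```latex
% Generic CFA measurement model: x_j is an analytic score (e.g., a rating of a
% trait on the English or Korean task), \xi_k the latent writing trait it measures.
x_j = \lambda_{jk}\,\xi_k + \delta_j, \qquad
\operatorname{Cov}(\xi) = \Phi, \quad \operatorname{Cov}(\delta) = \Theta_\delta
% Fit is judged by how well \Sigma(\hat\lambda, \hat\Phi, \hat\Theta_\delta)
% reproduces the observed covariance matrix of the scores.
```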
Peer reviewed
Choi, Inn-Chull; Bachman, Lyle F. – Language Testing, 1992
This study is part of a larger one examining the comparability of the First Certificate in English and the Test of English as a Foreign Language. The general assumptions of unidimensionality and goodness of fit were tested. Findings raise questions about the consequences of rejecting or retaining misfitting items. (60 references) (LB)
Descriptors: Comparative Analysis, English (Second Language), Goodness of Fit, Item Response Theory
Peer reviewed
Boldt, Robert F. – Language Testing, 1992
The proportional item response curve (PIRC) assumption was tested by using PIRC to predict the item scores of selected examinees on selected items. Findings show approximate accuracies of prediction for PIRC, the three-parameter logistic model, and a modified Rasch model. (12 references) (Author/LB)
Descriptors: Comparative Analysis, English (Second Language), Factor Analysis, Item Response Theory
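For reference, the two comparison models named in the abstract above have the standard forms below (standard IRT notation assumed, not taken from the article):

```latex
% Three-parameter logistic (3PL) model
P(y_{pi}=1 \mid \theta_p) = c_i + (1 - c_i)\,
  \frac{\exp\!\bigl(a_i(\theta_p - b_i)\bigr)}{1 + \exp\!\bigl(a_i(\theta_p - b_i)\bigr)}

% Rasch model (the special case a_i = 1, c_i = 0)
P(y_{pi}=1 \mid \theta_p) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}
```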