Showing 31 to 45 of 45 results
Peer reviewed
Raatz, Ulrich – Language Testing, 1985
Argues that classical test theory cannot be used at the item level on "authentic" language tests. However, if the total score is derived by adding the scores of a number of different and independent parts, test reliability can be estimated. Suggests using the Classical Latent Additives model to examine test-part homogeneity. (Author/SED)
Descriptors: Item Analysis, Latent Trait Theory, Models, Second Language Learning
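The Raatz (1985) abstract above rests on a standard idea: the reliability of a total score formed by summing independent parts can be estimated from each part's variance and reliability. The sketch below illustrates one common formula of this kind (a stratified-alpha-style composite reliability); the function name and all values are illustrative assumptions, not taken from the article.

```python
# Hypothetical sketch: reliability of a composite built from independent
# parts, given each part's variance and reliability (stratified-alpha-style
# formula). Values are invented, not drawn from Raatz (1985).
def composite_reliability(part_vars, part_rels):
    """rho_X = 1 - sum(sigma_i^2 * (1 - rho_i)) / sigma_X^2, where for
    independent (uncorrelated) parts sigma_X^2 is the sum of part variances."""
    total_var = sum(part_vars)
    error_var = sum(v * (1 - r) for v, r in zip(part_vars, part_rels))
    return 1 - error_var / total_var

part_vars = [4.0, 6.0, 5.0]     # assumed score variances of three test parts
part_rels = [0.70, 0.80, 0.75]  # assumed reliabilities of those parts
print(round(composite_reliability(part_vars, part_rels), 3))  # -> 0.757
```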
Peer reviewed
Kim, Mikyung – Language Testing, 2001
Investigates differential item functioning (DIF) across two different broad language groupings, Asian and European, in a speaking test in which the test takers' responses were rated polytomously. Data were collected from 1038 nonnative speakers of English from France, Hong Kong, Japan, Spain, Switzerland, and Thailand who took the SPEAK test in…
Descriptors: English (Second Language), Foreign Countries, Item Analysis, Language Tests
Peer reviewed
Alderson, J. Charles; Percsich, Richard; Szabo, Gabor – Language Testing, 2000
Reports on the potential problems in scoring responses to sequencing tests, the development of a computer program to overcome these difficulties, and an exploration of the value of scoring procedures. (Author/VWL)
Descriptors: Computer Software, Foreign Countries, Item Analysis, Language Tests
Peer reviewed
Direct link
Song, Min-Young – Language Testing, 2008
This paper concerns the divisibility of comprehension subskills measured in L2 listening and reading tests. Motivated by the administration of the new Web-based English as a Second Language Placement Exam (WB-ESLPE) at UCLA, this study addresses the following research questions: first, to what extent do the WB-ESLPE listening and reading items…
Descriptors: Structural Equation Models, Second Language Learning, Reading Tests, Inferences
Peer reviewed
Takala, Sauli; Kaftandjieva, Felianka – Language Testing, 2000
Analyzes gender-uniform differential item functioning (DIF) in a second language vocabulary test with the tools of item response theory to study potential gender impact on the test performance measured by different item composites. Results show that while there are test items with indications of DIF in favor of either females or males, the test as…
Descriptors: English (Second Language), Foreign Countries, Item Analysis, Language Tests
Peer reviewed
Henning, Grant – Language Testing, 1988
Violations of item unidimensionality on language tests produced distorted estimates of person ability, and violations of person unidimensionality produced distorted estimates of item difficulty. The Bejar Method was sensitive to such distortions. (Author)
Descriptors: Construct Validity, Content Validity, Difficulty Level, Item Analysis
Peer reviewed
Direct link
Abbott, Marilyn L. – Language Testing, 2007
In this article, I describe a practical application of the Roussos and Stout (1996) multidimensional analysis framework for interpreting group performance differences on an ESL reading proficiency test. Although a variety of statistical methods have been developed for flagging test items that function differentially for equal ability examinees…
Descriptors: Test Bias, Test Items, English (Second Language), Second Language Learning
Peer reviewed
Spelberg, Henk C. Lutje; de Boer, Paulien; van den Bos, Kees P. – Language Testing, 2000
Compares two language tests with different item types. The tests are the Dutch Reynell test and the BELL test. Both tests were administered to 64 Dutch kindergarten children with an average age of 70.3 months. Regression analyses indicate that item type does not contribute significantly to prediction of item difficulty, but the linguistic…
Descriptors: Comparative Analysis, Dutch, Foreign Countries, Item Analysis
Peer reviewed
Direct link
Pae, Tae-Il; Park, Gi-Pyo – Language Testing, 2006
The present study utilized both the IRT-LR (item response theory likelihood ratio) and a series of CFA (confirmatory factor analysis) multi-sample analyses to systematically examine the relationships between DIF (differential item functioning) and DTF (differential test functioning) with a random sample of 15 000 Korean examinees. Specifically,…
Descriptors: Item Response Theory, Factor Analysis, Test Bias, Test Validity
Peer reviewed
Brown, James Dean – Language Testing, 1988
The reliability and validity of a cloze procedure used as an English-as-a-second-language (ESL) test in China were improved by applying traditional item analysis and selection techniques. The 'best' test items were chosen on the basis of item facility and discrimination indices, and were administered as a 'tailored cloze.' 29 references listed.…
Descriptors: Adaptive Testing, Cloze Procedure, English (Second Language), Foreign Countries
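The Brown (1988) abstract above mentions selecting the "best" cloze items on the basis of item facility and discrimination indices. As a rough illustration only, the sketch below computes the usual classical versions of these indices (proportion correct, and corrected item-total point-biserial correlation) on simulated data; the selection thresholds are arbitrary assumptions, not those used in the study.

```python
# Rough illustration of classical item facility and discrimination indices
# of the kind the abstract mentions; data and cut-offs are invented.
import numpy as np

def item_analysis(responses):
    """responses: examinees x items matrix of 0/1 scores."""
    total = responses.sum(axis=1)
    facility = responses.mean(axis=0)               # proportion correct per item
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]              # total score with item removed
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return facility, discrimination

# Simulate 500 examinees x 40 items from a simple Rasch-like model.
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))
difficulty = rng.normal(size=(1, 40))
prob = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((500, 40)) < prob).astype(int)

fac, disc = item_analysis(responses)
keep = (fac > 0.3) & (fac < 0.7) & (disc > 0.25)    # arbitrary selection rule
print(f"{keep.sum()} of {keep.size} items retained")
```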
Peer reviewed
Reynolds, Trudy; And Others – Language Testing, 1994
Presents a study conducted to provide a comparative analysis of five item analysis indices using both IRT and non-IRT indices to describe the characteristics of flagged items and to investigate the appropriateness of logistic regression as an item analysis technique for further studies. The performance of five item analysis indices was examined.…
Descriptors: College Students, Comparative Analysis, English (Second Language), Item Analysis
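The Reynolds et al. (1994) abstract above raises logistic regression as an item analysis technique for flagging items. One widely used formulation (in the spirit of Swaminathan and Rogers' uniform-DIF test) compares a model predicting item success from the total score alone against one that adds group membership. The sketch below is a minimal illustration of that comparison under simulated data; it is not the procedure or the data used in the study.

```python
# Minimal logistic-regression check for uniform DIF, in the spirit of
# Swaminathan and Rogers (1990). Purely illustrative; not the procedure
# or data used by Reynolds et al.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def lr_dif_pvalue(item, total, group):
    """Likelihood-ratio p-value: does group membership predict item success
    after conditioning on the total (matching) score?"""
    base = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
    aug = sm.Logit(item, sm.add_constant(np.column_stack([total, group]))).fit(disp=0)
    lr_stat = 2 * (aug.llf - base.llf)
    return chi2.sf(lr_stat, df=1)

# Simulated example: an item with built-in uniform DIF against the focal group.
rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)                    # 0 = reference, 1 = focal
ability = rng.normal(size=n)                     # stands in for the matching score
p = 1 / (1 + np.exp(-(ability - 0.8 * group)))
item = (rng.random(n) < p).astype(int)
print(f"uniform-DIF p-value: {lr_dif_pvalue(item, ability, group):.4f}")
```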
Peer reviewed
Bachman, Lyle F.; And Others – Language Testing, 1996
Discusses the value of content considerations in the design of language tests and the implications of the findings of various investigations of content analysis. The article argues that content analysis can be viewed as the application of a model of test design to a particular measurement instrument, using judgments of trained analysts. (26…
Descriptors: College Students, Content Analysis, English (Second Language), Item Analysis
Peer reviewed
Henning, Grant; And Others – Language Testing, 1994
Examines the effectiveness of an automated language proficiency test assembly system at an air force base English Language Center. The study focuses on the equivalence of mean score difficulty, total score variance, and intercorrelation covariance across test norms and finds a high level of test-form equivalence and internal consistency. (nine…
Descriptors: Computer Assisted Testing, English (Second Language), Foreign Nationals, Item Analysis
Peer reviewed
Chapelle, Carol – Language Testing, 1988
Investigates the relationship between field independence and language measures. Results indicate varying relationships of field independence with cloze, dictation, and multiple-choice language tests. These relationships also differ for native speakers in regular or remedial English classes, and for nonnative speakers. 53 references cited. Cloze…
Descriptors: Cloze Procedure, College Freshmen, Dictation, English (Second Language)
Peer reviewed
Direct link
Carr, Nathan T. – Language Testing, 2006
The present study focuses on the task characteristics of reading passages and key sentences in a test of second language reading. Using a new methodological approach to describe variation in test task characteristics and explore how differences in these characteristics might relate to examinee performance, it posed the two following research…
Descriptors: English for Academic Purposes, Sentences, Reading Comprehension, Factor Analysis