Showing all 8 results
Peer reviewed
PDF on ERIC
Çakiroglu, Ünal; Saylan, Esin; Çevik, Isak; Özkan, Adem – International Review of Research in Open and Distributed Learning, 2022
This quasi-experimental study explored how different online exam types differentiate learners' academic achievement and perceived learning. The participants comprised 95 undergraduate students enrolled in an English course at a Turkish university, divided into three groups, each taking a different type of quiz: multiple-choice, open-ended, and mixed…
Descriptors: Test Format, Computer Assisted Testing, Electronic Learning, English (Second Language)
Peer reviewed
PDF on ERIC
Stradiotova, Eva; Nemethova, Ildiko; Stefancik, Radoslav – Advanced Education, 2021
In this paper, we compare the evaluation of students' language skills in two environments, on-site and online. Testing took place through the Moodle platform, where we used e-tests during the first and second waves of the pandemic. The main aim of the research was to find out what impact the environment in which the testing…
Descriptors: Computer Assisted Testing, Test Format, Synchronous Communication, Physical Environment
Peer reviewed
Direct link
Nicklin, Christopher; Vitta, Joseph P. – Language Testing, 2022
Instrument measurement with Rasch analysis is a common procedure in language assessment research. A recent systematic review of 215 studies involving Rasch analysis in language testing and applied linguistics research reported that 23 different software packages had been utilized. However, none of the analyses were conducted with one of…
Descriptors: Programming Languages, Vocabulary Development, Language Tests, Computer Software
Peer reviewed
PDF on ERIC
Kiliçkaya, Ferit – Online Submission, 2019
The current study aims to determine the effect of multiple-choice, matching, gap-fill and word-formation items used in assessing L2 vocabulary on learners' performance, and to obtain learners' views on the use of these item types in vocabulary assessment. Convenience sampling was used, and the participants of the study…
Descriptors: Second Language Learning, Second Language Instruction, English (Second Language), Multiple Choice Tests
Peer reviewed
PDF on ERIC
Koretz, Daniel; Yu, Carol; Mbekeani, Preeya P.; Langi, Meredith; Dhaliwal, Tasmin; Braslow, David – AERA Open, 2016
The current focus on assessing "college and career readiness" raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of…
Descriptors: Predictor Variables, College Freshmen, Grade Point Average, College Entrance Examinations
Peer reviewed
Chapelle, Carol – Language Testing, 1988
Investigates the relationship between field independence and language measures. Results indicate varying relationships of field independence with cloze, dictation, and multiple-choice language tests. These relationships also differ for native speakers in regular or remedial English classes, and for nonnative speakers. 53 references cited. Cloze…
Descriptors: Cloze Procedure, College Freshmen, Dictation, English (Second Language)
Ross, Steven – 1987
A study investigated the use of a narrative discourse task to test oral English proficiency in non-native speakers for screening and placement. The subjects were Japanese university freshmen entering a five-level course in spoken English. The subjects were shown an animated cartoon of a Japanese folk tale, with narration in Japanese, and later…
Descriptors: Audiolingual Skills, College Freshmen, English (Second Language), Foreign Countries
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings can affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability