Showing 1,066 to 1,080 of 9,530 results
Peer reviewed
Ayako Aizawa – Vocabulary Learning and Instruction, 2024
The Vocabulary Size Test (VST) measures English learners' decontextualised receptive vocabulary knowledge of written English and has nine bilingual versions with multiple-choice options written in other languages. This study used the English-Japanese version of the VST to investigate the extent to which loanword items were answered correctly by…
Descriptors: Linguistic Borrowing, Second Language Learning, Native Language, English (Second Language)
Peer reviewed
Dave Kush; Anne Dahl; Filippa Lindahl – Second Language Research, 2024
Embedded questions (EQs) are islands for filler-gap dependency formation in English, but not in Norwegian. Kush and Dahl (2022) found that first language (L1) Norwegian participants often accepted filler-gap dependencies into EQs in second language (L2) English, and proposed that this reflected persistent transfer from Norwegian of the functional…
Descriptors: Transfer of Training, Norwegian, Native Language, Grammar
Peer reviewed
Laura S. Kabiri; Catherine R. Barber; Thomas M. McCabe; Augusto X. Rodriguez – HAPS Educator, 2024
Multiple-choice questions (MCQs) are commonly used in undergraduate introductory science, technology, engineering, and mathematics (STEM) courses, and substantial evidence supports the use of student-created questions to promote learning. However, research on student-created MCQ exams as an assessment method is more limited, and no studies have…
Descriptors: Physiology, Science Tests, Student Developed Materials, Test Construction
Peer reviewed
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Peer reviewed
Jessica B. Koslouski; Sandra M. Chafouleas; Amy Briesch; Jacqueline M. Caemmerer; Brittany Melo – School Mental Health, 2024
We are developing the Equitable Screening to Support Youth (ESSY) Whole Child Screener to address concerns prevalent in existing school-based screenings that impede goals to advance educational equity using universal screeners. Traditional assessment development does not include end users in the early development phases, instead relying on a…
Descriptors: Screening Tests, Psychometrics, Validity, Child Development
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Annenberg Institute for School Reform at Brown University, 2024
Analyzing heterogeneous treatment effects (HTE) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and pre-intervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
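The "standard practice" this abstract describes — interacting treatment status with a pre-intervention characteristic such as pretest score — can be sketched as follows. This is a minimal illustration on simulated data; the variable names, sample size, and coefficient values are hypothetical and are not taken from the study.

```python
import numpy as np

# Simulate a study with a heterogeneous treatment effect:
# the true effect of treatment is 0.5 + 0.3 * pretest.
rng = np.random.default_rng(0)
n = 500
pretest = rng.normal(0.0, 1.0, n)          # pre-intervention covariate
treat = rng.integers(0, 2, n)              # random treatment assignment
y = (1.0 + 0.8 * pretest
     + treat * (0.5 + 0.3 * pretest)       # heterogeneous effect
     + rng.normal(0.0, 0.5, n))            # outcome noise

# Standard HTE regression: outcome on intercept, treatment, pretest,
# and the treatment-by-pretest interaction, fit by least squares.
X = np.column_stack([np.ones(n), treat, pretest, treat * pretest])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1] estimates the average effect at pretest = 0 (~0.5);
# beta[3] estimates how the effect changes per unit of pretest (~0.3).
```

A nonzero interaction coefficient (`beta[3]`) is the usual evidence that different groups respond differently to treatment — the question the paper's methods build on.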
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2024
Longitudinal models of individual growth typically emphasize between-person predictors of change but ignore how growth may vary "within" persons because each person contributes only one point at each time to the model. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Peer reviewed
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Peer reviewed
Balta, Nuri; Japashov, Nursultan; Glamočić, Džana Salibašić; Mešić, Vanes – Journal of Turkish Science Education, 2022
In this paper, we described the development of the High School Wave Optics Test (HSWOT). Firstly, 56 conceptual, multiple-choice items with a single correct answer and three distractors were created. Next, we conducted an initial review of the items which resulted in reducing the item pool to 44 highest quality items. Validity evidence and…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, High School Students
Peer reviewed
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
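Bock's nominal response model (NRM), which the abstract says these studies use to rank incorrect responses, assigns each response category of an item its own slope and intercept and models choice probabilities as a softmax over them. A minimal sketch, with purely illustrative parameter values (not estimates from any of the cited studies):

```python
import numpy as np

def nrm_probs(theta, a, c):
    """Category response probabilities under Bock's NRM.

    theta : examinee ability
    a, c  : per-category slope and intercept parameters
    P(k | theta) = exp(a_k * theta + c_k) / sum_j exp(a_j * theta + c_j)
    """
    z = np.asarray(a) * theta + np.asarray(c)
    ez = np.exp(z - z.max())          # subtract max for numerical stability
    return ez / ez.sum()

# Hypothetical three-option item: two distractors plus the key,
# with the key (index 2) given the largest slope.
a = [0.0, 0.6, 1.4]   # category slopes (illustrative)
c = [0.0, 0.2, -0.5]  # category intercepts (illustrative)
p = nrm_probs(theta=1.0, a=a, c=c)
# probabilities sum to 1; at high theta the high-slope category dominates
```

Ordering categories by slope is one common way to rank distractors, and the paper's point — that rankings using only half the estimated parameters discard information — concerns how the `a` and `c` parameters are used together.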
Peer reviewed
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
The computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. The multidimensional CAT (MCAT) designs differ in terms of different item selection, ability estimation, and termination methods being used. This study aims at investigating the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency
Peer reviewed
McClurg, Virginia M.; Codalata, Bonnie M.; Bell, Sherry M.; McCallum, R. Steve – Gifted Child Today, 2022
The psychometric integrity of a curriculum-based measure to screen for academic giftedness (Monitoring Instructional Responsiveness: Reading [MIR:R]) was evaluated by examining its ceiling, item gradient, and predictive capacity using 460 fourth grade students. Eighty fourth graders (17.39%) scored one standard deviation above the MIR:R mean. Ten…
Descriptors: Psychometrics, Screening Tests, Talent Identification, Academically Gifted
Peer reviewed
Delican, Burak – International Journal of Curriculum and Instruction, 2022
In this research, the questions in the Turkish Course (Grades 2, 3, 4) worksheets were examined in terms of various classification systems. To this end, the questions in the worksheets were evaluated with the document-material analysis technique, in line with a qualitative research design. During the research process, Turkish Course…
Descriptors: Worksheets, Elementary School Students, Turkish, Classification
Peer reviewed
Saenna, Watcharaporn; Phusee-orn, Songsak – Higher Education Studies, 2022
The purposes of the research were to: (1) create a scientific creativity measure for high school students; (2) examine the quality of the scientific creativity scale of the created test; (3) establish a benchmark for scientific creativity scores for high school students; and (4) study the scientific creativity level of students in the senior high…
Descriptors: Foreign Countries, Test Construction, High School Students, Creativity
Peer reviewed
Heine, Jörg-Henrik; Robitzsch, Alexander – Large-scale Assessments in Education, 2022
Research Question: This paper examines the overarching question of to what extent different analytic choices may influence the inference about country-specific cross-sectional and trend estimates in international large-scale assessments. We take data from the assessment of PISA mathematics proficiency from the four rounds from 2003 to 2012 as a…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students