Vedyanto – Advances in Language and Literary Studies, 2016
This study investigated the correlation between picture use in test format and the vocabulary achievement of 41 male and female seventh graders at Santu Petrus Junior High School. Hence, in this paper, the writer presents how strong the correlation is between picture use in test format and the vocabulary achievement of the seventh graders, and the students'…
Descriptors: Correlation, Test Format, Vocabulary Development, Pictorial Stimuli
Thompson, Gregory L.; Cox, Troy L.; Knapp, Nieves – Foreign Language Annals, 2016
While studies have rated the validity and reliability of the Oral Proficiency Interview (OPI) and the Oral Proficiency Interview-Computer (OPIc) independently, limited research has analyzed the interexam reliability of these tests, and studies have yet to be conducted comparing the results of Spanish language learners who take…
Descriptors: Comparative Analysis, Oral Language, Language Proficiency, Spanish
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Wolf, Raffaela – ProQuest LLC, 2013
Preservation of equity properties was examined using four equating methods--IRT True Score, IRT Observed Score, Frequency Estimation, and Chained Equipercentile--in a mixed-format test under a common-item nonequivalent groups (CINEG) design. Equating of mixed-format tests under a CINEG design can be influenced by factors such as attributes of the…
Descriptors: Testing, Item Response Theory, Equated Scores, Test Items
Brame, Cynthia J.; Biel, Rachel – CBE - Life Sciences Education, 2015
Testing within the science classroom is commonly used for both formative and summative assessment purposes to let the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies that suggest that repeated retrieval…
Descriptors: Undergraduate Students, Testing Programs, Feedback (Response), Test Format
Culligan, Brent – Language Testing, 2015
This study compared three common vocabulary test formats, the Yes/No test, the Vocabulary Knowledge Scale (VKS), and the Vocabulary Levels Test (VLT), as measures of vocabulary difficulty. Vocabulary difficulty was defined as the item difficulty estimated through Item Response Theory (IRT) analysis. Three tests were given to 165 Japanese students,…
Descriptors: Language Tests, Test Format, Comparative Analysis, Vocabulary
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Wu, Yi-Fang – ProQuest LLC, 2015
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Descriptors: Item Response Theory, Test Items, Accuracy, Computation
Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a group of states working together to develop a modern assessment that replaces previous state standardized tests. It provides better information for teachers and parents to identify where a student needs help, or is excelling, so they are able to enhance instruction to…
Descriptors: Literacy, Language Arts, Scoring Formulas, Scoring
Macis, Marijana; Schmitt, Norbert – Language Teaching Research, 2017
This study investigated learner knowledge of the figurative meanings of 30 collocations that can be both literal and figurative. One hundred and seven Chilean Spanish-speaking university students of English were asked to complete a meaning-recall collocation test in which the target items were embedded in non-defining sentences. Results showed…
Descriptors: Figurative Language, Second Language Learning, Phrase Structure, Spanish Speaking
Reisenleutner, Sandra – Language Learning in Higher Education, 2016
Oral language exams at university often consist of an individual or group conversation with the examiner about topics dealt with during the semester. A question-and-answer format is applied, and vocabulary and structures are tested. As a consequence, the oral exam does not reflect the action-oriented approaches that often form part of classroom activities…
Descriptors: Oral Language, Language Tests, Vocabulary Development, Universities
Öz, Hüseyin; Özturan, Tuba – Journal of Language and Linguistic Studies, 2018
This article reports the findings of a study that sought to investigate whether computer-based vs. paper-based test-delivery mode has an impact on the reliability and validity of an achievement test for a pedagogical content knowledge course in an English teacher education program. A total of 97 university students enrolled in the English as a…
Descriptors: Computer Assisted Testing, Testing, Test Format, Teaching Methods
Thompson, Robyn; Johnston, Susan S. – Journal of the American Academy of Special Education Professionals, 2017
The purpose of this investigation was to explore whether a difference existed between the effectiveness of paper-based format and tablet computer-based format Social Story interventions on frequency of undesired behaviors. An adapted alternating treatment design was implemented with four children with autism spectrum disorder (ASD). Data regarding…
Descriptors: Intervention, Behavior Problems, Pervasive Developmental Disorders, Autism
Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
The combination of different item formats is found quite often in large-scale assessments, and dimensionality analyses often indicate that tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment