Showing 1,681 to 1,695 of 9,530 results
Peer reviewed
PDF on ERIC Download full text
Dikmenli, Yurdal; Yakar, Hamza; Konca, Ahmet Sami – Review of International Geographical Education Online, 2018
The main purpose of this study is to develop a disaster awareness scale to determine the disaster consciousness of teacher candidates. The study group consisted of 820 preservice teachers who studied in different departments of Kirsehir Ahi Evran University Faculty of Education in the 2016-2017 academic year. Of the pre-service teachers…
Descriptors: Foreign Countries, Preservice Teachers, Elementary Education, Test Validity
Peer reviewed
Direct link
Chen, Yi-Jui I.; Chen, Yi-Hsin; Anthony, Jason L.; Erazo, Noé A. – Journal of Psychoeducational Assessment, 2022
The Computer-based Orthographic Processing Assessment (COPA) is a newly developed assessment to measure orthographic processing skills, including rapid perception, access, differentiation, correction, and arrangement. In this study, cognitive diagnostic models were used to test if the dimensionality of the COPA conforms to theoretical expectation,…
Descriptors: Elementary School Students, Grade 2, Computer Assisted Testing, Orthographic Symbols
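The cognitive diagnostic modeling mentioned in the entry above can be illustrated with the DINA model, one common CDM. The sketch below is generic and hedged: the attribute profile, Q-matrix row, and slip/guess values are hypothetical and are not taken from the COPA study.

```python
# Minimal sketch of the DINA model, one common cognitive diagnostic model.
# The attribute profile, Q-matrix row, and slip/guess values are hypothetical
# and are not taken from the COPA study above.
import numpy as np

def dina_correct_prob(alpha, q_row, slip, guess):
    """P(correct response) for one item under the DINA model.

    alpha : 0/1 vector of attributes the examinee has mastered
    q_row : 0/1 vector of attributes the item requires
    slip  : probability of answering incorrectly despite full mastery
    guess : probability of answering correctly without full mastery
    """
    eta = int(np.all(alpha >= q_row))   # 1 only if every required attribute is mastered
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1, 1, 0])       # examinee masters attributes 1, 3, 4
q_row = np.array([1, 0, 1, 0, 0])       # item requires attributes 1 and 3
print(dina_correct_prob(alpha, q_row, slip=0.1, guess=0.2))   # -> 0.9
```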
Peer reviewed
PDF on ERIC Download full text
Ehara, Yo – International Educational Data Mining Society, 2022
Language learners are underserved if a word they believe they have already learned still has meanings they have not learned. For example, "circle" as a noun is well known, whereas its use as a verb is not. For artificial-intelligence-based support systems for learning vocabulary, assessing each learner's knowledge of such atypical but common…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
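The IRTree model referenced above decomposes each Likert response into sequential binary decisions. Below is a minimal sketch of the standard data-expansion step for one common two-node tree (direction, then extremity) on a 4-point item; the coding is a conventional choice, and the mixture extension described in the paper is not reproduced.

```python
# Expand a 4-point Likert response (1..4) into binary pseudo-items for a
# two-node IRTree: node 1 = direction (disagree vs. agree side), node 2 =
# extremity (extreme vs. moderate category). One conventional coding; the
# paper's mixture extension is not reproduced here.

def irtree_expand(response):
    """Return (direction, extremity) pseudo-item scores for a 1..4 response."""
    direction = 1 if response in (3, 4) else 0   # 1 = agree side of the scale
    extremity = 1 if response in (1, 4) else 0   # 1 = extreme category chosen
    return direction, extremity

print([irtree_expand(r) for r in (1, 2, 3, 4)])
# -> [(0, 1), (0, 0), (1, 0), (1, 1)]
```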
Peer reviewed
Direct link
Park, Eun-Young; Seo, Hyojeong; Blair, Kwang-Sun Cho; Kang, Min-Chae – SAGE Open, 2021
This study examined the validity of the Korean version of the Child Behavior Checklist (K-CBCL) with 180 children with autism spectrum disorder (ASD) in South Korea. Rasch analysis was applied to examine item fit, item difficulty, suitability of the response scale, and person and item separation indices of the K-CBCL. The results indicated that,…
Descriptors: Autism, Pervasive Developmental Disorders, Foreign Countries, Child Behavior
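The Rasch analyses mentioned above rest on item-level fit statistics computed from standardized residuals. A minimal sketch of infit and outfit mean squares follows, with hypothetical ability and difficulty values rather than the K-CBCL estimates.

```python
# Rasch dichotomous model with infit/outfit mean-square fit statistics for one
# item. Ability and difficulty values are hypothetical, not K-CBCL estimates.
import numpy as np

def rasch_prob(theta, b):
    """P(X = 1) under the Rasch model for abilities theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_fit(x, theta, b):
    """Infit (information-weighted) and outfit (unweighted) mean squares."""
    p = rasch_prob(theta, b)
    var = p * (1 - p)                    # model variance of each response
    sq_resid = (x - p) ** 2
    infit = sq_resid.sum() / var.sum()
    outfit = np.mean(sq_resid / var)     # sensitive to unexpected outliers
    return infit, outfit

theta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # person ability estimates
x = np.array([0, 0, 1, 1, 1, 1])                     # observed item responses
print(item_fit(x, theta, b=0.0))
```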
Peer reviewed
Direct link
Van Vo, De; Csapó, Benő – International Journal of Science Education, 2021
Control of variables strategy (CVS) is a core scientific reasoning skill related to domain-general experimentation for evaluating an experimental system and deducing valid conclusions. This study aims to develop and validate a test measuring CVS in physics for high school students and to explore latent factors predicting item difficulty. The…
Descriptors: Test Construction, Science Tests, Science Process Skills, Physics
Peer reviewed
Direct link
Pools, Elodie; Monseur, Christian – Large-scale Assessments in Education, 2021
Background: The idea of using low-stakes assessment results is often mentioned when designing educational system reforms. However, when tests have no consequences for the students, test takers may not make enough effort when completing the test, and their lack of engagement may negatively affect the validity of the conclusions of the studies that…
Descriptors: Science Tests, Test Validity, Student Motivation, Learner Engagement
Peer reviewed
Direct link
Brann, Kristy L.; Boone, William J.; Splett, Joni W.; Clemons, Courtney; Bidwell, Sarah L. – Journal of Psychoeducational Assessment, 2021
Given the important role that teachers play in supporting student mental health, it is critical that teachers feel confident in their ability to fill such roles. To inform strategies intended to improve teacher confidence in supporting student mental health, a psychometrically sound tool assessing teacher school mental health self-efficacy is needed.…
Descriptors: Teacher Surveys, Test Construction, Psychometrics, Mental Health
Peer reviewed
Direct link
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
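The paper's proposed statistic is not reproduced here (the abstract is truncated), but tests of latent distribution fit commonly compare the observed summed-score distribution with the one implied by the fitted model under a normal latent distribution. The sketch below computes that model-implied distribution for a 2PL model via the Lord-Wingersky recursion with quadrature over a normal prior; the item parameters are hypothetical.

```python
# Model-implied summed-score distribution for a 2PL IRT model under a normal
# latent distribution (Lord-Wingersky recursion over quadrature nodes).
# Item parameters are hypothetical; this is a generic building block for
# comparing observed and model-implied score distributions, not the specific
# statistic proposed in the paper above.
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def summed_score_dist(a, b, nodes=np.linspace(-4, 4, 81)):
    weights = np.exp(-0.5 * nodes ** 2)          # standard normal prior (unnormalized)
    weights /= weights.sum()
    dist = np.zeros(len(a) + 1)
    for theta, w in zip(nodes, weights):
        probs = p2pl(theta, a, b)
        s = np.array([1.0])                      # Pr(score = 0) before any item
        for p in probs:                          # Lord-Wingersky recursion
            s = np.append(s * (1 - p), 0.0) + np.append(0.0, s * p)
        dist += w * s
    return dist                                  # probabilities for scores 0..n_items

a = np.array([1.0, 1.2, 0.8, 1.5])               # discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])              # difficulties
print(summed_score_dist(a, b).round(3))
```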
Peer reviewed
PDF on ERIC Download full text
Supriyati, Yetti; Iriyadi, Deni; Falani, Ilham – Journal of Technology and Science Education, 2021
This study aims to develop a score equating application for computer-based school exams using parallel test kits with 25% anchor items. The items are arranged according to the HOTS (Higher Order Thinking Skills) category and use a scientific approach suited to the characteristics of physics lessons. Therefore, the questions were made using stimulus,…
Descriptors: Physics, Science Instruction, Teaching Methods, Equated Scores
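One standard building block for anchor-item score equating is a linear linking of the two forms' IRT scales through the common items. The sketch below shows mean-sigma linking with hypothetical anchor-item difficulties; it illustrates the generic technique, not the application developed in the study above.

```python
# Mean-sigma linking of two parallel forms through common (anchor) items.
# Anchor-item difficulty estimates are hypothetical; this shows the generic
# transformation, not the equating application developed in the study above.
import numpy as np

def mean_sigma_link(b_anchor_old, b_anchor_new):
    """Slope A and intercept B mapping the new form's scale onto the old scale."""
    A = np.std(b_anchor_old, ddof=1) / np.std(b_anchor_new, ddof=1)
    B = np.mean(b_anchor_old) - A * np.mean(b_anchor_new)
    return A, B

# Difficulty estimates for the same anchor items on each form
b_old = np.array([-1.2, -0.4, 0.3, 1.1])
b_new = np.array([-1.0, -0.2, 0.6, 1.4])

A, B = mean_sigma_link(b_old, b_new)
b_new_on_old_scale = A * b_new + B     # the same A, B also rescale ability estimates
print(round(A, 3), round(B, 3), b_new_on_old_scale.round(2))
```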
Peer reviewed
Direct link
Richardson, Connor J.; Smith, Trevor I.; Walter, Paul J. – Physical Review Physics Education Research, 2021
Ishimoto, Davenport, and Wittmann have previously reported analyses of data from student responses to the Force and Motion Conceptual Evaluation (FMCE), in which they used item response curves (IRCs) to make claims about American and Japanese students' relative likelihood to choose certain incorrect responses to some questions. We have used an…
Descriptors: Motion, Physics, Science Instruction, Concept Formation
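An empirical item response curve of the kind analyzed above plots, for one multiple-choice item, the proportion of examinees selecting each answer option as a function of total score. The sketch below computes such curves from hypothetical response data; the FMCE analyses themselves are not reproduced.

```python
# Empirical item response curves (IRCs): for one multiple-choice item, the
# proportion of examinees choosing each option at each total-score level.
# The response data are hypothetical.
from collections import defaultdict

def item_response_curves(total_scores, item_choices):
    """Map total score -> {option: proportion choosing it} for one item."""
    by_score = defaultdict(list)
    for score, choice in zip(total_scores, item_choices):
        by_score[score].append(choice)
    return {score: {opt: choices.count(opt) / len(choices) for opt in set(choices)}
            for score, choices in sorted(by_score.items())}

total_scores = [3, 3, 5, 5, 5, 8, 8, 8, 8]                    # overall test scores
item_choices = ['B', 'C', 'B', 'A', 'B', 'A', 'A', 'A', 'D']  # option chosen on one item
for score, props in item_response_curves(total_scores, item_choices).items():
    print(score, props)
```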
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Peer reviewed
Direct link
Debeer, Dries; Ali, Usama S.; van Rijn, Peter W. – Journal of Educational Measurement, 2017
Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…
Descriptors: Test Format, Test Construction, Statistical Analysis, Comparative Analysis
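Under the 2PL model, the test characteristic curve referred to above is the sum of item response probabilities and the test information function is the sum of item informations, and parallel-form assembly often selects items so the new form's TIF tracks a target. The sketch below uses hypothetical item parameters and a naive greedy selection; it is not the assembly method studied in the paper.

```python
# Test characteristic curve (TCC) and test information function (TIF) under the
# 2PL model, plus a naive greedy assembly that tracks a target TIF. All item
# parameters and the target are hypothetical.
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

def tcc(theta, a, b):
    """Expected summed score at each theta value."""
    return p2pl(theta, a, b).sum(axis=1)

def tif(theta, a, b):
    """Test information at each theta: sum of a^2 * P * (1 - P) over items."""
    p = p2pl(theta, a, b)
    return (a ** 2 * p * (1 - p)).sum(axis=1)

def greedy_assemble(theta, a, b, target_tif, n_items):
    """Pick items one at a time to minimize squared deviation from the target TIF."""
    chosen = []
    for _ in range(n_items):
        errors = {j: np.sum((tif(theta, a[chosen + [j]], b[chosen + [j]]) - target_tif) ** 2)
                  for j in range(len(a)) if j not in chosen}
        chosen.append(min(errors, key=errors.get))
    return chosen

theta = np.linspace(-3, 3, 13)
rng = np.random.default_rng(0)
a = rng.uniform(0.7, 2.0, size=20)            # item pool discriminations
b = rng.uniform(-2.0, 2.0, size=20)           # item pool difficulties
target = tif(theta, a[:8], b[:8])             # treat the first 8 items as the reference form
print(tcc(theta, a[:8], b[:8]).round(2))      # TCC of the reference form
print(greedy_assemble(theta, a, b, target, n_items=8))
```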
Peer reviewed
Direct link
Wrigley, Terry – FORUM: for promoting 3-19 comprehensive education, 2017
This text represents two extracts from a submission to the House of Commons Select Committee's investigation into primary school tests. The first part is a critique of the 2016 tests, particularly the Reading and Grammar tests for 11-year-olds and also the highly regulated "teacher assessment" of Writing. The second part is a set of…
Descriptors: Elementary Education, Tests, Student Evaluation, Reading Tests
Peer reviewed
Direct link
Peterson, Christina Hamme; Peterson, N. Andrew; Powell, Kristen Gilmore – Measurement and Evaluation in Counseling and Development, 2017
Cognitive interviewing (CI) is a method to identify sources of confusion in assessment items and to assess validity evidence on the basis of content and response processes. We introduce readers to CI and describe a process for conducting such interviews and analyzing the results. Recommendations for best practice are provided.
Descriptors: Test Items, Test Construction, Interviews, Test Validity