Showing 376 to 390 of 9,520 results
Peer reviewed
PDF on ERIC (full text available)
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method for exploring relationships among observed variables and identifying latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, the sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
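As a quick illustration of the method this abstract surveys, here is a minimal exploratory factor analysis sketch in Python using scikit-learn. The two-factor structure, the loadings, and the sample size are invented for illustration and are not taken from the study.

```python
# A minimal sketch of exploratory factor analysis; the latent structure
# below is a made-up assumption, not data from the cited article.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, k = 500, 6                      # hypothetical sample size and item count
latent = rng.normal(size=(n, 2))   # two hypothetical latent factors
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
X = latent @ loadings.T + rng.normal(scale=0.5, size=(n, k))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)
print(np.round(fa.components_.T, 2))  # estimated loadings, items x factors
```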
Peer reviewed
Direct link
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Peer reviewed
Direct link
Hyland, Diarmaid; O'Shea, Ann – Teaching Mathematics and Its Applications, 2022
In this study, we surveyed all tertiary-level institutions in Ireland to find out how many of them use diagnostic tests and what kinds of mathematical content areas and topics appear on these tests. The information gathered provides an insight into what instructors expect students to know on entry to university and what they expect…
Descriptors: Foreign Countries, Diagnostic Tests, Mathematics Tests, College Freshmen
Peer reviewed
Direct link
Arikan, Serkan; Erktin, Emine; Pesen, Melek – International Journal of Science and Mathematics Education, 2022
The aim of this study is to construct a STEM competencies assessment framework and provide validity evidence by empirically testing its structure. Common interdisciplinary assessment frameworks for STEM seem to be scarce in the literature. Many studies use students' mathematics or science scores obtained from large-scale assessments or exams to…
Descriptors: STEM Education, Competence, Interdisciplinary Approach, Test Construction
Peer reviewed
PDF on ERIC (full text available)
Ally, Said – International Journal of Education and Development using Information and Communication Technology, 2022
Moodle has become the heart of teaching and learning services in education. The software is viewed as a trusted modern platform for transforming learning and teaching modes from conventional face-to-face to fully online classes. However, its use for online examination remains very limited, even though it includes a state-of-the-art Quiz Module with…
Descriptors: Integrated Learning Systems, Computer Assisted Testing, Information Security, Evaluation Methods
Peer reviewed
Direct link
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
Peer reviewed
Direct link
Student, Sanford R.; Gong, Brian – Educational Measurement: Issues and Practice, 2022
We address two persistent challenges in large-scale assessments of the Next Generation Science Standards: (a) the validity of score interpretations that target the standards broadly and (b) how to structure claims for assessments of this complex domain. The NGSS pose a particular challenge for specifying claims about students that evidence from…
Descriptors: Science Tests, Test Validity, Test Items, Test Construction
Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
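For readers unfamiliar with the NEAT design, the common items are what allow separately calibrated forms to be placed on one IRT scale. Below is a minimal sketch of mean-sigma linking under a 2PL model; all parameter values are invented, and this is one standard linking method, not necessarily the procedure used in the dissertation.

```python
# A minimal sketch of mean-sigma linking under the NEAT design: anchor-item
# difficulty estimates from two separate calibrations determine a linear
# transformation that puts Form Y's 2PL parameters on Form X's scale.
# All numbers are invented for illustration.
import numpy as np

b_common_X = np.array([-1.2, -0.4, 0.3, 1.1])   # anchors on Form X's scale
b_common_Y = np.array([-0.9, -0.1, 0.6, 1.5])   # same anchors, Form Y's scale

A = b_common_X.std() / b_common_Y.std()          # slope of the linking line
B = b_common_X.mean() - A * b_common_Y.mean()    # intercept

def link_to_X(a_Y, b_Y):
    """Transform Form Y 2PL parameters onto Form X's theta scale."""
    return a_Y / A, A * b_Y + B

a_new, b_new = link_to_X(np.array([1.0, 1.4]), np.array([0.2, -0.8]))
print(A, B, a_new, b_new)
```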
Wang, Weimeng – ProQuest LLC, 2022
Recent advancements in testing differential item functioning (DIF) have greatly relaxed restrictions made by the conventional multiple group item response theory (IRT) model with respect to the number of grouping variables and the assumption of predefined DIF-free anchor items. The application of the L₁ penalty in DIF detection has…
Descriptors: Factor Analysis, Item Response Theory, Statistical Inference, Item Analysis
Peer reviewed
Direct link
Dahl, Laura S.; Staples, B. Ashley; Mayhew, Matthew J.; Rockenbach, Alyssa N. – Innovative Higher Education, 2023
Surveys with rating scales are often used in higher education research to measure student learning and development, yet testing and reporting on the longitudinal psychometric properties of these instruments is rare. Rasch techniques allow scholars to map item difficulty and individual aptitude on the same linear, continuous scale to compare…
Descriptors: Surveys, Rating Scales, Higher Education, Educational Research
Peer reviewed
Direct link
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
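The selection mechanism this abstract refers to can be sketched compactly: a CAT typically administers the unused item with maximum Fisher information at the interim ability estimate, so a rapid guess that distorts that estimate also distorts the next selection. The 2PL item parameters below are invented for illustration.

```python
# A minimal sketch of maximum-information item selection in a 2PL CAT.
# Item parameters are made up; the point is only that a biased interim
# theta (e.g., after a rapid guess) changes which item is picked next.
import numpy as np

a = np.array([0.8, 1.2, 1.5, 2.0, 1.0])    # hypothetical discriminations
b = np.array([-1.5, -0.5, 0.0, 0.8, 1.6])  # hypothetical difficulties

def info(theta):
    """Fisher information of each 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def next_item(theta, administered):
    """Pick the not-yet-administered item with maximum information."""
    masked = np.where(administered, -np.inf, info(theta))
    return int(np.argmax(masked))

print(next_item(0.9, np.array([False] * 5)))   # informative near theta = 0.9
print(next_item(-1.2, np.array([False] * 5)))  # a deflated theta picks
                                               # a different, easier item
```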
Peer reviewed
Direct link
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence when estimating the 2PL or 3PL model with marginal maximum likelihood estimation via expectation-maximization (MML-EM), priors can be placed on the item slope parameter in the 2PL model or on the pseudo-guessing parameter in the 3PL model, and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
Peer reviewed
PDF on ERIC (full text available)
Türe, Ersin; Bikmaz, Fatma – Educational Policy Analysis and Strategic Research, 2023
In this research, teachers' orientations in curriculum theories were identified via an assessment tool grounded in Marsh and Willis's (2003) classification of curriculum theorists. "The Inventory of Orientations in Curriculum Theories" was developed to identify teachers' orientations in curriculum theories in this…
Descriptors: Teacher Attitudes, Educational Attitudes, Curriculum, Educational Theories
Peer reviewed
Direct link
van Rijn, Peter W.; Attali, Yigal; Ali, Usama S. – Journal of Experimental Education, 2023
We investigated whether and to what extent different scoring instructions, timing conditions, and direct feedback affect performance and speed. An experimental study manipulating these factors was designed to address these research questions. According to the factorial design, participants were randomly assigned to one of twelve study conditions.…
Descriptors: Scoring, Time, Feedback (Response), Performance
Peer reviewed
PDF on ERIC (full text available)
Deniz Arslan; Ömer Faruk Tamul; Murat Dogan Sahin; Ugur Sak – Journal of Pedagogical Research, 2023
An examination of gender-related differential item functioning was conducted on the verbal subtests of the Anadolu-Sak Intelligence Scale. Analyses were conducted using the scale standardization data (N = 4641). A Mantel-Haenszel statistic was used to detect differential item functioning (DIF). A total of 58 verbal analogical reasoning items, 20…
Descriptors: Foreign Countries, Intelligence Tests, Gender Bias, Gender Differences
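For context, the Mantel-Haenszel procedure this abstract mentions stratifies examinees by total score and pools a 2x2 odds ratio for the studied item across strata. A minimal sketch follows; the counts are invented, and the ETS delta scaling shown is the conventional one, not a detail reported in the abstract.

```python
# A minimal sketch of the Mantel-Haenszel DIF statistic: per score stratum,
# a 2x2 table (group x correct/incorrect) contributes to a pooled common
# odds ratio. All counts below are invented for illustration.
import math

# Per-stratum counts: (ref_correct, ref_wrong, focal_correct, focal_wrong)
strata = [
    (40, 20, 30, 30),
    (55, 15, 45, 25),
    (70, 10, 60, 15),
]

num = sum(A * D / (A + B + C + D) for A, B, C, D in strata)
den = sum(B * C / (A + B + C + D) for A, B, C, D in strata)
alpha_mh = num / den                    # pooled (common) odds ratio
delta_mh = -2.35 * math.log(alpha_mh)   # ETS delta metric; near 0 = little DIF
print(round(alpha_mh, 3), round(delta_mh, 3))
```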