Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
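Illustrative note: under the NEAT design, the common (anchor) items are what tie the separately calibrated forms together. As a minimal sketch of one classical linking approach (a mean/sigma transformation, not necessarily the method used in this dissertation), the anchor items' difficulty estimates from the two calibrations determine the constants that place one form's IRT scale onto the other's. All numbers and variable names below are hypothetical.

```python
import numpy as np

# Hypothetical difficulty estimates for the common (anchor) items,
# calibrated separately on the old and new forms.
b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_new = np.array([-0.9, -0.1, 0.3, 1.1, 1.7])

# Mean/sigma linking constants: scale A and shift B that place the
# new-form parameters on the old form's theta metric.
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()

# Transform the new form's item difficulties (and abilities) onto the old scale.
b_new_linked = A * b_new + B
print(A, B, b_new_linked)
```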
Wang, Weimeng – ProQuest LLC, 2022
Recent advancements in testing differential item functioning (DIF) have greatly relaxed restrictions made by the conventional multiple group item response theory (IRT) model with respect to the number of grouping variables and the assumption of predefined DIF-free anchor items. The application of the L₁ penalty in DIF detection has…
Descriptors: Factor Analysis, Item Response Theory, Statistical Inference, Item Analysis
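Illustrative note: as a hedged sketch of the general idea (not this author's estimator), an L₁ penalty shrinks group-specific item effects toward zero so that only items whose coefficients survive the penalty are flagged for DIF, without prespecifying anchor items. Here a penalized logistic regression of item responses on a matching variable, group, and their interaction stands in for the regularized IRT model; all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)          # stand-in for the matching variable
group = rng.integers(0, 2, size=n)    # reference (0) vs. focal (1) group

# Simulate one item with uniform DIF against the focal group.
logit = 1.2 * ability - 0.3 - 0.6 * group
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# L1-penalized logistic regression: the group and interaction coefficients
# are shrunk to exactly zero unless the data support DIF.
X = np.column_stack([ability, group, ability * group])
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(model.coef_)   # nonzero group/interaction terms suggest DIF
```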
Peer reviewed
Dahl, Laura S.; Staples, B. Ashley; Mayhew, Matthew J.; Rockenbach, Alyssa N. – Innovative Higher Education, 2023
Surveys with rating scales are often used in higher education research to measure student learning and development, yet testing and reporting on the longitudinal psychometric properties of these instruments is rare. Rasch techniques allow scholars to map item difficulty and individual aptitude on the same linear, continuous scale to compare…
Descriptors: Surveys, Rating Scales, Higher Education, Educational Research
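Illustrative note: the Rasch claim in this abstract, that item difficulty and person ability sit on the same logit scale, means their difference directly gives a response probability. A generic numeric illustration (not the authors' analysis):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of success given ability theta and item difficulty b."""
    return 1 / (1 + np.exp(-(theta - b)))

# A person at 1.0 logits facing items at -0.5, 1.0, and 2.0 logits:
print(rasch_prob(1.0, np.array([-0.5, 1.0, 2.0])))   # ~0.82, 0.50, 0.27
```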
Peer reviewed
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
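Illustrative note: one common way to operationalize disengagement (offered here as a generic sketch, not the authors' procedure) is a response-time threshold: responses faster than some fraction of an item's typical time are flagged as rapid guesses and can be screened out before item selection or scoring. The threshold rule and data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical response times (seconds) for 5 examinees x 4 items.
rt = rng.lognormal(mean=3.0, sigma=0.6, size=(5, 4))
rt[2, 1] = 1.4   # plant one implausibly fast response

# Normative-threshold rule: flag responses faster than 10% of the
# item's median response time as rapid guesses.
threshold = 0.10 * np.median(rt, axis=0)
rapid_guess = rt < threshold

# Response-time effort per examinee: share of responses that were engaged.
rte = 1.0 - rapid_guess.mean(axis=1)
print(rapid_guess)
print(rte)
```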
Peer reviewed
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence in estimating the 2PL or the 3PL model in the marginal maximum likelihood with the expectation-maximization (MML-EM) estimation method, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
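Illustrative note: to show why a prior helps here (a schematic sketch, not the article's MML-EM estimator), the maximum a posteriori objective simply adds a log-prior to the item's log-likelihood, so implausible values such as an extreme slope or a pseudo-guessing estimate near 1 are penalized. The sketch conditions on known abilities rather than marginalizing over them, and the lognormal/beta priors and simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)
theta = rng.normal(size=500)                      # abilities treated as known, for brevity
a_true, b_true, c_true = 1.3, 0.2, 0.2
p_true = c_true + (1 - c_true) / (1 + np.exp(-a_true * (theta - b_true)))
y = rng.random(500) < p_true                       # simulated 3PL responses

def neg_map(params):
    a, b, c = params
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    loglik = np.sum(y * np.log(p) + (~y) * np.log(1 - p))
    # Priors: lognormal on the slope, beta on the pseudo-guessing parameter.
    logprior = stats.lognorm.logpdf(a, s=0.5, scale=1.0) + stats.beta.logpdf(c, 5, 17)
    return -(loglik + logprior)

fit = minimize(neg_map, x0=[1.0, 0.0, 0.15], method="L-BFGS-B",
               bounds=[(0.1, 5), (-4, 4), (0.01, 0.45)])
print(fit.x)   # MAP estimates of (a, b, c)
```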
Peer reviewed
Türe, Ersin; Bikmaz, Fatma – Educational Policy Analysis and Strategic Research, 2023
In this research, teachers' orientations in curriculum theories were identified via an assessment tool grounded in Marsh and Willis's (2003) classification of curriculum theorists. "The Inventory of Orientations in Curriculum Theories" was developed to identify teachers' orientations in curriculum theories in this…
Descriptors: Teacher Attitudes, Educational Attitudes, Curriculum, Educational Theories
Peer reviewed
van Rijn, Peter W.; Attali, Yigal; Ali, Usama S. – Journal of Experimental Education, 2023
We investigated whether and to what extent different scoring instructions, timing conditions, and direct feedback affect performance and speed. An experimental study manipulating these factors was designed to address these research questions. According to the factorial design, participants were randomly assigned to one of twelve study conditions.…
Descriptors: Scoring, Time, Feedback (Response), Performance
Peer reviewed
Deniz Arslan; Ömer Faruk Tamul; Murat Dogan Sahin; Ugur Sak – Journal of Pedagogical Research, 2023
An examination of gender-related differential item functioning was conducted on the verbal subtests of the Anadolu-Sak Intelligence Scale. Analyses were conducted using the scale standardization data (N = 4641). A Mantel-Haenszel statistic was used to detect differential item functioning (DIF). A total of 58 verbal analogical reasoning items, 20…
Descriptors: Foreign Countries, Intelligence Tests, Gender Bias, Gender Differences
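Illustrative note: for readers unfamiliar with the statistic named in this abstract, the Mantel-Haenszel DIF procedure compares the odds of a correct response for reference and focal examinees within matched total-score strata, and the common odds ratio is often reported on the ETS delta scale. This is a generic worked example with hypothetical counts, not the study's analysis.

```python
import numpy as np

# Hypothetical 2x2 tables per total-score stratum:
# columns = (reference correct, reference wrong, focal correct, focal wrong)
strata = np.array([
    [30, 20, 22, 28],
    [45, 15, 35, 25],
    [60, 10, 50, 20],
])

A, B, C, D = strata.T                 # unpack counts per stratum
n = strata.sum(axis=1)                # stratum sizes

# Mantel-Haenszel common odds ratio and its delta-scale effect size.
alpha_mh = np.sum(A * D / n) / np.sum(B * C / n)
delta_mh = -2.35 * np.log(alpha_mh)   # ETS delta metric; |delta| >= 1.5 is conventionally "large"
print(alpha_mh, delta_mh)
```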
Peer reviewed
Oluwaseyi Aina Gbolade Opesemowo – Research in Social Sciences and Technology, 2023
Local Item Dependence (LID) is a violation of Local Item Independence (LII), which can lead to overestimating or underestimating a candidate's ability on mathematics items and can create validity problems. The study investigated the intra and inter-LID of mathematics items. The study made use of ex-post facto research. The population encompassed all…
Descriptors: Foreign Countries, Secondary School Students, Item Response Theory, Test Items
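Illustrative note: a common screen for local item dependence (a sketch of the general technique, not necessarily this study's method) is Yen's Q3, the correlation between item residuals after the IRT model's expected scores are removed; item pairs whose Q3 stands well above the rest suggest dependence. The Rasch expectations and data below are simulated, and true abilities are used in place of estimates to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 1000, 6
theta = rng.normal(size=(n_persons, 1))
b = np.linspace(-1.5, 1.5, n_items)            # hypothetical Rasch difficulties

p = 1 / (1 + np.exp(-(theta - b)))              # model-implied probabilities
y = (rng.random((n_persons, n_items)) < p).astype(float)
y[:, 5] = y[:, 4]                               # plant dependence between the last two items

# Yen's Q3: correlations between item residuals (observed - expected).
resid = y - p
q3 = np.corrcoef(resid, rowvar=False)
print(np.round(q3, 2))                          # large off-diagonal entry for the dependent pair
```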
Gorney, Kylie – ProQuest LLC, 2023
Aberrant behavior refers to any type of unusual behavior that would not be expected under normal circumstances. In educational and psychological testing, such behaviors have the potential to severely bias the aberrant examinee's test score while also jeopardizing the test scores of countless others. It is therefore crucial that aberrant examinees…
Descriptors: Behavior Problems, Educational Testing, Psychological Testing, Test Bias
Peer reviewed
Baryktabasov, Kasym; Jumabaeva, Chinara; Brimkulov, Ulan – Research in Learning Technology, 2023
Many examinations with thousands of participating students are organized worldwide every year. Usually, these students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books, and organize the…
Descriptors: Information Technology, Computer Assisted Testing, Test Items, Adaptive Testing
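Illustrative note: as background on how an adaptive test picks the next question (a minimal sketch under the 2PL model, not this paper's system), the item with the largest Fisher information at the current ability estimate is typically selected from the remaining pool. The item bank and ability estimate below are hypothetical.

```python
import numpy as np

# Hypothetical 2PL item bank: discrimination a, difficulty b.
a = np.array([0.8, 1.2, 1.5, 0.9, 1.1])
b = np.array([-1.0, -0.3, 0.2, 0.9, 1.6])
administered = {1}                    # items already given
theta_hat = 0.4                       # current provisional ability estimate

# Fisher information of each 2PL item at theta_hat: a^2 * p * (1 - p).
p = 1 / (1 + np.exp(-a * (theta_hat - b)))
info = a**2 * p * (1 - p)
info[list(administered)] = -np.inf    # never re-administer an item

next_item = int(np.argmax(info))
print(next_item, info)
```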
Laura Laclede – ProQuest LLC, 2023
Because non-cognitive constructs can influence student success in education beyond academic achievement, it is essential that they are reliably conceptualized and measured. Within this context, there are several gaps in the literature related to correctly interpreting the meaning of scale scores when a non-standard response option like I do not…
Descriptors: High School Students, Test Wiseness, Models, Test Items
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Peer reviewed
Balbuena, Sherwin – International Journal of Assessment Tools in Education, 2023
Depression is a latent characteristic that is measured through self-reported or clinician-mediated instruments such as scales and inventories. The precision of depression estimates largely depends on the validity of the items used and on the truthfulness of people responding to these items. The existing methodology in instrumentation based on a…
Descriptors: Depression (Psychology), Test Items, Test Validity, Test Reliability
Peer reviewed
DeMars, Christine E. – Applied Measurement in Education, 2021
Estimation of parameters for the many-facets Rasch model requires that conditional on the values of the facets, such as person ability, item difficulty, and rater severity, the observed responses within each facet are independent. This requirement has often been discussed for the Rasch models and 2PL and 3PL models, but it becomes more complex…
Descriptors: Item Response Theory, Test Items, Ability, Scores
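Illustrative note: to make the facet structure concrete (a schematic example, not the article's analysis), the many-facets Rasch model adds a rater-severity term to the usual person-minus-item logit; the local-independence requirement discussed in the abstract applies to responses conditional on all of these facet values. Numbers are hypothetical.

```python
import numpy as np

def mfrm_prob(theta, delta, rho):
    """Many-facets Rasch (dichotomous case): P(success | ability, item difficulty, rater severity)."""
    return 1 / (1 + np.exp(-(theta - delta - rho)))

# Same person and item, scored by a lenient (-0.4) and a severe (+0.6) rater:
print(mfrm_prob(0.8, 0.2, np.array([-0.4, 0.6])))   # ~0.73 vs. ~0.50
```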