Showing 1 to 15 of 39 results
Peer reviewed | Direct link
Marzieh Haghayeghi; Ali Moghadamzadeh; Hamdollah Ravand; Mohamad Javadipour; Hossein Kareshki – Journal of Psychoeducational Assessment, 2025
This study aimed to address the need for a comprehensive assessment tool to evaluate the mathematical abilities of first-grade students through cognitive diagnostic assessment (CDA). The primary challenge involved in this endeavor was to delineate the specific cognitive skills and sub-skills pertinent to first-grade mathematics (FG-M) and to…
Descriptors: Test Construction, Cognitive Measurement, Check Lists, Mathematics Tests
Peer reviewed | Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large-scale international assessments depend on measurement invariance across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
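For readers scanning this entry, the DIF question above can be made concrete with a standard two-parameter logistic (2PL) IRT model; the notation below is generic and not necessarily the model used in the article. Item j exhibits DIF across country groups g when its parameters differ by group after conditioning on ability:

P(X_{ij} = 1 \mid \theta_i, g) = \frac{1}{1 + \exp\left[-a_{jg}\,(\theta_i - b_{jg})\right]}

Here \theta_i is examinee ability, and a_{jg} and b_{jg} are the discrimination and difficulty of item j in group g. Whether a cross-group difference in a_{jg} or b_{jg} signals bias, or is an artifact of fitting a misspecified IRT model, is the question the abstract raises.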
Peer reviewed | Direct link
Ji-Eun Lee; Amisha Jindal; Sanika Nitin Patki; Ashish Gurung; Reilly Norum; Erin Ottmar – Interactive Learning Environments, 2024
This paper demonstrated how to apply Machine Learning (ML) techniques to analyze student interaction data collected in an online mathematics game. Using a data-driven approach, we examined 1) how different ML algorithms influenced the precision of middle-school students' (N = 359) performance (i.e. posttest math knowledge scores) prediction and 2)…
Descriptors: Teaching Methods, Algorithms, Mathematics Tests, Computer Games
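The workflow sketched in this abstract can be illustrated with a short, purely hypothetical scikit-learn example; the file name, feature columns, and choice of algorithms below are assumptions for illustration, not details taken from the study.

    # Hypothetical sketch: comparing regressors that predict posttest math scores
    # from in-game interaction features. File and column names are invented.
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

    # Assumed CSV with one row per student: in-game features plus a posttest score.
    data = pd.read_csv("game_interactions.csv")
    X = data[["n_problems_attempted", "n_resets", "avg_step_time"]]
    y = data["posttest_score"]

    models = {
        "linear": LinearRegression(),
        "random_forest": RandomForestRegressor(random_state=0),
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
    }

    # 5-fold cross-validated RMSE for each algorithm, to compare predictive precision.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5,
                                 scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE = {-scores.mean():.2f}")

Cross-validated error puts the candidate algorithms on a common footing, which is the kind of precision comparison the abstract describes.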
Ji-Eun Lee; Amisha Jindal; Sanika Nitin Patki; Ashish Gurung; Reilly Norum; Erin Ottmar – Grantee Submission, 2023
This paper demonstrated how to apply Machine Learning (ML) techniques to analyze student interaction data collected in an online mathematics game. Using a data-driven approach, we examined: (1) how different ML algorithms influenced the precision of middle-school students' (N = 359) performance (i.e. posttest math knowledge scores) prediction; and…
Descriptors: Teaching Methods, Algorithms, Mathematics Tests, Computer Games
Peer reviewed | Direct link
Ketterlin-Geller, Leanne R.; Perry, Lindsey; Adams, Elizabeth – Applied Measurement in Education, 2019
Despite the call for an argument-based approach to validity over 25 years ago, few examples exist in the published literature. One possible explanation for this outcome is that the complexity of the argument-based approach makes implementation difficult. To counter this claim, we propose that the Assessment Triangle can serve as the overarching…
Descriptors: Validity, Educational Assessment, Models, Screening Tests
Peer reviewed | Direct link
Terzi, Ragip; Sen, Sedat – SAGE Open, 2019
Large-scale assessments are generally designed for summative purposes to compare achievement among participating countries. However, these nondiagnostic assessments have also been adapted in the context of cognitive diagnostic assessment for diagnostic purposes. Given the large investment in these assessments, it would be…
Descriptors: Achievement Tests, Elementary Secondary Education, Foreign Countries, International Assessment
Peer reviewed | Direct link
Pittalis, Marios; Pitta-Pantazi, Demetra; Christou, Constantinos – Journal for Research in Mathematics Education, 2020
A theoretical model describing young students' (Grades 1-3) functional-thinking modes was formulated and validated empirically (n = 345), hypothesizing that young students' functional-thinking modes consist of recursive patterning, covariational thinking, correspondence-particular, and correspondence-general factors. Data analysis suggested that…
Descriptors: Elementary School Students, Thinking Skills, Task Analysis, Profiles
Peer reviewed | Direct link
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
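A minimal sketch of the kind of model comparison this abstract describes is given below using Keras; the input shape, layer sizes, and number of score classes are assumptions, and the authors' actual architectures may differ.

    # Hypothetical sketch: a small convolutional classifier vs. a feed-forward
    # classifier for fixed-size grayscale images of student drawings.
    # Input shape, layer sizes, and number of score classes are assumptions.
    from tensorflow import keras
    from tensorflow.keras import layers

    n_classes = 3              # e.g., score categories; assumed
    input_shape = (64, 64, 1)  # assumed image size

    convolutional = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])

    feed_forward = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Flatten(),                      # pixels fed directly to dense layers
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

    for model in (convolutional, feed_forward):
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    # Each model would then be fit on labeled response images and the two
    # compared on held-out classification accuracy.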
Peer reviewed | PDF full text on ERIC
Stone, Elizabeth; Wylie, E. Caroline – ETS Research Report Series, 2019
We describe the summative assessment component within a K-12 assessment program and our development of a validity argument to support its claims with respect to intended uses and interpretations. First, we describe the "Winsight"® assessment program theory of action, a logic model elucidating mechanisms for how use of the assessment…
Descriptors: Summative Evaluation, Educational Assessment, Test Validity, Test Use
Ji-Eun Lee; Amisha Jindal; Sanika Nitin Patki; Ashish Gurung; Reilly Norum; Erin Ottmar – Grantee Submission, 2022
This paper demonstrates how to apply Machine Learning (ML) techniques to analyze student interaction data collected in an online mathematics game. We examined: (1) how different ML algorithms influenced the precision of middle-school students' (N = 359) performance prediction; and (2) what types of in-game features were associated with student…
Descriptors: Teaching Methods, Algorithms, Mathematics Tests, Computer Games
Peer reviewed | PDF full text on ERIC
Lee, Hollylynne; Bradshaw, Laine; Famularo, Lisa; Masters, Jessica; Azevedo, Roger; Johnson, Sheri; Schellman, Madeline; Elrod, Emily; Sanei, Hamid – Grantee Submission, 2019
The research shared in this conference paper illustrates how an iterative item-development process that involves expert review and cognitive lab interviews with students can be used to collect evidence of validity for assessment items. Analysis of students' reasoning was also used to expand a model for identifying conceptions and…
Descriptors: Middle School Students, Interviews, Misconceptions, Test Items
Peer reviewed | PDF full text on ERIC
Brijmohan, Amanda; Khan, Gulam A.; Orpwood, Graham; Brown, Emily Sandford; Childs, Ruth A. – Canadian Journal of Education, 2018
Developing a new assessment requires the expertise of both content experts and assessment specialists. Using the example of an assessment developed for Ontario's Colleges Mathematics Assessment Program (CMAP), this article (1) describes the decisions that must be made in developing a new assessment, (2) explores the complementary contributions of…
Descriptors: Expertise, Mathematics Instruction, College Mathematics, College Students
Peer reviewed | Direct link
Schulz, Andreas; Leuders, Timo; Rangel, Ulrike – Journal of Psychoeducational Assessment, 2020
We provide evidence of validity for a newly developed diagnostic competence model of operation sense, by both (a) describing the theoretically substantiated development of the competence model in close association with its use within a large-scale formative assessment and (b) providing empirical evidence for the theoretically described cognitive…
Descriptors: Diagnostic Tests, Models, Criterion Referenced Tests, Cognitive Measurement
Oluwalana, Olasumbo O. – ProQuest LLC, 2019
A primary purpose of cognitive diagnosis models (CDMs) is to classify examinees based on their attribute patterns. The Q-matrix (Tatsuoka, 1985), a common component of all CDMs, specifies the relationship between the set of required dichotomous attributes and the test items. Since a Q-matrix is often developed by content-knowledge experts and can…
Descriptors: Classification, Validity, Test Items, International Assessment
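Because the Q-matrix is central to this entry, a small hypothetical example may help: each row is an item, each column an attribute, and a 1 indicates that the item requires that attribute. The attribute labels and the conjunctive (DINA-type) rule below are illustrative assumptions, not details from the dissertation.

    # Hypothetical 4-item x 3-attribute Q-matrix (1 = item requires the attribute).
    import numpy as np

    q_matrix = np.array([
        [1, 0, 0],   # item 1: addition only
        [1, 1, 0],   # item 2: addition and subtraction
        [0, 1, 1],   # item 3: subtraction and place value
        [1, 0, 1],   # item 4: addition and place value
    ])

    # Under a conjunctive (DINA-type) rule, an examinee with attribute pattern
    # alpha is expected to answer item j correctly only if they master every
    # attribute the Q-matrix requires for that item.
    alpha = np.array([1, 1, 0])                       # masters attributes 1 and 2 only
    ideal_response = np.all(q_matrix <= alpha, axis=1).astype(int)
    print(ideal_response)                             # -> [1 1 0 0]

Misspecified entries in such a matrix propagate directly into examinee classifications, which is why empirical Q-matrix validation is of interest.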
Peer reviewed | PDF full text on ERIC
Kogar, Esin Yilmaz; Kelecioglu, Hülya – Journal of Education and Learning, 2017
The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF), and Testlet Response Theory (TRT) models in tests that include testlets, when the number of testlets, the number of independent items, and…
Descriptors: Item Response Theory, Models, Mathematics Tests, Test Items
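For orientation, the three models named above differ mainly in how they handle local dependence among items that share a testlet. A common form of the testlet response model for dichotomous items (generic notation, not necessarily the authors') is:

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\left[-a_j\,(\theta_i - b_j - \gamma_{i\,d(j)})\right]}

where \theta_i is the examinee's general ability, a_j and b_j are item parameters, and \gamma_{i\,d(j)} is a person-specific effect for the testlet d(j) containing item j. Setting every \gamma to zero yields the unidimensional (UIRT) model, while the bifactor model relaxes the constraint further by giving each item its own loading on a testlet-specific factor in addition to its loading on the general factor.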