Showing 1 to 15 of 23 results
Peer reviewed
Dönmez, Onur; Akbulut, Yavuz; Telli, Esra; Kaptan, Miray; Özdemir, Ibrahim H.; Erdem, Mukaddes – Education and Information Technologies, 2022
In the current study, we aimed to develop a reliable and valid scale to address individual cognitive load types. Existing scale development studies involved a limited number of items without adequate convergent, discriminant, and criterion validity checks. Through a multistep correlational study, we proposed a three-factor scale with 13 items to…
Descriptors: Test Construction, Content Validity, Construct Validity, Test Reliability
Peer reviewed
Fauville, Géraldine; Strang, Craig; Cannady, Matthew A.; Chen, Ying-Fang – Environmental Education Research, 2019
The Ocean Literacy movement began in the U.S. in the early 2000s, and has recently become an international effort. The focus on marine environmental issues and marine education is increasing, and yet it has been difficult to show progress of the ocean literacy movement, in part, because no widely adopted measurement tool exists. The International…
Descriptors: Marine Education, Environmental Education, Comparative Analysis, Factor Structure
Peer reviewed
PDF on ERIC
Alsadaawi, Abdullah Saleh – International Education Studies, 2017
The Saudi National Assessment Centre administers the Computer Science Teacher Test for teacher certification. The aim of this study is to explore gender differences in candidates' scores, and investigate dimensionality, reliability, and differential item functioning using confirmatory factor analysis and item response theory. The confirmatory…
Descriptors: Factor Structure, Test Items, Test Reliability, Teacher Certification
Deane, Paul; O'Reilly, Tenaha; Chao, Szu-Fu; Dreier, Kelsey – Grantee Submission, 2018
The purpose of the report is to explore some of the mechanisms involved in the writing process. In particular, we examine students' process data (keystroke log analysis) to uncover how students approach a knowledge-telling task using two different task types. In the first task, students were asked to list as many words as possible related to a…
Descriptors: Writing Processes, Prior Learning, Task Analysis, High School Students
Peer reviewed
Villafane, Sachel M.; Bailey, Cheryl P.; Loertscher, Jennifer; Minderhout, Vicky; Lewis, Jennifer E. – Biochemistry and Molecular Biology Education, 2011
Biochemistry is a challenging subject because student learning depends on the application of previously learned concepts from general chemistry and biology to new, biological contexts. This article describes the development of a multiple-choice instrument intended to measure five concepts from general chemistry and three from biology that are…
Descriptors: Biochemistry, Science Tests, Fundamental Concepts, Scientific Concepts
Peer reviewed
Rohaan, Ellen J.; Taconis, Ruurd; Jochems, Wim M. G. – EURASIA Journal of Mathematics, Science & Technology Education, 2011
In the study described in this article, primary school teachers' pedagogical content knowledge (PCK) of technology education was measured with a multiple-choice test, the Teaching of Technology Test (TTT). The aim of the study was to explore the latent factor structure of PCK, which is considered to be a crucial and distinctive domain of teacher…
Descriptors: Teacher Characteristics, Multiple Choice Tests, Factor Structure, Psychometrics
Peer reviewed
Shermis, Mark D.; Long, Susanne K. – Journal of Psychoeducational Assessment, 2009
This study investigated the convergent and discriminant validity of the high-stakes Florida Comprehensive Assessment Test (FCAT) in both reading and writing at grade levels 4, 8, and 10. The data from the 2006 FCAT administration were analyzed via traditional multitrait-multimethod (MTMM) analysis to identify the factor structure and structural…
Descriptors: Structural Equation Models, Multitrait Multimethod Techniques, Writing Tests, Validity
Hutten, Leah R. – 1980
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
Descriptors: Factor Structure, Goodness of Fit, Guessing (Tests), Latent Trait Theory
Peer reviewed
Powell, J. C.; Isbister, Alvin G. – Educational and Psychological Measurement, 1974
Descriptors: Factor Analysis, Factor Structure, Information Utilization, Item Analysis
Capell, Frank J.; Quellmalz, Edys S. – 1980
In the area of large scale assessment, there is increasing interest in the measurement of students' written performance. At issue is whether the task demands in writing assessment can be simplified to involve the production of paragraph-length writing samples and/or multiple choice testing, rather than full-length essays. This study considers data…
Descriptors: Essay Tests, Factor Structure, High Schools, Multiple Choice Tests
Peer reviewed
Wang, Wen-Chung – Journal of Applied Measurement, 2000
Proposes a factorial procedure for investigating differential distractor functioning in multiple-choice items that models each distractor with a distinct distractibility parameter. Results of a simulation study show that the parameters of the proposed model were recovered very well. Analysis of ten four-choice items from a college entrance…
Descriptors: College Entrance Examinations, Distractors (Tests), Factor Structure, Foreign Countries
Peer reviewed
Stone, Clement A.; Yeh, Chien-Chi – Educational and Psychological Measurement, 2006
Examination of a test's internal structure can be used to identify what domains or dimensions are being measured, identify relationships between the dimensions, provide evidence for hypothesized multidimensionality and test score interpretations, and identify construct-irrelevant variance. The purpose of this research is to provide a…
Descriptors: Multiple Choice Tests, Factor Structure, Factor Analysis, Licensing Examinations (Professions)
Ryan, Joseph P.; Hamm, Debra W. – 1976
A procedure is described for increasing the reliability of tests after they have been given and for developing shorter but more reliable tests. Eight tests administered to 200 graduate students studying educational research are analyzed. The analysis considers the original tests, the items loading on the first factor of the test, and the items…
Descriptors: Career Development, Factor Analysis, Factor Structure, Item Analysis
Bennett, Randy Elliot; And Others – 1989
This study examined the relationship between a machine-scorable, constrained free-response computer science item, which required the student to debug a faulty program, and two other item types: multiple-choice items and free-response items requiring production of a computer program. The free-response items were from the College Board's Advanced Placement Computer…
Descriptors: College Students, Computer Science, Computer Software, Debugging (Computers)
Melancon, Janet G.; Thompson, Bruce – 1989
This study investigated the nature of field independence by exploring the structure underlying responses to Forms A and B of a multiple-choice measure of field independence, the Finding Embedded Figures Test (FEFT). Subjects included 302 students (52.7% male) enrolled in mathematics courses at a university in the southern United States. Students…
Descriptors: Cognitive Processes, College Students, Equated Scores, Factor Structure