Showing all 14 results
Peer reviewed | PDF on ERIC
Ayanwale, Musa Adekunle; Ndlovu, Mdutshekelwa – Education Sciences, 2021
This study investigated the scalability of a cognitive multiple-choice test through the Mokken package in the R programming language for statistical computing. A 2019 mathematics West African Examinations Council (WAEC) instrument was used to gather data from randomly drawn K-12 participants (N = 2866; Male = 1232; Female = 1634; Mean age = 16.5…
Descriptors: Cognitive Tests, Multiple Choice Tests, Scaling, Test Items
Peer reviewed | PDF on ERIC
Krell, Moritz; Khan, Samia; van Driel, Jan – Education Sciences, 2021
The development and evaluation of valid assessments of scientific reasoning are an integral part of research in science education. In the present study, we used the linear logistic test model (LLTM) to analyze how item features related to text complexity and the presence of visual representations influence the overall item difficulty of an…
Descriptors: Cognitive Processes, Difficulty Level, Science Tests, Logical Thinking
Peer reviewed | PDF on ERIC
Setiawan, Johan; Sudrajat, Ajat; Aman; Kumalasari, Dyah – International Journal of Evaluation and Research in Education, 2021
This study aimed to: (1) produce higher order thinking skill (HOTS) assessment instruments in learning Indonesian history; (2) know the validity of HOTS assessment instruments in learning Indonesian history; and (3) find out the characteristics of HOTS questions in learning Indonesian history. This study employed the research and development…
Descriptors: Foreign Countries, History Instruction, Thinking Skills, Test Construction
Peer reviewed | PDF on ERIC
Saepuzaman, Duden; Istiyono, Edi; Haryanto – Pegem Journal of Education and Instruction, 2022
HOTS is one of the skills that need to be developed in the 21st century. This study aims to determine the characteristics of the Fundamental Physics Higher-Order Thinking Skill (FundPhysHOTS) test for prospective physics teachers using Item Response Theory (IRT) analysis. This study uses a quantitative approach. 254 prospective physics…
Descriptors: Thinking Skills, Physics, Science Process Skills, Cognitive Tests
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then with respect to specific issues surrounding a particular item cluster-based assessment design. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon.…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
Peer reviewed | Direct link
Stevenson, Claire E.; Heiser, Willem J.; Resing, Wilma C. M. – Journal of Psychoeducational Assessment, 2016
Multiple-choice (MC) analogy items are often used in cognitive assessment. However, in dynamic testing, where the aim is to provide insight into potential for learning and the learning process, constructed-response (CR) items may be of benefit. This study investigated whether training with CR or MC items leads to differences in the strategy…
Descriptors: Logical Thinking, Multiple Choice Tests, Test Items, Cognitive Tests
Peer reviewed | PDF on ERIC
Frey, Bruce B.; Ellis, James D.; Bulgreen, Janis A.; Hare, Jana Craig; Ault, Marilyn – Electronic Journal of Science Education, 2015
"Scientific argumentation," defined as the ability to develop and analyze scientific claims, support claims with evidence from investigations of the natural world, and explain and evaluate the reasoning that connects the evidence to the claim, is a critical component of current science standards and is consistent with "Common Core…
Descriptors: Test Construction, Science Tests, Persuasive Discourse, Science Process Skills
Peer reviewed
Newman, Dianna L.; And Others – Applied Measurement in Education, 1988
The effect of using statistical and cognitive item difficulty to determine item order on multiple-choice tests was examined, using 120 undergraduate students. Students performed better when items were ordered by increasing cognitive difficulty rather than decreasing difficulty. The statistical ordering of difficulty had little effect on…
Descriptors: Cognitive Tests, Difficulty Level, Higher Education, Multiple Choice Tests
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Brandon, E. P. – 1992
In his pioneer investigations of deductive logical reasoning competence, R. H. Ennis (R. H. Ennis and D. H. Paulus, 1965) used a multiple-choice format in which the premises are given, and it is asked whether the conclusion would then be true. In the adaptation of his work for use in Jamaica, the three possible answers were stated as…
Descriptors: Adults, Cognitive Tests, Comparative Testing, Competence
Peer reviewed
Norris, Stephen P. – Journal of Educational Measurement, 1990
The relevance of verbal reports of thinking for validating multiple-choice critical thinking tests was examined. Results from 342 senior high school students in Newfoundland (Canada) indicate that verbal reports can meet a necessary condition of validation data and collecting data does not alter thinking and performance. (SLD)
Descriptors: Cognitive Tests, Critical Thinking, Foreign Countries, High School Students
Waller, Michael I. – 1986
This study compares the fit of the 3-parameter model to the Ability Removing Random Guessing (ARRG) model on data from a wide range of tests of cognitive ability in three representative samples. When the guessing parameters under the 3-parameter model are estimated individually for each item, the 3-parameter model yields the better fit to…
Descriptors: Cognitive Tests, Cohort Analysis, Elementary Secondary Education, Equations (Mathematics)
Masters, James R. – 1986
In 1985, for the first time, Pennsylvania's student assessment program included measures of a higher order thinking skills goal termed Analytical Thinking. These tests utilize a decision-making model to assess such skills as drawing inferences, identifying appropriate information to gather before making a decision, analogical reasoning,…
Descriptors: Abstract Reasoning, Academic Achievement, Age Differences, Cognitive Ability
Peer reviewed | Direct link
Chang, Shu-Nu; Chiu, Mei-Hung – International Journal of Science and Mathematics Education, 2005
Scientific literacy and authenticity have gained a great deal of attention worldwide over the past few decades. The goal of the study was to develop various authentic assessments to investigate students' scientific literacy in response to Taiwan's 1997 curriculum reform. In the process, whether ninth graders were able to apply school…
Descriptors: Curriculum Development, Test Items, Educational Assessment, Scientific Principles