Showing 1 to 15 of 34 results
Peer reviewed
Avsec, Stanislav; Jamšek, Janez – International Journal of Technology and Design Education, 2016
Technological literacy is identified as a vital achievement of technology- and engineering-intensive education. It guides the design of technology and technical components of educational systems and defines competitive employment in technological society. Existing methods for measuring technological literacy are incomplete or complicated,…
Descriptors: Technological Literacy, Elementary School Students, Secondary School Students, Evaluation Methods
Peer reviewed
PDF on ERIC
Al-Habashneh, Maher Hussein; Najjar, Nabil Juma – Journal of Education and Practice, 2017
This study aimed to construct a criterion-referenced test to measure the research and statistical competencies of graduate students at Jordanian governmental universities. The initial form of the test consisted of (50) multiple-choice items; the test was then presented to (5) arbitrators with expertise in measurement and evaluation to…
Descriptors: Foreign Countries, Criterion Referenced Tests, Graduate Students, Test Construction
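The abstract stops short of describing how the arbitrators' judgments were combined. As a hedged illustration only, and not necessarily the procedure used in this study, a common way to summarize expert review of an item pool is Lawshe's content validity ratio; in the Python sketch below, the five-member panel and the retention cutoff are assumptions for the example.

    # Hedged sketch: Lawshe's content validity ratio (CVR) for expert item review.
    # A standard technique, shown for context; not necessarily the study's method.

    def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
        """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
        half = n_panelists / 2
        return (n_essential - half) / half

    # Example: five arbitrators rating three hypothetical items as "essential".
    ratings_essential = {"item_01": 5, "item_02": 4, "item_03": 2}
    for item, n_e in ratings_essential.items():
        cvr = content_validity_ratio(n_e, 5)
        keep = "retain" if cvr >= 0.99 else "review"  # small panels need near-unanimity
        print(f"{item}: CVR = {cvr:+.2f} -> {keep}")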
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2012
This was a study of differential item functioning (DIF) for grades 4, 7, and 10 reading and mathematics items from state criterion-referenced tests. The tests were composed of multiple-choice and constructed-response items. Gender DIF was investigated using POLYSIBTEST and a Rasch procedure. The Rasch procedure flagged more items for DIF than did…
Descriptors: Test Bias, Gender Differences, Reading Tests, Mathematics Tests
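Neither POLYSIBTEST nor the authors' Rasch procedure is reproduced in the abstract. As a rough, hedged illustration of what "flagging an item for DIF" involves, the sketch below computes the Mantel-Haenszel common odds ratio for a single dichotomous item after matching examinees on total score; the data layout and the |delta| >= 1.5 "large DIF" threshold follow common ETS practice and are assumptions, not details from the article.

    # Hedged sketch: Mantel-Haenszel DIF screen for one dichotomous item.
    # For each total-score stratum, a 2x2 table of group (reference/focal)
    # by item response (correct/incorrect) is accumulated.
    import math
    from collections import defaultdict

    def mantel_haenszel_delta(records):
        """records: iterable of (total_score, group, correct), with group in
        {"ref", "foc"} and correct in {0, 1}. Returns the ETS delta statistic."""
        strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
        for score, group, correct in records:
            cell = strata[score]
            if group == "ref":
                cell["A" if correct else "B"] += 1
            else:
                cell["C" if correct else "D"] += 1
        num = den = 0.0
        for cell in strata.values():
            n = sum(cell.values())
            if n == 0:
                continue
            num += cell["A"] * cell["D"] / n   # ref-correct * focal-incorrect
            den += cell["B"] * cell["C"] / n   # ref-incorrect * focal-correct
        if num == 0 or den == 0:
            return float("nan")
        alpha = num / den                      # MH common odds ratio
        return -2.35 * math.log(alpha)         # delta metric; |delta| >= 1.5 ~ large DIF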
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
When determining criterion-referenced test length, problems of guessing are shown to be more serious than expected. A new method of scoring is presented that corrects for guessing without assuming that guessing is random. Empirical investigations of the procedure are examined. Test length can be substantially reduced. (Author/CM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Scoring
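Wilcox's scoring rule itself is not given in the abstract. For contrast only, the conventional correction for guessing, which does assume wrong answers come from random guessing among the k options, is the rule his method is designed to improve on; a minimal Python sketch:

    # Conventional formula scoring, shown for contrast: it assumes random
    # guessing among all k options, exactly the assumption Wilcox avoids.
    def corrected_score(num_right: int, num_wrong: int, k: int) -> float:
        return num_right - num_wrong / (k - 1)

    # Example: 30 right and 10 wrong on 4-option items -> 30 - 10/3 ≈ 26.67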
Wilcox, Rand R. – 1981
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Descriptors: Achievement Tests, Criterion Referenced Tests, Guessing (Tests), Mathematical Models
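The Dirichlet-multinomial machinery behind these papers is not summarized in the abstract. As a hedged, much simpler illustration of the second problem, identifying ineffective distractors, the sketch below applies two classical rules of thumb: a distractor chosen by fewer than about 5% of examinees, or chosen more often by high scorers than by low scorers, is a candidate for revision. The thresholds and data layout are assumptions, not Wilcox's procedure.

    # Hedged sketch: classical distractor analysis, not Wilcox's model-based
    # approach. responses[i] is examinee i's chosen option for one item.
    def flag_weak_distractors(responses, totals, key, min_rate=0.05):
        """responses: chosen options; totals: matching total test scores;
        key: the correct option. Returns distractors that look ineffective."""
        n = len(responses)
        cut = sorted(totals)[n // 2]                     # split at the median score
        flagged = []
        for opt in set(responses) - {key}:
            picks = [t for r, t in zip(responses, totals) if r == opt]
            rate = len(picks) / n
            hi = sum(1 for t in picks if t >= cut)
            lo = len(picks) - hi
            if rate < min_rate or hi > lo:               # rarely chosen, or attracts high scorers
                flagged.append(opt)
        return flagged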
Bryce, Jennifer; And Others – Programmed Learning and Educational Technology, 1983
Describes the development of a test using slides and corresponding multiple-choice questions for second-year occupational therapy students in a child studies course. Reasons for choosing the test format are discussed and an outline of test construction procedures is given. An evaluation of the test indicates problems encountered and benefits gained…
Descriptors: Child Development, Criterion Referenced Tests, Foreign Countries, Higher Education
Peer reviewed
Johnstone, A. H.; And Others – School Science Review, 1983
Discusses problems in using multiple-choice items on criterion-referenced tests, offering alternative methods to determine if objectives have been met. These include batteries of true-false items, structural communication grids, concept linkages, and a scoring system which takes partial knowledge into consideration. (JN)
Descriptors: Chemistry, Criterion Referenced Tests, Foreign Countries, Multiple Choice Tests
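The article's own schemes (structural communication grids, concept linkages) are not detailed in the abstract. As a hedged sketch of the general idea of crediting partial knowledge, the example below scores a battery of true-false statements attached to one objective as the proportion classified correctly rather than all-or-nothing; the format and weighting are assumptions for illustration, not the authors' scoring system.

    # Hedged sketch: partial-credit scoring for a multiple true-false battery,
    # illustrating the general idea only; not the system from the article.
    def battery_score(responses: list[bool], key: list[bool]) -> float:
        """Proportion of statements classified correctly (0.0 to 1.0)."""
        return sum(r == k for r, k in zip(responses, key)) / len(key)

    # Example: 4 of 5 statements right earns 0.8 instead of 0 under all-or-nothing.
    print(battery_score([True, False, True, True, False],
                        [True, False, True, False, False]))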
Roid, Gale H.; And Others – 1980
An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…
Descriptors: Algorithms, Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level
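The algorithm is only outlined in the abstract: identify high-information words, then transform the sentences containing them into multiple-choice items. The code below is a hedged, simplified reading of that idea; it treats the rarest long content words in a passage as "high-information", blanks one out to form a stem, and draws distractors from the other selected words. The word-frequency heuristic and distractor rule are my assumptions, not Roid's algorithm.

    # Hedged sketch of sentence-to-item transformation, loosely inspired by the
    # description in the abstract; the heuristics here are assumptions.
    import random
    import re
    from collections import Counter

    def draft_items(passage: str, n_items: int = 2, n_distractors: int = 3):
        words = re.findall(r"[A-Za-z']+", passage.lower())
        freq = Counter(words)
        # Treat long, rare words as "high-information" (a crude stand-in for
        # the information-density analysis described in the study).
        candidates = sorted((w for w in freq if len(w) > 6),
                            key=lambda w: freq[w])[: n_items + n_distractors]
        sentences = re.split(r"(?<=[.!?])\s+", passage)
        items = []
        for target in candidates[:n_items]:
            stem_src = next((s for s in sentences if target in s.lower()), None)
            if stem_src is None:
                continue
            stem = re.sub(target, "_____", stem_src, flags=re.IGNORECASE)
            distractors = random.sample([w for w in candidates if w != target],
                                        k=min(n_distractors, len(candidates) - 1))
            items.append({"stem": stem, "key": target, "distractors": distractors})
        return items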
Morse, David T. – Florida Vocational Journal, 1978
Presents guidelines for constructing tests which accurately measure a student's cognitive skills and performance in a particular course. The advantages and disadvantages of two types of test items are listed (selected response and constructed response items). Both poor and good examples are given and general rules for test item writing are…
Descriptors: Cognitive Development, Criterion Referenced Tests, Essay Tests, Multiple Choice Tests
Roid, Gale; Finn, Patrick – 1978
The feasibility of generating multiple-choice test questions by transforming sentences from prose instructional materials was examined. A computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were then transformed into multiple-choice items by four writers who…
Descriptors: Algorithms, Criterion Referenced Tests, Difficulty Level, Form Classes (Languages)
Millman, Jason – 1978
Test items, all referencing the same instructional objective, are not equally difficult. This investigation attempts to identify some of the determinants of item difficulty within the context of a first course in educational statistics. Computer generated variations of items were used to provide the data. The results were used to investigate the…
Descriptors: Computer Assisted Testing, Content Analysis, Criterion Referenced Tests, Difficulty Level
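The abstract notes that computer-generated variations of items supplied the data. As a hedged illustration of that general technique (algorithmic generation of parallel items from a fixed shell), the sketch below fills a simple statistics item shell with random values; the shell and the number ranges are invented for the example and are not from Millman's study.

    # Hedged sketch: generating parallel item variants from a fixed item shell.
    # The shell and value ranges are illustrative assumptions only.
    import random

    def generate_variant(rng: random.Random):
        """One multiple-choice variant of a simple statistics item shell."""
        scores = sorted(rng.sample(range(1, 21), k=5))
        key = sum(scores) / len(scores)                             # the mean
        distractors = {max(scores), min(scores), sorted(scores)[2]} # max, min, median
        options = sorted(distractors | {key})
        return {"stem": f"What is the mean of {scores}?",
                "options": options,
                "key": key}

    rng = random.Random(7)
    for variant in (generate_variant(rng) for _ in range(3)):
        print(variant["stem"], "->", variant["key"])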
Roid, Gale; And Others – 1979
Differences among test item writers and among different rules for writing multiple choice items were investigated. Items testing comprehension of a prose passage were varied according to five factors: (1) information density of the passage; (2) item writer; (3) deletion of nouns, as opposed to adjectives, from the sentence in order to construct…
Descriptors: Achievement Tests, Criterion Referenced Tests, Difficulty Level, Elementary Education
1978
Sample test questions are given for the Proficiency Testing Program adopted by the Idaho State Department of Education. Except for the writing skills test item, which requires a writing sample, all questions are objective, multiple choice items. Sample questions are given for reading, spelling, and mathematics. (MH)
Descriptors: Basic Skills, Criterion Referenced Tests, Minimum Competency Testing, Multiple Choice Tests
Peer reviewed
Curren, Randall R. – Theory and Research in Education, 2004
This article addresses the capacity of high stakes tests to measure the most significant kinds of learning. It begins by examining a set of philosophical arguments pertaining to construct validity and alleged conceptual obstacles to attributing specific knowledge and skills to learners. The arguments invoke philosophical doctrines of holism and…
Descriptors: Test Items, Educational Testing, Construct Validity, High Stakes Tests
PDF pending restoration
Roid, Gale; And Others – 1978
Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…
Descriptors: Algorithms, Criterion Referenced Tests, High Schools, Higher Education