Showing 781 to 795 of 956 results
Peer reviewed
Carrick, Tessa – Journal of Biological Education, 1987
Aspects of practical assessment, such as question design and management of the assessment, are discussed. Reference is made to some of the problems, and to the possibilities for innovation, arising from implementing practical assessment of the kind required by the General Certificate of Secondary Education examining groups. (CW)
Descriptors: Ability Identification, Certification, Foreign Countries, Science Education
Auchter, Joan E.; Stansfield, Charles W. – 1997
This paper describes the General Educational Development (GED) Testing Service's Spanish Test Adaptation Project. The GED Tests are designed to give those who have not graduated from high school the opportunity to earn a diploma that is recognized by institutions of higher education and employers. The purpose of this project is to develop, based…
Descriptors: Adult Education, Bilingual Students, Educational Attainment, Educational Certificates
Wainer, Howard; And Others – 1990
The initial development of a testlet-based algebra test was previously reported (Wainer and Lewis, 1990). This account provides the details of this excursion into the use of hierarchical testlets and validity-based scoring. A pretest of two 15-item hierarchical testlets was carried out in which examinees' performance on a 4-item subset of each…
Descriptors: Adaptive Testing, Algebra, Comparative Analysis, Computer Assisted Testing
Sykes, Robert C.; Truskosky, Denise; White, Hillory – 2001
The purpose of this research was to study the effect of three different ways of increasing the number of points contributed by constructed-response (CR) items on the reliability of scores from mixed-item-format tests. The assumption of unidimensionality that underlies the accuracy of item response theory model-based standard error…
Descriptors: Constructed Response, Elementary Education, Elementary School Students, Error of Measurement
Buckendahl, Chad W.; Plake, Barbara S.; Impara, James C. – 1999
Many school districts are developing assessments that incorporate both selected response and constructed response formats. Scores on these assessments can be used for a variety of purposes ranging from subject remediation to promotion decisions. These policy decisions are informed by recommendations for Minimum Passing Scores (MPSs) from standard…
Descriptors: Academic Standards, Constructed Response, Cutting Scores, Educational Assessment
Lawrence, Ida M.; Rigol, Gretchen W.; Van Essen, Thomas; Jackson, Carol A. – College Entrance Examination Board, 2003
This paper provides a historical perspective on the content of the SAT. The review begins with the first College Board SAT (the Scholastic Aptitude Test), administered to 8,040 students on June 23, 1926. At that time, the SAT consisted of nine subtests: Definitions, Arithmetical Problems, Classification, Artificial Language,…
Descriptors: Research Reports, Educational History, Test Content, Aptitude Tests
Peer reviewed
Harasym, P. H.; And Others – Evaluation and the Health Professions, 1980
Coded items, as opposed to free-response items, in a multiple-choice physiology test had a cueing effect that raised students' scores, especially for lower achievers. Reliability of the coded items was also lower. Item format and scoring method affected test results. (GDC)
Descriptors: Achievement Tests, Comparative Testing, Cues, Higher Education
Peer reviewed
Barnett-Foster, Debora; Nagy, Philip – Higher Education, 1996
A study compared response strategies and error patterns of 272 college freshmen on chemistry test items in multiple choice and constructed response formats. Analysis of test data indicated no significant difference in solution strategies used or types of errors committed across test formats. However, interviews with 21 participants revealed…
Descriptors: Chemistry, College Freshmen, Comparative Analysis, Error Patterns
Kingsbury, G. Gage; And Others – Technological Horizons in Education, 1988
Explores what some deem the best way to objectively determine what a student knows. Adaptive testing has been around since the early 1900s, but only with the advent of computers has it been effectively applied to day-to-day educational management. Cites a pilot study in the Portland, Oregon, public schools. (MVL)
Descriptors: Administration, Computer Uses in Education, Diagnostic Teaching, Individual Needs
Peer reviewed
Bresnock, Anne E.; And Others – Journal of Economic Education, 1989
Investigates the effects on multiple-choice test performance of altering the order and placement of questions and responses. Shows that changing the response pattern appears to significantly alter the apparent degree of difficulty. Response patterns become more dissimilar under certain types of response alterations. (LS)
Descriptors: Cheating, Economics Education, Educational Research, Grading
Peer reviewed
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science
Peer reviewed
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
Peer reviewed
Wiliam, Dylan – Review of Research in Education, 2010
The idea that validity should be considered a property of inferences, rather than of assessments, has developed slowly over the past century. In early writings about the validity of educational assessments, validity was defined as a property of an assessment. The most common definition was that an assessment was valid to the extent that it…
Descriptors: Educational Assessment, Validity, Inferences, Construct Validity
Rachor, Robert E.; Gray, George T. – 1996
Two frequently cited guidelines for writing multiple-choice test item stems are: (1) the stem can be written in either a question or a statement-to-be-completed format; and (2) only positively worded stems should be used. These guidelines were evaluated in a survey of the test item banks of 13 nationally administered examinations in the physician…
Descriptors: Allied Health Personnel, Difficulty Level, High Achievement, Item Banks
Bennett, Randy Elliot; And Others – 1991
This study investigated the convergent validity of expert-system scores for four mathematical constructed-response item formats. A five-factor model was proposed, comprising four constructed-response format factors and a Graduate Record Examinations (GRE) General Test quantitative factor. Subjects were drawn from examinees taking a single form of…
Descriptors: College Students, Constructed Response, Correlation, Expert Systems