Showing 1,666 to 1,680 of 3,128 results
Peer reviewed
Hastedt, Dirk; Sibberns, Heiko – Studies in Educational Evaluation, 2005
In international large-scale surveys, constructed response (CR) items are increasingly being used and multiple-choice (MC) items are being used less frequently. In this article the two item types are compared in terms of their effects on national mean scores, using TIMSS 1995 and TIMSS 1999 data. Are there different…
Descriptors: Test Items, Multiple Choice Tests, Comparative Analysis, Mathematics Tests
Peer reviewed
Lundervold, Duane A.; Dunlap, Angel L. – International Journal of Behavioral Consultation and Therapy, 2006
Alternate-forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on a long-duration (5-minute) observation, has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…
Descriptors: Test Format, Intervals, Observation, Measures (Individuals)
Peer reviewed
Whiting, Hal; Kline, Theresa J. B. – International Journal of Training and Development, 2006
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional…
Descriptors: Test Format, Adult Literacy, Computer Assisted Testing, College Students
Peer reviewed
Rotou, Ourania; Patsula, Liane; Steffen, Manfred; Rizavi, Saba – ETS Research Report Series, 2007
Traditionally, the fixed-length linear paper-and-pencil (P&P) mode of administration has been the standard method of test delivery. With the advancement of technology, however, the popularity of administering tests using adaptive methods like computerized adaptive testing (CAT) and multistage testing (MST) has grown in the field of measurement…
Descriptors: Comparative Analysis, Test Format, Computer Assisted Testing, Models
Council of Chief State School Officers, Washington, DC. – 1994
This booklet presents the Reading Framework for the 1992 and 1994 National Assessment of Educational Progress (NAEP), which contains the rationale for the aspects of reading assessed and the criteria for the development of the assessment. The booklet notes that the new reading assessment examines students' abilities to construct, extend, and…
Descriptors: Intermediate Grades, Reading Achievement, Reading Skills, Reading Tests
Bergstrom, Betty A.; Lunz, Mary E. – 1991
The equivalence of pencil and paper Rasch item calibrations when used in a computer adaptive test administration was explored in this study. Items (n=726) were precalibrated with the pencil and paper test administrations. A computer adaptive test was administered to 321 medical technology students using the pencil and paper precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Stocking, Martha L.; Lewis, Charles – 1995
In the periodic testing environment associated with conventional paper-and-pencil tests, the frequency with which items are seen by test-takers is tightly controlled in advance of testing by policies that regulate both the reuse of test forms and the frequency with which candidates may take the test. In the continuous testing environment…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Test Construction
Pommerich, Mary; Nicewander, W. Alan – 1998
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT) -based domain score estimation methods were evaluated, under conditions of few items per content area per…
Descriptors: Ability, Estimation (Mathematics), Group Membership, Item Response Theory
Johanson, George; Motlomelo, Samuel – 1998
Many textbooks in educational measurement and classroom assessment have chapters devoted to specific item formats. There may be attempts to relate one item format to another, but the chapters and item formats are largely seen as distinct entities with only loose and uncertain connections. This paper synthesizes these discussions. An item format…
Descriptors: Educational Assessment, Essay Tests, Measurement Techniques, Objective Tests
Hambleton, Ronald K.; Bollwark, John – 1991
The validity of results from international assessments depends on the correctness of the test translations. If the tests presented in one language are more or less difficult because of the manner in which they are translated, the validity of any interpretation of the results can be questioned. Many test translation methods exist in the literature,…
Descriptors: Cultural Differences, Educational Assessment, English, Foreign Countries
Veccia, Ellen M.; Schroeder, David H. – 1990
A set of 150 experimental personality items was constructed for an alternate form of the word association personality worksample developed by the Johnson O'Connor Research Foundation. The items were intended to possess several semantic properties hypothesized to facilitate discrimination between objective and subjective examinees. Specifically,…
Descriptors: Adults, Correlation, Objectivity, Personality Measures
Anderson, Paul S. – 1987
A recent innovation in the area of educational measurement is multi-digit testing (MDT), a machine-scored near-equivalent to "fill-in-the-blank" testing. The MDT method is based on long lists (or "Answer Banks") that contain up to 1,000 discrete answers, each with a three-digit label. Students taking an MDT test mark…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Scoring
Schuldberg, David – 1988
Indices were constructed to measure individual differences in the effects of the automated testing format and repeated testing on Minnesota Multiphasic Personality Inventory (MMPI) responses. Two types of instability measures were studied within a data set from the responses of 150 undergraduate students who took a computer-administered and…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Individual Differences
Clark, Sheldon B.; Boser, Judith A. – 1989
A study was undertaken to develop a checklist of desirable characteristics of mail questionnaires. The checklist was to reflect some degree of consensus among experts in survey research and to be used as a general guide by novice questionnaire designers. A second objective was to take a first step toward development of an objective measure of…
Descriptors: Check Lists, Mail Surveys, Multiple Choice Tests, Quality Control
Ellington, Henry – 1987
The second of three sequels to the booklet "Student Assessment," this booklet begins by describing and giving examples of three different forms that short-answer questions can take: (1) completion items; (2) unique-answer questions; and (3) open short-answer questions. Guidelines are then provided for deciding which type of question to…
Descriptors: Foreign Countries, Higher Education, Instructional Material Evaluation, Questioning Techniques