Showing 1,816 to 1,830 of 3,093 results
Peer reviewed
McNeal, Joanne M.; Dwyer, Francis – International Journal of Instructional Media, 1999
Discussion of instructional design and student achievement focuses on a study that investigated the effectiveness of varied rehearsal strategies on student achievement on different test formats. Describes treatment groups that used print alone, print with content questions, print with visuals, and print with questions and visuals. (Author/LRW)
Descriptors: Academic Achievement, Elementary Secondary Education, Instructional Design, Instructional Effectiveness
Peer reviewed
Upshur, John A.; Turner, Carolyn E. – Language Testing, 1999
Research on two approaches to assessment of second-language performance--second-language acquisition and language testing--is examined and compared with regard to systematic effects on language tests. Findings incidental to a test development project are then presented. It is concluded that a full account of performance testing requires a paradigm…
Descriptors: Discourse Analysis, Language Tests, Performance Tests, Second Language Learning
Peer reviewed
Pomplun, Mark; Custer, Michael – Journal of Educational Computing Research, 2005
This study investigated the equivalence of scores from computerized and paper-and-pencil formats of a series of K-3 reading screening tests. Concerns about score equivalence on the computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary school students, and teacher versus computer…
Descriptors: Screening Tests, Reading Tests, Family Income, Factor Analysis
Peer reviewed
Wang, Wen-Chung; Wilson, Mark – Educational and Psychological Measurement, 2005
This study presents a procedure for detecting differential item functioning (DIF) for dichotomous and polytomous items in testlet-based tests, whereby DIF is taken into account by adding DIF parameters into the Rasch testlet model. Simulations were conducted to assess recovery of the DIF and other parameters. Two independent variables, test type…
Descriptors: Test Format, Test Bias, Item Response Theory, Item Analysis
Peer reviewed
Hinton-Bayre, Anton; Geffen, Gina – Psychological Assessment, 2005
The present study examined the comparability of 4 alternate forms of the Digit Symbol Substitution test and the Symbol Digit Modalities (written) test, including the original versions. Male contact-sport athletes (N=112) were assessed on 1 of the 4 forms of each test. Reasonable alternate form comparability was demonstrated through establishing…
Descriptors: Intervals, Test Format, Orthographic Symbols, Drills (Practice)
Peer reviewed
Kim, Juhu; Suen, Hoi K. – Early Childhood Research Quarterly, 2003
Although there have been numerous studies investigating the predictive validity of early assessment, observed predictive validity coefficients across studies are not stable. A validity generalization study was conducted in order to answer the question of whether the relationship between early assessment of children and later achievement is…
Descriptors: Test Format, Predictive Validity, Effect Size, Correlation
Peer reviewed
Downing, Steven M. – Advances in Health Sciences Education, 2005
The purpose of this research was to study the effects of violations of standard multiple-choice item writing principles on test characteristics, student scores, and pass-fail outcomes. Four basic science examinations, administered to year-one and year-two medical students, were randomly selected for study. Test items were classified as either…
Descriptors: Medical Education, Medical Students, Test Items, Test Format
Kopriva, Rebecca J.; Wiley, David E.; Emick, Jessica – Online Submission, 2007
The goal of the current study was to examine the influence of providing more optimal testing conditions and evaluate the effect this has on the validity of the score inferences across ELL students with different needs, strengths, and levels of language proficiency. It was expected that the validity of the score inferences would be similar for 3rd…
Descriptors: Grade 5, Test Format, Inferences, Test Validity
Peer reviewed
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
Fansler, A. Gigi; And Others – 1995
Comprehensive examinations, long a bastion in many doctoral programs, are one of many customs under scrutiny for possible change in a movement towards more authentic means of educational assessment. This preliminary study surveyed chairs of departments of educational administration from universities across the United States to learn how computers…
Descriptors: Alternative Assessment, Computer Uses in Education, Doctoral Degrees, Educational Administration
Stocking, Martha L. – 1994
As adaptive testing moves toward operational implementation in large scale testing programs, where it is important that adaptive tests be as parallel as possible to existing linear tests, a number of practical issues arise. This paper concerns three such issues. First, optimum item pool size is difficult to determine in advance of pool…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Standards
Yang, Wen-Ling; Houang, Richard T. – 1996
The influence of anchor length on the accuracy of test equating was studied using Tucker's linear method and two Item-Response-Theory (IRT) based methods, focusing on whether equating accuracy improved with more anchor items, whether the anchor effect depended on the equating method used, and the adequacy of the inclusion of the guessing parameter…
Descriptors: Equated Scores, Estimation (Mathematics), Guessing (Tests), Item Response Theory
PDF pending restoration
Ito, Kyoko; Sykes, Robert C. – 1996
Equating multiple test forms is frequently desired. When multiple forms are linked in a chain of equating, error tends to build up in the process. This paper compares three procedures for equating multiple forms in a common-form design where each school administered, in a spiraled fashion, only a subset of multiple forms. Data used were from a…
Descriptors: Comparative Analysis, Equated Scores, Error of Measurement, Grade 11
Stansfield, Charles W.; Kenyon, Dorry – 1996
The Simulated Oral Proficiency Interview (SOPI) is a semi-direct performance-based speaking test that emulates the face-to-face Oral Proficiency Interview (OPI) as closely as practical, using a tape recording and printed test booklet. The prototypical SOPI contains simple personal background questions typical of an initial encounter (warm-up) and…
Descriptors: Audiotape Recordings, Educational Trends, Interviews, Language Proficiency
Sireci, Stephen G.; Swaminathan, Hariharan – 1996
Procedures for evaluating differential item functioning (DIF) are commonly used to investigate the statistical equivalence of items that are translated from one language to another. However, the methodology developed for detecting DIF is designed to evaluate the functioning of the same items administered to two groups. In evaluating the…
Descriptors: Cross Cultural Studies, Foreign Countries, International Education, Item Bias