Showing 2,251 to 2,265 of 3,126 results
Peer reviewed
DeMars, Christine E. – Applied Measurement in Education, 1998
Scores from mathematics (tested at 102 schools) and science (tested at 99 schools) sections of pilot forms of the Michigan High School Proficiency Test were examined for interaction between gender and response format (multiple choice or constructed response). Overall, neither males nor females seemed to be disadvantaged by item format. (SLD)
Descriptors: Constructed Response, High School Students, High Schools, Mathematics Tests
Peer reviewed
Russell, Michael; Haney, Walt – Education Policy Analysis Archives, 1997
The effect that mode of administration, computer versus paper and pencil, had on the performance of 120 middle school students on multiple choice and written test questions was studied. Results show that, for students accustomed to writing on computers, responses written on the computer were more successful. Implications for testing are discussed.…
Descriptors: Computer Assisted Testing, Essay Tests, Middle School Students, Middle Schools
Peer reviewed
Ponsoda, Vicente; Olea, Julio; Rodriguez, Maria Soledad; Revuelta, Javier – Applied Measurement in Education, 1999
Compared easy and difficult versions of self-adapted tests (SAT) and computerized adapted tests. No significant differences were found among the tests for estimated ability or posttest state anxiety in studies with 187 Spanish high school students, although other significant differences were found. Discusses implications for interpreting test…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Parkes, Jay – Educational Research, 2000
Data from 77 ninth-grade Spanish students who took an objective test, a performance assessment, and a measure of perceptions of control indicate that control perceptions predict scores on performance assessments, not objective tests. Performance assessments thus reflect motivational variables beyond the constructs being tested. (SK)
Descriptors: High Schools, Locus of Control, Motivation, Objective Tests
Peer reviewed
Woodburn, Jim; Sutcliffe, Nick – Assessment & Evaluation in Higher Education, 1996
The Objective Structured Clinical Examination (OSCE), initially developed for undergraduate medical education, has been adapted for assessment of clinical skills in podiatry students. A 12-month pilot study found the test had relatively low levels of reliability, high construct and criterion validity, and good stability of performance over time.…
Descriptors: Clinical Teaching (Health Professions), Higher Education, Medical Education, Podiatry
Peer reviewed
Greidanus, Tine; Beks, Bianca; Wakely, Richard – Modern Language Journal, 2005
The present study is a continuation of the work presented in the 2001 article by Greidanus and Nienhuis. In the current study, we also examine the quality of word knowledge among advanced learners of French as a second language (L2) by means of a word associates test. We studied the development of word knowledge among 6 groups of university-level…
Descriptors: Indo European Languages, Second Language Learning, Language Tests, Native Speakers
Peer reviewed
McDaniel, Michael A.; Whetzel, Deborah L. – Intelligence, 2005
[Gottfredson, L. S. (2003). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 31, 343-397.] provided a detailed critique of Sternberg's [Sternberg, R. J., Forsythe, G. B., Hedlund, J., Horvath, J. A., Wagner, R. K., Williams, W. M., Snook, S. A., Grigorenko, E. L. (2000). Practical intelligence in everyday life. New…
Descriptors: Individual Testing, Test Format, Test Items, Personnel Selection
Kopriva, Rebecca J.; Wiley, David E.; Emick, Jessica – Online Submission, 2007
The goal of the current study was to examine the influence of providing more optimal testing conditions and to evaluate the effect this has on the validity of the score inferences across ELL students with different needs, strengths, and levels of language proficiency. It was expected that the validity of the score inferences would be similar for 3rd…
Descriptors: Grade 5, Test Format, Inferences, Test Validity
Peer reviewed
Swartz, Stephen M. – Journal of Education for Business, 2006
The confidence-level (information-referenced testing; IRT) design is an attempt to improve upon the multiple-choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use and accuracy of, and general preference for, traditional multiple…
Descriptors: Multiple Choice Tests, Essay Tests, Graduate Students, Student Attitudes
Peer reviewed
Quenette, Mary A.; Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 2006
Model-based equating was compared to empirical equating of an Armed Services Vocational Aptitude Battery (ASVAB) test form. The model-based equating was done using item pretest data to derive item response theory (IRT) item parameter estimates for those items that were retained in the final version of the test. The analysis of an ASVAB test form…
Descriptors: Item Response Theory, Multiple Choice Tests, Test Items, Computation
Stocking, Martha L. – 1988
The construction of parallel editions of conventional tests for purposes of test security while maintaining score comparability has always been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Identification
Wang, Xiang-bo; And Others – 1993
An increasingly popular test format allows examinees to choose the items they will answer from among a larger set. When examinee choice is allowed, fairness requires that the different test forms thus formed be equated for their possible differential difficulty. For this equating to be possible, it is necessary to know how well examinees would have…
Descriptors: Adaptive Testing, Advanced Placement, Difficulty Level, Equated Scores
Nissan, Susan; And Others – 1996
One of the item types in the Listening Comprehension section of the Test of English as a Foreign Language (TOEFL) test is the dialogue. Because the dialogue item pool needs to have an appropriate balance of items at a range of difficulty levels, test developers have examined items at various difficulty levels in an attempt to identify their…
Descriptors: Classification, Dialogs (Language), Difficulty Level, English (Second Language)
King, Winston, Ed.; And Others – 1992
This manual informs tutors/evaluators and students about the Caribbean Examinations Council's (CXC) school based assessment (SBA) of integrated science skills. It includes directions for the teacher/evaluator and a student activity package. The SBA is done in agricultural science, biology, chemistry, integrated science, and social studies and is…
Descriptors: Foreign Countries, Science Activities, Science Education, Science Process Skills
Hambleton, Ronald; Rodgers, Jane – 1995
This digest introduces three issues to consider when evaluating test items for bias: fairness, bias, and stereotyping. In any bias investigation, the first step is to identify the subgroups of interest. Bias reviews and studies generally focus on differential performance for sex, ethnic, cultural, and religious groups. In preparing an item bias…
Descriptors: Cultural Differences, Culture Fair Tests, Ethnicity, Evaluation Methods