Showing 3,016 to 3,030 of 4,798 results
Peer reviewed
Barrett, Richard S. – Public Personnel Management, 1992
The Content Validation Form is presented as a means of demonstrating that occupational tests provide a representative sample of the work, or of the knowledge, skills, or abilities, necessary for a job. It is best used during test construction by a panel of subject matter experts. (SK)
Descriptors: Content Validity, Item Analysis, Multiple Choice Tests, Occupational Tests
Peer reviewed
Gross, Leon J. – Evaluation and the Health Professions, 1994
The use of the "none of the above" (NOTA) option in test items is explored. Empirical research on the use of this option obscures the fact that, when NOTA is the correct answer, it can reward examinees who have serious knowledge deficiencies or misinformation. This option should not be used. (SLD)
Descriptors: Distractors (Tests), Guides, Item Analysis, Knowledge Level
Peer reviewed
Boodoo, Gwyneth M. – Educational Horizons, 1993
Examination of the role of psychometrics in the development of multiple-choice tests and performance-based assessments, together with consideration of validity and reliability issues, leads to this conclusion: the choice of assessments for instruction or large-scale accountability depends on which is more appropriate for the particular purpose. Taxonomic…
Descriptors: Alternative Assessment, Classification, Multiple Choice Tests, Performance Based Assessment
Peer reviewed
Paxton, Moragh – Assessment & Evaluation in Higher Education, 2000
Critiques the over-emphasis on multiple choice testing in some large first-year college classes, as well as the poor design and construction of many tests, and calls for a broader and more diverse range of assessment measures. Argues that multiple choice tests do not allow students to develop communicative competence in the academic disciplines.…
Descriptors: College Freshmen, Communication Skills, Evaluation Methods, Higher Education
Peer reviewed
Bielinski, John; Davison, Mark L. – Journal of Educational Measurement, 2001
Used mathematics achievement data from the 1992 National Assessment of Educational Progress, the Third International Mathematics and Science Study, and the National Education Longitudinal Study of 1988 to examine the sex difference by item difficulty interaction. The predicted negative correlation was found for all eight populations and was…
Descriptors: Correlation, Difficulty Level, Interaction, Mathematics Tests
Peer reviewed
Lummus, Rhonda R.; Neal, Joan C.; Edwards, Sandra – Delta Pi Epsilon Journal, 2000
Scores of 220 students on assessment instruments used for admission to a business management department (multiple-choice test, case analysis, executive summary, business presentation) were analyzed. Multiple choice was not a reliable measure of knowledge. The other three methods reliably measured knowledge, skills, and abilities for management…
Descriptors: Admission Criteria, Aptitude Tests, Business Administration Education, Case Method (Teaching Technique)
Peer reviewed
Feinstein, Zachary S. – Applied Psychological Measurement, 1995
The closed-interval signed area (CSA) and closed-interval unsigned area (CUA) statistics were studied by Monte Carlo simulation to detect differential item functioning (DIF) when the reference and focal groups had different parameter distributions. Different behaviors of the CSA and CUA as functions of the parameters are discussed. (SLD)
Descriptors: Focus Groups, Item Bias, Item Response Theory, Models
Peer reviewed
Wang, Wen-Chung – Journal of Applied Measurement, 2000
Proposes a factorial procedure for investigating differential distractor functioning in multiple choice items that models each distractor with a distinct distractibility parameter. Results of a simulation study show that the parameters of the proposed model were recovered well. Analysis of 10 4-choice items from a college entrance…
Descriptors: College Entrance Examinations, Distractors (Tests), Factor Structure, Foreign Countries
Peer reviewed
Heck, Ronald H.; Crislip, Marian – Educational Evaluation and Policy Analysis, 2001
Compared performances of 3,300 third graders on a performance-based writing test and a multiple choice test of writing skills. Overall, results support the view that performance-based writing assessment is relatively fair and that it measures learning tasks that are related to the school's curriculum. (SLD)
Descriptors: Curriculum, Elementary School Students, Equal Education, Multiple Choice Tests
Peer reviewed
Direct link
Science Scope, 2005
There are many ways of assessing students and the work they do, and many ways of getting them ready for those assessments. Special needs students provide an extra challenge to educators because they have difficulty preparing for assessment and often more difficulty communicating what they know. It is not enough to be a thoughtful, lab-focused…
Descriptors: Special Needs Students, Multiple Choice Tests, Alternative Assessment, Student Evaluation
Peer reviewed
Direct link
Dorans, Neil J. – Journal of Educational Measurement, 2004
Score equity assessment (SEA) is introduced, and placed within a fair assessment context that includes differential prediction or fair selection and differential item functioning. The notion of subpopulation invariance of linking functions is central to the assessment of score equity, just as it has been for differential item functioning and…
Descriptors: Prediction, Scores, Calculus, Advanced Placement
Peer reviewed
Direct link
Radwan, Nizam; Rogers, W. Todd – Alberta Journal of Educational Research, 2006
The recent increase in the use of constructed-response items in educational assessment, and dissatisfaction with the nature of the decisions that judges must make under traditional standard-setting methods, created a need to develop new and effective standard-setting procedures for tests that include both multiple-choice and…
Descriptors: Criticism, Cutting Scores, Educational Assessment, Standard Setting (Scoring)
Peer reviewed
Direct link
Struyven, Katrien; Dochy, Filip; Janssens, Steven; Schelfhout, Wouter; Gielen, Sarah – Studies in Educational Evaluation, 2006
This study investigates the effect of assessment method on student performance. Each of five research conditions was paired with one of four assessment modes: portfolio, case-based, peer assessment, and multiple-choice evaluation. Data were collected through a pretest/posttest design using two standardised tests (N=816).…
Descriptors: Academic Achievement, Multiple Choice Tests, Peer Evaluation, Portfolio Assessment
Peer reviewed
Direct link
Whitehill, Tara L.; Chau, Cynthia H.-F. – Clinical Linguistics and Phonetics, 2004
Many speakers with repaired cleft palate have reduced intelligibility, but there are limitations with current procedures for assessing intelligibility. The aim of this study was to construct a single-word intelligibility test for speakers with cleft palate. The test used a multiple-choice identification format, and was based on phonetic contrasts…
Descriptors: Phonetics, Congenital Impairments, Speech Tests, Articulation (Speech)