Showing 391 to 405 of 514 results
Peer reviewed
Wall, Janet E. – Measurement and Evaluation in Counseling and Development, 2004
Because technology has become more prevalent and accessible for use in assessment, this article highlights what counselors and educators need to know when considering computers and the Internet for that purpose. The article concludes with some predictions about how technology might influence assessment and accountability in the future. This…
Descriptors: Student Evaluation, Counselor Training, Computer Assisted Testing, Internet
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Chung, Gregory K. W. K.; Herl, Howard E.; Klein, Davina C. D.; O'Neil, Harold F., Jr.; Schacter, John – 1997
This report examines issues in the scale-up of assessment software from the Center for Research on Evaluation, Standards, and Student Testing (CRESST). "Scale-up" is used in a metaphorical sense, meaning adding new assessment tools to CRESST's assessment software. During the past several years, CRESST has been developing and evaluating a…
Descriptors: Computer Assisted Testing, Computer Software, Concept Mapping, Educational Assessment
Lee, Yong-Won – 2001
An essay test is now an integral part of the computer-based Test of English as a Foreign Language (TOEFL-CBT). This paper provides a brief overview of the current TOEFL-CBT essay test, describes the operational procedures for essay scoring, including the Online Scoring Network (OSN) of the Educational Testing Service (ETS), and discusses major…
Descriptors: Computer Assisted Testing, English (Second Language), Essay Tests, Interrater Reliability
Peer reviewed
Haller, Otto; Edgington, Eugene S. – Perceptual and Motor Skills, 1982
Current scoring procedures depend on unrealistic assumptions about subjects' performance on the rod-and-frame test. A procedure is presented that corrects for constant error, is sensitive to response strategy and consistency, and examines qualitative and quantitative aspects of performance and individual differences in laterality bias as defined…
Descriptors: Computer Assisted Testing, Cues, Error of Measurement, Individual Differences
Peer reviewed
Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
Peer reviewed
Lange, Rael T.; Chelune, Gordon J.; Taylor, Michael J.; Woodward, Todd S.; Heaton, Robert K. – Psychological Assessment, 2006
Following the publication of the third-edition Wechsler scales (i.e., WAIS-III and WMS-III), demographically corrected norms were made available in the form of a computerized scoring program (i.e., WAIS-III/WMS-III/WIAT-II Scoring Assistant). These norms correct for age, gender, ethnicity, and education. Since then, four new indexes have been…
Descriptors: Norms, Scoring, Memory, Demography
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research investigated the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM], and by human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple-choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
Stricker, Lawrence J.; Alderton, David L. – 1991
The usefulness of response latency data for biographical inventory items was assessed for improving the inventory's validity. Focus was on assessing whether weighting item scores on the basis of their latencies improves the predictive validity of the inventory's total score. A total of 120 items from the Armed Services Applicant Profile (ASAP)…
Descriptors: Adults, Biographical Inventories, Computer Assisted Testing, Males
De Ayala, R. J.; And Others – 1990
Computerized adaptive testing procedures (CATPs) based on the graded response method (GRM) of F. Samejima (1969) and the partial credit model (PCM) of G. Masters (1982) were developed and compared. Both programs used maximum likelihood estimation of ability, and item selection was conducted on the basis of information. Two simulated data sets, one…
Descriptors: Ability Identification, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Merrill, Beverly; Peterson, Sarah – 1986
When the Mesa, Arizona Public Schools initiated an ambitious writing instruction program in 1978, two assessments based on student writing samples were developed. The first is based on a ninth-grade proficiency test; if a student does not pass the test, high school remediation is provided. After 1987, students must pass this test in order to…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Graduation Requirements, Holistic Evaluation
Vale, C. David – 1985
Specifying a computerized adaptive test, like specifying computer-assisted instruction, is easier and can be done by personnel who are not proficient in computer programming when an authoring language is provided. The Minnesota Computerized Adaptive Testing Language (MCATL) is an authoring language specifically designed for…
Descriptors: Adaptive Testing, Authoring Aids (Programing), Branching, Computer Assisted Instruction
Peer reviewed
Sandene, Brent; Horkay, Nancy; Bennett, Randy Elliot; Allen, Nancy; Braswell, James; Kaplan, Bruce; Oranje, Andreas – National Center for Education Statistics, 2005
This publication presents the reports from two studies, Math Online (MOL) and Writing Online (WOL), part of the National Assessment of Educational Progress (NAEP) Technology-Based Assessment (TBA) project. Funded by the National Center for Education Statistics (NCES), the Technology-Based Assessment project is intended to explore the use of new…
Descriptors: Grade 8, Statistical Analysis, Scoring, Familiarity