Showing 496 to 510 of 3,206 results
Peer reviewed
PDF on ERIC
Fukuzawa, Sherry; Boyd, Cleo – Collected Essays on Learning and Teaching, 2008
In 2005, the undergraduate advisory committee at the University of Toronto Mississauga found that across all disciplines, writing proficiency was the skill weakness that generated the greatest concern. Students reported that they often found writing tasks intimidating, and suggested that effective feedback and guidance would improve their writing.…
Descriptors: Foreign Countries, Undergraduate Students, Writing Skills, Language Proficiency
Peer reviewed
Direct link
Vivo, Juana-Maria; Franco, Manuel – International Journal of Mathematical Education in Science and Technology, 2008
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
Descriptors: Academic Achievement, Item Response Theory, Criterion Referenced Tests, Predictor Variables
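The ROC procedure named in the Vivo and Franco entry above can be illustrated with a minimal sketch: compute true- and false-positive rates for a binary success outcome at every cut score of a predictor, then take the area under the resulting curve. The sketch below uses hypothetical test scores and pass/fail outcomes and plain NumPy; it is not the authors' procedure, only the standard ROC construction they build on.

```python
# Minimal ROC-curve sketch for a binary "academic success" predictor.
# Illustrative only: scores and outcomes below are hypothetical.
import numpy as np

def roc_curve(scores, outcomes):
    """Return false-positive and true-positive rates as the cut score is lowered."""
    order = np.argsort(-scores)                 # sort by predictor, descending
    outcomes = outcomes[order]
    tps = np.cumsum(outcomes)                   # true positives at each threshold
    fps = np.cumsum(1 - outcomes)               # false positives at each threshold
    tpr = np.concatenate(([0.0], tps / outcomes.sum()))
    fpr = np.concatenate(([0.0], fps / (1 - outcomes).sum()))
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Hypothetical example: an admission-test score used to predict pass/fail.
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2])
passed = np.array([1, 1, 0, 1, 0, 0, 1, 0])
fpr, tpr = roc_curve(scores, passed)
print("AUC =", round(auc(fpr, tpr), 3))
```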
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
This paper describes and compares procedures for estimating the reliability of proficiency tests that are scored with latent structure models. Results suggest that the predictive estimate is the most accurate of the procedures. (Author/BW)
Descriptors: Criterion Referenced Tests, Scoring, Test Reliability
Trapp, William J. – Online Submission, 2007
This project provides a list of criteria by which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…
Descriptors: Grade 5, Mathematics Tests, Evaluation Criteria, Psychological Testing
Peer reviewed
Direct link
Meisels, Samuel J.; Xue, Yange; Shamblott, Melissa – Early Education and Development, 2008
Research Findings: We examined the reliability and validity of the language, literacy, and mathematics domains of "Work Sampling for Head Start" (WSHS), an observational assessment designed for 3- and 4-year-olds. Participants included 112 children who were enrolled over a two-year period in Head Start and a number of other programs…
Descriptors: Preschool Children, Preschool Education, Early Intervention, Criterion Referenced Tests
Marshall, J. Laird; Haertel, Edward H. – 1975
For classical, norm-referenced test reliability, Cronbach's alpha has been shown to be equal to the mean of all possible split-half Pearson product-moment correlation coefficients, adjusted by the Spearman-Brown prophecy formula. For criterion-referenced test reliability, in an analogous vein, this paper provides the rationale behind, the analysis…
Descriptors: Criterion Referenced Tests, Statistical Analysis, Test Reliability
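The relationship described in the Marshall and Haertel entry above can be made concrete with a small sketch that computes Cronbach's alpha and the mean of all possible Spearman-Brown-adjusted split-half correlations for the same hypothetical item-score matrix. The two quantities coincide exactly only under additional conditions such as equal half variances, so the sketch prints both for comparison rather than asserting equality; it is an illustration, not the paper's derivation.

```python
# Sketch comparing Cronbach's alpha with the average Spearman-Brown-adjusted
# split-half correlation over all possible half-splits. Hypothetical data;
# intended only to illustrate the quantities named in the abstract above.
import numpy as np
from itertools import combinations

def cronbach_alpha(x):
    """x: persons x items matrix of item scores."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mean_split_half(x):
    """Mean Pearson split-half correlation, stepped up by Spearman-Brown."""
    k = x.shape[1]
    # Keep only splits containing item 0 so each half-split is counted once.
    halves = [list(c) for c in combinations(range(k), k // 2) if 0 in c]
    rs = []
    for half in halves:
        other = [i for i in range(k) if i not in half]
        a, b = x[:, half].sum(axis=1), x[:, other].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        rs.append(2 * r / (1 + r))              # Spearman-Brown prophecy formula
    return float(np.mean(rs))

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 6))   # 6 roughly parallel items
print("alpha           :", round(cronbach_alpha(items), 3))
print("mean split-half :", round(mean_split_half(items), 3))
```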
Swezey, Robert W. – 1976
Though domain-oriented and norm-referenced tests are appropriate for some situations, objective-oriented and criterion-referenced tests must be used to gather additional information. Objectives for such tests must include a statement of the desired performance, the test conditions, and the standards of acceptance. When tests are constructed the…
Descriptors: Criterion Referenced Tests, Speeches, Test Construction, Testing
Haladyna, Thomas M. – 1976
The objectives of this study were, first, to determine whether the empirical item analysis of domain referenced tests (DR) was justified and, second, in the event that it was, which of a set of recommended procedures was most effective for determining item quality. The analysis that followed led to the conclusion that empirical procedures…
Descriptors: Criterion Referenced Tests, Item Analysis, Statistical Analysis
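For readers unfamiliar with what "empirical item analysis" involves, the sketch below computes two of the most common empirical item statistics, item difficulty (proportion correct) and a corrected item-total correlation, for a hypothetical 0/1 response matrix. These are generic statistics, not necessarily the specific recommended procedures compared in the study above.

```python
# Sketch of two common empirical item statistics: item difficulty (proportion
# correct) and corrected item-total correlation. Hypothetical 0/1 response data.
import numpy as np

def item_statistics(responses):
    """responses: persons x items matrix of 0/1 scores."""
    difficulty = responses.mean(axis=0)                     # proportion answering correctly
    stats = []
    for j in range(responses.shape[1]):
        rest = np.delete(responses, j, axis=1).sum(axis=1)  # total score excluding item j
        discrimination = np.corrcoef(responses[:, j], rest)[0, 1]
        stats.append((difficulty[j], discrimination))
    return stats

rng = np.random.default_rng(1)
ability = rng.normal(size=(100, 1))
responses = (ability + rng.normal(size=(100, 5)) > 0).astype(int)   # 5 simulated items
for j, (p, r) in enumerate(item_statistics(responses)):
    print(f"item {j}: difficulty={p:.2f}  discrimination={r:.2f}")
```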
Herbig, Manfred – Programmed Learning and Educational Technology, 1976
The relationship between criterion, test items, and instruction is discussed to show the problems of the pretest and posttest evaluation of criterion referenced items. (JY)
Descriptors: Criterion Referenced Tests, Item Analysis, Pretests Posttests
Peer reviewed
Direct link
Reeve, Charlie L. – Intelligence, 2004
The purpose of the current study is to test the proposition that the relative contribution of narrow abilities (but not of "g") may have been obscured in prior research due to a failure to employ fully multidimensional latent variable analyses. The current study corrects for these deficiencies and examines the relationships between cognitive…
Descriptors: Cognitive Ability, Intelligence, Criterion Referenced Tests, Scores
Ballard, Amy; Palmieri, Stafford; Winkler, Amber – Thomas B. Fordham Institute, 2008
This report has a simple aim: to present results from international assessments so readers can judge for themselves how American students stack up globally. It's intended to be a stand-alone supplement to the "Education Olympics" web event held between August 8th and August 22nd, 2008 (see edolympics.net). It shows how the U.S. has…
Descriptors: International Studies, Academic Achievement, Program Effectiveness, Foreign Countries
Froman, Terry; Brown, Shelly; Tirado, Arleti – Research Services, Miami-Dade County Public Schools, 2008
The teacher's task of assigning letter grades to students and the public's interpretation of them can be perplexing. Does the student's grade represent the level of achievement, the gain in achievement, or some combination of the two? Is the student's effort included in the grade, or are high achievers given good marks regardless of effort? Are…
Descriptors: Academic Achievement, Grades (Scholastic), Standardized Tests, Grade Equivalent Scores
Peer reviewed
Direct link
Amrein-Beardsley, Audrey – Educational Researcher, 2008
Value-added models help to evaluate the knowledge that school districts, schools, and teachers add to student learning as students progress through school. In this article, the well-known Education Value-Added Assessment System (EVAAS) is examined. The author presents a practical investigation of the methodological issues associated with the…
Descriptors: Validity, School Districts, Academic Achievement, Measurement Techniques
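As a rough illustration of what a value-added model does, the sketch below regresses current-year scores on prior-year scores and averages the residuals by teacher. This is only a schematic covariate-adjustment version with simulated data; EVAAS itself is a far more elaborate longitudinal mixed-model system, and nothing here reproduces its methodology.

```python
# Schematic value-added sketch: regress current-year scores on prior-year scores,
# then average residuals by teacher. Illustrative only; all data are hypothetical
# and this is not the EVAAS methodology examined in the article above.
import numpy as np

rng = np.random.default_rng(2)
n = 300
teacher = rng.integers(0, 10, size=n)                  # 10 hypothetical teachers
prior = rng.normal(50, 10, size=n)                     # prior-year scale scores
teacher_effect = rng.normal(0, 2, size=10)             # simulated "true" effects
current = 5 + 0.9 * prior + teacher_effect[teacher] + rng.normal(0, 5, size=n)

# Ordinary least squares of current on prior (intercept + slope).
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta

# A teacher's value-added estimate: mean residual of that teacher's students.
for t in range(10):
    est = residual[teacher == t].mean()
    print(f"teacher {t}: estimated value-added = {est:+.2f} (true {teacher_effect[t]:+.2f})")
```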
Peer reviewed
Direct link
McDermott, Paul A.; Fantuzzo, John W.; Waterman, Clare; Angelo, Lauren E.; Warley, Heather P.; Gadsden, Vivian L.; Zhang, Xiuyuan – Journal of School Psychology, 2009
Educators need accurate assessments of preschool cognitive growth to guide curriculum design, evaluation, and timely modification of their instructional programs. But available tests do not provide content breadth or growth sensitivity over brief intervals. This article details evidence for a multiform, multiscale test criterion-referenced to…
Descriptors: Listening Comprehension, Curriculum Design, Intervals, Disadvantaged Youth
Peer reviewed
Direct link
Frey, Andreas; Carstensen, Claus H. – Measurement: Interdisciplinary Research and Perspectives, 2009
On a general level, the objective of diagnostic classification models (DCMs) lies in a classification of individuals regarding multiple latent skills. In this article, the authors show that this objective can be achieved by multidimensional adaptive testing (MAT) as well. The authors discuss whether or not the restricted applicability of DCMs can…
Descriptors: Adaptive Testing, Test Items, Classification, Psychometrics
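To make the adaptive-testing side of the Frey and Carstensen entry concrete, the sketch below runs a deliberately simplified one-dimensional adaptive test under a Rasch model: at each step it administers the unused item whose difficulty is closest to the current ability estimate (the maximum-information choice for that model) and then re-estimates ability on a grid. Multidimensional adaptive testing and diagnostic classification models are considerably more involved; the item bank and examinee here are hypothetical.

```python
# One-dimensional adaptive-testing sketch under a Rasch model. A deliberate
# simplification of the multidimensional case discussed above; item difficulties
# and the simulated examinee are hypothetical.
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def grid_mle(bs, xs, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate over a grid of theta values."""
    bs, xs = np.asarray(bs), np.asarray(xs)
    p = rasch_p(grid[:, None], bs[None, :])
    loglik = (xs * np.log(p) + (1 - xs) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

rng = np.random.default_rng(3)
bank = np.sort(rng.uniform(-3, 3, size=30))    # difficulties of a 30-item bank
true_theta = 1.0
theta, used, administered_b, responses = 0.0, set(), [], []

for step in range(10):
    # Select the unused item with difficulty closest to the current estimate.
    candidates = [j for j in range(len(bank)) if j not in used]
    j = min(candidates, key=lambda j: abs(bank[j] - theta))
    used.add(j)
    x = int(rng.random() < rasch_p(true_theta, bank[j]))   # simulated response
    administered_b.append(bank[j]); responses.append(x)
    theta = grid_mle(administered_b, responses)
    print(f"step {step + 1}: item b={bank[j]:+.2f}, response={x}, theta={theta:+.2f}")
```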