Showing 1 to 15 of 77 results
Peer reviewed
Direct link
Angela Johnson; Elizabeth Barker; Marcos Viveros Cespedes – Educational Measurement: Issues and Practice, 2024
Educators and researchers strive to build policies and practices on data and evidence, especially on academic achievement scores. When assessment scores are inaccurate for specific student populations or when scores are inappropriately used, even data-driven decisions will be misinformed. To maximize the impact of the research-practice-policy…
Descriptors: Equal Education, Inclusion, Evaluation Methods, Error of Measurement
Peer reviewed
Direct link
Yung, Kevin Wai-Ho – RELC Journal: A Journal of Language Teaching and Research, 2020
This article introduces the use of public exam questions in fishbowl debate to engage highly exam-oriented secondary students with communicative language teaching (CLT). The practice aims to address the issue that many teachers of English as a second language (ESL)/English as a foreign language (EFL) in Asian contexts either teach to the test or…
Descriptors: Second Language Learning, Second Language Instruction, English (Second Language), Teaching Methods
Peer reviewed
Direct link
Rambiritch, Avasha – Perspectives in Education, 2015
Applied linguists should strive to ensure that the tests they design and use are not only fair and socially acceptable, but also have positive effects, given that tests can sometimes have far-reaching and often detrimental effects on test-takers. What this paper attempts to do is highlight how this concern for responsible…
Descriptors: Accountability, Test Construction, Applied Linguistics, Test Wiseness
Peer reviewed
Direct link
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program, at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Direct link
Berschback, Rick – Journal of College Teaching & Learning, 2011
College professors often regard their time in the classroom as fulfilling and rewarding; the chance to affect the academic and professional development of their students is most likely a key reason why they chose to be professional educators. Unfortunately, with college courses come college credits, which necessitate a course grade for each student,…
Descriptors: Classroom Techniques, Cheating, Adjunct Faculty, Teaching Methods
Cech, Scott J. – Education Week, 2008
There's a war of sorts going on within the normally staid assessment industry, and it's a war over the definition of a type of assessment that many educators understand in only a sketchy fashion. Formative assessments, also known as "classroom assessments," are in some ways easier to define by what they are not. They're not like the long,…
Descriptors: Formative Evaluation, Testing, Evaluation Problems, Testing Problems
Peer reviewed
Cziko, Gary A. – Educational and Psychological Measurement, 1984
Some problems associated with the criteria of reproducibility and scalability as they are used in Guttman scalogram analysis to evaluate cumulative, nonparametric scales of dichotomous items are discussed. A computer program is presented which analyzes response patterns elicited by dichotomous scales designed to be cumulative. (Author/DWH)
Descriptors: Scaling, Statistical Analysis, Test Construction, Test Items
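The reproducibility criterion that Cziko's abstract refers to is conventionally computed as one minus the proportion of response errors relative to the ideal Guttman pattern. As a rough illustration only (this is not the article's program, and the Goodenough-Edwards error-counting shown here is one of several conventions), a minimal sketch:

```python
def reproducibility(patterns):
    """Coefficient of reproducibility for dichotomous item responses.

    patterns: list of 0/1 response lists, items ordered easiest to hardest.
    For each respondent, the ideal Guttman pattern places all passes on
    the easiest items; errors are positions where the observed pattern
    deviates from that ideal (Goodenough-Edwards counting).
    """
    total_errors = 0
    n_responses = 0
    for p in patterns:
        s = sum(p)                                  # respondent's total score
        ideal = [1] * s + [0] * (len(p) - s)        # ideal cumulative pattern
        total_errors += sum(o != i for o, i in zip(p, ideal))
        n_responses += len(p)
    return 1 - total_errors / n_responses

# Perfectly cumulative data reproduces exactly:
print(reproducibility([[1, 1, 0], [1, 0, 0], [1, 1, 1]]))  # → 1.0
```

A coefficient of 0.90 or higher is the traditional threshold for treating a set of dichotomous items as an acceptable Guttman scale.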
Hambleton, Ronald K. – 1996
The International Test Commission formed a 13-person committee of psychologists representing a number of international organizations to prepare a set of guidelines for adapting educational and psychological tests. The committee has worked for 3 years to produce near final drafts of 22 guidelines organized into 4 categories: (1) context; (2)…
Descriptors: Educational Testing, Psychological Testing, Scoring, Test Construction
Cook, Linda L.; Eignor, Daniel R. – 1981
The purposes of this paper are five-fold: to discuss (1) when item response theory (IRT) equating methods should provide better results than traditional methods; (2) which IRT model, the three-parameter logistic or the one-parameter logistic (Rasch), is the most reasonable to use; (3) what unique contributions IRT methods can offer the equating…
Descriptors: Equated Scores, Latent Trait Theory, Mathematical Models, Test Construction
Peer reviewed
Feldt, Leonard S. – Applied Measurement in Education, 2002
Considers the situation in which content or administrative considerations limit the way in which a test can be partitioned to estimate the internal consistency reliability of the total test score. Demonstrates that a single-valued estimate of the total score reliability is possible only if an assumption is made about the comparative size of the…
Descriptors: Error of Measurement, Reliability, Scores, Test Construction
Shick, Jacqueline – Health Education (Washington D.C.), 1989
This article focuses on common errors associated with true-false, matching, completion, and essay questions as presented in textbook test manuals. Teachers should be able to select and/or adapt test questions which would be applicable to the content of their courses and which meet minimal standards for test construction. (JD)
Descriptors: Health Education, Higher Education, Secondary Education, Test Construction
Peer reviewed
Cameron, Ann; Durham, Nedra; Long, Yvette; Noffke, Susan E. – New Advocate, 2001
Describes a "mistake" on the newly developed Illinois State Achievement Test for Third Grade reading comprehension involving the substitution of illustrations of a White family for the African-American family members in a story. Tells the story of how a group of third-graders discovered the mistake and the reactions and events which took…
Descriptors: Grade 3, Primary Education, Racial Discrimination, Reading Comprehension
Frechtling, Joy A.; Schenet, Margot A. – 1984
A description of the difficulties encountered in constructing a criterion-referenced test to assess end-of-year skills as part of a program evaluation is presented. The problems encountered in preparing the customized test are described in humorous detail. The first problem involved the listening test obtained from a publisher without…
Descriptors: Achievement Tests, Criterion Referenced Tests, Program Evaluation, Publications
Peer reviewed
Direct link
Mitra, Ananda; Jain-Shukla, Parul; Robbins, Adrienne; Champion, Heather; Durant, Robert – International Journal on E-Learning, 2008
This article provides a broad overview of the definition of web-based surveys, examining some of the benefits and burdens related to using the Web for data collection. It draws upon the experience of two years of data collection on 10 university campuses, demonstrating that there are noticeable differences in the speed with which web-based surveys…
Descriptors: College Students, Research Methodology, Data Collection, Internet
Peer reviewed
Roos, Linda L.; And Others – Educational and Psychological Measurement, 1996
This article describes Minnesota Computerized Adaptive Testing Language program code for using the MicroCAT 3.5 testing software to administer several types of self-adapted tests. Code is provided for: a basic self-adapted test; a self-adapted version of an adaptive mastery test; and a restricted self-adapted test. (Author/SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Programming