Showing 1 to 15 of 23 results
Peer reviewed
Chen, Yunxiao; Lee, Yi-Hsuan; Li, Xiaoou – Journal of Educational and Behavioral Statistics, 2022
In standardized educational testing, test items are reused in multiple test administrations. To ensure the validity of test scores, the psychometric properties of items should remain unchanged over time. In this article, we consider the sequential monitoring of test items, in particular, the detection of abrupt changes to their psychometric…
Descriptors: Standardized Tests, Test Items, Test Validity, Scores
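The change detection described above can be pictured with a simple two-sided CUSUM applied to an item's proportion correct across administrations. This is a generic sketch under assumed values for the baseline, drift allowance, and threshold, not the authors' monitoring statistic.

```python
# Minimal sketch of sequential monitoring of one item's proportion correct
# across administrations with a two-sided CUSUM. Illustrative only: the
# baseline, drift allowance (k), and threshold (h) are assumptions.
import numpy as np

def cusum_monitor(p_obs, p_baseline, k=0.01, h=0.05):
    """Return the first administration index at which either CUSUM
    statistic exceeds h, or None if no change is signalled."""
    s_pos, s_neg = 0.0, 0.0
    for t, p in enumerate(p_obs):
        dev = p - p_baseline
        s_pos = max(0.0, s_pos + dev - k)   # drift upward (item getting easier, e.g. exposure)
        s_neg = max(0.0, s_neg - dev - k)   # drift downward (item getting harder)
        if s_pos > h or s_neg > h:
            return t
    return None

# Example: a stable item whose proportion correct jumps after administration 10.
rng = np.random.default_rng(0)
p_series = np.concatenate([0.55 + 0.01 * rng.standard_normal(10),
                           0.65 + 0.01 * rng.standard_normal(10)])
print(cusum_monitor(p_series, p_baseline=0.55))  # signals at or shortly after index 10
```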
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2014
A method for medical screening is adapted to differential item functioning (DIF). Its essential elements are explicit declarations of the level of DIF that is acceptable and of the loss function that quantifies the consequences of the two kinds of inappropriate classification of an item. Instead of a single level and a single function, sets of…
Descriptors: Test Items, Test Bias, Simulation, Hypothesis Testing
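The decision-theoretic idea in the abstract can be sketched as comparing expected losses under a normal plug-in approximation for the estimated DIF effect. The acceptable level tau and the two loss values below are hypothetical, and this is not Longford's actual procedure.

```python
# Schematic of classifying an item by expected loss: declare an acceptable
# level of DIF (tau) and losses for the two misclassifications, then choose
# the action with the smaller expected loss. All numbers are illustrative.
from scipy.stats import norm

def classify_item(delta_hat, se, tau=0.3, loss_flag_ok=1.0, loss_pass_bad=4.0):
    """Return 'flag' or 'retain' for an item with estimated DIF effect delta_hat."""
    # Plug-in plausibility that the true DIF exceeds the acceptable level tau in magnitude.
    p_bad = norm.sf(tau, loc=delta_hat, scale=se) + norm.cdf(-tau, loc=delta_hat, scale=se)
    exp_loss_retain = p_bad * loss_pass_bad      # cost of passing a truly unacceptable item
    exp_loss_flag = (1 - p_bad) * loss_flag_ok   # cost of flagging an acceptable item
    return 'flag' if exp_loss_flag < exp_loss_retain else 'retain'

print(classify_item(delta_hat=0.05, se=0.10))  # small estimated DIF -> 'retain'
print(classify_item(delta_hat=0.60, se=0.10))  # large estimated DIF -> 'flag'
```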
Henning, Grant – English Teaching Forum, 2012
To some extent, good testing procedure, like good language use, can be achieved through avoidance of errors. Almost any language-instruction program requires the preparation and administration of tests, and it is only to the extent that certain common testing mistakes have been avoided that such tests can be said to be worthwhile selection,…
Descriptors: Testing, English (Second Language), Testing Problems, Student Evaluation
Sawchuk, Stephen – Education Week, 2010
Most experts in the testing community have presumed that the $350 million promised by the U.S. Department of Education to support common assessments would promote those that made greater use of open-ended items capable of measuring higher-order critical-thinking skills. But as measurement experts consider the multitude of possibilities for an…
Descriptors: Test Items, Federal Legislation, Scoring, Accountability
Peer reviewed
Saladin, Shawn P.; Reid, Christine; Shiels, John – Rehabilitation Research, Policy, and Education, 2011
The Commission on Rehabilitation Counselor Certification (CRCC) has taken a proactive stance on perceived test inequities of the Certified Rehabilitation Counselor (CRC) exam as they relate to people who are prelingually deaf and hard of hearing. This article describes the process developed and implemented by the CRCC to help maximize test equity…
Descriptors: Test Items, Rehabilitation Counseling, Counselor Certification, Deafness
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Peer reviewed
Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel – International Journal of Testing, 2009
In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…
Descriptors: Test Items, Investigations, Semantics, Translation
Peer reviewed
Cziko, Gary A. – Educational and Psychological Measurement, 1984
Some problems associated with the criteria of reproducibility and scalability as they are used in Guttman scalogram analysis to evaluate cumulative, nonparametric scales of dichotomous items are discussed. A computer program is presented which analyzes response patterns elicited by dichotomous scales designed to be cumulative. (Author/DWH)
Descriptors: Scaling, Statistical Analysis, Test Construction, Test Items
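For readers unfamiliar with the two criteria named above, a bare-bones computation of the coefficient of reproducibility and the coefficient of scalability for dichotomous response patterns might look like the following. It uses one common error-counting convention (deviations from each respondent's ideal cumulative pattern) and is not Cziko's program.

```python
# Guttman scalogram criteria for a respondents-by-items matrix of 0/1 responses.
import numpy as np

def guttman_coefficients(X):
    """Return (coefficient of reproducibility, coefficient of scalability)."""
    X = np.asarray(X)
    n, k = X.shape
    # Order items from easiest (most endorsed) to hardest.
    Xs = X[:, np.argsort(-X.mean(axis=0))]
    # Ideal cumulative pattern for total score s: 1s on the s easiest items.
    scores = Xs.sum(axis=1)
    ideal = (np.arange(k) < scores[:, None]).astype(int)
    errors = np.abs(Xs - ideal).sum()
    cr = 1 - errors / (n * k)                 # coefficient of reproducibility
    # Minimum marginal reproducibility: best guess from item marginals alone.
    p = Xs.mean(axis=0)
    mmr = np.maximum(p, 1 - p).mean()
    cs = (cr - mmr) / (1 - mmr)               # coefficient of scalability
    return cr, cs

# Nearly cumulative response patterns, with one deviant respondent in the last row.
X = [[1, 1, 1, 1],
     [1, 1, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 1]]
print(guttman_coefficients(X))  # roughly (0.9, 0.67)
```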
Peer reviewed
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2000
Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…
Descriptors: Computer Assisted Testing, Standardized Tests, Test Items, Testing Problems
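A back-of-the-envelope calculation, not Wainer's analysis, hints at why a larger pool is not enough: even under uniform random item selection, which spreads exposure more evenly than adaptive selection does, the probability that two fixed-length forms share at least one item falls far more slowly than the pool grows.

```python
# Chance that two independently assembled tests of fixed length share at
# least one item, as a function of pool size. Doubling the pool does not
# come close to halving that chance. Uniform random selection is assumed.
from math import comb

def overlap_probability(pool_size, test_length):
    """P(two independent tests of test_length items share >= 1 item)."""
    return 1 - comb(pool_size - test_length, test_length) / comb(pool_size, test_length)

for pool in (300, 600, 1200):
    print(pool, round(overlap_probability(pool, test_length=30), 3))
# Approximate output: 300 -> 0.96, 600 -> 0.79, 1200 -> 0.53
```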
Cantwell, Zita M. – Evaluation News, 1985
The wording and structure of questionnaire items can interact with specified sample categories based on evaluation goals and respondent characteristics. The effects of the interactions can restructure samples and introduce bias into the data analysis. These effects, and suggestions for avoiding them, are demonstrated for five types of…
Descriptors: Higher Education, Item Analysis, Questionnaires, Statistical Bias
Peer reviewed
Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
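The 15-item and 16-class figures follow from assuming a complete binary branching structure in which each examinee answers one item per level and is routed by the response:

```python
# Arithmetic behind the abstract's figures, assuming a complete binary
# branching testlet: one item is administered per level, so a 4-level unit
# needs 2**0 + 2**1 + 2**2 + 2**3 items and ends in 2**4 distinct routes.
levels = 4
print(sum(2 ** d for d in range(levels)))  # 15 items in the unit
print(2 ** levels)                         # 16 examinee classes
```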
Sarvela, Paul D.; Noonan, John V. – Educational Technology, 1988
Describes measurement problems associated with computer based testing (CBT) programs when they are part of a computer assisted instruction curriculum. Topics discussed include CBT standards; selection of item types; the contamination of items that arises from test design strategies; and the non-equivalence of comparison groups in item analyses. (8…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Item Analysis, Psychometrics
Merz, William R. – 1980
Several methods of assessing test item bias are described, and the concept of fair use of tests is examined. A test item is biased if individuals of equal ability have different probabilities of answering the item correctly. The following seven general procedures used to examine test items for bias are summarized and discussed: (1) analysis of…
Descriptors: Comparative Analysis, Evaluation Methods, Factor Analysis, Mathematical Models
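The definition of bias quoted above can be made concrete under a Rasch model, used here purely for illustration: two examinees of equal ability answer the same item, but the item behaves as if it were harder for members of one group.

```python
# Equal ability, unequal success probability: the hallmark of a biased item.
# The Rasch model and the 0.6-logit difficulty shift are illustrative assumptions.
import math

def p_correct(theta, b):
    """Rasch probability of a correct response for ability theta and difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

theta = 0.5
print(round(p_correct(theta, b=0.0), 3))  # reference group: ~0.622
print(round(p_correct(theta, b=0.6), 3))  # focal group sees b + 0.6: ~0.475
```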
Sarvela, Paul D.; Noonan, John V. – 1987
Although there are many advantages to using computer-based tests (CBTs) linked to computer-based instruction (CBI), there are also several difficulties. In certain instructional settings, it is difficult to conduct psychometric analyses of test results. Several measurement issues surface when CBT programs are linked to CBI. Testing guidelines…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Educational Testing, Equated Scores