Showing 1 to 15 of 21 results
Peer reviewed
Zhang, Susu; Li, Anqi; Wang, Shiyu – Educational Measurement: Issues and Practice, 2023
In computer-based tests allowing revision and reviews, examinees' sequence of visits and answer changes to questions can be recorded. The variable-length revision log data introduce new complexities to the collected data but, at the same time, provide additional information on examinees' test-taking behavior, which can inform test development and…
Descriptors: Computer Assisted Testing, Test Construction, Test Wiseness, Test Items
Peer reviewed
Hwanggyu Lim; Kyung T. Han – Educational Measurement: Issues and Practice, 2024
Computerized adaptive testing (CAT) has gained deserved popularity in the administration of educational and professional assessments, but continues to face test security challenges. To ensure sustained quality assurance and testing integrity, it is imperative to establish and maintain multiple stable item pools that are consistent in terms of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
Peer reviewed
Pan, Yiqin; Wollack, James A. – Educational Measurement: Issues and Practice, 2023
Pan and Wollack (PW) proposed a machine learning method to detect compromised items. We extend the work of PW to an approach detecting compromised items and examinees with item preknowledge simultaneously and draw on ideas in ensemble learning to relax several limitations in the work of PW. The suggested approach also provides a confidence score,…
Descriptors: Artificial Intelligence, Prior Learning, Item Analysis, Test Content
Peer reviewed
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
Peer reviewed
Arikan, Serkan; Aybek, Eren Can – Educational Measurement: Issues and Practice, 2022
Many scholars compared various item discrimination indices in real or simulated data. Item discrimination indices, such as item-total correlation, item-rest correlation, and IRT item discrimination parameter, provide information about individual differences among all participants. However, there are tests that aim to select a very limited number…
Descriptors: Monte Carlo Methods, Item Analysis, Correlation, Individual Differences
Peer reviewed
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
Peer reviewed
Sarah Lindstrom Johnson; Ray E. Reichenberg; Kathan Shukla; Tracy E. Waasdorp; Catherine P. Bradshaw – Educational Measurement: Issues and Practice, 2019
The U.S. government has become increasingly focused on school climate, as recently evidenced by its inclusion as an accountability indicator in the Every Student Succeeds Act. Yet, there remains considerable variability in both conceptualizing and measuring school climate. To better inform the research and practice related to school climate and…
Descriptors: Item Response Theory, Educational Environment, Accountability, Educational Legislation
Peer reviewed
Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U. – Educational Measurement: Issues and Practice, 2016
The main points of Sijtsma and Green and Yang in Educational Measurement: Issues and Practice (34, 4) are that reliability, internal consistency, and unidimensionality are distinct and that Cronbach's alpha may be problematic. Neither of these assertions is at odds with Davenport, Davison, Liou, and Love in the same issue. However, many authors…
Descriptors: Educational Assessment, Reliability, Validity, Test Construction
Peer reviewed
Margolis, Melissa J.; Mee, Janet; Clauser, Brian E.; Winward, Marcia; Clauser, Jerome C. – Educational Measurement: Issues and Practice, 2016
Evidence to support the credibility of standard setting procedures is a critical part of the validity argument for decisions made based on tests that are used for classification. One area in which there has been limited empirical study is the impact of standard setting judge selection on the resulting cut score. One important issue related to…
Descriptors: Academic Standards, Standard Setting (Scoring), Cutting Scores, Credibility
Peer reviewed
Polikoff, Morgan S. – Educational Measurement: Issues and Practice, 2010
Standards-based reform, as codified by the No Child Left Behind Act, relies on the ability of assessments to accurately reflect the learning that takes place in U.S. classrooms. However, this property of assessments--their instructional sensitivity--is rarely, if ever, investigated by test developers, states, or researchers. In this paper, the…
Descriptors: Federal Legislation, Psychometrics, Accountability, Teaching Methods
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
Peer reviewed
Gierl, Mark J.; Leighton, Jacqueline P.; Hunka, Stephen M. – Educational Measurement: Issues and Practice, 2000
Discusses the logic of the rule-space model (K. Tatsuoka, 1983) as it applies to test development and analysis. The rule-space model is a statistical method for classifying examinees' test item responses into a set of attribute-mastery patterns associated with different cognitive skills. Directs readers to a tutorial that may be downloaded. (SLD)
Descriptors: Item Analysis, Item Response Theory, Test Construction, Test Items
Peer reviewed
LaDuca, Tony – Educational Measurement: Issues and Practice, 2006
In the Spring 2005 issue, Wang, Schnipke, and Witt provided an informative description of the task inventory approach that centered on four functions of job analysis. The discussion included persuasive arguments for making systematic connections between tasks and KSAs. But several other facets of the discussion were much less persuasive. This…
Descriptors: Criticism, Task Analysis, Job Analysis, Persuasive Discourse
Peer reviewed
Bond, Lloyd – Educational Measurement: Issues and Practice, 1987
This article suggests that mechanical application of Golden Rule-like procedures is inappropriate. The fundamental idea embodied in them, namely, that of taking issues of equity into account in test construction, may reasonably be done without doing violence to test validity. (JAZ)
Descriptors: Court Litigation, Item Analysis, Minority Groups, Standards
Peer reviewed
Jaeger, Richard M. – Educational Measurement: Issues and Practice, 1987
This is a reprint of a 1986 letter by the former president of the National Council on Measurement in Education (NCME) to New York and California legislators. The author outlines why NCME is opposed to legislative initiatives to extend Golden Rule procedures to tests in those states. (JAZ)
Descriptors: Item Analysis, Letters (Correspondence), Minority Groups, Standards