Showing 1 to 15 of 46 results
Peer reviewed
Andrés Christiansen; Rianne Janssen – Educational Assessment, Evaluation and Accountability, 2024
In international large-scale assessments, students may not be compelled to answer every test item: a student can decide to skip a seemingly difficult item or may drop out before the end of the test is reached. The way these missing responses are treated will affect the estimation of the item difficulty and student ability, and ultimately affect…
Descriptors: Test Items, Item Response Theory, Grade 4, International Assessment
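The abstract's point that the treatment of missing responses drives the estimates can be illustrated with a small Python sketch (not taken from the article; the proportion-correct below stands in for the item difficulty an IRT calibration would produce):

import numpy as np

# Hypothetical responses to one item: 1 = correct, 0 = incorrect,
# None = omitted or not reached.
responses = [1, 0, 1, None, None, 1, 0, None, 1, 1]

# Treatment A: score every missing response as incorrect.
as_incorrect = np.array([0 if r is None else r for r in responses])
print(f"P(correct), missing scored incorrect: {as_incorrect.mean():.2f}")  # 0.50

# Treatment B: treat missing responses as not administered (drop them).
observed = np.array([r for r in responses if r is not None])
print(f"P(correct), missing ignored:          {observed.mean():.2f}")  # 0.71

The same item looks notably harder under Treatment A; in a full IRT calibration that choice propagates into both the item-difficulty and the ability estimates.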
Peer reviewed
Robitzsch, Alexander; Lüdtke, Oliver – Large-scale Assessments in Education, 2023
One major aim of international large-scale assessments (ILSA) like PISA is to monitor changes in student performance over time. To accomplish this task, a set of common items (i.e., link items) is repeatedly administered in each assessment. Linking methods based on item response theory (IRT) models are used to align the results from the different…
Descriptors: Educational Trends, Trend Analysis, International Assessment, Achievement Tests
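One standard IRT linking method of the kind the abstract refers to is mean-mean linking under the Rasch model; the sketch below uses invented difficulty values for four link items to show the mechanics:

import numpy as np

# Difficulties of the same link items, estimated separately in two cycles.
b_link_cycle1 = np.array([-0.40, 0.10, 0.55, 1.20])
b_link_cycle2 = np.array([-0.25, 0.30, 0.70, 1.45])

# Mean-mean linking constant: aligns the cycle-2 metric to cycle 1.
shift = b_link_cycle1.mean() - b_link_cycle2.mean()
print(f"linking constant: {shift:+.4f}")  # -0.1875

# Any cycle-2 parameter (difficulty or ability) moves onto the cycle-1
# scale by adding the constant.
print(f"cycle-2 item b = 0.80 on the cycle-1 scale: {0.80 + shift:.4f}")

How the link items behave across cycles (e.g., whether they drift) determines the quality of such trend comparisons, which is the issue the article examines.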
Peer reviewed
Lu, Jing; Wang, Chun – Journal of Educational Measurement, 2020
Item nonresponses are prevalent in standardized testing. They happen either when students fail to reach the end of a test due to a time limit or quitting, or when students choose to omit some items strategically. Oftentimes, item nonresponses are nonrandom, and hence, the missing data mechanism needs to be properly modeled. In this paper, we…
Descriptors: Item Response Theory, Test Items, Standardized Tests, Responses
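To see why a nonrandom mechanism cannot simply be ignored, consider a toy simulation (not from the paper) in which the probability of omitting an item rises as ability falls:

import numpy as np

rng = np.random.default_rng(7)
theta = rng.normal(size=50_000)                  # student abilities
p_correct = 1 / (1 + np.exp(-theta))             # Rasch item with b = 0
correct = rng.random(theta.size) < p_correct

p_omit = 1 / (1 + np.exp(theta))                 # low ability -> more omits
omitted = rng.random(theta.size) < p_omit

print(f"P(correct), all students:    {correct.mean():.3f}")
print(f"P(correct), responders only: {correct[~omitted].mean():.3f}")

Because responders are systematically more able than omitters, analyses that drop the omits overstate performance, which is why the missing-data mechanism itself has to be modeled.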
Peer reviewed
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results. PISA stopped treating non-reached items in the final part of the test as valid responses, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
Peer reviewed
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ more three-option multiple-choice items. Theoretical analyses, empirical results, and practical considerations indicate that such items are of equal or higher quality than four- or five-option items, and that more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Peer reviewed
Debeer, Dries; Janssen, Rianne – AERA Online Paper Repository, 2016
In educational assessments two types of missing responses can be discerned: items can be "not reached" or "skipped". Both types of omissions may be related to the test taker's proficiency, resulting in non-ignorable missingness. This paper proposes to model not reached and skipped items as part of the response process, using…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Secondary School Students
Peer reviewed
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
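A minimal sketch of the pseudo-item recoding that a tree-based (IRTree) approach implies, assuming one plausible tree (reached -> answered -> correct; the specific tree used in the article may differ); each pseudo-item is then calibrated with an ordinary binary IRT model:

from typing import Optional

def to_pseudo_items(response: str) -> tuple[Optional[int], Optional[int], Optional[int]]:
    # Map a raw response to (reached, answered, correct) pseudo-items.
    # None marks a node the response never traverses, so it contributes
    # no data to that node's model.
    if response == "not_reached":
        return (0, None, None)
    if response == "skipped":
        return (1, 0, None)
    if response == "incorrect":
        return (1, 1, 0)
    if response == "correct":
        return (1, 1, 1)
    raise ValueError(f"unknown response code: {response}")

for r in ["correct", "incorrect", "skipped", "not_reached"]:
    print(r, "->", to_pseudo_items(r))

Person parameters on the omission nodes can then correlate with proficiency, which is how such a framework captures nonignorable missingness.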
Peer reviewed
PDF available on ERIC
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that the respondents might give no responses to many items, resulting in less accurate estimations of both assessed abilities and item parameters. This report studies how the types of items affect the item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
McQuillan, Mark; Phelps, Richard P.; Stotsky, Sandra – Pioneer Institute for Public Policy Research, 2015
In July 2010, the Massachusetts Board of Elementary and Secondary Education (BESE) voted to adopt Common Core's standards in English language arts (ELA) and mathematics in place of the state's own standards in these two subjects. The vote was based largely on recommendations by Commissioner of Education Mitchell Chester and then Secretary of…
Descriptors: Reading Tests, Writing Tests, Achievement Tests, Common Core State Standards
Peer reviewed
El Masri, Yasmine H.; Baird, Jo-Anne; Graesser, Art – Assessment in Education: Principles, Policy & Practice, 2016
We investigate the extent to which language versions (English, French and Arabic) of the same science test are comparable in terms of item difficulty and demands. We argue that language is an inextricable part of the scientific literacy construct, whether or not the examiner intends it. This argument has considerable implications for methodologies…
Descriptors: International Assessment, Difficulty Level, Test Items, Language Variation
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Diamond, Esther E. – 1981
As test standards and research literature in general indicate, definitions of test bias and item bias vary considerably, as do the results of existing methods of identifying biased items. The situation is further complicated by issues of content, context, construct, and criterion. In achievement tests, for example, content validity may impose…
Descriptors: Achievement Tests, Aptitude Tests, Psychometrics, Test Bias
Townsend, Michael A. R.; Mahoney, Peggy – 1980
The roles of humor and anxiety in test performance were investigated. Measures of trait anxiety, state anxiety and achievement were obtained on a sample of undergraduate students; the A-Trait and A-State scales of the State-Trait Anxiety Inventory were used. Half of the students received additional humorous items in the achievement test. The…
Descriptors: Achievement Tests, Anxiety, Higher Education, Humor
Ward, William C.; And Others – 1983
A new item type was developed, incorporating features of "ill-structured" problems in a multiple-choice format. The problems are similar to previously developed scientific thinking tasks in requiring the examinee to go beyond the information provided; they resemble a variant of the logical reasoning item type, but demand somewhat more structuring…
Descriptors: Achievement Tests, Higher Education, Logical Thinking, Multiple Choice Tests
Haenn, Joseph F. – 1981
Procedures for conducting functional level testing have been available for use by practitioners for some time. However, the Title I Evaluation and Reporting System (TIERS), developed in response to the Education Amendments of 1974 to the Elementary and Secondary Education Act (ESEA), has provided the impetus for widespread adoption of this…
Descriptors: Achievement Tests, Difficulty Level, Scores, Scoring