Showing 1 to 15 of 21 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
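The Ma and Harris abstract turns on how an item position effect can be represented within an IRT framework. As a minimal sketch (not the authors' model), the code below assumes a 2PL model in which an item's effective difficulty drifts linearly with its serial position; the drift rate and all parameter values are hypothetical.

```python
import numpy as np

def p_correct_2pl_with_ipe(theta, a, b, position, drift_per_position=0.02):
    """Probability of a correct response under a 2PL model with a simple
    item position effect (IPE): difficulty increases linearly with the
    item's serial position on the test. The drift term is illustrative."""
    b_effective = b + drift_per_position * position  # later position -> harder item
    return 1.0 / (1.0 + np.exp(-a * (theta - b_effective)))

# Same item (a=1.2, b=0.0) administered early vs. late for an average examinee
print(p_correct_2pl_with_ipe(theta=0.0, a=1.2, b=0.0, position=1))
print(p_correct_2pl_with_ipe(theta=0.0, a=1.2, b=0.0, position=40))
```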
Peer reviewed
Kaiwen Man – Educational and Psychological Measurement, 2024
In various fields, including college admission, medical board certifications, and military recruitment, high-stakes decisions are frequently made based on scores obtained from large-scale assessments. These decisions necessitate precise and reliable scores that enable valid inferences to be drawn about test-takers. However, the ability of such…
Descriptors: Prior Learning, Testing, Behavior, Artificial Intelligence
Peer reviewed
Nazaretsky, Tanya; Hershkovitz, Sara; Alexandron, Giora – International Educational Data Mining Society, 2019
Sequencing items in adaptive learning systems typically relies on a large pool of interactive question items that are analyzed into a hierarchy of skills, also known as Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' response data in order to optimize the mapping of items to KCs, with similarity-based…
Descriptors: Intelligent Tutoring Systems, Item Response Theory, Measurement, Testing
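To make the similarity-based idea in the Nazaretsky et al. abstract concrete, here is a minimal sketch (not the authors' method) that groups items into tentative Knowledge Components from the correlations of their response vectors; the response matrix is randomly generated purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical binary response matrix: rows = students, columns = items
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 12)).astype(float)

# Item-by-item similarity: correlation of response vectors across students
sim = np.corrcoef(responses.T)
dist = 1.0 - sim                      # turn similarity into a distance
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering of items into tentative KCs
Z = linkage(squareform(dist, checks=False), method="average")
kc_labels = fcluster(Z, t=3, criterion="maxclust")  # ask for 3 clusters
print(kc_labels)  # cluster id per item, to compare against an expert KC map
```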
Peer reviewed
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
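The Zhang and Li abstract describes sequential hypothesis testing on item statistics during CAT administration. The sketch below is a generic CUSUM-style monitor on standardized 2PL residuals, not the authors' procedure; the reference value k and decision limit h are illustrative, not calibrated.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def cusum_flag(thetas, responses, a, b, k=0.5, h=5.0):
    """CUSUM-style monitor for one item: accumulate standardized residuals
    between observed responses and the probabilities implied by the item's
    banked 2PL parameters, and flag the item if the sum drifts past h."""
    s_pos = s_neg = 0.0
    for theta, x in zip(thetas, responses):
        p = p_2pl(theta, a, b)
        z = (x - p) / np.sqrt(p * (1.0 - p))  # standardized residual
        s_pos = max(0.0, s_pos + z - k)       # item behaving easier than banked
        s_neg = max(0.0, s_neg - z - k)       # item behaving harder than banked
        if s_pos > h or s_neg > h:
            return True
    return False

# Example: examinee abilities and responses seen so far for one banked item
print(cusum_flag(thetas=[0.1, -0.5, 1.2, 0.0], responses=[1, 1, 1, 1], a=1.0, b=0.0))
```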
Peer reviewed
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Using computer-based monitoring systems that rely on tests could be the most effective way of evaluating knowledge. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Peer reviewed
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Verbic, Srdjan; Tomic, Boris; Kartal, Vesna – Online Submission, 2010
On-line trial testing for fourth-grade students was an exploratory study carried out as part of the project "Developing annual test of students' achievement in Nature & Society," conducted by the Institute for Education Quality and Evaluation. The main aims of the study were to explore possibilities for on-line testing at the national level in…
Descriptors: Foreign Countries, Item Response Theory, High School Students, Computer Assisted Testing
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
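As a small illustration of the kind of polytomous IRT model the handbook covers, the following sketch computes category response probabilities under Samejima's graded response model; the discrimination and threshold values are made up.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category response probabilities under Samejima's graded response model.
    `thresholds` are the ordered category boundary parameters."""
    # Cumulative probabilities P(X >= k) for k = 1..m, with P(X >= 0) = 1
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds))))
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower  # P(X = k) for k = 0..m, summing to 1

print(grm_category_probs(theta=0.5, a=1.3, thresholds=[-1.0, 0.0, 1.2]))
```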
Peer reviewed
Wang, Tianyou; Zhang, Jiawei – Psychometrika, 2006
This paper deals with optimal partitioning of limited testing time in order to achieve maximum total test score. Nonlinear optimization theory was used to analyze this problem. A general case using a generic item response model is first presented. A special case that applies a response time model proposed by Wang and Hanson (2005) is also…
Descriptors: Reaction Time, Testing, Scores, Item Response Theory
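The Wang and Zhang abstract frames time allocation as a nonlinear optimization problem. The sketch below solves a toy version of that problem, maximizing expected total score under a total-time constraint with a simple saturating success-probability curve; it does not use the Wang and Hanson (2005) response time model, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-item parameters: ceiling success probability and the rate at
# which spending more time approaches that ceiling.
p_max = np.array([0.95, 0.85, 0.75, 0.90])
rate = np.array([1.5, 0.8, 0.5, 1.0])
total_time = 6.0

def neg_expected_score(t):
    # Expected score if P(correct on item i) = p_max_i * (1 - exp(-rate_i * t_i))
    return -np.sum(p_max * (1.0 - np.exp(-rate * t)))

res = minimize(
    neg_expected_score,
    x0=np.full(4, total_time / 4),  # start from an even split
    bounds=[(0.0, total_time)] * 4,
    constraints=[{"type": "eq", "fun": lambda t: np.sum(t) - total_time}],
    method="SLSQP",
)
print(res.x)     # time allocated to each item
print(-res.fun)  # maximized expected total score
```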
Peer reviewed
Brown, Richard S.; Villarreal, Julio C. – International Journal of Testing, 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Descriptors: Psychometrics, Test Bias, Testing, Simulation
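The Brown and Villarreal abstract concerns response patterns that are inconsistent with an examinee's score estimate, the territory of person-fit statistics. As one common example (not necessarily the index used in the article), the sketch below computes the standardized log-likelihood statistic lz under a 2PL model with illustrative parameters.

```python
import numpy as np

def lz_person_fit(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic (lz) for a fixed theta
    under the 2PL model. Large negative values flag response patterns that are
    unlikely given the examinee's estimated ability."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    x = np.asarray(responses, dtype=float)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))    # observed log-likelihood
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))     # its expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)      # its variance
    return (l0 - e) / np.sqrt(v)

# Aberrant-looking pattern: misses easy items, answers hard ones correctly
a = np.ones(6)
b = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
print(lz_person_fit([0, 0, 0, 1, 1, 1], theta=0.0, a=a, b=b))
```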
Peer reviewed
Davies, Alan – Language Testing, 2003
Provides a personal view of the history of Anglo-American language testing over the last half-century. Argues that major developments in the field have tended to be embraced too enthusiastically so that they have led to unbalanced views concerning the construct definition of language, the scope of test impact, and the value of new methods of test…
Descriptors: Communicative Competence (Languages), Computer Assisted Testing, Item Response Theory, Language Tests