Showing 1 to 15 of 122 results
Peer reviewed
Federica Picasso – Research on Education and Media, 2024
In the current higher education context, the development of academics' competencies seems to be a crucial issue, with a strong focus on digital skills for teaching, learning, and assessment (Redecker & Punie, 2017). In connection with the DigCompEdu framework (2017), it seems important to understand how to better sustain academics' new…
Descriptors: Technology Uses in Education, Computer Assisted Testing, Feedback (Response), Evaluation Methods
Abdullah Abdul Wahab Alsayar – ProQuest LLC, 2021
Testlets offer several advantages in the development and administration of tests, such as (1) the construction of meaningful test items, (2) the avoidance of exposure to non-relevant context, (3) improved testing efficiency, and (4) the progression of testlet items toward higher-order thinking skills. Thus, the inclusion of testlets in educational…
Descriptors: Test Construction, Testing, Test Items, Efficiency
Fager, Meghan L. – ProQuest LLC, 2019
Recent research in multidimensional item response theory has introduced within-item interaction effects between latent dimensions in the prediction of item responses. The objective of this study was to extend this research to bifactor models to include an interaction effect between the general and specific latent variables measured by an item.…
Descriptors: Test Items, Item Response Theory, Factor Analysis, Simulation
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments are considered more demanding than students are currently prepared to face. Two types of factors may contribute to test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Rothman, Robert – Jobs for the Future, 2018
This executive summary synthesizes recent research on formative assessment to elucidate its core components, then examines some new approaches currently being tried in schools. It considers the evidence for them, as well as the questions and issues they continue to raise, and takes a look at the challenges schools and school systems face in…
Descriptors: Formative Evaluation, Summative Evaluation, Learner Engagement, Models
Rothman, Robert – Jobs for the Future, 2018
This paper synthesizes recent research on formative assessment to elucidate its core components, then examines some new approaches currently being tried in schools. It considers the evidence for them, as well as the questions and issues they continue to raise, and takes a look at the challenges schools and school systems face in implementing both…
Descriptors: Formative Evaluation, Models, Academic Achievement, Testing
Peer reviewed
Boals, Timothy; Kenyon, Dorry M.; Blair, Alissa; Cranley, M. Elizabeth; Wilmes, Carsten; Wright, Laura J. – Review of Research in Education, 2015
In conducting this review, we examine literature that explores the merits and shortcomings of ELP test design and testing as they have evolved over time through the current era of CCR standards. In the first section, we situate the role of language testing in its broader historical and policy context. In the second section, we examine the evolving…
Descriptors: Language Proficiency, English (Second Language), English Language Learners, Language Tests
Peer reviewed
Bond, Lloyd – Measurement: Interdisciplinary Research and Perspectives, 2014
Lloyd Bond comments here on the Focus article in this issue of "Measurement: Interdisciplinary Research and Perspectives". The Focus article is entitled: "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games" (Russell G. Almond, Yoon Jeon Kim, Gertrudes Velasquez, and Valerie J. Shute). Bond…
Descriptors: Educational Assessment, Task Analysis, Models, Design
Peer reviewed
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2013
Changes to the design and development of educational assessments are creating an unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Descriptors: Educational Assessment, Test Items, Automation, Computer Assisted Testing
Peer reviewed
Smithson, John – State Education Standard, 2017
A standards-based model of reform has dominated public education for 30 years. Under the Every Student Succeeds Act (ESSA), it will continue to dominate education policy. Is that model working? State boards of education share an intrinsic interest in this question. While there are many ways to investigate it, one approach that shows promise treats…
Descriptors: Academic Standards, Models, Educational Change, Public Education
Peer reviewed
Kang, Taehoon; Chen, Troy T. – Asia Pacific Education Review, 2011
The utility of Orlando and Thissen's (2000, 2003) S-X² fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X² in assessing item fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…
Descriptors: Simulation, Item Response Theory, Models, Testing
Peer reviewed
He, Qingping; Opposs, Dennis; Glanville, Matthew; Lampreia-Carvalho, Fatima – Curriculum Journal, 2015
In England, pupils aged 16 take the General Certificate of Secondary Education (GCSE) examinations for a range of subjects. The current assessment models for GCSE include a two-tier structure for some subjects and a non-tier model for the others. The tiered subjects have a higher tier designed for high achieving pupils and a lower tier for low…
Descriptors: Foreign Countries, Exit Examinations, Secondary School Students, Program Effectiveness
Peer reviewed
Copp, Derek T. – Education Policy Analysis Archives, 2017
Large-scale assessment (LSA) is a tool used by education authorities for several purposes, including the promotion of teacher-based instructional change. In Canada, all 10 provinces engage in large-scale testing across several grade levels and subjects, and also have the common expectation that the results data will be used to improve instruction…
Descriptors: Foreign Countries, Incentives, Educational Policy, Mixed Methods Research
Peer reviewed
Rock, Donald A. – ETS Research Report Series, 2012
This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…
Descriptors: Models, Educational Change, Longitudinal Studies, Educational Development
Peer reviewed
Lin, Pei-Ying; Lin, Yu-Cheng – International Journal of Inclusive Education, 2015
To identify teacher candidates' needs for training in inclusive classroom assessment, the present study investigated teacher candidates' beliefs about inclusive classroom assessments for all students educated in regular classrooms, including those with special needs and English language learners. An innovative theoretical assessment model,…
Descriptors: Foreign Countries, Preservice Teachers, Teacher Education, Inclusion