Collection: Assessments and Surveys
Showing 1 to 15 of 30 results
Peer reviewed
PDF on ERIC: Download full text
Teck Kiang Tan – Practical Assessment, Research & Evaluation, 2024
The procedures for carrying out factorial invariance testing to validate a construct are well developed, ensuring that a construct can be used reliably across groups for comparison and analysis, yet they remain largely restricted to the frequentist approach. This motivates an update to incorporate the growing Bayesian approach for carrying out the Bayesian…
Descriptors: Bayesian Statistics, Factor Analysis, Programming Languages, Reliability
Peer reviewed
PDF on ERIC: Download full text
Benjawan Plengkham; Sonthaya Rattanasak; Patsawut Sukserm – Journal of Education and Learning, 2025
This academic article presents the essential steps for designing an effective English questionnaire in social science research, with a focus on ensuring clarity, cultural sensitivity, and ethical integrity. Drawing on key insights from related studies, it outlines sound practice in questionnaire design and item development and the importance…
Descriptors: Guidelines, Test Construction, Questionnaires, Surveys
Peer reviewed
Direct link
Ledford, Jennifer R.; Lambert, Joseph M.; Pustejovsky, James E.; Zimmerman, Kathleen N.; Hollins, Nicole; Barton, Erin E. – Exceptional Children, 2023
Single-case design has a long history of use in assessing intervention effectiveness for children with disabilities. Although these designs have been widely employed for more than 50 years, recent years have seen especially dynamic growth in the use of single-case design and in the application of standards designed to improve the validity…
Descriptors: Research Design, Educational Research, Case Studies, Special Education
Peer reviewed
Direct link
Buckley, Jeffrey; Seery, Niall; Gumaelius, Lena; Canty, Donal; Doyle, Andrew; Pears, Arnold – International Journal of Technology and Design Education, 2021
Design is a core element of general technology education internationally. While there is a degree of contention with regard to its treatment, there is general consensus that the inclusion of design in some form is important to, if not characteristic of, the subject area. Acknowledging that design is important, there are many questions which need to be…
Descriptors: Alignment (Education), Design, Guidelines, Learning Theories
Peer reviewed
Direct link
Jonson, Jessica L.; Trantham, Pamela; Usher-Tate, Betty Jean – Educational Measurement: Issues and Practice, 2019
One of the substantive changes in the 2014 Standards for Educational and Psychological Testing was the elevation of fairness in testing as a foundational element of practice in addition to validity and reliability. Previous research indicates that testing practices often do not align with professional standards and guidelines. Therefore, to raise…
Descriptors: Culture Fair Tests, Test Validity, Test Reliability, Intelligence Tests
Peer reviewed
Direct link
Bao, Lei; Koenig, Kathleen; Xiao, Yang; Fritchman, Joseph; Zhou, Shaona; Chen, Cheng – Physical Review Physics Education Research, 2022
Abilities in scientific thinking and reasoning have been emphasized as core areas of initiatives, such as the Next Generation Science Standards or the College Board Standards for College Success in Science, which focus on the skills the future will demand of today's students. Although there is rich literature on studies of how these abilities…
Descriptors: Physics, Science Instruction, Teaching Methods, Thinking Skills
Peer reviewed
Direct link
Ferrara, Steve – Educational Measurement: Issues and Practice, 2017
Test security is not an end in itself; it is important because we want to be able to make valid interpretations from test scores. In this article, I propose a framework for comprehensive test security systems: prevention, detection, investigation, and resolution. The article discusses threats to test security, roles and responsibilities, rigorous…
Descriptors: Testing Programs, Educational Practices, Educational Policy, Program Improvement
Peer reviewed
Direct link
Larson-Hall, Jenifer; Plonsky, Luke – Language Learning, 2015
This paper presents a set of guidelines for reporting on five types of quantitative data issues: (1) Descriptive statistics, (2) Effect sizes and confidence intervals, (3) Instrument reliability, (4) Visual displays of data, and (5) Raw data. Our recommendations are derived mainly from various professional sources related to L2 research but…
Descriptors: Guidelines, Statistical Analysis, Language Research, Second Language Learning
Peer reviewed
Direct link
Purpura, James E.; Brown, James Dean; Schoonen, Rob – Language Learning, 2015
In empirical applied linguistics research it is essential that the key variables are operationalized in a valid and reliable way, and that the scores are treated appropriately, allowing for a proper testing of the hypotheses under investigation. The current article addresses several theoretical and practical issues regarding the use of measurement…
Descriptors: Applied Linguistics, Language Research, Statistical Analysis, Validity
Peer reviewed
Direct link
Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay – Educational Measurement: Issues and Practice, 2012
A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…
Descriptors: Automation, Scoring, Evaluation, Guidelines
Peer reviewed
PDF on ERIC: Download full text
Gilda, Agacer; Christofi, Andreas; Moliver, Donald – Journal of Instructional Pedagogies, 2014
Our paper describes the critical attributes of a homegrown online assessment test, which we have labelled the Major Field Learning Test (MFLT). These attributes also hold for the departmental tests, directly connected to coursework, that make up the MFLT. The paper provides helpful recommendations for the online assessment of learning as well as retention…
Descriptors: Computer Assisted Testing, Achievement Tests, Outcome Measures, Retention (Psychology)
Peer reviewed
Direct link
Berk, Ronald A. – Journal of Faculty Development, 2016
Recently, student outcomes have bubbled to the top of debates about how to evaluate teaching in community and liberal arts colleges, universities, and professional schools, but even more international attention has been riveted on how outcomes are being used to evaluate K-12 teachers and administrators (Harris, 2012; Rowen & Raudenbush, 2016;…
Descriptors: Value Added Models, Academic Achievement, Outcomes of Education, Teacher Evaluation
Peer reviewed
Direct link
Newman, Carole; Newman, Isadore – Teacher Educator, 2013
The concept of teacher accountability assumes that teachers will use data-driven decision making to plan and deliver appropriate, effective instruction to their students. To do so, teachers must be able to interpret accurately the data given to them, which requires knowledge of some basic concepts of assessment and…
Descriptors: Decision Making, Basic Vocabulary, Data, Accountability
Basanta, Carmen Perez – English Teaching Forum, 2012
The area of progress testing has been neglected and has lagged far behind developments in language teaching and testing in general. In most classrooms today, English is taught through communicative textbooks that provide neither accompanying tests nor any guidance for test construction. Teachers are on their own in constructing tests to measure…
Descriptors: Language Tests, Testing, Guidelines, Test Construction
Peer reviewed
Direct link
Feldman, Moshe; Lazzara, Elizabeth H.; Vanderbilt, Allison A.; DiazGranados, Deborah – Journal of Continuing Education in the Health Professions, 2012
Competency-based assessment and an emphasis on obtaining higher-level outcomes that reflect physicians' ability to demonstrate their skills has created a need for more advanced assessment practices. Simulation-based assessments provide medical education planners with tools to better evaluate the 6 Accreditation Council for Graduate Medical…
Descriptors: Performance Based Assessment, Physicians, Accuracy, High Stakes Tests
Previous Page | Next Page »
Pages: 1  |  2