Showing 1 to 15 of 73 results
Peer reviewed
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Peer reviewed
Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Peer reviewed
Read, John – Language Testing, 2023
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Format
Peer reviewed
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
What follows is a practical guide for establishing the validity of a survey for research purposes. The motivation for providing this guide is our observation that researchers, not necessarily being survey researchers per se, but wanting to use a survey method, lack a concise resource on validity. There is far more to know about surveys and survey…
Descriptors: Surveys, Test Validity, Test Construction, Test Items
Peer reviewed
Wise, Steven L. – Education Inquiry, 2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
NWEA, 2022
This technical report documents the processes and procedures employed by NWEA® to build and support the English MAP® Reading Fluency™ assessments administered during the 2020-2021 school year. It is written for measurement professionals and administrators to help evaluate the quality of MAP Reading Fluency. The seven sections of this report: (1)…
Descriptors: Achievement Tests, Reading Tests, Reading Achievement, Reading Fluency
Peer reviewed
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Magno, Carlo – UNESCO Bangkok, 2020
The COVID-19 pandemic has disrupted education across the globe leading countries to adapt how they administer and manage high-stakes examinations and large-scale learning assessments. This thematic review describes the measures that countries have taken, in terms of policies and practices, when learning assessments are disrupted by emergencies and…
Descriptors: High Stakes Tests, COVID-19, Pandemics, Cross Cultural Studies
Sriram, Rishi – NASPA - Student Affairs Administrators in Higher Education, 2014
When student affairs professionals assess their work, they often employ some type of survey. The use of surveys stems from a desire to objectively measure outcomes, a demand from someone else (e.g., supervisor, accreditation committee) for data, or the feeling that numbers can provide an aura of competence. Although surveys are effective tools for…
Descriptors: Surveys, Test Construction, Student Personnel Services, Test Use
Peer reviewed
Thomas, Jason E.; Hornsey, Philip E. – Journal of Instructional Research, 2014
Formative Classroom Assessment Techniques (CAT) have been well-established instructional tools in higher education since their exposition in the late 1980s (Angelo & Cross, 1993). A large body of literature exists surrounding the strengths and weaknesses of formative CATs. Simpson-Beck (2011) suggested insufficient quantitative evidence exists…
Descriptors: Classroom Techniques, Nontraditional Education, Adult Education, Formative Evaluation
Peer reviewed
Salend, Spencer J. – Educational Leadership, 2011
Creating a fair, reliable, teacher-made test is a challenge. Every year, poorly designed tests fail to accurately measure many students' learning--and negatively affect their academic futures. Salend, a well-known writer on assessment for at-risk students who consults with schools on assessment procedures, offers guidelines for creating tests that…
Descriptors: At Risk Students, Test Construction, Student Evaluation, Evaluation Methods
Peer reviewed
Ediger, Marlow – Education, 2010
Data driven decision making emphasizes the importance of the teacher using objective sources of information in developing the social studies curriculum. Too frequently, decisions of teachers have been made based on routine and outdated methods of teaching. Valid and reliable tests used to secure results from pupil learning make for better…
Descriptors: Data, Decision Making, Social Studies, Standardized Tests
Jin, Yan – Journal of Pan-Pacific Association of Applied Linguistics, 2011
The College English Test (CET) is an English language test designed for educational purposes, administered on a very large scale, and used for making high-stakes decisions. This paper discusses the key issues facing the CET during the course of its development in the past two decades. It argues that the most fundamental and critical concerns of…
Descriptors: High Stakes Tests, Language Tests, Measures (Individuals), Graduates
Peer reviewed
Simon, Alan J.; Joiner, Lee M. – Journal of Educational Measurement, 1976
The purpose of this study was to determine whether a Mexican version of the Peabody Picture Vocabulary Test could be improved by directly translating both forms of the American test, then using decision procedures to select the better item of each pair. The reliability of the simple translations suffered. (Author/BW)
Descriptors: Early Childhood Education, Spanish, Test Construction, Test Format
Caldwell, Robert M.; Marcel, Marvin – Training, 1985
Examines Southwestern Bell's Interdepartmental Training Center's program of providing objective evaluations of trainers and the training process. Elements that are discussed include the evaluation format, the form of the evaluation instrument and its emphasis, the validation process, and refinements to the system. (CT)
Descriptors: Evaluation Methods, Guidelines, Teacher Evaluation, Test Construction