Showing 121 to 135 of 514 results
Peer reviewed
Direct link
Conijn, Rianne; Martinez-Maldonado, Roberto; Knight, Simon; Buckingham Shum, Simon; Van Waes, Luuk; van Zaanen, Menno – Computer Assisted Language Learning, 2022
Current writing support tools tend to focus on assessing final or intermediate products, rather than the writing process. However, sensing technologies, such as keystroke logging, can enable provision of automated feedback during, and on aspects of, the writing process. Despite this potential, little is known about the critical indicators that can…
Descriptors: Automation, Feedback (Response), Writing Evaluation, Learning Analytics
Peer reviewed
Direct link
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
Beula M. Magimairaj; Philip Capin; Sandra L. Gillam; Sharon Vaughn; Greg Roberts; Anna-Maria Fall; Ronald B. Gillam – Grantee Submission, 2022
Purpose: Our aim was to evaluate the psychometric properties of the online administered format of the Test of Narrative Language--Second Edition (TNL-2; Gillam & Pearson, 2017), given the importance of assessing children's narrative ability and the considerable absence of psychometric studies of spoken language assessments administered online.…
Descriptors: Computer Assisted Testing, Language Tests, Story Telling, Language Impairments
Li, Haiying; Cai, Zhiqiang; Graesser, Arthur – Grantee Submission, 2018
In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries.…
Descriptors: Computer Assisted Testing, Scoring, Semantics, Evaluation Methods
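The LSA similarity described in the entry above is conventionally computed as the cosine similarity between a target summary's vector and a model summary's vector in the latent semantic space. As a minimal sketch (the toy vectors below are hypothetical; in a real LSA pipeline they would come from projecting term counts onto SVD-derived latent dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional LSA vectors for illustration only.
target_summary = [0.8, 0.1, 0.3]
model_summary = [0.9, 0.2, 0.2]

score = cosine_similarity(target_summary, model_summary)
# Scores near 1.0 indicate the target summary is semantically
# close to the model summary; scores near 0.0 indicate little overlap.
```

The cited study's contribution is in how the exemplar summaries are obtained (via crowdsourcing), not in the similarity measure itself.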
Peer reviewed
Direct link
Scoular, Claire; Care, Esther – Educational Assessment, 2019
Recent educational and psychological research has highlighted shifting workplace requirements and the changes required to equip the emerging workforce with skills for the 21st century. The emergence of these demands highlights the issues, and underscores the importance, of new methods of assessment. This study addresses some of the issues by describing a scoring…
Descriptors: Cooperation, Problem Solving, Scoring, 21st Century Skills
Peer reviewed
Direct link
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
Peer reviewed
PDF on ERIC Download full text
Uzun, Kutay – Contemporary Educational Technology, 2018
Managing crowded classes in terms of classroom assessment is a difficult task due to the amount of time which needs to be devoted to providing feedback to student products. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means to overcome this problem. Secondarily, the study aimed to test…
Descriptors: Computer Assisted Testing, Essays, Scoring, English Literature
Peer reviewed
Direct link
O'Leary, Michael; Scully, Darina; Karakolidis, Anastasios; Pitsia, Vasiliki – European Journal of Education, 2018
The role of digital technology in assessment has received a great deal of attention in recent years. Naturally, technology offers many practical benefits, such as increased efficiency with regard to the design, implementation and scoring of existing assessments. More importantly, it also has the potential to have profound, transformative effects…
Descriptors: Computer Assisted Testing, Educational Technology, Technology Uses in Education, Evaluation Methods
Peer reviewed
PDF on ERIC Download full text
Al Habbash, Maha; Alsheikh, Negmeldin; Liu, Xu; Al Mohammedi, Najah; Al Othali, Safa; Ismail, Sadiq Abdulwahed – International Journal of Instruction, 2021
This convergent mixed-methods study aimed to explore the English context of the widely used Emirates Standardized Test (EmSAT) by juxtaposing it with its sequel, the International English Language Testing System (IELTS). For this purpose, the study used the Common European Framework of Reference (CEFR) international standards, which are used as a…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Guidelines
Doris Zahner; Jeffrey T. Steedle; James Soland; Catherine Welch; Qi Qin; Kathryn Thompson; Richard Phelps – Online Submission, 2023
The "Standards for Educational and Psychological Testing" have served as a cornerstone for best practices in assessment. As the field evolves, so must these standards, with regular revisions ensuring they reflect current knowledge and practice. The National Council on Measurement in Education (NCME) conducted a survey to gather feedback…
Descriptors: Standards, Educational Assessment, Psychological Testing, Best Practices
Peer reviewed
PDF on ERIC Download full text
Reddick, Rachel – International Educational Data Mining Society, 2019
One significant challenge in the field of measuring ability is measuring the current ability of a learner while they are learning. Many forms of inference become computationally complex in the presence of time-dependent learner ability, and are not feasible to implement in an online context. In this paper, we demonstrate an approach which can…
Descriptors: Measurement Techniques, Mathematics, Assignments, Learning
Sterett H. Mercer; Joanna E. Cannon – Grantee Submission, 2022
We evaluated the validity of an automated approach to learning progress assessment (aLPA) for English written expression. Participants (n = 105) were students in Grades 2-12 who had parent-identified learning difficulties and received academic tutoring through a community-based organization. Participants completed narrative writing samples in the…
Descriptors: Elementary School Students, Secondary School Students, Learning Problems, Learning Disabilities
Peer reviewed
Direct link
Xu, Jing; Jones, Edmund; Laxton, Victoria; Galaczi, Evelina – Assessment in Education: Principles, Policy & Practice, 2021
Recent advances in machine learning have made automated scoring of learner speech widespread, and yet validation research that provides support for applying automated scoring technology to assessment is still in its infancy. Both the educational measurement and language assessment communities have called for greater transparency in describing…
Descriptors: Second Language Learning, Second Language Instruction, English (Second Language), Computer Software
Peer reviewed
Direct link
Fairbairn, Judith; Spiby, Richard – European Journal of Special Needs Education, 2019
Language test developers have a responsibility to ensure that their tests are accessible to test takers of various backgrounds and characteristics and also that they have the opportunity to perform to the best of their ability. This principle is widely recognised by educational and language testing associations in guidelines for the production and…
Descriptors: Testing, Language Tests, Test Construction, Testing Accommodations
New York State Education Department, 2022
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Paper-Based Field Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing