Showing 1 to 15 of 20 results
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT has introduced major updates, changing test lengths and testing times and providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Peer reviewed
Koch, Marco; Spinath, Frank M.; Greiff, Samuel; Becker, Nicolas – Journal of Intelligence, 2022
Figural matrices tasks are one of the most prominent item formats used in intelligence tests, and their relevance for the assessment of cognitive abilities is unquestionable. However, despite the endeavors of the open science movement to make scientific research accessible at all levels, there is a lack of royalty-free figural matrices tests. The Open…
Descriptors: Intelligence, Intelligence Tests, Computer Assisted Testing, Test Items
Peer reviewed
Isler, Cemre; Aydin, Belgin – International Journal of Assessment Tools in Education, 2021
This study describes the development and validation of the Computerized Oral Proficiency Test of English as a Foreign Language (COPTEFL). The test aims to assess the speaking proficiency levels of students in Anadolu University School of Foreign Languages (AUSFL). For this purpose, three monologic tasks were developed based on the Global…
Descriptors: Test Construction, Construct Validity, Interrater Reliability, Scores
Sara Faye Maher – ProQuest LLC, 2020
To meet the needs of complex and/or underserved patient populations, health care professionals must possess diverse backgrounds, qualities, and skill sets. Holistic review has been used to diversify student admissions through examination of non-cognitive attributes of health care applicants. The objective of this study was to develop a novel…
Descriptors: Computer Assisted Testing, Pilot Projects, Measures (Individuals), Reliability
Peer reviewed
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed
Kosan, Aysen Melek Aytug; Koç, Nizamettin; Elhan, Atilla Halil; Öztuna, Derya – International Journal of Assessment Tools in Education, 2019
The Progress Test (PT) is a form of assessment that simultaneously measures the ability levels of all students in a given educational program and their progress over time by providing them with the same questions and repeating the process at regular intervals with parallel tests. Our objective was to generate an item bank for the PT and to examine the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Medical Education
Peer reviewed
Küchemann, Stefan; Malone, Sarah; Edelsbrunner, Peter; Lichtenberger, Andreas; Stern, Elsbeth; Schumacher, Ralph; Brünken, Roland; Vaterlaus, Andreas; Kuhn, Jochen – Physical Review Physics Education Research, 2021
Representational competence is essential for the acquisition of conceptual understanding in physics. It enables the interpretation of diagrams, graphs, and mathematical equations, and the ability to relate these to one another as well as to observations and experimental outcomes. In this study, we present the initial validation of a newly developed…
Descriptors: Physics, Science Instruction, Teaching Methods, Concept Formation
Peer reviewed
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
Peer reviewed
Bartels, Hauke; Geelan, David; Kulgemeyer, Christoph – International Journal of Science Education, 2019
Measuring teachers' skills in carrying out the complex tasks required in teaching is an important means of evaluating the effectiveness of teacher education but remains a challenging activity to conduct in practice. It is necessary to optimise approaches for usability and effectiveness along a continuum from low-effort and low-authenticity measures…
Descriptors: Science Teachers, Teacher Competency Testing, Performance Based Assessment, Physics
Peer reviewed
Timpe-Laughlin, Veronika; Choi, Ikkyu – Language Assessment Quarterly, 2017
Pragmatics has been a key component of language competence frameworks. While the majority of second/foreign language (L2) pragmatics tests have targeted productive skills, the assessment of receptive pragmatic skills remains a developing field. This study explores validation evidence for a test of receptive L2 pragmatic ability called the American…
Descriptors: Pragmatics, Language Tests, Test Validity, Receptive Language
Peer reviewed
Jin, Yan; Yan, Ming – Language Assessment Quarterly, 2017
One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study, we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed. Analyses of the…
Descriptors: Writing Tests, Computer Assisted Testing, Computer Literacy, Construct Validity
Peer reviewed
Kebble, Paul Graham – The EUROCALL Review, 2016
The C-Test as a tool for assessing language competence has existed for nearly 40 years, having been designed by Professors Klein-Braley and Raatz for implementation in German and English. Much research has been conducted over the ensuing years, particularly with regard to reliability and construct validity, for which it is reported to…
Descriptors: Language Tests, Computer Software, Test Construction, Test Reliability
Peer reviewed
Dermo, John; Boyne, James – Practitioner Research in Higher Education, 2014
We describe a study of innovative assessment practice conducted during 2009-12, evaluating an assessed coursework task on a final-year Medical Genetics module for Biomedical Science undergraduates. An authentic e-assessment coursework task was developed, integrating objectively marked online questions with an online DNA sequence analysis tool…
Descriptors: Biomedicine, Medical Education, Computer Assisted Testing, Courseware
Peer reviewed
Greiff, Samuel; Wustenberg, Sascha; Funke, Joachim – Applied Psychological Measurement, 2012
This article addresses two unsolved measurement issues in dynamic problem solving (DPS) research: (a) the unsystematic construction of DPS tests, which makes comparing results across studies difficult, and (b) the use of time-intensive single tasks, which leads to severe reliability problems. To solve these issues, the MicroDYN approach is…
Descriptors: Problem Solving, Tests, Measurement, Structural Equation Models
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar