Showing 1 to 15 of 107 results
Peer reviewed
Stella Y. Kim; Sungyeun Kim – Educational Measurement: Issues and Practice, 2025
This study presents several multivariate generalizability theory designs for analyzing test forms based on automatic item generation (AIG). The study used real data to illustrate the analysis procedure and discuss practical considerations. We collected the data from two groups of students, each group receiving a different form generated by AIG. A…
Descriptors: Generalizability Theory, Automation, Test Items, Students
Miranda Kucera; K. Kawena Begay – Communique, 2025
While the field advocates for a diversified and comprehensive professional role (National Association of School Psychologists, 2020), school psychologists have long spent most of their time in assessment-related activities (Farmer et al., 2021), averaging about eight cognitive evaluations monthly (Benson et al., 2020). Assessment practices have…
Descriptors: Equal Education, Student Evaluation, Evaluation Methods, Standardized Tests
Miranda Kucera; K. Kawena Begay – Communique, 2025
In Part 1 of this series, the authors briefly reviewed some challenges inherent in using standardized tools with students who are not well represented in norming data. To help readers clearly conceptualize the framework steps, the authors present two case studies that showcase how a nonstandardized approach to assessment can be individualized to…
Descriptors: Equal Education, Student Evaluation, Evaluation Methods, Standardized Tests
Peer reviewed
Leventhal, Brian C.; Gregg, Nikole; Ames, Allison J. – Measurement: Interdisciplinary Research and Perspectives, 2022
Response styles introduce construct-irrelevant variance as a result of respondents systematically responding to Likert-type items regardless of content. Methods to account for response styles through data analysis as well as approaches to mitigating the effects of response styles during data collection have been well-documented. Recent approaches…
Descriptors: Response Style (Tests), Item Response Theory, Test Items, Likert Scales
Peer reviewed
Neuert, Cornelia E.; Meitinger, Katharina; Behr, Dorothée – Sociological Methods & Research, 2023
The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a usual web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), typically as open-ended questions. A second possibility of administering probes is…
Descriptors: Internet, Online Surveys, Test Items, Evaluation
Peer reviewed
Mihyun Son; Minsu Ha – Education and Information Technologies, 2025
Digital literacy is essential for scientific literacy in a digital world. Although the NGSS Practices include many activities that require digital literacy, most studies have examined digital literacy from a generic perspective rather than a curricular context. This study aimed to develop a self-report tool to measure elements of digital literacy…
Descriptors: Test Construction, Measures (Individuals), Digital Literacy, Scientific Literacy
Peer reviewed
PDF on ERIC (full text available)
Chowdhury, Pinaki – Online Submission, 2021
Collecting data on learners' performance across different chemistry content areas, and analysing those data to identify learners' knowledge and understanding of the related content, is a major task of Chemistry Education Research. Collecting such data on learners' content knowledge requires a standard measuring tool.…
Descriptors: Data Collection, Standards, Chemistry, Scientific Concepts
Peer reviewed
PDF on ERIC (full text available)
Hongwen Guo; Matthew S. Johnson; Daniel F. McCaffrey; Lixiong Gu – ETS Research Report Series, 2024
The multistage testing (MST) design has been gaining attention and popularity in educational assessments. For testing programs that have small test-taker samples, it is challenging to calibrate new items to replenish the item pool. In the current research, we used the item pools from an operational MST program to illustrate how research studies…
Descriptors: Test Items, Test Construction, Sample Size, Scaling
Peer reviewed
Changiz Mohiyeddini – Anatomical Sciences Education, 2025
Medical schools are required to assess and evaluate their curricula and to develop exam questions with strong reliability and validity evidence, often based on data derived from statistically small samples of medical students. Achieving a large enough sample to reliably and validly evaluate courses, assessments, and exam questions would require…
Descriptors: Medical Education, Medical Students, Medical Schools, Tests
Peer reviewed
PDF on ERIC (full text available)
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – Journal of Educational Data Mining, 2021
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal…
Descriptors: National Competency Tests, Engineering Education, Data Collection, Data Analysis
Peer reviewed
Patricia Hadler – Sociological Methods & Research, 2025
Probes are follow-ups to survey questions used to gain insights into respondents' understanding of and responses to those questions. They are usually administered as open-ended questions, primarily in the context of questionnaire pretesting. Due to the decreased cost of data collection for open-ended questions in web surveys, researchers have argued…
Descriptors: Online Surveys, Discovery Processes, Test Items, Data Collection
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
PDF on ERIC (full text available)
Jafri, Mairaj – Waikato Journal of Education, 2022
This paper reports how I addressed the issue of extensive missing values in my PhD study, "Digital Competencies of High School Mathematics Teachers." I collected data using an online survey. Several methods exist for addressing missing values; I utilised multiple imputation (MI) because it provides more accurate results. The mean…
Descriptors: Data Collection, Research Problems, Doctoral Dissertations, Online Surveys
Peer reviewed
An, Lily Shiao; Ho, Andrew Dean; Davis, Laurie Laughlin – Educational Measurement: Issues and Practice, 2022
Technical documentation for educational tests focuses primarily on properties of individual scores at single points in time. Reliability, standard errors of measurement, item parameter estimates, fit statistics, and linking constants are standard technical features that external stakeholders use to evaluate items and individual scale scores.…
Descriptors: Documentation, Scores, Evaluation Methods, Longitudinal Studies
Peer reviewed
Jo Lein; Jennifer Gripado – Learning Professional, 2024
There are many valuable sources of evaluation data, including -- but not limited to -- professional learning participants. In the authors' work on leadership development and organizational learning for Tulsa Public Schools in Oklahoma, they regularly ask educators to share feedback and perceptions of usefulness of their professional learning. The…
Descriptors: Participant Satisfaction, Surveys, Test Items, Feedback (Response)