Showing 1 to 15 of 1,167 results
Peer reviewed
Direct link
Miguel A. García-Pérez – Educational and Psychological Measurement, 2024
A recurring question regarding Likert items is whether the discrete steps that this response format allows represent constant increments along the underlying continuum. This question appears unsolvable because Likert responses carry no direct information to this effect. Yet, any item administered in Likert format can identically be administered…
Descriptors: Likert Scales, Test Construction, Test Items, Item Analysis
Peer reviewed
Direct link
Chan Zhang; Shuaiying Cao; Minglei Wang; Jiangyan Wang; Lirui He – Field Methods, 2025
Previous research on grid questions has mostly focused on their comparability with the item-by-item method and the use of shading to help respondents navigate through a grid. This study extends prior work by examining whether lexical similarity among grid items affects how respondents answer the questions in an experiment where we manipulated…
Descriptors: Foreign Countries, Surveys, Test Construction, Design
Peer reviewed
Direct link
Gregory H. Peterson; Michael B. Kozlowski – Measurement and Evaluation in Counseling and Development, 2024
This study aimed to develop a scale to assess counselors' ability to provide counseling to address the mental health impacts of climate change. Over three studies, we provide reliability and validity evidence for a Climate Change Counseling Scale (3CS) in a large representative sample of counselors across the US. In studies one and two, an…
Descriptors: Counselors, Mental Health, Climate, Test Construction
Peer reviewed
Direct link
Lawrence Scahill; Luc Lecavalier; Michael C. Edwards; Megan L. Wenzell; Leah M. Barto; Arielle Mulligan; Auscia T. Williams; Opal Ousley; Cynthia B. Sinha; Christopher A. Taylor; Soo Youn Kim; Laura M. Johnson; Scott E. Gillespie; Cynthia R. Johnson – Autism: The International Journal of Research and Practice, 2024
This report presents a new parent-rated outcome measure of insomnia for children with autism spectrum disorder. Parents of 1185 children with autism spectrum disorder (aged 3-12; 80.3% male) completed the first draft of the measure online. Factor and item response theory analyses reduced the set of 40 items to the final 21-item Pediatric Insomnia…
Descriptors: Autism Spectrum Disorders, Children, Sleep, Test Construction
Peer reviewed
PDF on ERIC Download full text
Hongwen Guo; Matthew S. Johnson; Daniel F. McCaffrey; Lixong Gu – ETS Research Report Series, 2024
The multistage testing (MST) design has been gaining attention and popularity in educational assessments. For testing programs that have small test-taker samples, it is challenging to calibrate new items to replenish the item pool. In the current research, we used the item pools from an operational MST program to illustrate how research studies…
Descriptors: Test Items, Test Construction, Sample Size, Scaling
Peer reviewed
Direct link
Zhang, Susu; Li, Anqi; Wang, Shiyu – Educational Measurement: Issues and Practice, 2023
In computer-based tests allowing revision and reviews, examinees' sequence of visits and answer changes to questions can be recorded. The variable-length revision log data introduce new complexities to the collected data but, at the same time, provide additional information on examinees' test-taking behavior, which can inform test development and…
Descriptors: Computer Assisted Testing, Test Construction, Test Wiseness, Test Items
Peer reviewed
Direct link
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Peer reviewed
PDF on ERIC Download full text
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimations of Item Response Theory under maximum likelihood and Bayesian approaches in different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
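The abstract compares maximum likelihood and Bayesian estimation in IRT under Monte Carlo conditions. The sketch below illustrates the general idea only and is not the authors' simulation design: it generates 2PL responses with known item parameters and recovers ability by bounded maximum likelihood and by a Bayesian EAP estimate with a standard-normal prior; the model, sample size, and prior are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n_items, n_persons = 30, 500
a = rng.uniform(0.8, 2.0, n_items)            # discrimination (assumed)
b = rng.normal(0.0, 1.0, n_items)             # difficulty (assumed)
theta_true = rng.normal(0.0, 1.0, n_persons)  # true abilities

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

responses = rng.binomial(1, p_correct(theta_true[:, None], a, b))

def neg_log_lik(theta, x):
    p = p_correct(theta, a, b)
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Maximum likelihood: maximize the response likelihood per examinee.
theta_mle = np.array([
    minimize_scalar(neg_log_lik, args=(x,), bounds=(-4, 4), method="bounded").x
    for x in responses
])

# Bayesian EAP: posterior mean over a quadrature grid with a N(0, 1) prior.
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)

def eap(x):
    log_lik = np.sum(
        x[:, None] * np.log(p_correct(grid, a[:, None], b[:, None]))
        + (1 - x[:, None]) * np.log(1 - p_correct(grid, a[:, None], b[:, None])),
        axis=0,
    )
    post = np.exp(log_lik) * prior
    return np.sum(grid * post) / np.sum(post)

theta_eap = np.array([eap(x) for x in responses])
print("RMSE (MLE):", np.sqrt(np.mean((theta_mle - theta_true) ** 2)))
print("RMSE (EAP):", np.sqrt(np.mean((theta_eap - theta_true) ** 2)))
```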
Peer reviewed
Direct link
Meike Akveld; George Kinnear – International Journal of Mathematical Education in Science and Technology, 2024
Many universities use diagnostic tests to assess incoming students' preparedness for mathematics courses. Diagnostic test results can help students to identify topics where they need more practice and give lecturers a summary of strengths and weaknesses in their class. We demonstrate a process that can be used to make improvements to a mathematics…
Descriptors: Mathematics Tests, Diagnostic Tests, Test Items, Item Analysis
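The abstract describes a process for improving a mathematics diagnostic test using item data. A generic classical item-analysis sketch follows, not the authors' specific process: item difficulty as the proportion correct and discrimination as the correlation of each item with the rest-of-test score, with made-up data and illustrative flagging thresholds.

```python
import numpy as np

def item_analysis(responses):
    """responses: (n_students, n_items) array of 0/1 item scores."""
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)        # proportion correct per item
    rest = total[:, None] - responses          # rest-of-test score per item
    discrimination = np.array([
        np.corrcoef(responses[:, j], rest[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])
    return difficulty, discrimination

# Illustrative use: flag items that are too easy/hard or discriminate poorly.
rng = np.random.default_rng(1)
data = rng.binomial(1, 0.7, size=(200, 10))
diff, disc = item_analysis(data)
flagged = [j for j in range(10) if not (0.2 <= diff[j] <= 0.9) or disc[j] < 0.2]
print("Items to review:", flagged)
```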
Peer reviewed
PDF on ERIC Download full text
Achmad Rante Suparman; Eli Rohaeti; Sri Wening – Journal on Efficiency and Responsibility in Education and Science, 2024
This study focuses on developing a five-tier chemical diagnostic test delivered as a computer-based test, with 11 assessment categories scored from 0 to 10. A total of 20 items produced were validated by education experts, material experts, measurement experts, and media experts, and an average Aiken index > 0.70 was…
Descriptors: Chemistry, Diagnostic Tests, Computer Assisted Testing, Credits
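The abstract reports expert validation with an average Aiken index above 0.70. A minimal sketch of Aiken's V for a single item follows; the ratings and rating scale below are made up for illustration.

```python
def aikens_v(ratings, lowest=1, categories=5):
    """Aiken's V = sum(r_i - lowest) / (n * (categories - 1))."""
    n = len(ratings)
    s = sum(r - lowest for r in ratings)
    return s / (n * (categories - 1))

# Example: four experts rate an item 5, 4, 5, 4 on a 1-5 relevance scale.
print(aikens_v([5, 4, 5, 4]))  # 0.875, above the 0.70 cut-off
```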
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Peer reviewed
PDF on ERIC Download full text
Mehmet Kanik – International Journal of Assessment Tools in Education, 2024
ChatGPT has generated a surge of interest, prompting people to explore its use in different tasks. However, before allowing it to replace humans, its capabilities should be investigated. As ChatGPT has potential for use in testing and assessment, this study aims to investigate the questions generated by ChatGPT by comparing them to those written by a course…
Descriptors: Artificial Intelligence, Testing, Multiple Choice Tests, Test Construction
Peer reviewed
Direct link
Marjo Sirén; Sari Sulkunen – Scandinavian Journal of Educational Research, 2025
This study examined which aspects of critical literacy are focused on in the reading literacy assessment for the Programme for International Student Assessment (PISA) 2018 and what kinds of texts are related to the critical literacy items in the test. Based on theory-oriented qualitative content analysis, critical literacy items in PISA…
Descriptors: International Assessment, Achievement Tests, Foreign Countries, Secondary School Students
Peer reviewed
Direct link
Elkhatat, Ahmed M. – International Journal for Educational Integrity, 2022
Examinations form part of the assessment processes that constitute the basis for benchmarking individual educational progress, and must consequently fulfill credibility, reliability, and transparency standards in order to promote learning outcomes and ensure academic integrity. A randomly selected question examination (RSQE) is considered to be an…
Descriptors: Integrity, Monte Carlo Methods, Credibility, Reliability
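The abstract evaluates randomly selected question examinations (RSQE) with Monte Carlo methods. The sketch below illustrates one question such a simulation can answer (it is not the paper's actual procedure): the expected number of items shared by two randomly drawn exams, under an assumed pool size and exam length.

```python
import random

def expected_overlap(pool_size=100, exam_size=20, trials=10_000, seed=42):
    """Monte Carlo estimate of items shared by two randomly drawn exams."""
    rng = random.Random(seed)
    pool = range(pool_size)
    total = 0
    for _ in range(trials):
        exam_a = set(rng.sample(pool, exam_size))
        exam_b = set(rng.sample(pool, exam_size))
        total += len(exam_a & exam_b)
    return total / trials

# Analytically, the expectation is exam_size**2 / pool_size = 4.0 here.
print(expected_overlap())
```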
Peer reviewed
Direct link
Yalalem Assefa; Bekalu Tadesse Moges; Shouket Ahmad Tilwani – Journal of Applied Research in Higher Education, 2024
Purpose: Lifelong learning has become one of the most interesting areas of research. Hence, the current study aimed to develop and validate a tool for assessing how well people working in higher education institutions engage in lifelong learning. Design/methodology/approach: A review of theories in the literature and experts'…
Descriptors: Lifelong Learning, Measures (Individuals), Likert Scales, Test Construction