Publication Date
| Date Range | Records |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Records |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Records |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Wuji Lin; Chenxi Lv; Jiejie Liao; Yuan Hu; Yutong Liu; Jingyuan Lin – npj Science of Learning, 2024
The debate about whether the capacity of working memory (WM) varies with the complexity of memory items continues. This study employed novel experimental materials to investigate the role of complexity in WM capacity. Across seven experiments, we explored the relationship between complexity and WM capacity. The results indicated that the…
Descriptors: Short Term Memory, Difficulty Level, Retention (Psychology), Test Items
Peer reviewed. Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
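To make "shallow metrics" concrete, here is a toy sketch of the kind of surface-level difficulty score such methods compute (my own illustration, not the authors' approach; the function and its weights are invented for this example):

```python
import re

def shallow_difficulty(question: str, passage: str) -> float:
    """Toy 'shallow' difficulty metric: question length, mean word length,
    and lexical overlap with the passage. Higher ~ harder. Captures none
    of the cognitive demands the abstract refers to."""
    q_words = re.findall(r"[a-zA-Z']+", question.lower())
    p_words = set(re.findall(r"[a-zA-Z']+", passage.lower()))
    if not q_words:
        return 0.0
    mean_len = sum(map(len, q_words)) / len(q_words)
    overlap = sum(w in p_words for w in q_words) / len(q_words)
    # Long questions with long words and little passage overlap score harder.
    return len(q_words) / 10 + mean_len / 5 + (1.0 - overlap)

print(shallow_difficulty(
    "Why does the author contrast the two experiments?",
    "The passage describes two experiments on memory and their results."))
```

Metrics of this sort are cheap to compute but blind to reasoning depth, which is precisely the limitation the abstract highlights.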
Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on other traits. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
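For reference, the unidimensional LNRT model that this line of work extends is usually written (van der Linden's formulation; the authors' alternative extension is not shown in this snippet) as:

```latex
\log T_{ij} = \beta_j - \tau_i + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N\!\left(0, \sigma_j^2\right)
```

where $T_{ij}$ is person $i$'s response time on item $j$, $\beta_j$ the item's time intensity, and $\tau_i$ the person's speed. The fully compensatory multidimensional versions the abstract describes replace $\tau_i$ with a weighted combination of traits, $\sum_k a_{jk}\tau_{ik}$, so slowness on one trait can be offset by speed on another.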
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
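As a minimal illustration of the AQG task itself (a naive cloze-style generator written for this listing, not a system from the article):

```python
import random
import re

def cloze_questions(sentence: str, n: int = 1):
    """Naive cloze-style question generation: blank out a long-ish word.
    Real AQG systems use NLP pipelines or language models; this toy only
    shows the input/output shape of the task."""
    words = re.findall(r"[A-Za-z]{5,}", sentence)  # crude 'content word' filter
    items = []
    for w in random.sample(words, min(n, len(words))):
        items.append({"stem": sentence.replace(w, "_____", 1), "answer": w})
    return items

print(cloze_questions("Working memory capacity varies with item complexity."))
```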
Sam von Gillern; Chad Rose; Amy Hutchison – British Journal of Educational Technology, 2024
As teachers are purveyors of digital citizenship and their perspectives influence classroom practice, it is important to understand teachers' views on digital citizenship. This study establishes the Teachers' Perceptions of Digital Citizenship Scale (T-PODS) as a survey instrument for scholars to investigate educators' views on digital citizenship…
Descriptors: Citizenship, Digital Literacy, Teacher Attitudes, Test Items
Kofi Nkonkonya Mpuangnan – Review of Education, 2024
Assessment practices play a crucial role in fostering student learning and guiding instructional decision-making. The ability to construct effective test items is of utmost importance in evaluating student learning and shaping instructional strategies. This study aims to investigate the skills of Ghanaian basic school teachers in test item…
Descriptors: Test Items, Test Construction, Student Evaluation, Foreign Countries
Xu, Yufeng; Liu, Huinan; Chen, Bo; Huang, Sihui; Zhong, Chongyu – Chemistry Education Research and Practice, 2023
Scientific methods have received widespread attention in recent years. Based on the analytical framework derived from Brandon's matrix consisting of four categories of scientific methods, this paper aims to conduct a content analysis to examine how the diversity of scientific methods is represented in college entrance chemistry examination papers…
Descriptors: College Entrance Examinations, Chemistry, Scientific Methodology, Test Items
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
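A common way to formalize the DIF idea mentioned above (standard 2PL notation given as background, not necessarily the estimation method Finch develops) is to let item parameters depend on group membership $g$:

```latex
P\left(X_{ij}=1 \mid \theta_i, g\right)
  = \frac{\exp\left[a_{jg}\left(\theta_i - b_{jg}\right)\right]}
         {1 + \exp\left[a_{jg}\left(\theta_i - b_{jg}\right)\right]}
```

Item $j$ shows DIF when $a_{jg}$ or $b_{jg}$ differ across groups for examinees at the same $\theta$: uniform DIF shifts only $b_{jg}$, while nonuniform DIF also changes $a_{jg}$.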
Ntumi, Simon; Agbenyo, Sheilla; Bulala, Tapela – Shanlax International Journal of Education, 2023
There is no need or point in testing the knowledge, attributes, traits, behaviours, or abilities of an individual if the information obtained from the test is inaccurate. By and large, however, the estimation of the psychometric properties of test items in classrooms seems to have been completely ignored, or to be dying a slow death, in most testing environments. In…
Descriptors: Psychometrics, Accuracy, Test Validity, Factor Analysis
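As a concrete example of the "psychometric properties of test items" the authors argue are neglected, here is a minimal classical-test-theory item analysis (an illustrative sketch, not the procedure used in the study):

```python
import numpy as np

def item_statistics(scores: np.ndarray):
    """Classical item analysis for a persons-by-items matrix of 0/1 scores.
    Returns per-item difficulty (proportion correct) and discrimination
    (corrected item-total point-biserial correlation)."""
    n_persons, n_items = scores.shape
    difficulty = scores.mean(axis=0)        # p-value of each item
    total = scores.sum(axis=1)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]         # exclude item j from the total
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return difficulty, discrimination

# Example: six examinees, four items
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 0],
                   [1, 1, 1, 1],
                   [0, 0, 0, 1],
                   [1, 1, 0, 1],
                   [0, 1, 0, 0]])
p, r = item_statistics(scores)
print("difficulty:", p)
print("discrimination:", r)
```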
Wu, Tong; Kim, Stella Y.; Westine, Carl – Educational and Psychological Measurement, 2023
For large-scale assessments, data are often collected with missing responses. Despite the wide use of item response theory (IRT) in many testing programs, however, the existing literature offers little insight into the effectiveness of various approaches to handling missing responses in the context of scale linking. Scale linking is commonly used…
Descriptors: Data Analysis, Responses, Statistical Analysis, Measurement
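For context, scale linking places parameters from one form onto another's metric through a linear transformation; the mean/sigma method is one standard instance (shown as background; the article's comparison of missing-response treatments is not reproduced here):

```latex
\theta^{Y} = A\,\theta^{X} + B,
\qquad a_j^{Y} = \frac{a_j^{X}}{A},
\qquad b_j^{Y} = A\,b_j^{X} + B
```

Here $A$ and $B$ are computed from the means and standard deviations of the common items' difficulty estimates on the two forms, so mishandled missing responses bias those estimates and, in turn, the linking constants.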
Pan, Yiqin; Livne, Oren; Wollack, James A.; Sinharay, Sandip – Educational Measurement: Issues and Practice, 2023
In computerized adaptive testing, overexposure of items in the bank is a serious problem and might result in item compromise. We develop an item selection algorithm that utilizes the entire bank well and reduces the overexposure of items. The algorithm is based on collaborative filtering and selects an item in two stages. In the first stage, a set…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
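The abstract gives only the outline of the two-stage algorithm, so the following is a loose, hypothetical reconstruction of that outline (the similarity measure, the candidate-set rule, and all names here are my assumptions, not the published method):

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(answered, x, R, a, b, theta_hat, k=10):
    """Two-stage pick: collaborative filtering forms a candidate set,
    then the most informative candidate is administered.

    answered  : indices of items already administered
    x         : current examinee's 0/1 responses on those items
    R         : past response matrix, shape (n_examinees, n_items)
    a, b      : 2PL parameters for the whole bank
    theta_hat : current ability estimate
    """
    unused = [j for j in range(R.shape[1]) if j not in answered]

    # Stage 1: weight past examinees by agreement with the current
    # response pattern and predict scores on the unused items.
    sims = np.array([np.mean(R[i, answered] == x) for i in range(R.shape[0])])
    pred = (sims / sims.sum()) @ R[:, unused]

    # Keep the k items the filter is least certain about (predictions
    # nearest 0.5) as the candidate set; this spreads usage over the bank.
    candidates = [unused[i] for i in np.argsort(np.abs(pred - 0.5))[:k]]

    # Stage 2: administer the most informative candidate at theta_hat.
    infos = [item_info(theta_hat, a[j], b[j]) for j in candidates]
    return candidates[int(np.argmax(infos))]
```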
Pan, Yiqin; Wollack, James A. – Journal of Educational Measurement, 2021
As technology has improved, item preknowledge has become a common concern in test security. The present study proposes an unsupervised-learning-based approach to detecting compromised items. The approach contains three steps: (1) classify responses of each examinee as either…
Descriptors: Test Items, Cheating, Artificial Intelligence, Identification
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2022
A testlet comprises a set of items based on a common stimulus. When testlets are used in a test, the local independence assumption may be violated, in which case it would not be appropriate to use traditional item response theory models for tests that include testlets. When the testlet is discussed, one of the most…
Descriptors: Test Items, Test Theory, Models, Sample Size
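One widely used response to the violation described above is the testlet response model, which augments a standard 2PL with a person-specific testlet effect (given as general background; the article's own models are not reproduced):

```latex
P\left(X_{ij}=1\right)
  = \frac{\exp\left[a_j\left(\theta_i - b_j - \gamma_{i\,d(j)}\right)\right]}
         {1 + \exp\left[a_j\left(\theta_i - b_j - \gamma_{i\,d(j)}\right)\right]}
```

The random effect $\gamma_{i\,d(j)}$ absorbs person $i$'s extra dependence on the testlet $d(j)$ that contains item $j$; setting every $\gamma$ to zero recovers the ordinary 2PL.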
Choe, Edison M.; Han, Kyung T. – Journal of Educational Measurement, 2022
In operational testing, item response theory (IRT) models for dichotomous responses are popular for measuring a single latent construct, θ, such as cognitive ability in a content domain. Estimates of θ, also called IRT scores or θ̂, can be computed using estimators based on the likelihood function, such as maximum likelihood…
Descriptors: Scores, Item Response Theory, Test Items, Test Format
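For reference, the likelihood-based estimation the abstract refers to maximizes, for a response pattern $x = (x_1, \ldots, x_J)$,

```latex
L(\theta) = \prod_{j=1}^{J} P_j(\theta)^{\,x_j}\left[1 - P_j(\theta)\right]^{1 - x_j},
\qquad \hat{\theta} = \arg\max_{\theta} \log L(\theta)
```

where $P_j(\theta)$ is the model-implied probability of a correct response to item $j$. Bayesian alternatives such as MAP and EAP weight this likelihood by a prior on $\theta$.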
