Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 12
Since 2016 (last 10 years): 579
Since 2006 (last 20 years): 1093
Descriptor
Foreign Countries: 1106
Statistical Analysis: 1106
Feedback (Response): 587
Questionnaires: 404
Student Attitudes: 289
Emotional Response: 236
College Students: 213
Teaching Methods: 208
Comparative Analysis: 205
English (Second Language): 205
Correlation: 199
Author
Farrokhi, Farahman: 6
Lai, Ching-San: 4
Xu, Jianzhong: 4
Brown, Gavin T. L.: 3
Du, Jianxia: 3
Fan, Xitao: 3
Glas, Cees A. W.: 3
Hou, Huei-Tse: 3
Hu, Guangwei: 3
Huang, Hung-Yu: 3
Kaivanpanah, Shiva: 3
Location
Australia: 97
Turkey: 71
Germany: 65
Taiwan: 56
Canada: 55
United Kingdom: 54
Iran: 53
China: 51
Netherlands: 46
Japan: 33
Hong Kong: 31
What Works Clearinghouse Rating
Does not meet standards: 1
Javed Iqbal; Tanweer Ul Islam – Educational Research and Evaluation, 2024
Economic efficiency demands accurate assessment of individual ability for selection purposes. This study investigates Classical Test Theory (CTT) and Item Response Theory (IRT) for estimating true ability and ranking individuals. Two Monte Carlo simulations and real data analyses were conducted. Results suggest a slight advantage for IRT, but…
Descriptors: Item Response Theory, Monte Carlo Methods, Ability, Statistical Analysis
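The CTT-versus-IRT comparison above can be illustrated with a minimal Monte Carlo sketch. This is not the study's actual design; the sample sizes, item parameters, and the simplification of treating item parameters as known are all assumptions made here for illustration. It simulates 2PL responses, ranks examinees by the CTT sum score and by an IRT maximum-likelihood ability estimate, and compares both rankings with the true ability.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup (not the study's design): 500 examinees, 30 items
# generated under a 2PL IRT model.
n_persons, n_items = 500, 30
theta = rng.normal(0.0, 1.0, n_persons)   # true abilities
a = rng.uniform(0.5, 2.0, n_items)        # item discriminations
b = rng.normal(0.0, 1.0, n_items)         # item difficulties

p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.random((n_persons, n_items)) < p).astype(float)

# CTT ability estimate: the unweighted sum score.
sum_scores = responses.sum(axis=1)

# IRT ability estimate: per-person ML via Newton-Raphson, treating the
# generating item parameters as known (a deliberate simplification).
def irt_mle(resp, a, b, n_iter=30):
    est = np.zeros(resp.shape[0])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-a * (est[:, None] - b)))
        grad = (a * (resp - p)).sum(axis=1)         # score function
        info = (a**2 * p * (1.0 - p)).sum(axis=1)   # Fisher information
        # Clip to keep perfect/zero scores (infinite MLE) finite.
        est = np.clip(est + grad / np.maximum(info, 1e-9), -6.0, 6.0)
    return est

theta_hat = irt_mle(responses, a, b)

def spearman(x, y):
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

r_ctt = spearman(theta, sum_scores)
r_irt = spearman(theta, theta_hat)
print(f"rank correlation with true ability  CTT: {r_ctt:.3f}  IRT: {r_irt:.3f}")
```

Because the 2PL MLE weights items by discrimination while the sum score does not, the two rankings can differ; under a Rasch model they would coincide exactly, since the sum score is then sufficient for ability.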
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Kuijpers, Renske E.; Visser, Ingmar; Molenaar, Dylan – Journal of Educational and Behavioral Statistics, 2021
Mixture models have been developed to enable detection of within-subject differences in responses and response times to psychometric test items. To enable mixture modeling of both responses and response times, a distributional assumption is needed for the within-state response time distribution. Since violations of the assumed response time…
Descriptors: Test Items, Responses, Reaction Time, Models
Ryan, Tracii; Henderson, Michael – Assessment & Evaluation in Higher Education, 2018
Assessment feedback allows students to obtain valuable information about how they can improve their future performance and learning strategies. However, research indicates that students are more likely to reject or ignore comments if they evoke negative emotional responses. Despite the importance of this issue, there is a lack of research…
Descriptors: Foreign Countries, College Students, Feedback (Response), Foreign Students
Luo, Jiaorong; Yang, Mingcheng; Wang, Ling – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
The increased Simon effect observed when the ratio of congruent trials is raised can be explained by both the attention-modulation account and the irrelevant stimulus-response (S-R) association learning account, whereas the reversed Simon effect observed when the ratio of incongruent trials is raised supports only the latter account. To investigate if…
Descriptors: Foreign Countries, Responses, Reaction Time, Accuracy
Vaheoja, Monika; Verhelst, N. D.; Eggen, T.J.H.M. – European Journal of Science and Mathematics Education, 2019
In this article, the authors applied profile analysis to Maths exam data to demonstrate how different exam forms, differing in difficulty and length, can be reported and easily interpreted. The results were presented for different groups of participants and for different institutions in different Maths domains by evaluating the balance. Some…
Descriptors: Feedback (Response), Foreign Countries, Statistical Analysis, Scores
Iannario, Maria; Manisera, Marica; Piccolo, Domenico; Zuccolotto, Paola – Sociological Methods & Research, 2020
In analyzing data from attitude surveys, it is common to consider the "don't know" responses as missing values. In this article, we present a statistical model commonly used for the analysis of responses/evaluations expressed on Likert scales and extended to take into account the presence of don't know responses. The main objective is to…
Descriptors: Response Style (Tests), Likert Scales, Statistical Analysis, Models
Erturk, Zafer; Oyar, Esra – International Journal of Assessment Tools in Education, 2021
Studies aiming to make cross-cultural comparisons should first establish measurement invariance across the groups to be compared, because results obtained from such comparisons may be artificial when measurement invariance cannot be established. The purpose of this study is to investigate the measurement invariance of the data obtained…
Descriptors: International Assessment, Foreign Countries, Attitude Measures, Mathematics
Rios, Joseph A. – Educational and Psychological Measurement, 2021
Low test-taking effort as a validity threat is common when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence…
Descriptors: Response Style (Tests), Statistical Analysis, Measurement, Comparative Analysis
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
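The tree-based idea in the entry above can be sketched as a recoding step. The function name and coding scheme below are my assumptions, not the authors' implementation: each raw response (with omissions coded as NaN) is split into two pseudo-items, one for whether a response was given and one for its accuracy, which is the standard preprocessing for a two-node IRTree.

```python
import numpy as np

# Hypothetical two-node IRTree recoding (illustrative only):
# node 1 models "responded (1) vs. omitted (0)";
# node 2 models "correct (1) vs. incorrect (0)", defined only when
# a response exists, so it stays NaN for omissions.
def irtree_recode(raw):
    raw = np.asarray(raw, dtype=float)            # omissions coded np.nan
    node1 = np.where(np.isnan(raw), 0.0, 1.0)     # response indicator
    node2 = np.where(np.isnan(raw), np.nan, raw)  # accuracy given response
    return node1, node2

raw = [[1.0, 0.0, np.nan],
       [np.nan, 1.0, 1.0]]
node1, node2 = irtree_recode(raw)
```

Fitting a separate IRT model to each node (with correlated person parameters) then lets omission propensity and proficiency be modeled jointly, which is what makes nonignorable missingness tractable.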
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
This study investigated whether item position effects lead to DIF when different test booklets are used. Lord's chi-square and Raju's unsigned area methods were applied under the 3PL model, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
Mousavi, Amin; Schmidt, Matthew; Squires, Vicki; Wilson, Ken – International Journal of Artificial Intelligence in Education, 2021
Greer and Mark's (2016) paper suggested and reviewed different methods, such as propensity score matching, for evaluating the effectiveness of intelligent tutoring systems. The current study aimed at assessing the effectiveness of an automated personalized feedback intervention implemented via the Student Advice Recommender Agent (SARA) in a first-year…
Descriptors: Automation, Feedback (Response), Intervention, College Freshmen
Zheng, Xiaying; Yang, Ji Seung – Measurement: Interdisciplinary Research and Perspectives, 2021
The purpose of this paper is to briefly introduce the two most common applications of multiple group item response theory (IRT) models, namely differential item functioning (DIF) detection and nonequivalent group score linking with simultaneous calibration. We illustrate how to conduct those analyses using the "Stata" item…
Descriptors: Item Response Theory, Test Bias, Computer Software, Statistical Analysis
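The entry above uses Stata's multiple-group IRT machinery; as a language-neutral illustration of what a DIF check computes, here is a logistic-regression DIF screen (a common alternative to the multiple-group IRT approach, not the paper's method) on simulated data. All quantities are invented for the sketch: a studied item is generated with uniform DIF against a focal group, and a Newton-Raphson fit recovers the group effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 2000 examinees, one studied item with uniform DIF.
n = 2000
group = rng.integers(0, 2, n).astype(float)   # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)               # ability (matching proxy)
true_dif = -0.8                               # item is harder for focal group
logit = theta - 0.2 + true_dif * group
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit logit P(y=1) = b0 + b1*theta + b2*group by Newton-Raphson;
# a nonzero b2 indicates uniform DIF on the studied item.
X = np.column_stack([np.ones(n), theta, group])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)                       # IRLS weights
    H = X.T @ (X * W[:, None])              # observed information
    beta = beta + np.linalg.solve(H, X.T @ (y - p))

print("estimated group (DIF) coefficient:", round(beta[2], 2))
```

With this sample size the group coefficient lands near the generating value of -0.8; in practice one would also test a theta-by-group interaction to screen for nonuniform DIF.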
Vriens, Ingrid; Moors, Guy; Gelissen, John; Vermunt, Jeroen K. – Sociological Methods & Research, 2017
Measuring values in sociological research sometimes involves the use of ranking data. A disadvantage of a ranking assignment is that the order in which the items are presented might influence the choice preferences of respondents regardless of the content being measured. The standard procedure to rule out such effects is to randomize the order of…
Descriptors: Evaluation Methods, Social Science Research, Sociology, Structural Equation Models