Showing 976 to 990 of 9,530 results
Crisp, Victoria; Shaw, Stuart – Research Matters, 2020
For assessment contexts where both a paper-based test and an on-screen assessment are available as alternatives, it is still common for the paper-based test to be prepared first with questions later transferred into an on-screen testing platform. One challenge with this is that some questions cannot be transferred. One solution might be for…
Descriptors: Computer Assisted Testing, Test Items, Test Construction, Mathematics Tests
Peer reviewed
Direct link
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020
This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…
Descriptors: Test Construction, Automation, Heuristics, Mathematics
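The abstract only names the technique, so here is a minimal sketch of what metaheuristic short-form selection can look like: a toy genetic algorithm that searches for a k-item subset maximizing Cronbach's alpha. All names, operators, and settings are illustrative assumptions, not the authors' implementation (the paper compares several such algorithms).

```python
# Toy genetic algorithm for short-form item selection (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(responses):
    """Classical alpha for an (n_persons, n_items) score matrix."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def ga_short_form(data, k, pop_size=50, generations=100, mut_rate=0.1):
    n_items = data.shape[1]
    # Each chromosome is a set of k item indices.
    pop = [rng.choice(n_items, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([cronbach_alpha(data[:, ind]) for ind in pop])
        order = np.argsort(fitness)[::-1]
        elite = [pop[i] for i in order[: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(len(elite), size=2, replace=False)
            parent_pool = np.union1d(elite[a], elite[b])   # crossover: pool parents' items
            child = rng.choice(parent_pool, size=k, replace=False)
            if rng.random() < mut_rate:                    # mutation: swap one item out
                out = rng.integers(k)
                candidates = np.setdiff1d(np.arange(n_items), child)
                child[out] = rng.choice(candidates)
            children.append(child)
        pop = elite + children
    return np.sort(max(pop, key=lambda ind: cronbach_alpha(data[:, ind])))

# Toy usage: 500 simulated respondents, 30-item pool, 10-item short form.
data = rng.integers(1, 6, size=(500, 30)).astype(float)
print(ga_short_form(data, k=10))
```

A real study would add validity criteria (correlations with external variables) to the fitness function rather than internal consistency alone.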
Peer reviewed
Direct link
Walker, A. Adrienne; Wind, Stefanie A. – International Journal of Testing, 2020
Researchers apply individual person fit analyses as a procedure for checking model-data fit for individual test-takers. When a test-taker "misfits," it means that the inferences from their test score regarding what they know and can do may not be accurate. One problem in applying individual person fit procedures in practice is the…
Descriptors: Test Items, Scores, Achievement, Item Response Theory
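For readers unfamiliar with individual person fit, one common statistic is the standardized log-likelihood index l_z: large negative values flag response vectors that are unlikely given the person's estimated ability. The sketch below computes it under a Rasch model with item difficulties and theta taken as known, an assumption made only for brevity; it is not necessarily the statistic this paper uses.

```python
# Standardized log-likelihood person-fit statistic l_z under a Rasch model.
import numpy as np

def lz_statistic(responses, theta, difficulties):
    """responses: 0/1 vector; theta: person ability; difficulties: item b's."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))   # Rasch P(correct)
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

# An aberrant pattern: misses the easy items, solves the hard ones.
b = np.linspace(-2, 2, 10)                     # items ordered easy to hard
aberrant = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(lz_statistic(aberrant, theta=0.0, difficulties=b))  # strongly negative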
Peer reviewed
PDF on ERIC: Download full text
Selvi, Hüseyin – Higher Education Studies, 2020
This study aimed to examine the effect of using items from previous exams on students' pass-fail rates and on the psychometric properties of the tests and items. The study included data from 115 tests and 11,500 items used in the midterm and final exams of 3,910 students in the preclinical term at the Faculty of Medicine from 2014 to 2019. Data…
Descriptors: Answer Keys, Tests, Test Items, True Scores
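As context for "psychometric properties of the tests and items", here is a sketch of the classical item statistics such analyses typically rest on: item difficulty (proportion correct) and corrected item-total discrimination. The simulated data and scoring are illustrative, not the study's analysis; reused items could then be compared on these statistics across exam years.

```python
# Classical item analysis: difficulty and corrected point-biserial discrimination.
import numpy as np

def item_statistics(x):
    """x: (n_students, n_items) matrix of 0/1 item scores."""
    difficulty = x.mean(axis=0)            # p-value (proportion correct) per item
    total = x.sum(axis=1, keepdims=True)
    rest = total - x                       # total score minus the item itself
    discrimination = np.array([
        np.corrcoef(x[:, j], rest[:, j])[0, 1] for j in range(x.shape[1])
    ])                                     # corrected point-biserial
    return difficulty, discrimination

rng = np.random.default_rng(3)
x = (rng.random((200, 40)) < 0.6).astype(float)   # toy 0/1 response matrix
p, r = item_statistics(x)
print(p[:5].round(2), r[:5].round(2))
```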
Peer reviewed
PDF on ERIC: Download full text
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations used in the Monte Carlo simulation method, which is common in educational research, affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the researcher's discretion; similarly, no specific number of iterations is suggested in the related literature.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
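A minimal sketch of the simulation question the abstract raises: vary the number of Monte Carlo replications and watch how stable the recovered item parameters are. The crude logit-of-proportion-correct estimator below stands in for a proper IRT calibration (real studies would use MML estimation), so treat it purely as scaffolding.

```python
# How many Monte Carlo replications are enough for stable recovery? (toy version)
import numpy as np

rng = np.random.default_rng(42)
TRUE_B = np.linspace(-2, 2, 20)          # true Rasch item difficulties
N_PERSONS = 300

def one_replication():
    theta = rng.normal(size=(N_PERSONS, 1))
    p = 1 / (1 + np.exp(-(theta - TRUE_B)))          # Rasch response probabilities
    x = (rng.random(p.shape) < p).astype(int)        # simulated 0/1 responses
    prop = x.mean(axis=0).clip(0.01, 0.99)
    b_hat = -np.log(prop / (1 - prop))               # crude difficulty estimate
    return b_hat - b_hat.mean()                      # center to fix the scale

for n_reps in (10, 100, 1000):
    est = np.array([one_replication() for _ in range(n_reps)])
    rmse = np.sqrt(((est - TRUE_B) ** 2).mean())
    print(f"{n_reps:>5} replications: RMSE = {rmse:.3f}")
```

The point of interest is how much the summary statistics still fluctuate between runs at each replication count, which is exactly the discretion the abstract says the literature leaves to the researcher.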
Peer reviewed
Direct link
Lewis, Daniel; Cook, Robert – Educational Measurement: Issues and Practice, 2020
In this paper we assert that the practice of principled assessment design renders traditional standard-setting methodology redundant at best and contradictory at worst. We describe the rationale for, and methodological details of, Embedded Standard Setting (ESS; previously Engineered Cut Scores; Lewis, 2016), an approach to establish performance…
Descriptors: Standard Setting, Evaluation, Cutting Scores, Performance Based Assessment
Peer reviewed
Direct link
Furter, Robert T.; Dwyer, Andrew C. – Applied Measurement in Education, 2020
Maintaining equivalent performance standards across forms is a psychometric challenge exacerbated by small samples. In this study, the accuracy of two equating methods (Rasch anchored calibration and nominal weights mean) and four anchor item selection methods were investigated in the context of very small samples (N = 10). Overall, nominal…
Descriptors: Classification, Accuracy, Item Response Theory, Equated Scores
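Nominal weights mean equating is attractive for very small samples because it reduces to a simple additive shift. Below is a hedged sketch of that idea, with the anchor-item mean difference extrapolated to the full form by the ratio of test length to anchor length; it omits the synthetic-population details of the published method, so treat it as an approximation of the concept rather than the paper's procedure.

```python
# Simplified small-sample common-item equating shift (concept sketch only).
import numpy as np

def nominal_weights_shift(old_anchor, new_anchor, n_items_total, n_items_anchor):
    """Additive constant mapping new-form scores onto the old-form scale.

    old_anchor / new_anchor: per-person anchor-item totals from the groups
    that took each form; samples can be very small (here N = 10 per form).
    """
    gamma = n_items_total / n_items_anchor           # "nominal" weight
    return gamma * (np.mean(old_anchor) - np.mean(new_anchor))

rng = np.random.default_rng(1)
old_anchor = rng.integers(0, 11, size=10)    # anchor totals, old-form group
new_anchor = rng.integers(0, 11, size=10)    # anchor totals, new-form group
shift = nominal_weights_shift(old_anchor, new_anchor,
                              n_items_total=50, n_items_anchor=10)
new_form_scores = rng.integers(0, 51, size=10)
print(round(float(shift), 2), (new_form_scores + shift)[:3])
```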
Peer reviewed
Direct link
Chen, Michelle Y.; Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
This study introduces a novel differential item functioning (DIF) method based on propensity score matching that tackles two challenges in analyzing performance assessment data, that is, continuous task scores and lack of a reliable internal variable as a proxy for ability or aptitude. The proposed DIF method consists of two main stages. First,…
Descriptors: Probability, Scores, Evaluation Methods, Test Items
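A hypothetical two-stage sketch of what propensity-score-matched DIF for continuous task scores could look like: model group membership from background covariates, match examinees on the estimated propensity, then compare matched groups' scores. The simulated data, variable names, and the paired t-test are all illustrative assumptions, not the authors' procedure.

```python
# Two-stage DIF sketch: propensity score matching, then a matched comparison.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated data: covariates stand in for the matching variables that replace
# an internal ability proxy; score is a continuous task score.
n = 400
covariates = rng.normal(size=(n, 3))
group = (rng.random(n) < 1 / (1 + np.exp(-covariates[:, 0]))).astype(int)
score = covariates.sum(axis=1) + 0.5 * group + rng.normal(scale=0.5, size=n)

# Stage 1: propensity of focal-group membership, then 1:1 nearest-neighbor
# matching (with replacement) of focal examinees to reference examinees.
ps = LogisticRegression().fit(covariates, group).predict_proba(covariates)[:, 1]
focal = np.where(group == 1)[0]
reference = np.where(group == 0)[0]
matched_ref = reference[
    np.abs(ps[reference][None, :] - ps[focal][:, None]).argmin(axis=1)
]

# Stage 2: with groups balanced on the covariates, a remaining score gap
# between matched examinees is the DIF-like effect of interest.
t_stat, p_val = stats.ttest_rel(score[focal], score[matched_ref])
print(f"matched mean difference = "
      f"{(score[focal] - score[matched_ref]).mean():.3f}, p = {p_val:.4f}")
```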
Peer reviewed
Direct link
Lancaster, Thomas; Cotarlan, Codrin – International Journal for Educational Integrity, 2021
Students are using file sharing sites to breach academic integrity in light of the COVID-19 pandemic. This paper analyses the use of one such site, Chegg, which offers "homework help" and other academic services to students. Chegg is often presented as a file sharing site in the academic literature, but that is just one of many ways in…
Descriptors: Cheating, Contracts, Integrity, STEM Education
Peer reviewed
Direct link
Chung, Seungwon; Cai, Li – Journal of Educational and Behavioral Statistics, 2021
In the research reported here, we propose a new method for scale alignment and test scoring in the context of supporting students with disabilities. In educational assessment, students from these special populations take modified tests because of a demonstrated disability that requires more assistance than standard testing accommodations provide. Updated…
Descriptors: Students with Disabilities, Scoring, Achievement Tests, Test Items
Peer reviewed
Direct link
Russell, Michael; Szendey, Olivia; Kaplan, Larry – Educational Assessment, 2021
Differential item functioning (DIF) analysis is commonly employed to examine potential bias produced by a test item. Since its introduction, DIF analyses have focused on potential bias related to broad categories of oppression, including gender, racial stratification, economic class, and ableness. More recently, efforts to examine the effects of…
Descriptors: Test Bias, Achievement Tests, Individual Characteristics, Disadvantaged
Peer reviewed
Direct link
Petersen, Lara Aylin; Leue, Anja – Applied Cognitive Psychology, 2021
The Cambridge Face Memory Test Long (CFMT+) is used to investigate extraordinary face recognition abilities (super-recognizers [SR]). Whether lab and online presentation of the CFMT+ lead to different test performance has not yet been investigated. Furthermore, we wanted to investigate psychometric properties of the CFMT+ and the Glasgow face…
Descriptors: Recognition (Psychology), Human Body, Cognitive Tests, Psychometrics
Peer reviewed
PDF on ERIC: Download full text
Ozalp, M. Talha; Akpinar, Mehmet – Open Journal for Educational Research, 2021
In this study, the questions in exams prepared by Social Studies teachers were examined in terms of creative thinking skills. Document analysis and semi-structured interview methods were used for this purpose. A total of 2,065 questions were examined from the examinations prepared by 61 teachers working in 20 different schools in…
Descriptors: Social Studies, Teacher Attitudes, Tests, Test Items
Peer reviewed
PDF on ERIC: Download full text
Oyar, Esra; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2021
The aim of this study is to examine whether or not the positively and negatively worded items in the Mathematical Self-Confidence Scale employed in TIMSS 2015 lead to a wording effect. While examining whether the wording effect is present, analyses were conducted both on the general sample and on separate samples of female and male students. To…
Descriptors: Foreign Countries, International Assessment, Self Concept Measures, Mathematics
Peer reviewed
PDF on ERIC: Download full text
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, the methods of Lord's chi-square and Raju's unsigned area with the 3PL model, both with and without item purification, were used. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
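Of the two methods compared, Lord's chi-square has a compact closed form: a Wald-type distance between the item parameter estimates obtained separately in the reference and focal groups. The sketch below assumes the estimates and their covariance matrices are already available; in practice they would come from a 3PL calibration in each group.

```python
# Lord's chi-square DIF test: Wald-type comparison of group-specific
# item parameter estimates.
import numpy as np
from scipy import stats

def lords_chi_square(params_ref, params_focal, cov_ref, cov_focal):
    """Test of equal item parameters across groups."""
    diff = np.asarray(params_ref) - np.asarray(params_focal)
    pooled = np.asarray(cov_ref) + np.asarray(cov_focal)
    chi2 = diff @ np.linalg.solve(pooled, diff)   # diff' * pooled^{-1} * diff
    df = diff.size                                # one df per compared parameter
    return chi2, 1 - stats.chi2.cdf(chi2, df)

# Toy usage with 3PL (a, b, c) estimates for one item in each group:
chi2, p = lords_chi_square(
    params_ref=[1.2, 0.3, 0.20], params_focal=[1.0, 0.8, 0.22],
    cov_ref=np.diag([0.04, 0.02, 0.001]), cov_focal=np.diag([0.05, 0.03, 0.001]),
)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # flag DIF when p is small
```

Item purification, the condition varied in the study, would iteratively drop flagged items from the anchor used to link the two groups' scales before re-running the test.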