Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 14
Descriptor
Difficulty Level: 21
Test Format: 21
Testing: 21
Test Items: 15
Foreign Countries: 7
Multiple Choice Tests: 7
Comparative Analysis: 6
Computer Assisted Testing: 5
Item Response Theory: 5
Models: 4
Scores: 4
Author
Adam C. Sales: 1
Andrew A. McReynolds: 1
Aryadoust, Vahid: 1
Ashish Gurung: 1
Baghaei, Purya: 1
Basaraba, Deni L.: 1
Brownell, Sara E.: 1
Cooper, Katelyn M.: 1
DiBattista, David: 1
Eamon S. Worden: 1
Eckerly, Carol: 1
Publication Type
Reports - Research: 14
Journal Articles: 12
Speeches/Meeting Papers: 5
Reports - Evaluative: 3
Information Analyses: 2
Tests/Questionnaires: 2
Collected Works - Proceedings: 1
Reports - Descriptive: 1
Education Level
Higher Education: 5
Postsecondary Education: 4
Middle Schools: 3
Secondary Education: 3
Elementary Education: 2
Grade 5: 2
Grade 6: 2
Intermediate Grades: 2
Junior High Schools: 2
Grade 7: 1
High Schools: 1
Location
Germany: 2
Canada: 1
China: 1
India: 1
Iran: 1
Malaysia: 1
Netherlands: 1
Philippines: 1
Singapore: 1
Sweden: 1
United Kingdom (England): 1
Assessments and Surveys
Advanced Placement…: 1
International English…: 1
Semih Asiret; Seçil Ömür Sünbül – International Journal of Psychology and Educational Studies, 2023
This study examined the effect of missing data of different patterns and sizes on test equating methods under the NEAT design. To this end, factors such as sample size, average difficulty level difference between the test forms, difference between the ability distributions,…
Descriptors: Research Problems, Data, Test Items, Equated Scores
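For orientation on the NEAT (non-equivalent groups with anchor test) design named in this abstract, here is a minimal sketch of one equating method commonly compared in such studies, chained linear equating; the notation is illustrative, not the authors'. Each form is linked to the anchor A within the group that took it, and the two links are composed:

```latex
% Chained linear equating under the NEAT design (illustrative notation):
% group 1 takes form X and anchor A; group 2 takes form Y and anchor A.
\[
\operatorname{lin}_{X \to A}(x) = \mu_{A,1} + \frac{\sigma_{A,1}}{\sigma_{X,1}}\,(x - \mu_{X,1}),
\qquad
\hat{e}_Y(x) = \operatorname{lin}_{A \to Y}\!\bigl(\operatorname{lin}_{X \to A}(x)\bigr)
\]
% Missing data distort the group means and SDs entering these links,
% which is how they can propagate into equating error.
```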
Yang, Chunliang; Li, Jiaojiao; Zhao, Wenbo; Luo, Liang; Shanks, David R. – Educational Psychology Review, 2023
Practice testing is a powerful tool to consolidate long-term retention of studied information, facilitate subsequent learning of new information, and foster knowledge transfer. However, practitioners frequently express the concern that tests are anxiety-inducing and that their employment in the classroom should be minimized. The current review…
Descriptors: Tests, Test Format, Testing, Test Wiseness
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
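As context for the kernel methods named above: kernel equating continuizes the discrete score distributions with a kernel (typically Gaussian) before equipercentile linking. A compressed statement, in our notation rather than the authors':

```latex
% Kernel equating in one line (notation is illustrative):
% F_{X,h_X}, F_{Y,h_Y} are kernel-continuized score CDFs with bandwidths h.
\[
\hat{e}_Y(x) = \hat{F}_{Y,h_Y}^{-1}\!\left(\hat{F}_{X,h_X}(x)\right)
\]
% The standard error of equating (SEE) is the standard error of
% \hat{e}_Y(x), commonly obtained via the delta method.
```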
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (i.e., multiple-choice questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Basaraba, Deni L.; Yovanoff, Paul; Shivraj, Pooja; Ketterlin-Geller, Leanne R. – Practical Assessment, Research & Evaluation, 2020
Stopping rules for fixed-form tests with graduated item difficulty are intended to stop administration of a test at the point where students are sufficiently unlikely to provide a correct response following a pattern of incorrect responses. Although such rules are widely employed in fixed-form tests in education, little research has been done to empirically…
Descriptors: Formative Evaluation, Test Format, Test Items, Difficulty Level
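The abstract does not state the rule itself, but rules of this family are easy to express in code. The sketch below is a hypothetical "stop after k consecutive errors" rule for a form ordered by difficulty; the function names, the response callback, and the default threshold are our assumptions, not the authors' specification.

```python
def administer_fixed_form(items, respond, max_consecutive_wrong=3):
    """Administer a fixed-form test with graduated item difficulty.

    `items` is assumed ordered easiest to hardest; `respond(item)` returns
    True/False for the examinee's answer. Administration stops after a run
    of incorrect responses, on the premise that harder items are then
    unlikely to be answered correctly. Names and threshold are illustrative.
    """
    responses, streak = [], 0
    for item in items:
        correct = respond(item)
        responses.append((item, correct))
        streak = 0 if correct else streak + 1
        if streak >= max_consecutive_wrong:
            break  # stopping rule triggered; remaining items not administered
    return responses
```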
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items--seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
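For readers who have not seen the format: in DOMC, options are revealed one at a time in random order, and the examinee judges each as the answer or not, so different examinees may see different numbers of options. Below is a minimal sketch of that administration logic under our own assumptions about scoring; it is not the Foster and Miller specification.

```python
import random

def administer_domc(options, respond_yes_no):
    """Illustrative DOMC item administration (assumed logic).

    `options` is a list of (text, is_key) pairs; `respond_yes_no(text)`
    returns True if the examinee endorses the option. The item ends as soon
    as the response is scoreable, so the amount and order of content shown
    varies across examinees.
    """
    shuffled = options[:]
    random.shuffle(shuffled)  # presentation order differs by test taker
    for text, is_key in shuffled:
        endorsed = respond_yes_no(text)
        if is_key:
            return endorsed   # accepted the key: correct; rejected it: wrong
        if endorsed:
            return False      # endorsed a distractor: wrong
    return False              # defensive default; key is normally reached
```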
Wright, Christian D.; Huang, Austin L.; Cooper, Katelyn M.; Brownell, Sara E. – International Journal for the Scholarship of Teaching and Learning, 2018
College instructors in the United States usually make their own decisions about how to design course exams. Even though summative course exams are well known to be important to student success, we know little about the decision making of instructors when designing course exams. To probe how instructors design exams for introductory biology, we…
Descriptors: College Faculty, Science Teachers, Science Tests, Teacher Made Tests
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
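The violation described here is often formalized by adding a testlet-style random effect to the IRT model. One common way to write it (the testlet model of Bradlow, Wainer, and Wang, used as an illustration rather than as the authors' own specification):

```latex
% Rasch model with a testlet/method random effect (illustrative):
\[
P\!\left(X_{ij} = 1 \mid \theta_j, \gamma_{j d(i)}\right)
  = \frac{\exp\!\left(\theta_j - b_i + \gamma_{j d(i)}\right)}
         {1 + \exp\!\left(\theta_j - b_i + \gamma_{j d(i)}\right)}
\]
% d(i) indexes the shared stimulus or method of item i. When
% Var(\gamma) > 0, items sharing d(i) remain dependent after
% conditioning on \theta alone, violating local independence.
```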
Gafoor, Kunnathodi Abdul; Shilna, V. – Online Submission, 2014
In view of the perceived difficulty of the organic chemistry unit for high school students, this study examined the usefulness of concept mapping as a testing device to assess students' difficulty in the selected areas. Since many tests used for identifying students' misconceptions and difficulties in school subjects are observed to favour one or the…
Descriptors: Sex Fairness, Gender Differences, Rural Areas, Organic Chemistry
Plassmann, Sibylle; Zeidler, Beate – Language Learning in Higher Education, 2014
Language testing means taking decisions: about the test taker's results, but also about the test construct and the measures taken in order to ensure quality. This article takes the German test "telc Deutsch C1 Hochschule" as an example to illustrate this decision-making process in an academic context. The test is used for university…
Descriptors: Language Tests, Test Wiseness, Test Construction, Decision Making
Kubinger, Klaus D. – Educational and Psychological Measurement, 2009
The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…
Descriptors: Models, Test Items, Psychometrics, Item Response Theory
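The decomposition this abstract refers to can be stated compactly: the LLTM writes each Rasch item difficulty as a weighted sum of elementary parameters. The notation below follows common usage; the weights q are fixed in advance by the researcher.

```latex
% LLTM: Rasch item difficulty as a linear combination of basic parameters.
\[
\beta_i = \sum_{j=1}^{p} q_{ij}\,\eta_j + c
\]
% \beta_i: difficulty of item i; \eta_j: elementary ("basic") parameters,
% e.g. cognitive operations; q_{ij}: known design weights; c: normalization.
```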
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine whether an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
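For orientation, a compensatory multidimensional generalized partial credit model is commonly written as below; this is a generic form in our notation, not necessarily the exact M-2PPC parameterization used by the authors.

```latex
% Compensatory multidimensional partial credit model (generic form):
% category k of item i, ability vector \theta_j, discrimination vector
% a_i, step parameters b_{iv}; empty sums are zero by convention.
\[
P\!\left(X_{ij} = k \mid \boldsymbol{\theta}_j\right)
  = \frac{\exp\!\left(\sum_{v=1}^{k}\bigl(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j - b_{iv}\bigr)\right)}
         {\sum_{c=0}^{m_i}\exp\!\left(\sum_{v=1}^{c}\bigl(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j - b_{iv}\bigr)\right)}
\]
```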