Publication Date
  In 2025: 1
  Since 2024: 5
  Since 2021 (last 5 years): 37
  Since 2016 (last 10 years): 63
  Since 2006 (last 20 years): 88
Descriptor
  Computer Assisted Testing: 156
  Test Format: 156
  Test Items: 156
  Test Construction: 68
  Adaptive Testing: 47
  Comparative Analysis: 37
  Multiple Choice Tests: 35
  Foreign Countries: 33
  Difficulty Level: 29
  Scores: 29
  Scoring: 27
Author
  Wainer, Howard: 5
  Anderson, Paul S.: 4
  Stocking, Martha L.: 4
  Goldhammer, Frank: 3
  van der Linden, Wim J.: 3
  Gierl, Mark J.: 2
  Katz, Irvin R.: 2
  Keehner, Madeleine: 2
  Li, Dongmei: 2
  Lin, Chuan-Ju: 2
  Plake, Barbara S.: 2
Audience
  Practitioners: 6
  Researchers: 4
  Teachers: 4
Location
  Canada: 3
  Germany: 3
  United Kingdom: 3
  Australia: 2
  China: 2
  Japan: 2
  New Jersey: 2
  Turkey: 2
  United States: 2
  Africa: 1
  Canada (Ottawa): 1
Laws, Policies, & Programs
  Head Start: 1
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of delayed scoring of polytomous items on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Using the shadow-test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
Srikanth Allamsetty; M. V. S. S. Chandra; Neelima Madugula; Byamakesh Nayak – IEEE Transactions on Learning Technologies, 2024
The present study addresses the problem of student assessment via online examinations at higher educational institutes (HEIs). With the COVID-19 outbreak, the majority of educational institutes have been conducting online examinations to assess their students, where there would always be a chance that the students go for…
Descriptors: Computer Assisted Testing, Accountability, Higher Education, Comparative Analysis
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006).…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Fu-Yun Yu – Interactive Learning Environments, 2024
Currently, 50+ learning systems supporting student question-generation (SQG) activities have been developed. While many of these systems support generating questions of different types, systems that allow students to generate questions around a scenario (i.e., student testlet-generation, STG) are not yet available. Noting the increasing…
Descriptors: Computer Assisted Testing, Test Format, Test Construction, Test Items
Green, Clare; Hughes, Sarah – Cambridge University Press & Assessment, 2022
The Digital High Stakes Assessment Programme in Cambridge University Press & Assessment is developing digital assessments for UK and global teachers and learners. In one development, the team are making decisions about the assessment models to use to assess computing systems knowledge and understanding. This research took place as part of the…
Descriptors: Test Items, Computer Science, Achievement Tests, Objective Tests
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors with an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach desired measurement precision with fewer items. However, fewer items mean that each item has a more significant effect on ability estimation and therefore those tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Jang, Jung Un; Kim, Eun Joo – Journal of Curriculum and Teaching, 2022
This study examines the validity of pen-and-paper and smart-device-based tests for the optician's examination. The questions developed for each medium were based on the national optician's simulation test. The subjects of this study were 60 students enrolled in E University. The data analysis was performed to verify the equivalence of the two…
Descriptors: Optometry, Licensing Examinations (Professions), Test Format, Test Validity
Anna Caroline Keefe – ProQuest LLC, 2022
Computer-assisted assessment continues to be incorporated into more and more mathematics courses. As this method of testing spreads, questions are created for use in computer-assisted assessment. This study analyzed two types of questions used in computer-assisted assessment in Calculus I, II, and III courses. The first question type was…
Descriptors: Psychometrics, Computer Assisted Testing, Technology Integration, Calculus
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
Kathleen A. Paciga; Christina M. Cassano – AERA Open, 2024
Early literacy assessment has become commonplace in the preschool years, with phonological awareness constituting one component of emergent literacy targeted by such practices. This within-subjects experimental study examines the role of word familiarity on 93 dual language preschoolers' performance on phoneme-level awareness tasks in…
Descriptors: Emergent Literacy, Phonological Awareness, Bilingualism, Preschool Children
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)