Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 17 |
| Since 2007 (last 20 years) | 31 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 43 |
| Test Items | 17 |
| Adaptive Testing | 14 |
| Foreign Countries | 11 |
| Item Response Theory | 10 |
| Simulation | 10 |
| Psychometrics | 9 |
| Comparative Analysis | 7 |
| Internet | 7 |
| Test Format | 7 |
| Correlation | 6 |
Source
| Source | Count |
| --- | --- |
| International Journal of Testing | 43 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 43 |
| Reports - Research | 26 |
| Reports - Descriptive | 9 |
| Reports - Evaluative | 6 |
| Tests/Questionnaires | 2 |
| Information Analyses | 1 |
| Opinion Papers | 1 |
| Speeches/Meeting Papers | 1 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 1 |
| Researchers | 1 |
Location
| Location | Count |
| --- | --- |
| Germany | 7 |
| China | 3 |
| Canada | 2 |
| Denmark | 2 |
| Poland | 2 |
| South Korea | 2 |
| Sweden | 2 |
| United States | 2 |
| Austria | 1 |
| Belgium | 1 |
| Brazil | 1 |
Assessments and Surveys
| Assessment or Survey | Count |
| --- | --- |
| Graduate Management Admission… | 1 |
| National Assessment of… | 1 |
| Program for International… | 1 |
Evers, Arne; McCormick, Carina M.; Hawley, Leslie R.; Muñiz, José; Balboni, Giulia; Bartram, Dave; Boben, Dusica; Egeland, Jens; El-Hassan, Karma; Fernández-Hermida, José R.; Fine, Saul; Frans, Örjan; Gintiliené, Grazina; Hagemeister, Carmen; Halama, Peter; Iliescu, Dragos; Jaworowska, Aleksandra; Jiménez, Paul; Manthouli, Marina; Matesic, Krunoslav; Michaelsen, Lars; Mogaji, Andrew; Morley-Kirk, James; Rózsa, Sándor; Rowlands, Lorraine; Schittekatte, Mark; Sümer, H. Canan; Suwartono, Tono; Urbánek, Tomáš; Wechsler, Solange; Zelenevska, Tamara; Zanev, Svetoslav; Zhang, Jianxin – International Journal of Testing, 2017
On behalf of the International Test Commission and the European Federation of Psychologists' Associations, a worldwide survey of the opinions of professional psychologists on testing practices was carried out. The main objective of this study was to collect data for a better understanding of the state of psychological testing worldwide. These data…
Descriptors: Testing, Attitudes, Surveys, Psychologists
Lee, Yi-Hsuan; Haberman, Shelby J. – International Journal of Testing, 2016
The use of computer-based assessments makes it possible to collect detailed data that capture examinees' progress through the tests and the time spent on individual actions. This article presents a study using process and timing data to aid understanding of an international language assessment and the examinees. Issues regarding test-taking strategies,…
Descriptors: Computer Assisted Testing, Test Wiseness, Language Tests, International Assessment
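As a rough, hypothetical illustration of the kind of process and timing data described in the Lee and Haberman abstract above, the sketch below turns a timestamped action log into per-action time estimates. The log format, action names, and values are assumptions for illustration, not the data or methods used in the study.

```python
from datetime import datetime

# Hypothetical action log for one examinee: (action, ISO timestamp).
# The time spent on an action is taken as the gap until the next logged action.
log = [
    ("open_item_1",   "2016-05-01T09:00:00"),
    ("type_response", "2016-05-01T09:00:40"),
    ("submit_item_1", "2016-05-01T09:01:30"),
    ("open_item_2",   "2016-05-01T09:01:31"),
    ("submit_item_2", "2016-05-01T09:04:11"),
]

events = [(action, datetime.fromisoformat(ts)) for action, ts in log]
for (action, start), (_, nxt) in zip(events, events[1:]):
    print(f"{action:<14}{(nxt - start).total_seconds():>6.0f} s")
```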
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
Ling, Guangming – International Journal of Testing, 2016
To investigate possible iPad related mode effect, we tested 403 8th graders in Indiana, Maryland, and New Jersey under three mode conditions through random assignment: a desktop computer, an iPad alone, and an iPad with an external keyboard. All students had used an iPad or computer for six months or longer. The 2-hour test included reading, math,…
Descriptors: Educational Testing, Computer Assisted Testing, Handheld Devices, Computers
Talento-Miller, Eileen; Guo, Fanmin; Han, Kyung T. – International Journal of Testing, 2013
When power tests include a time limit, it is important to assess the possibility of speededness for examinees. Past research on differential speededness has examined gender and ethnic subgroups in the United States on paper and pencil tests. When considering the needs of a global audience, research regarding different native language speakers is…
Descriptors: Adaptive Testing, Computer Assisted Testing, English, Scores
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
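As a toy sketch of the two-step workflow outlined in the Gierl and Lai abstract above, an item model can be written as a stem template with variable slots plus a rule for the key, and a generator can then instantiate it many times. The template, value ranges, and distractor rule below are invented for illustration and are not taken from the article.

```python
import random

# A toy "item model": a stem with variable slots and a rule for computing the
# key. Real item models also constrain distractors, difficulty, and content
# coverage; this sketch only shows the instantiate-from-template step.
STEM = "A train travels {speed} km/h for {hours} hours. How far does it travel?"

def generate_items(n_items, seed=0):
    rng = random.Random(seed)
    items = []
    for _ in range(n_items):
        speed = rng.randrange(40, 130, 10)   # slot values (assumed ranges)
        hours = rng.randrange(2, 6)
        key = speed * hours
        options = sorted({key, key - 20, key + 10, key + 30})
        items.append({
            "stem": STEM.format(speed=speed, hours=hours),
            "options": options,
            "key": key,
        })
    return items

for item in generate_items(3):
    print(item["stem"], item["options"], "key:", item["key"])
```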
Stark, Stephen; Chernyshenko, Oleksandr S. – International Journal of Testing, 2011
This article delves into a relatively unexplored area of measurement by focusing on adaptive testing with unidimensional pairwise preference items. The use of such tests is becoming more common in applied non-cognitive assessment because research suggests that this format may help to reduce certain types of rater error and response sets commonly…
Descriptors: Test Length, Simulation, Adaptive Testing, Item Analysis
Makransky, Guido; Glas, Cees A. W. – International Journal of Testing, 2013
Cognitive ability tests are widely used in organizations around the world because they have high predictive validity in selection contexts. Although these tests typically measure several subdomains, testing is usually carried out for a single subdomain at a time. This can be ineffective when the subdomains assessed are highly correlated. This…
Descriptors: Foreign Countries, Cognitive Ability, Adaptive Testing, Feedback (Response)
Guo, Jing; Tay, Louis; Drasgow, Fritz – International Journal of Testing, 2009
Test compromise is a concern in cognitive ability testing because such tests are widely used in employee selection and administered on a continuous basis. In this study, the resistance of cognitive tests deployed in different test systems to small-scale cheating conspiracies was evaluated with respect to the accuracy of ability estimation.…
Descriptors: Cheating, Cognitive Tests, Adaptive Testing, Computer Assisted Testing
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
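A minimal Monte Carlo sketch of the kind of speededness manipulation described in the Schmitt et al. abstract above: simulate item responses, overwrite the last few items with random guesses to mimic examinees who run out of time, and compare naive ability estimates. The Rasch model, sample sizes, guessing rate, and grid-search estimator are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items, n_speeded = 2000, 40, 8    # assumed sizes

theta = rng.normal(0, 1, n_examinees)            # true abilities
b = rng.normal(0, 1, n_items)                    # Rasch item difficulties

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

responses = (rng.random((n_examinees, n_items)) < p_correct(theta, b)).astype(float)

# Speeded condition: the last items are answered by random guessing (p = .25),
# mimicking examinees who hit the time limit on end-of-test items.
speeded = responses.copy()
speeded[:, -n_speeded:] = (rng.random((n_examinees, n_speeded)) < 0.25).astype(float)

def ml_theta(resp, b, grid=np.linspace(-4, 4, 161)):
    # Grid-search maximum-likelihood ability estimate under the Rasch model.
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[None, :])))
    loglik = resp @ np.log(p).T + (1 - resp) @ np.log(1 - p).T
    return grid[np.argmax(loglik, axis=1)]

print("bias (unspeeded):", round(float(np.mean(ml_theta(responses, b) - theta)), 3))
print("bias (speeded):  ", round(float(np.mean(ml_theta(speeded, b) - theta)), 3))
```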
Veldkamp, Bernard P.; van der Linden, Wim J. – International Journal of Testing, 2008
In most operational computerized adaptive testing (CAT) programs, the Sympson-Hetter (SH) method is used to control the exposure of the items. Several modifications and improvements of the original method have been proposed. The Stocking and Lewis (1998) version of the method uses a multinomial experiment to select items. For severely constrained…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Methods
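A minimal sketch of the Sympson-Hetter probabilistic experiment mentioned in the abstract above: the most informative eligible item is administered only if a Bernoulli draw with that item's exposure control parameter succeeds; otherwise the next candidate is tried. The 2PL item pool, the uniform control parameters, and the fallback rule below are assumptions for illustration; calibrating the control parameters, and the Stocking and Lewis multinomial variant, are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def fisher_info(theta, a, b):
    # 2PL item information at ability theta.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def select_item_sh(theta, a, b, k, administered):
    """Pick the next item with Sympson-Hetter exposure control.

    Candidates are tried in descending order of information; item i is
    actually administered only if a Bernoulli draw with probability k[i]
    succeeds. In practice k is calibrated beforehand by simulation so that
    each item's exposure rate stays below a target such as 0.25 (assumption).
    """
    order = np.argsort(-fisher_info(theta, a, b))
    for i in order:
        if i in administered:
            continue
        if rng.random() < k[i]:                      # the SH experiment
            return int(i)
    return int(next(i for i in order if i not in administered))  # fallback

# Toy item pool; parameters are made up for illustration.
n_items = 100
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0, 1, n_items)
k = np.full(n_items, 0.5)                            # exposure control parameters

print("selected item:", select_item_sh(theta=0.3, a=a, b=b, k=k, administered=set()))
```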
Ullstadius, Eva; Carlstedt, Berit; Gustafsson, Jan-Eric – International Journal of Testing, 2008
The influence of general and verbal ability on each of 72 verbal analogy test items was investigated with new factor analytical techniques. The analogy items, together with the Computerized Swedish Enlistment Battery (CAT-SEB), were given randomly to two samples of 18-year-old male conscripts (n = 8566 and n = 5289). Thirty-two of the 72 items had…
Descriptors: Test Items, Verbal Ability, Factor Analysis, Swedish
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity[TM], an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity[TM] is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
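The reason item review is typically restricted in CAT, as touched on in the abstract above, is that every item is chosen from the ability estimate implied by the answers given so far. The sketch below simulates that loop under a Rasch model with maximum-information selection and an EAP ability update after each response; the pool size, test length, and estimator are illustrative assumptions rather than anything taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Rasch item pool and one simulated examinee.
n_pool, test_length, true_theta = 200, 20, 0.8
b = rng.normal(0, 1, n_pool)

def eap(responses, items, b, grid=np.linspace(-4, 4, 81)):
    # Expected a posteriori ability estimate with a standard normal prior.
    prior = np.exp(-0.5 * grid**2)
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[items][None, :])))
    lik = np.prod(np.where(np.array(responses)[None, :], p, 1 - p), axis=1)
    post = prior * lik
    return float(np.sum(grid * post) / np.sum(post))

theta_hat, administered, responses = 0.0, [], []
for _ in range(test_length):
    p_all = 1.0 / (1.0 + np.exp(-(theta_hat - b)))
    info = p_all * (1 - p_all)            # Rasch item information
    info[administered] = -np.inf          # items already given cannot be reused
    item = int(np.argmax(info))
    p_item = 1.0 / (1.0 + np.exp(-(true_theta - b[item])))
    responses.append(bool(rng.random() < p_item))
    administered.append(item)
    theta_hat = eap(responses, administered, b)   # re-estimate after every answer

print(f"true theta {true_theta:.2f}, final estimate {theta_hat:.2f}")
```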
