| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 4 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 8 |
| Effect Size | 8 |
| Item Response Theory | 8 |
| Comparative Analysis | 5 |
| Test Items | 4 |
| Foreign Countries | 3 |
| Mathematics Tests | 3 |
| Correlation | 2 |
| Item Analysis | 2 |
| Language Tests | 2 |
| Measurement | 2 |
| Source | Count |
| --- | --- |
| ETS Research Report Series | 2 |
| ACT, Inc. | 1 |
| Educational and Psychological Measurement | 1 |
| Mathematics Education Research Group of Australasia | 1 |
| Partnership for Assessment of Readiness for College and Careers | 1 |
| Psicologica: International Journal of Methodology and Experimental Psychology | 1 |
| Author | Count |
| --- | --- |
| Ali, Usama | 1 |
| Bergstrom, Betty A. | 1 |
| Boughton, Keith A. | 1 |
| Breland, Hunter | 1 |
| Brown, Terran | 1 |
| Chen, Jianshen | 1 |
| Costanzo, Kate | 1 |
| Egberink, Iris J. L. | 1 |
| Ferrando, Pere J. | 1 |
| Harris, Deborah | 1 |
| Hou, Likun | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 6 |
| Journal Articles | 4 |
| Information Analyses | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
| Education Level | Count |
| --- | --- |
| Early Childhood Education | 2 |
| Elementary Education | 2 |
| Higher Education | 2 |
| Primary Education | 2 |
| Grade 3 | 1 |
| Grade 5 | 1 |
| Grade 7 | 1 |
| Grade 9 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
| Junior High Schools | 1 |
| Location | Count |
| --- | --- |
| Australia | 1 |
| Netherlands | 1 |
| Spain | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| ACT Assessment | 1 |
| Eysenck Personality Inventory | 1 |
| Test of English as a Foreign Language | 1 |
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often reported only in terms of statistical significance, and researchers have proposed different methods for empirically selecting anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
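
The anchor-item method described in this abstract reduces, at the final step, to a chi-square difference test between a model that constrains the studied item's parameters to be equal across groups (all other items serving as anchors) and a model that frees them. A minimal sketch of that comparison step, assuming the two log-likelihoods have already been obtained from an IRT fitting routine (the numbers below are made up for illustration):

```python
from scipy.stats import chi2

def lrt_invariance(loglik_constrained, loglik_free, df):
    """Likelihood ratio test for measurement invariance of one item.

    loglik_constrained: log-likelihood with the studied item's
        parameters forced equal across groups (other items anchored).
    loglik_free: log-likelihood with the studied item's parameters
        free to differ between groups.
    df: number of parameters freed (e.g., 2 for a 2PL item).
    """
    lr_stat = -2.0 * (loglik_constrained - loglik_free)
    p_value = chi2.sf(lr_stat, df)
    return lr_stat, p_value

# Illustrative log-likelihoods for a 2PL item (df = 2):
stat, p = lrt_invariance(-10234.7, -10229.1, df=2)
print(f"LR = {stat:.2f}, p = {p:.4f}")
```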
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that measure student progress toward college and career readiness more accurately than previous assessments. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Rogers, Angela – Mathematics Education Research Group of Australasia, 2013
As we move into the 21st century, educationalists are exploring the myriad possibilities associated with Computer Based Assessment (CBA). At first glance this mode of assessment seems to provide many exciting opportunities in the mathematics domain, yet one must question the validity of CBA and whether our school systems, students and teachers…
Descriptors: Mathematics Tests, Student Evaluation, Computer Assisted Testing, Test Validity
Ferrando, Pere J. – Psicologica: International Journal of Methodology and Experimental Psychology, 2006
This study assessed the hypothesis that the response time to an item increases as the positions of the item and the respondent on the continuum of the trait that is measured draw closer together. This hypothesis has previously been stated by several authors, but so far it does not seem to have been empirically assessed in a rigorous way. A…
Descriptors: Reaction Time, Personality, Effect Size, Item Response Theory
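
The distance-difficulty hypothesis this abstract tests is often written as an inverse relation between expected (log) response time and the person-item distance on the trait continuum. One illustrative formalization, not necessarily the exact model fitted in the paper:

```latex
% T_{ij}: response time of respondent j to item i
% \theta_j: trait level of respondent j; b_i: location of item i
% \beta_1 > 0, so expected log response time is largest when \theta_j \approx b_i
E[\ln T_{ij}] = \beta_0 - \beta_1 \,\lvert \theta_j - b_i \rvert
```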
Bergstrom, Betty A. – 1992
This paper reports on existing studies and uses meta-analysis to compare and synthesize the results of 20 studies from 8 research reports comparing the equivalence of ability measures from computer adaptive tests (CAT) and conventional paper-and-pencil tests. Using the research synthesis techniques developed by Hedges and Olkin (1985), it is possible to…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
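
The Hedges and Olkin (1985) synthesis referred to here is, at its core, an inverse-variance-weighted average of standardized effect sizes plus a Q statistic for their homogeneity. A minimal sketch of that computation (the effect sizes below are illustrative, not values from the report):

```python
import numpy as np

# Standardized effect sizes (CAT minus paper-and-pencil) and their
# sampling variances, one pair per study -- illustrative values only.
d = np.array([0.05, -0.10, 0.02, 0.08])
v = np.array([0.010, 0.015, 0.008, 0.012])

w = 1.0 / v                        # inverse-variance weights
d_bar = np.sum(w * d) / np.sum(w)  # pooled (fixed-effect) estimate
se = np.sqrt(1.0 / np.sum(w))      # standard error of the pooled estimate
q = np.sum(w * (d - d_bar) ** 2)   # homogeneity Q statistic, df = k - 1

print(f"pooled d = {d_bar:.3f} (SE = {se:.3f}), Q = {q:.2f}")
```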
Puhan, Gautam; Boughton, Keith A.; Kim, Sooyeon – ETS Research Report Series, 2005
The study evaluated the comparability of two versions of a teacher certification test: a paper-and-pencil test (PPT) and a computer-based test (CBT). Standardized mean difference (SMD) and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that effect sizes…
Descriptors: Comparative Analysis, Test Items, Statistical Analysis, Teacher Certification
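
The test-level index named in this abstract, the standardized mean difference, divides the mean score gap between the two modes by a pooled standard deviation. A minimal sketch with simulated scores (not the certification-test data):

```python
import numpy as np

def standardized_mean_difference(ppt_scores, cbt_scores):
    """SMD between paper-and-pencil (PPT) and computer-based (CBT)
    scores, using the pooled standard deviation as the denominator."""
    ppt = np.asarray(ppt_scores, dtype=float)
    cbt = np.asarray(cbt_scores, dtype=float)
    n1, n2 = len(ppt), len(cbt)
    pooled_var = ((n1 - 1) * ppt.var(ddof=1) +
                  (n2 - 1) * cbt.var(ddof=1)) / (n1 + n2 - 2)
    return (cbt.mean() - ppt.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
print(standardized_mean_difference(rng.normal(50, 10, 500),
                                   rng.normal(51, 10, 500)))
```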
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – ETS Research Report Series, 2004
This study investigated the comparability of computer-based testing (CBT) writing prompts in the Test of English as a Foreign Language™ (TOEFL®) for examinees of different native language backgrounds. A total of 81 writing prompts introduced from July 1998 through August 2000 were examined using a three-step logistic regression procedure for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
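
A common three-step logistic regression DIF procedure, which may differ in detail from the one used in this report, fits three nested models for each item or prompt: matching score only, score plus group (uniform DIF), and score plus group plus their interaction (non-uniform DIF), comparing them with likelihood ratio tests. A sketch on simulated data with no built-in DIF:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 1000
score = rng.normal(0, 1, n)        # matching criterion, e.g., total score
group = rng.integers(0, 2, n)      # 0/1 native-language group indicator
prob = 1 / (1 + np.exp(-(0.1 + 1.2 * score)))
y = rng.binomial(1, prob)          # simulated binary prompt outcome

# Step 1: score only; Step 2: + group (uniform DIF);
# Step 3: + group x score interaction (non-uniform DIF).
X1 = sm.add_constant(score)
X2 = sm.add_constant(np.column_stack([score, group]))
X3 = sm.add_constant(np.column_stack([score, group, group * score]))

ll = [sm.Logit(y, X).fit(disp=0).llf for X in (X1, X2, X3)]

lr_uniform = -2 * (ll[0] - ll[1])
lr_nonunif = -2 * (ll[1] - ll[2])
print(f"uniform DIF:     LR = {lr_uniform:.2f}, p = {chi2.sf(lr_uniform, 1):.3f}")
print(f"non-uniform DIF: LR = {lr_nonunif:.2f}, p = {chi2.sf(lr_nonunif, 1):.3f}")
```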

