Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 7 |
Descriptor
| Comparative Testing | 9 |
| Evaluation Methods | 9 |
| Item Analysis | 9 |
| Computer Assisted Testing | 4 |
| Comparative Analysis | 3 |
| Intermode Differences | 3 |
| Academic Achievement | 2 |
| Effect Size | 2 |
| Equated Scores | 2 |
| Evaluation Research | 2 |
| Factor Analysis | 2 |
Source
| Journal of Technology, Learning, and Assessment | 4 |
| Educational Research and Evaluation | 1 |
| Journal of Biological Education | 1 |
| Journal of Genetic Psychology | 1 |
| ProQuest LLC | 1 |
| School Science and Mathematics | 1 |
Author
| Attali, Yigal | 1 |
| Boughton, Keith | 1 |
| Bridgeman, Brent | 1 |
| Brown, Michelle Stallone | 1 |
| Cantrell, Pamela | 1 |
| Ehmer, Maike | 1 |
| Fraillon, Julian | 1 |
| Genovese, Jeremy E. C. | 1 |
| Grimm, Tobias | 1 |
| Hammann, Marcus | 1 |
| Huynh, Huynh | 1 |
Publication Type
| Journal Articles | 8 |
| Reports - Research | 6 |
| Reports - Evaluative | 2 |
| Dissertations/Theses -… | 1 |
Education Level
| Higher Education | 5 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| High Schools | 1 |
| Middle Schools | 1 |
Assessments and Surveys
| Graduate Record Examinations | 1 |
| Test of English as a Foreign… | 1 |
Zhang, Bin – ProQuest LLC, 2012
Social scientists are usually more interested in consumers' dichotomous choices, such as whether or not to purchase a product or adopt a technology. To date, however, there is hardly any model that can help solve the problem of comparing multi-network effects with a dichotomous dependent variable. Furthermore, the study of multi-network…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Population Groups
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
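The Attali, Bridgeman, and Trapani abstract above describes a generic scoring approach built from a single, prompt-independent set of linguistic features combined with consistent weights. The sketch below illustrates only that general idea; the feature names, weights, and example values are hypothetical and are not the model used in the study.

```python
# Minimal sketch of a "generic" feature-weighted essay score: one fixed feature
# set and one fixed weight vector are applied to essays from any prompt.
# Feature names, weights, and example values are hypothetical.

FEATURE_WEIGHTS = {  # same weights reused for every prompt, existing or new
    "grammar_errors_per_100_words": -0.9,
    "word_variety": 0.6,
    "mean_sentence_length": 0.4,
    "organization_score": 1.1,
}

def generic_essay_score(features: dict[str, float]) -> float:
    """Combine prompt-independent linguistic features into a single raw score."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# Example: an essay from any prompt is scored on the same scale.
essay = {"grammar_errors_per_100_words": 2.0, "word_variety": 0.7,
         "mean_sentence_length": 1.2, "organization_score": 3.5}
print(round(generic_essay_score(essay), 2))
```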
Schulz, Wolfram; Fraillon, Julian – Educational Research and Evaluation, 2011
When comparing data derived from tests or questionnaires in cross-national studies, researchers commonly assume measurement invariance in their underlying scaling models. However, different cultural contexts, languages, and curricula can have powerful effects on how students respond in different countries. This article illustrates how the…
Descriptors: Citizenship Education, International Studies, Item Response Theory, International Education
Hammann, Marcus; Phan, Thi Thanh Hoi; Ehmer, Maike; Grimm, Tobias – Journal of Biological Education, 2008
This study is concerned with different forms of assessment of pupils' skills in experimentation. The findings of three studies are reported. Study 1 investigates whether it is possible to develop reliable multiple-choice tests for the skills of forming hypotheses, designing experiments and analysing experimental data. Study 2 compares scores from…
Descriptors: Multiple Choice Tests, Experiments, Science Process Skills, Skill Analysis
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
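Puhan, Boughton, and Kim use Cohen's d as the test-level measure of comparability between the paper-and-pencil and computer-based versions. Below is a minimal sketch of the pooled-standard-deviation form of Cohen's d applied to hypothetical PPT and CBT score lists; the data are invented for illustration, and the item-level DIF analysis is not shown.

```python
import statistics

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Cohen's d with a pooled standard deviation (independent-groups form)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical paper-and-pencil (PPT) vs. computer-based (CBT) total scores:
ppt = [72.0, 75.0, 68.0, 80.0, 74.0]
cbt = [70.0, 73.0, 69.0, 78.0, 72.0]
print(round(cohens_d(ppt, cbt), 3))  # small |d| suggests comparable modes
```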
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM] and human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Cantrell, Pamela – School Science and Mathematics, 2003
The difference in gain scores produced by traditional pretests and those produced by retrospective pretests when compared to posttest scores on the Science Teaching Efficacy Belief Instrument for preservice teachers was investigated in this study. Results indicated that gain scores using the traditional pretest produced significant improvement in…
Descriptors: Pretests Posttests, Validity, Scores, Preservice Teachers
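Cantrell compares gain scores computed from a traditional pretest with those computed from a retrospective pretest, each against the same posttest. The sketch below shows only that arithmetic on hypothetical ratings; it is not the Science Teaching Efficacy Belief Instrument or the study's data.

```python
# Gain score = posttest - pretest, computed once per pretest type.
# All scores below are hypothetical per-teacher ratings.

def gains(pretest: list[float], posttest: list[float]) -> list[float]:
    """Per-person gain for a given pretest type."""
    return [post - pre for pre, post in zip(pretest, posttest)]

traditional_pre   = [3.1, 3.4, 2.9, 3.6]  # rated before instruction
retrospective_pre = [2.6, 3.0, 2.4, 3.1]  # rated afterwards, looking back
posttest          = [3.8, 4.0, 3.5, 4.2]

trad_gain  = gains(traditional_pre, posttest)
retro_gain = gains(retrospective_pre, posttest)
print(sum(trad_gain) / len(trad_gain), sum(retro_gain) / len(retro_gain))
```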
Genovese, Jeremy E. C. – Journal of Genetic Psychology, 2005
In this study, the author tested the reliability, concurrent validity, and predictive validity of three hemispheric cognitive style instruments: (a) the Preference Test (PT; R. Zenhausern, 1978), (b) the Polarity Questionnaire (PQ; B. E. Morton, 2002), and (c) the Wagner Preference Inventory II (WAPI II; R. F. Wagner & K. A. Wells, 1985).…
Descriptors: Cognitive Style, Item Analysis, Psychometrics, Multitrait Multimethod Techniques
