Showing all 9 results
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
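The abstract above breaks off at the list of raw-scoring methods, but the general idea of polytomous partial credit for multiple-response items can be sketched. The snippet below is a hypothetical illustration, not the scoring rules evaluated by Betts et al.; it contrasts all-or-nothing scoring with two commonly discussed partial-credit variants.

```python
# Hypothetical illustration of raw-scoring rules for a multiple-response item.
# Not the method from Betts et al. (2022); just common partial-credit variants.

def score_dichotomous(selected, key):
    """1 point only if the selected options exactly match the key."""
    return 1 if set(selected) == set(key) else 0

def score_partial_correct_only(selected, key):
    """+1 for each correct option selected; incorrect selections ignored."""
    return len(set(selected) & set(key))

def score_correct_minus_incorrect(selected, key, floor=0):
    """+1 per correct selection, -1 per incorrect selection, floored at 0."""
    correct = len(set(selected) & set(key))
    incorrect = len(set(selected) - set(key))
    return max(floor, correct - incorrect)

if __name__ == "__main__":
    key = {"A", "C", "D"}
    response = {"A", "C", "E"}                           # two correct, one incorrect
    print(score_dichotomous(response, key))              # 0
    print(score_partial_correct_only(response, key))     # 2
    print(score_correct_minus_incorrect(response, key))  # 1
```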
Peer reviewed
Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.; Park, Ryoungsun – Educational and Psychological Measurement, 2012
This study compared various panel designs of the multistage test (MST) using mixed-format tests in the context of classification testing. Simulations varied the design of the first-stage module. The first stage was constructed according to three levels of test information functions (TIFs) with three different TIF centers. Additional computerized…
Descriptors: Test Format, Comparative Analysis, Computer Assisted Testing, Adaptive Testing
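For readers unfamiliar with the terminology, a test information function (TIF) under a 2PL IRT model is simply the sum of item information values, and a "TIF center" is where that curve peaks on the ability scale. The sketch below uses made-up item parameters and is illustrative only; it is not taken from Kim et al.

```python
# Hypothetical sketch of a module-level test information function (TIF)
# under a 2PL IRT model; Kim et al. (2012) vary TIF height and center
# when assembling first-stage MST modules.
import math

def item_information(theta, a, b):
    """2PL item information: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def module_tif(theta, items):
    """Module TIF is the sum of the item information functions."""
    return sum(item_information(theta, a, b) for a, b in items)

# Two illustrative first-stage modules: similar items shifted so their
# TIFs peak at different centers (roughly theta = 0.0 vs. theta = 1.0).
module_centered_at_0 = [(1.2, -0.5), (1.0, 0.0), (1.4, 0.5)]
module_centered_at_1 = [(1.2, 0.5), (1.0, 1.0), (1.4, 1.5)]

for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(module_tif(theta, module_centered_at_0), 3),
          round(module_tif(theta, module_centered_at_1), 3))
```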
Peer reviewed
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis
Peer reviewed
Vispoel, Walter P.; Boo, Jaeyool; Bleiler, Timothy – Educational and Psychological Measurement, 2001
Evaluated the characteristics of computerized and paper-and-pencil versions of the Rosenberg Self-Esteem Scale (SES) using scores for 224 college students. Results show that mode of administration has little effect on the psychometric properties of the SES, although the computerized version took longer and was preferred by examinees. (SLD)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Psychometrics
Peer reviewed
Goldberg, Amie L.; Pedulla, Joseph J. – Educational and Psychological Measurement, 2002
Studied the relationship between test mode (paper and pencil or computerized with and without editorial control) and computer familiarity for 222 undergraduates. Results emphasize the importance of evaluating time constraints when converting exams from paper to computer delivery. (SLD)
Descriptors: Computer Assisted Testing, Computer Literacy, Higher Education, Test Construction
Peer reviewed
Sukigara, Masune – Educational and Psychological Measurement, 1996
The New Japanese version of the Minnesota Multiphasic Personality Inventory (MMPI) was administered twice to 200 Japanese female college students to verify the equivalence of the computer- and booklet-administered formats. For four scales, scores from the computer version were statistically significantly higher than those from the booklet…
Descriptors: College Students, Computer Assisted Testing, Females, Foreign Countries
Peer reviewed
Millstein, Susan G. – Educational and Psychological Measurement, 1987
This study examined response bias in 108 female adolescents randomly assigned to one of three groups: (1) interactive computer interview; (2) face-to-face interview; or (3) self-administered questionnaire. Results showed no significant group differences on reports of sexual behavior, substance use, or symptomatology. (Author/BS)
Descriptors: Adolescents, Affective Behavior, Comparative Testing, Computer Assisted Testing
Peer reviewed
Ponsoda, Vicente; And Others – Educational and Psychological Measurement, 1997
A study involving 209 Spanish high school students compared computer-based English vocabulary tests: (1) a self-adapted test (SAT); (2) a computerized adaptive test (CAT); (3) a conventional test; and (4) a test combining SAT and CAT. No statistically significant differences were found among test types for estimated ability or posttest anxiety.…
Descriptors: Ability, Adaptive Testing, Anxiety, Comparative Analysis
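As background to the CAT condition in this study: a computerized adaptive test repeatedly re-estimates ability and then administers the most informative remaining item. The following is a minimal, hypothetical sketch under a Rasch model with a grid-based EAP estimator; it does not reproduce Ponsoda et al.'s item pool or algorithm.

```python
# Hypothetical sketch of a computerized adaptive test (CAT) loop, one of the
# conditions compared by Ponsoda et al. (1997). Items follow a Rasch model;
# ability is re-estimated after each response and the next item is the one
# with the largest information at the current ability estimate.
import math

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_estimate(responses):
    """Grid-based EAP ability estimate under a standard-normal prior."""
    grid = [x / 10.0 for x in range(-40, 41)]
    posts = []
    for theta in grid:
        like = math.exp(-0.5 * theta * theta)      # unnormalized prior
        for b, u in responses:
            p = p_correct(theta, b)
            like *= p if u == 1 else 1.0 - p
        posts.append(like)
    total = sum(posts)
    return sum(t * w for t, w in zip(grid, posts)) / total

def next_item(theta, pool, used):
    """Pick the unused item whose Rasch information p*(1-p) is largest."""
    def info(b):
        p = p_correct(theta, b)
        return p * (1.0 - p)
    return max((i for i in range(len(pool)) if i not in used),
               key=lambda i: info(pool[i]))

pool = [-1.5, -0.5, 0.0, 0.5, 1.5]    # made-up Rasch difficulties
responses, used, theta = [], set(), 0.0
for _ in range(3):                    # administer three items
    i = next_item(theta, pool, used)
    used.add(i)
    u = 1 if pool[i] <= theta else 0  # toy examinee: gets easy items right
    responses.append((pool[i], u))
    theta = eap_estimate(responses)
print(round(theta, 2))
```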
Peer reviewed
Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
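The score-conversion idea mentioned in this abstract can be made concrete: once both RPM forms are calibrated on a common Rasch scale, a raw score on one form maps to an ability estimate, which in turn maps to an expected raw score on the other form. The sketch below uses made-up item difficulties and only illustrates that logic; it is not the calibration reported by Styles and Andrich.

```python
# Hypothetical sketch of Rasch-based score conversion between two forms
# calibrated on a common scale. Item difficulties are invented for
# illustration and are not from Styles and Andrich (1993).
import math

def expected_raw_score(theta, difficulties):
    """Expected number-correct on a form, given ability theta."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)

def theta_for_raw_score(raw, difficulties, lo=-6.0, hi=6.0):
    """Invert the (increasing) expected-score curve by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_raw_score(mid, difficulties) < raw:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

standard_form = [-2.0, -1.0, 0.0, 0.5, 1.0]   # easier form (made-up values)
advanced_form = [0.0, 0.5, 1.0, 1.5, 2.5]     # harder form (made-up values)

theta = theta_for_raw_score(4, standard_form)              # ability for 4/5 on standard
print(round(expected_raw_score(theta, advanced_form), 2))  # equivalent score on advanced
```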