Showing all 12 results
Peer reviewed
Wyse, Adam E. – Educational and Psychological Measurement, 2021
An essential question when computing test-retest and alternate forms reliability coefficients is how many days there should be between tests. This article uses data from reading and math computerized adaptive tests to explore how the number of days between tests impacts alternate forms reliability coefficients. Results suggest that the highest…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Reliability, Reading Tests
Peer reviewed
Davison, Mark L.; Semmes, Robert; Huang, Lan; Close, Catherine N. – Educational and Psychological Measurement, 2012
Data from 181 college students were used to assess whether math reasoning item response times in computerized testing can provide valid and reliable measures of a speed dimension. The alternate forms reliability of the speed dimension was .85. A two-dimensional structural equation model suggests that the speed dimension is related to the accuracy…
Descriptors: Computer Assisted Testing, Reaction Time, Reliability, Validity
Peer reviewed
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Peer reviewed
Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on the distribution, accuracy, and meaning of raw scores. A random sample of high school students took counterbalanced paper-and-pencil and computer-based administrations of the test and answered a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing
Peer reviewed
Luecht, Richard M. – Educational and Psychological Measurement, 1987
Test Pac, a test scoring and analysis computer program for moderate-sized sample designs using dichotomous response items, performs comprehensive item analyses and multiple reliability estimates. It also performs single-facet generalizability analysis of variance, single-parameter item response theory analyses, test score reporting, and computer…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Item Analysis
Peer reviewed
Krus, David J.; Ceurvorst, Robert W. – Educational and Psychological Measurement, 1978
An algorithm for updating the means and variances of a norm group after each computer-assisted administration of a test is described. The algorithm does not require storage of the whole data set and provides for unlimited, continuous expansion of the test norms. (Author)
Descriptors: Computer Assisted Testing, Computer Programs, Norms, Statistical Data
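The abstract above does not give the update rule itself; a standard way to maintain a running mean and variance without storing the whole data set is Welford's online algorithm, sketched below. The class name and score values are invented for illustration and are not from the Krus and Ceurvorst article.

```python
# Sketch of an online norm update in the spirit described above:
# Welford's algorithm keeps a running mean and variance without
# retaining individual scores, so the norm group can grow without
# bound after each computer-assisted test administration.

class RunningNorms:
    def __init__(self):
        self.n = 0        # number of examinees seen so far
        self.mean = 0.0   # running mean of test scores
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, score):
        """Fold one new test score into the norms."""
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    @property
    def variance(self):
        """Unbiased sample variance of all scores seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

norms = RunningNorms()
for score in [12, 15, 11, 14, 18]:   # hypothetical raw scores
    norms.update(score)
```

Only three numbers (count, mean, sum of squared deviations) are stored, matching the abstract's claim that the whole data set need not be kept.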
Peer reviewed
Rae, Gordon – Educational and Psychological Measurement, 1991
A brief overview is provided of the Conger-Lipshitz approach to estimating the reliability of a profile or test battery. A computational example from a recent study shows how canonical reliability can be obtained through existing statistical software. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Correlation, Equations (Mathematics)
Peer reviewed
Brooks, Sarah; Hartz, Mary A. – Educational and Psychological Measurement, 1978
The predictive ability of a mathematics test organized into a branching test for computer-interactive administration was investigated. Twenty-five blocks of five items were used in the branching. Each testee took 25 items, with each subsequent block being determined by prior performance. Results supported the branching technique. (JKS)
Descriptors: Achievement Tests, Branching, College Mathematics, Computer Assisted Testing
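The branching design described above, where each five-item block routes the examinee to the next block based on prior performance, can be sketched as follows. The routing thresholds, difficulty levels, and block scores here are invented for illustration; the article's actual routing rules are not given in the abstract.

```python
# Hypothetical sketch of block-level branching: item blocks are ordered
# by difficulty, and the score on the current five-item block decides
# whether the next block is harder, easier, or at the same level.

def next_block(current_level, block_score, n_levels=5):
    """Pick the difficulty level of the next block from the current score."""
    if block_score >= 4:              # 4-5 of 5 correct: branch harder
        return min(current_level + 1, n_levels - 1)
    if block_score <= 1:              # 0-1 of 5 correct: branch easier
        return max(current_level - 1, 0)
    return current_level              # otherwise stay at this level

path = []
level = 2                             # start at a middle-difficulty block
for score in [5, 4, 2, 1, 3]:         # one score per administered block
    path.append(level)
    level = next_block(level, score)
```

Each examinee still answers a fixed number of items, but the sequence of blocks adapts, which is the feature the study found supported.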
Peer reviewed
Hamer, Robert; Young, Forrest W. – Educational and Psychological Measurement, 1978
TESTER, a computer program which produces individualized objective tests from a pool of items, is described. Available in both PL/1 and FORTRAN, TESTER may be executed either interactively or in batch. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Individualized Instruction
Peer reviewed
Davis, Caroline; Cowles, Michael – Educational and Psychological Measurement, 1989
Computerized and paper-and-pencil versions of four standard personality inventories administered to 147 undergraduates were compared for: (1) test-retest reliability; (2) scores; (3) trait anxiety; (4) interaction between method and social desirability; and (5) preferences concerning method of testing. Doubts concerning the efficacy of…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Personality Measures
Peer reviewed
Scherbaum, Charles A.; Cohen-Charash, Yochi; Kern, Michael J. – Educational and Psychological Measurement, 2006
General self-efficacy (GSE), individuals' belief in their ability to perform well in a variety of situations, has been the subject of increasing research attention. However, the psychometric properties (e.g., reliability, validity) associated with the scores on GSE measures have been criticized, which has hindered efforts to further establish the…
Descriptors: Self Efficacy, Measures (Individuals), Psychometrics, Reliability
Peer reviewed
Nietfeld, John L.; Enders, Craig K.; Schraw, Gregory – Educational and Psychological Measurement, 2006
Researchers studying monitoring accuracy currently use two different indexes to estimate accuracy: relative accuracy and absolute accuracy. The authors used Monte Carlo procedures to compare the distributional properties of two measures of monitoring accuracy that fit within these categories. They manipulated the accuracy of judgments (i.e., chance…
Descriptors: Monte Carlo Methods, Test Items, Computation, Metacognition