| Publication Date | Records |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 240 |
| Since 2022 (last 5 years) | 1373 |
| Since 2017 (last 10 years) | 2831 |
| Since 2007 (last 20 years) | 4821 |
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 7218 |
| Foreign Countries | 2054 |
| Test Construction | 1112 |
| Student Evaluation | 1067 |
| Evaluation Methods | 1061 |
| Test Items | 1058 |
| Adaptive Testing | 1053 |
| Educational Technology | 905 |
| Comparative Analysis | 835 |
| Scores | 832 |
| Higher Education | 825 |
| Audience | Records |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 40 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
| Location | Records |
| --- | --- |
| Australia | 170 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 94 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 72 |
| United States | 68 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 5 |
Peer reviewed: Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed: Vispoel, Walter P. – Journal of Educational Measurement, 1998
Compared results from computer-adaptive and self-adaptive tests under conditions in which item review was and was not permitted for 379 college students. Results suggest that, when given the opportunity, most examinees will change answers, but usually only to a small portion of items, resulting in some benefit to the test taker. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education
Peer reviewed: Russell, Michael – Education Policy Analysis Archives, 1999
Examined, for 287 middle school students with different levels of computer skill, the effect of taking open-ended tests on computers versus on paper, using items from the Massachusetts Comprehensive Assessment System and the National Assessment of Educational Progress. Results suggest a large effect of mode of administration on student…
Descriptors: Computer Assisted Testing, Followup Studies, Middle School Students, Middle Schools
Peer reviewed: Chen, Ssu-Kuang; Hou, Liling; Dodd, Barbara G. – Educational and Psychological Measurement, 1998
A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and to compare it with maximum likelihood estimation (MLE). Results show the conditions under which EAP and MLE provide relatively accurate estimation in CAT. (SLD) A minimal numerical sketch of EAP and MLE estimation follows this entry.
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
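The Chen, Hou, and Dodd entry above compares EAP and maximum likelihood trait estimation in CAT. As a rough illustration only (not the authors' simulation design), the sketch below computes both estimates on a quadrature grid in Python; it uses dichotomous 2PL items rather than the partial credit model, and all item parameters and responses are made up.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response (a simplified stand-in for
    the partial credit model used in the study)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(u, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP: posterior mean of theta over a grid, with a standard normal prior."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for ui, ai, bi in zip(u, a, b):
        p = p_correct(grid, ai, bi)
        like *= p ** ui * (1 - p) ** (1 - ui)
    post = prior * like
    post /= post.sum()
    return float((grid * post).sum())

def mle_estimate(u, a, b, grid=np.linspace(-4, 4, 801)):
    """MLE approximated by the grid point with the highest log-likelihood."""
    ll = np.zeros_like(grid)
    for ui, ai, bi in zip(u, a, b):
        p = p_correct(grid, ai, bi)
        ll += ui * np.log(p) + (1 - ui) * np.log(1 - p)
    return float(grid[np.argmax(ll)])

# Hypothetical five-item response pattern
a, b, u = [1.0, 1.2, 0.8, 1.5, 1.0], [-1.0, -0.5, 0.0, 0.5, 1.0], [1, 1, 1, 0, 0]
print(eap_estimate(u, a, b), mle_estimate(u, a, b))
```

With a mixed response pattern like this the two estimates are close; the prior pulls the EAP estimate slightly toward zero, which is one of the behaviors such simulations examine.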
Peer reviewed: Mooney, John – Public Personnel Management, 2002
The experience of a county government illustrates factors to consider in implementing online employment testing for job candidates: (1) selection of the appropriate Internet-based test; (2) passwords, timing, security, and technical difficulties; and (3) provisions for applicants who lack Internet access. (SK)
Descriptors: Computer Assisted Testing, Internet, Job Applicants, Occupational Tests
Peer reviewed: Walker, Cindy M.; Beretvas, S. Natasha; Ackerman, Terry – Applied Measurement in Education, 2001
Conducted a simulation study of differential item functioning (DIF) to compare power and Type I error rates under two conditions: using an examinee's ability estimate as the conditioning variable in the CATSIB program either with or without CATSIB's regression correction. Discusses implications of findings for DIF detection. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Bias
Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
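The two CATSIB entries above rest on the SIBTEST idea of matching reference- and focal-group examinees on estimated ability before comparing their performance on a studied item. The Python sketch below is only a simplified illustration of that matching step; it omits CATSIB's regression correction and everything specific to adaptive administration, and the function and variable names are hypothetical.

```python
import numpy as np

def sib_style_dif(item_score, theta_hat, group, n_strata=10):
    """Crude SIBTEST-style DIF index: a weighted mean difference in
    studied-item scores between reference (group == 0) and focal
    (group == 1) examinees, matched into strata on estimated ability.
    Illustration only; not the CATSIB regression-corrected statistic."""
    item_score = np.asarray(item_score, float)
    theta_hat = np.asarray(theta_hat, float)
    group = np.asarray(group)
    edges = np.quantile(theta_hat, np.linspace(0, 1, n_strata + 1))
    beta_hat, total = 0.0, 0
    for k in range(n_strata):
        upper = theta_hat <= edges[k + 1] if k == n_strata - 1 else theta_hat < edges[k + 1]
        in_stratum = (theta_hat >= edges[k]) & upper
        ref = item_score[in_stratum & (group == 0)]
        foc = item_score[in_stratum & (group == 1)]
        if len(ref) and len(foc):
            n_k = len(ref) + len(foc)
            beta_hat += n_k * (ref.mean() - foc.mean())
            total += n_k
    return beta_hat / total if total else 0.0
```

A value near zero suggests little DIF on the studied item after matching; CATSIB's contribution is keeping this comparison well calibrated when the matching variable is an estimated, rather than true, ability.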
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Hetter and Sympson (1997; 1985) method is a probabilistic item-exposure control technique for computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time-consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
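The van der Linden entry above concerns the iterative simulations used to set Sympson-Hetter exposure-control parameters. The following Python sketch shows the general shape of that iteration under a toy item-selection rule; the "information" values, the r_max target, and the selection noise are assumptions for illustration, not the article's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k, n_examinees=2000):
    """Toy stand-in for a CAT item-selection simulation (one item per examinee).
    Items are ranked by a noisy 'information' value; the top-ranked item is
    selected, then administered with probability k[i]; if withheld, the
    next-ranked item is tried. Returns estimated selection and
    administration probabilities per item."""
    n_items = len(k)
    info = np.linspace(2.0, 0.1, n_items)      # assumed item information, for illustration
    sel, adm = np.zeros(n_items), np.zeros(n_items)
    for _ in range(n_examinees):
        order = np.argsort(-(info + rng.normal(0, 0.3, n_items)))
        for i in order:
            sel[i] += 1
            if rng.random() < k[i]:
                adm[i] += 1
                break
    return sel / n_examinees, adm / n_examinees

def calibrate(r_max=0.25, n_items=20, n_iter=25):
    """Sympson-Hetter-style iteration: after each simulated administration,
    set k_i = min(1, r_max / P(select item i)) so that no item's realized
    administration rate stays above the target r_max."""
    k = np.ones(n_items)
    for _ in range(n_iter):
        p_sel, _ = simulate(k)
        k = np.minimum(1.0, r_max / np.maximum(p_sel, 1e-9))
    return k

print(np.round(calibrate(), 2))
```

In each pass, items selected more often than the target have their control parameter reduced, driving their administration rate down toward r_max, while rarely selected items keep k = 1; the repeated full simulations are what make the calibration time-consuming.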
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and a computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
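The Puhan, Boughton, and Kim entry above uses Cohen's d as a test-level comparability measure. For reference, here is a minimal Python version of the standard pooled-standard-deviation formula; the score vectors are invented, not from the study.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical scores from the two administration modes
ppt_scores = [24, 27, 22, 30, 26, 25]
cbt_scores = [23, 28, 21, 29, 24, 24]
print(round(cohens_d(ppt_scores, cbt_scores), 3))
```

A d near zero at the test level, together with few DIF flags at the item level, is the pattern that would support treating the two modes as comparable.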
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego – Journal of Educational Measurement, 2007
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Descriptors: Inferences, Models, Item Response Theory, Cognitive Measurement
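The Almond et al. entry above describes Bayesian networks as inference engines for skills-diagnostic assessment. As a deliberately tiny, hedged illustration of the kind of inference involved (far simpler than the models in the paper), the sketch below uses one binary skill node, three item nodes, and made-up conditional probabilities.

```python
# Minimal two-layer Bayesian network: one binary skill, three binary items.
# All probabilities below are invented for illustration.
P_MASTER = 0.5                      # prior P(skill mastered)
P_CORRECT = {                       # P(correct | skill) as (non-master, master)
    "item1": (0.20, 0.90),
    "item2": (0.25, 0.85),
    "item3": (0.10, 0.95),
}

def posterior_mastery(responses):
    """P(skill mastered | observed responses) by direct enumeration (Bayes rule)."""
    like = {0: 1.0, 1: 1.0}
    for item, u in responses.items():
        for s in (0, 1):
            p = P_CORRECT[item][s]
            like[s] *= p if u == 1 else (1 - p)
    numerator = P_MASTER * like[1]
    return numerator / (numerator + (1 - P_MASTER) * like[0])

print(round(posterior_mastery({"item1": 1, "item2": 1, "item3": 0}), 3))
```

Real diagnostic networks have many interdependent skill nodes, so exact enumeration gives way to propagation algorithms, but the update applied per observed response is this same Bayes-rule step.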
Brown, Richard S.; Villarreal, Julio C. – International Journal of Testing, 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Descriptors: Psychometrics, Test Bias, Testing, Simulation
Chaney, Elizabeth; Gilman, David Alan – Computers in the Schools, 2005
This paper reviews the history of technology and testing. The role and functions of computers in education have become more varied, from drill and practice to simple tutorials to WebQuests. However, one important aspect of teaching for which the computer is ideally suited, achievement testing, is often overlooked. While it is not difficult to…
Descriptors: Computer Assisted Testing
Oster-Levinz, Anat; Klieger, Aviva – Turkish Online Journal of Distance Education, 2010
In the Information Communication Technology era, teachers will have to use the online environment wisely in order to realize a new pedagogy. The penetration of the internet and collaborative online instruments into teaching and learning affects the quality of teaching. We have developed a digital indicator to evaluate the quality of the online tasks…
Descriptors: Pedagogical Content Knowledge, Technological Literacy, Online Courses, Task Analysis
Winter, Phoebe C., Ed. – Council of Chief State School Officers, 2010
In 2006, a consortium of state departments of education, led by the North Carolina Department of Public Instruction and the Council of Chief State School Officers, was awarded a grant from the U.S. Department of Education to investigate methods of determining comparability of variations of states' assessments used to meet the requirements of the…
Descriptors: Achievement Tests, Alternative Assessment, Spanish, Linguistics
Crisp, Geoffrey – Journal of Learning Design, 2010
This paper will explore some of the practical options that are available to teachers as we move towards Assessment 2.0. Assessment 2.0 describes an environment in which the teacher sets tasks that allow students to use more dynamic, immersive and interactive environments for exploring and creating responses to sophisticated assessment tasks.…
Descriptors: Educational Assessment, Student Evaluation, Thinking Skills, Interaction

