Publication Date

| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 4 |
Author

| Author | Count |
| --- | --- |
| Shermis, Mark D. | 13 |
| Averitt, Jason | 2 |
| Brown, Mike | 1 |
| Bublitz, Scott T. | 1 |
| DiVesta, Francis J. | 1 |
| Kieftenbeld, Vincent | 1 |
| Lillig, Clo | 1 |
| Lombard, Danielle | 1 |
| Lottridge, Sue | 1 |
| Mao, Liyang | 1 |
| Mayfield, Elijah | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 9 |
| Reports - Research | 7 |
| Reports - Descriptive | 5 |
| Speeches/Meeting Papers | 2 |
| Books | 1 |
| Guides - Non-Classroom | 1 |
Education Level

| Education Level | Count |
| --- | --- |
| Higher Education | 2 |
| Elementary Secondary Education | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Laws, Policies, & Programs

| Law, Policy, or Program | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
Assessments and Surveys

| Assessment or Survey | Count |
| --- | --- |
| Computer Anxiety Scale | 1 |
| Myers Briggs Type Indicator | 1 |
| National Assessment of… | 1 |
| Test Anxiety Inventory | 1 |
Shermis, Mark D.; Lottridge, Sue; Mayfield, Elijah – Journal of Educational Measurement, 2015
This study investigated the impact of anonymizing text on predicted scores made by two kinds of automated scoring engines: one that incorporates elements of natural language processing (NLP) and one that does not. Eight data sets (N = 22,029) were used to form both training and test sets in which the scoring engines had access to both text and…
Descriptors: Scoring, Essays, Computer Assisted Testing, Natural Language Processing
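The anonymization step examined in the study above can be pictured with a minimal sketch: mask identifying strings in an essay before the text reaches a scoring engine. The placeholder labels and the example essay below are hypothetical illustrations, not the procedure or the engines used by the authors.

```python
import re

def anonymize(text: str, replacements: dict[str, str]) -> str:
    """Mask identifying strings before the text reaches a scoring engine."""
    for target, placeholder in replacements.items():
        # Word boundaries keep substrings inside other words intact.
        text = re.sub(rf"\b{re.escape(target)}\b", placeholder, text)
    return text

essay = "My teacher Mrs. Alvarez says Springfield is growing fast."
masked = anonymize(essay, {"Alvarez": "TEACHER_NAME", "Springfield": "CITY_NAME"})
print(masked)  # My teacher Mrs. TEACHER_NAME says CITY_NAME is growing fast.
```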
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
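The notion of a "linguistic profile" can be illustrated with a small sketch that summarizes an item prompt using a few surface features; the feature choices and the example prompt are assumptions for illustration, not the feature sets of the two scoring engines in the study.

```python
from dataclasses import dataclass

@dataclass
class LinguisticProfile:
    word_count: int
    avg_word_length: float
    type_token_ratio: float  # lexical diversity: unique tokens / total tokens

def profile(text: str) -> LinguisticProfile:
    """Summarize an item prompt with a few surface linguistic features."""
    tokens = text.lower().split()
    return LinguisticProfile(
        word_count=len(tokens),
        avg_word_length=sum(len(t) for t in tokens) / len(tokens),
        type_token_ratio=len(set(tokens)) / len(tokens),
    )

item_prompt = ("Explain how the author uses evidence from the passage "
               "to support the claim about renewable energy.")
print(profile(item_prompt))
```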
Shermis, Mark D. – Assessment Update, 2008
This article describes the Collegiate Learning Assessment (CLA), a postsecondary assessment tool designed to evaluate the "value-added" component of institutional contributions to student learning outcomes. Developed by the Council for Aid to Education (CAE), the instrument ostensibly focuses on the contributions of general education coursework…
Descriptors: Postsecondary Education, Exit Examinations, Majors (Students), Student Evaluation
Shermis, Mark D.; Averitt, Jason – 2001
The purpose of this paper is to enumerate a series of security steps that might be taken by those researchers or organizations that are contemplating Web-based tests and performance assessments. From a security viewpoint, much of what goes on with Web-based transactions is similar to other general computer activity, but the recommendations here…
Descriptors: Computer Assisted Testing, Computer Security, Performance Based Assessment, Testing Problems
Shermis, Mark D.; Averitt, Jason – Educational Measurement: Issues and Practice, 2002 (peer reviewed)
Outlines a series of security steps that might be taken by researchers or organizations that are contemplating Web-based tests and performance assessments. Focuses on what can be done to avoid the loss, compromising, or modification of data collected by or stored through the Internet. (SLD)
Descriptors: Computer Assisted Testing, Data Collection, Performance Based Assessment, Test Construction
Shermis, Mark D.; Mzumara, Howard; Brown, Mike; Lillig, Clo – 1997
An important problem facing institutions of higher education is the number of students reporting that they are not adequately prepared for the difficulty of college-level courses. To address this problem, a computerized adaptive testing package was developed that permitted remote placement testing of high school students via the World Wide Web. The…
Descriptors: Adaptive Testing, Adolescents, Computer Assisted Testing, High Schools
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Shermis, Mark D.; And Others – Journal of Developmental Education, 1996 (peer reviewed)
Describes a study to pilot-test a new reading assessment instrument designed to function in a computerized adaptive testing (CAT) environment. Indicates that the measure showed fair internal consistency and correlated well with other tests. Discusses advantages and disadvantages of CAT systems and describes the HyperCAT testing program. (23…
Descriptors: Computer Assisted Testing, Diagnostic Tests, Higher Education, Pilot Projects
Shermis, Mark D.; Lombard, Danielle – Computers in Human Behavior, 1998 (peer reviewed)
Examines the degree to which computer and test anxiety have a predictive role in performance across three computer-administered placement tests. Subjects (72 undergraduate students) were measured with the Computer Anxiety Rating Scale, the Test Anxiety Inventory, and the Myers-Briggs Type Indicator. Results suggest that much of what is considered…
Descriptors: Computer Anxiety, Computer Assisted Testing, Computer Attitudes, Computer Literacy
Shermis, Mark D.; Mzumara, Howard R.; Bublitz, Scott T. – Journal of Educational Computing Research, 2001 (peer reviewed)
This study of undergraduates examined differences between computer adaptive testing (CAT) and self-adaptive testing (SAT), including feedback conditions and gender differences. Results of the Test Anxiety Inventory, Computer Anxiety Rating Scale, and a Student Attitude Questionnaire showed measurement efficiency is differentially affected by test…
Descriptors: Adaptive Testing, Computer Anxiety, Computer Assisted Testing, Gender Issues
Shermis, Mark D.; And Others – Roeper Review, 1996 (peer reviewed)
Three study cohorts involving 199 gifted fifth graders, 190 gifted sixth graders, and 683 typical sixth graders were used to construct, validate, and pilot a computerized adaptive math test for placing fifth graders in a middle school mathematics gifted program. Results suggest the adaptive test has potential for talent identification. (Author/CR)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Evaluation Methods
Shermis, Mark D.; And Others – 1992
The reliability of four branching algorithms commonly used in computer adaptive testing (CAT) was examined. These algorithms were: (1) maximum likelihood (MLE); (2) Bayesian; (3) modal Bayesian; and (4) crossover. Sixty-eight undergraduate college students were randomly assigned to one of the four conditions using the HyperCard-based CAT program,…
Descriptors: Adaptive Testing, Algorithms, Bayesian Statistics, Comparative Analysis
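As a rough illustration of the maximum likelihood branching condition named in the entry above, the sketch below estimates ability under a Rasch (1PL) model with Newton-Raphson updates and then branches to the unadministered item whose difficulty is closest to the estimate. The item pool, difficulties, and response pattern are invented for the example; this is not the HyperCard-based program used in the study.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses: list[int], difficulties: list[float],
              theta: float = 0.0, iters: int = 20) -> float:
    """Newton-Raphson maximum likelihood estimate of ability.

    Assumes a mixed response pattern; all-correct or all-incorrect
    patterns have no finite MLE and need special handling in real CAT code.
    """
    for _ in range(iters):
        grad = sum(u - rasch_p(theta, b) for u, b in zip(responses, difficulties))
        info = sum(rasch_p(theta, b) * (1.0 - rasch_p(theta, b)) for b in difficulties)
        if info == 0.0:
            break
        theta += grad / info
    return theta

# Three administered items (difficulties -1.0, 0.0, 1.0) answered right, right, wrong:
theta_hat = mle_theta([1, 1, 0], [-1.0, 0.0, 1.0])

# Branch to the unused item whose difficulty is closest to the current estimate.
pool = {"item4": 0.5, "item5": 1.5, "item6": -0.5}
next_item = min(pool, key=lambda k: abs(pool[k] - theta_hat))
print(round(theta_hat, 2), next_item)
```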
Shermis, Mark D.; And Others – Journal of Research on Computing in Education, 1996 (peer reviewed)
Describes a pilot study of computerized adaptive testing in the Michigan Educational Assessment Program's tenth-grade mathematics problem-solving and applications subtests. Comparisons are made to pencil-and-paper tests, and student reactions as determined by a posttest survey are discussed. (LRW)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Grade 10

