Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 217 |
| Since 2022 (last 5 years) | 1347 |
| Since 2017 (last 10 years) | 2805 |
| Since 2007 (last 20 years) | 4795 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 39 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
Location
| Location | Results |
| --- | --- |
| Australia | 169 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 93 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 71 |
| United States | 68 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 5 |
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least squares method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
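The Al-A'ali entry above refers to a least squares method used alongside item response theory. The paper's exact formulation is not reproduced here, so the following is only a minimal sketch, under assumed Rasch-model item difficulties and a simple bounded search, of estimating an examinee's ability by minimizing squared residuals between observed responses and model-predicted probabilities.

```python
# Hypothetical illustration: least-squares ability estimate under a Rasch (1PL) model.
# Item difficulties and responses below are made up for demonstration.
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_prob(theta, b):
    """Probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def least_squares_ability(responses, difficulties):
    """Return the theta that minimizes the sum of squared residuals
    between observed 0/1 responses and model-predicted probabilities."""
    responses = np.asarray(responses, dtype=float)
    difficulties = np.asarray(difficulties, dtype=float)

    def loss(theta):
        return np.sum((responses - rasch_prob(theta, difficulties)) ** 2)

    result = minimize_scalar(loss, bounds=(-4.0, 4.0), method="bounded")
    return result.x

if __name__ == "__main__":
    item_difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]   # assumed, in logits
    observed = [1, 1, 1, 0, 0]                        # one examinee's responses
    print(f"Estimated ability: {least_squares_ability(observed, item_difficulties):.2f}")
```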
Marzano, Robert J. – Association for Supervision and Curriculum Development, 2006
If you've ever questioned the logic of reducing a student's entire academic performance to a single test score or a vague letter grade, then here's a book that will revolutionize the way you think about assessment and grading. Drawing from years of in-depth research, Robert J. Marzano provides you with guidelines and steps for designing a…
Descriptors: Feedback (Response), Report Cards, Academic Achievement, Computer Software
Chuang, San-hui; O'Neil, Harold F. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2006
Collaborative problem solving and collaborative skills are considered necessary skills for success in today's world. Collaborative problem solving is defined as problem solving activities that involve interactions among a group of individuals. Large-scale and small-scale assessment programs increasingly use collaborative group tasks in which…
Descriptors: Problem Solving, Feedback, Concept Mapping, Cooperative Learning
Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan – Journal of Educational Measurement, 2006
Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…
Descriptors: Evaluation Methods, Test Bias, Computer Assisted Testing, Multiple Regression Analysis
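As a companion to the Lei, Chen, and Yu entry, the sketch below illustrates the standard logistic regression DIF screen rather than the authors' CAT-specific adaptation: with simulated data, nested logistic models are compared by likelihood ratio, where adding group membership tests uniform DIF and adding the score-by-group interaction tests non-uniform DIF.

```python
# Minimal sketch of logistic-regression DIF screening for a single item.
# Data are simulated; in practice the matching variable is usually the total test score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)               # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)                 # latent ability
total = theta + rng.normal(0, 0.3, n)       # proxy for observed total score

# Simulate an item with some uniform DIF against the focal group.
logit = 1.2 * theta - 0.4 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Nested models: (1) score only, (2) + group (uniform DIF), (3) + score*group (non-uniform DIF).
X1 = sm.add_constant(np.column_stack([total]))
X2 = sm.add_constant(np.column_stack([total, group]))
X3 = sm.add_constant(np.column_stack([total, group, total * group]))

ll1 = sm.Logit(item, X1).fit(disp=0).llf
ll2 = sm.Logit(item, X2).fit(disp=0).llf
ll3 = sm.Logit(item, X3).fit(disp=0).llf

# Likelihood-ratio statistics (each compared to a chi-square with 1 df).
print("Uniform DIF LR statistic:     ", 2 * (ll2 - ll1))
print("Non-uniform DIF LR statistic: ", 2 * (ll3 - ll2))
```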
Wise, Steven L. – Applied Measurement in Education, 2006
In low-stakes testing, the motivation levels of examinees are often a matter of concern to test givers because a lack of examinee effort represents a direct threat to the validity of the test data. This study investigated the use of response time to assess the amount of examinee effort received by individual test items. In 2 studies, it was found…
Descriptors: Computer Assisted Testing, Motivation, Test Validity, Item Response Theory
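The response-time measure in the Wise entry rests on separating rapid guessing from solution behavior. The sketch below is an assumed, generic version of a response-time effort index; the threshold rule used here (a fixed fraction of each item's mean response time) is one simple option among several discussed in this literature.

```python
# Sketch of a response-time effort (RTE) style index: the share of items on which an
# examinee's response time exceeds an item-specific rapid-guessing threshold.
import numpy as np

def response_time_effort(rt_matrix, threshold_fraction=0.10):
    """rt_matrix: examinees x items response times in seconds.
    Returns one effort index per examinee, in [0, 1]."""
    rt_matrix = np.asarray(rt_matrix, dtype=float)
    thresholds = threshold_fraction * rt_matrix.mean(axis=0)   # per-item threshold
    solution_behavior = rt_matrix > thresholds                 # True = effortful response
    return solution_behavior.mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    times = rng.gamma(shape=4.0, scale=8.0, size=(5, 20))      # simulated seconds
    times[0, :10] = 1.5                                        # one rapid-guessing examinee
    print(np.round(response_time_effort(times), 2))
```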
Villano, Matt – Campus Technology, 2006
Across the U.S., a growing number of schools are turning to ePortfolio assessment technologies to help them monitor and evaluate student progress in a variety of disciplines--and to help them and their students do even more. Across the board, educators report that their ePortfolio efforts have revolutionized the learning process, and the…
Descriptors: Portfolios (Background Materials), Portfolio Assessment, Student Evaluation, Computer Assisted Testing
Thin, Alasdair G. – Bioscience Education e-Journal, 2006
It is not what is taught that has the most influence on students' study behaviour, but rather what is assessed. Computer-assisted assessment offers the possibility of widening the scope of the material that is assessed, without placing excessive burdens on either staff or students. This article describes a computer-assisted assessment scheme…
Descriptors: Physiology, Anatomy, Teaching Methods, Computer Assisted Testing
Ketterlin-Geller, Leanne R.; McCoy, Jan D.; Twyman, Todd; Tindal, Gerald – Assessment for Effective Intervention, 2006
Curriculum-based measurement is a system for monitoring students' progress and formatively evaluating instruction backed by 25 years of validation research. Most of this research has been conducted in elementary schools. In middle and high school classrooms, where there is an emphasis on mastering content knowledge, elementary-level measurements…
Descriptors: Curriculum Based Assessment, Academic Achievement, Cloze Procedure, Program Validation
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
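A quick way to see what the Kobrin, Deng, and Shaw study is probing is to check how much score variance essay length alone explains. The snippet below does this on invented data; only the analysis pattern is of interest, not the numbers, which are not the study's results.

```python
# Simulated illustration of the length-score relationship examined in the entry above.
import numpy as np

rng = np.random.default_rng(2)
n = 500
length = np.clip(rng.normal(350, 90, n), 80, 700)                        # essay length in words
score = np.clip(np.round(1 + length / 120 + rng.normal(0, 1, n)), 1, 6)  # 1-6 holistic score

r = np.corrcoef(length, score)[0, 1]
print(f"length-score correlation: {r:.2f}")
print(f"variance explained by length alone: {r**2:.2f}")
```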
Wainer, Howard; And Others – 1991
When an examination consists, in whole or in part, of constructed response items, it is a common practice to allow the examinee to choose among a variety of questions. This procedure is usually adopted so that the limited number of items that can be completed in the allotted time does not unfairly affect the examinee. This results in the de facto…
Descriptors: Adaptive Testing, Chemistry, Comparative Analysis, Computer Assisted Testing
Gibbs, William J.; Lario-Gibbs, Annette M. – 1995
This paper discusses a computer-based prototype called TestMaker that enables educators to create computer-based tests. Given the functional needs of faculty, the host of research implications computer technology has for assessment, and current educational perspectives such as constructivism and their impact on testing, the purposes for developing…
Descriptors: College Faculty, Computer Assisted Testing, Computer Software, Computer Uses in Education
Bergstrom, Betty; And Others – 1994
Response times from a computerized adaptive certification examination taken by 204 examinees were analyzed using a hierarchical linear model. Two equations were posed: a within-person model and a between-person model. Variance within persons was eight times greater than variance between persons. Several variables…
Descriptors: Adaptive Testing, Adults, Certification, Computer Assisted Testing
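The within-person versus between-person variance decomposition mentioned in the Bergstrom entry can be illustrated with a simple random-intercept model. The sketch below uses simulated log response times and statsmodels' MixedLM, which is only an assumed stand-in for the original hierarchical linear model specification.

```python
# Minimal two-level model for item response times (simulated data): a random intercept
# per examinee separates between-person variance from within-person (residual) variance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_persons, n_items = 50, 30
person_effect = rng.normal(0, 0.2, n_persons)          # between-person differences
rows = []
for p in range(n_persons):
    for _ in range(n_items):
        log_rt = 3.0 + person_effect[p] + rng.normal(0, 0.6)   # within-person noise
        rows.append({"person": p, "log_rt": log_rt})
data = pd.DataFrame(rows)

model = smf.mixedlm("log_rt ~ 1", data, groups=data["person"]).fit()
between_var = model.cov_re.iloc[0, 0]   # variance of the random intercept
within_var = model.scale                # residual (within-person) variance
print(f"Between-person variance: {between_var:.3f}")
print(f"Within-person variance:  {within_var:.3f}")
```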
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based (CBT) and computer adaptive (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is considerable interest at Educational Testing Service (ETS) in including performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing
