| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 59 |
| Since 2022 (last 5 years) | 385 |
| Since 2017 (last 10 years) | 828 |
| Since 2007 (last 20 years) | 1342 |
| Audience | Results |
| --- | --- |
| Practitioners | 195 |
| Teachers | 161 |
| Researchers | 93 |
| Administrators | 50 |
| Students | 34 |
| Policymakers | 15 |
| Parents | 12 |
| Counselors | 2 |
| Community | 1 |
| Media Staff | 1 |
| Support Staff | 1 |
| Location | Results |
| --- | --- |
| Canada | 62 |
| Turkey | 59 |
| Germany | 40 |
| Australia | 36 |
| United Kingdom | 36 |
| Japan | 35 |
| China | 33 |
| United States | 32 |
| California | 25 |
| Iran | 25 |
| United Kingdom (England) | 25 |
Knudson, Joel; Hannan, Stephanie; O'Day, Jennifer; Castro, Marina – California Collaborative on District Reform, 2015
The Common Core State Standards represent an exciting step forward for California, and for the nation as a whole, in supporting instruction that can better prepare students for college and career success. Concurrent with the transition to the new standards, the Smarter Balanced Assessment Consortium (SBAC), of which California is a governing…
Descriptors: Academic Standards, State Standards, Measurement, Educational Assessment
Jaeger, Martin; Adair, Desmond – European Journal of Engineering Education, 2017
Online quizzes have been shown to be effective learning and assessment approaches. However, if scenario-based online construction safety quizzes do not include time pressure comparable to real-world situations, they present conditions that are unrealistically ideal. The purpose of this paper is to compare engineering students' performance when carrying out an online…
Descriptors: Engineering Education, Quasiexperimental Design, Tests, Academic Achievement
Hoshino, Yuko – Language Testing in Asia, 2013
This study compares the effect of different kinds of distractors on the level of difficulty of multiple-choice (MC) vocabulary tests in sentential contexts. This type of test is widely used in practical testing but it has received little attention so far. Furthermore, although distractors, which represent the unique characteristics of MC tests,…
Descriptors: Vocabulary Development, Comparative Analysis, Difficulty Level, Multiple Choice Tests
Tarun, Prashant; Krueger, Dale – Journal of Learning in Higher Education, 2016
In the United States education system, the use of student evaluations grew from 29% in 1973 to 86% in 1993, which in turn has increased the weight of student evaluations in faculty retention, tenure, and promotion decisions. However, the impact student evaluations have had on student academic development generates complex educational…
Descriptors: Critical Thinking, Teaching Methods, Multiple Choice Tests, Essay Tests
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Ghaderi, Marzieh; Mogholi, Marzieh; Soori, Afshin – International Journal of Education and Literacy Studies, 2014
The subject of testing has many facets and connections. One important issue is how to assess or measure students or learners: what tools to use, in what style, and toward what goal. This paper therefore attends to the style of testing in schools and other educational settings. Since the purposes of educational system…
Descriptors: Testing, Testing Programs, Intermode Differences, Computer Assisted Testing
Thomas, Jason E.; Hornsey, Philip E. – Journal of Instructional Research, 2014
Formative Classroom Assessment Techniques (CAT) have been well-established instructional tools in higher education since their exposition in the late 1980s (Angelo & Cross, 1993). A large body of literature exists surrounding the strengths and weaknesses of formative CATs. Simpson-Beck (2011) suggested insufficient quantitative evidence exists…
Descriptors: Classroom Techniques, Nontraditional Education, Adult Education, Formative Evaluation
Warschausky, Seth; Van Tubbergen, Marie; Asbell, Shana; Kaufman, Jacqueline; Ayyangar, Rita; Donders, Jacobus – Assessment, 2012
This study examined the psychometric properties of test presentation and response formats that were modified to be accessible with the use of assistive technology (AT). First, the stability of psychometric properties was examined in 60 children, ages 6 to 12, with no significant physical or communicative impairments. Population-specific…
Descriptors: Testing, Assistive Technology, Testing Accommodations, Psychometrics
van der Linden, Wim J. – Journal of Educational Measurement, 2011
A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
Descriptors: Test Format, Reaction Time, Test Construction
Ahmadi, Alireza; Sadeghi, Elham – Language Assessment Quarterly, 2016
In the present study we investigated the effect of test format on oral performance in terms of test scores and discourse features (accuracy, fluency, and complexity). Moreover, we explored how the scores obtained on different test formats relate to such features. To this end, 23 Iranian EFL learners participated in three test formats of monologue,…
Descriptors: Oral Language, Comparative Analysis, Language Fluency, Accuracy
Wibowo, Santoso; Grandhi, Srimannarayana; Chugh, Ritesh; Sawir, Erlenawati – Journal of Educational Technology Systems, 2016
This study sought academic staff and students' views of an electronic exams (e-exams) system and the benefits and challenges of e-exams in general. The respondents provided useful feedback for future adoption of e-exams at an Australian university and elsewhere. The key findings show that students and academic staff are optimistic about the…
Descriptors: Pilot Projects, Computer Assisted Testing, Student Attitudes, College Faculty
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N. – Applied Measurement in Education, 2013
Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…
Descriptors: Test Format, Test Items, Item Analysis, Goodness of Fit
Dutke, Stephan; Barenberg, Jonathan – Psychology Learning and Teaching, 2015
We introduce a specific type of item for knowledge tests, confidence-weighted true-false (CTF) items, and review experiences of its application in psychology courses. A CTF item is a statement about the learning content to which students respond whether the statement is true or false, and they rate their confidence level. Previous studies using…
Descriptors: Foreign Countries, College Students, Psychology, Objective Tests
