Publication Date
| Date range | Results |
|---|---|
| In 2026 | 0 |
| Since 2025 | 62 |
| Since 2022 (last 5 years) | 388 |
| Since 2017 (last 10 years) | 831 |
| Since 2007 (last 20 years) | 1345 |
Audience
| Audience | Results |
|---|---|
| Practitioners | 195 |
| Teachers | 161 |
| Researchers | 93 |
| Administrators | 50 |
| Students | 34 |
| Policymakers | 15 |
| Parents | 12 |
| Counselors | 2 |
| Community | 1 |
| Media Staff | 1 |
| Support Staff | 1 |
Location
| Location | Results |
|---|---|
| Canada | 63 |
| Turkey | 59 |
| Germany | 41 |
| United Kingdom | 37 |
| Australia | 36 |
| Japan | 35 |
| China | 33 |
| United States | 32 |
| California | 25 |
| Iran | 25 |
| United Kingdom (England) | 25 |
Liu, Jinghua; Zhu, Xiaowen – ETS Research Report Series, 2008
The purpose of this paper is to explore methods to approximate population invariance without conducting multiple linkings for subpopulations. Under the single group or equivalent groups design, no linking needs to be performed for the parallel-linear system linking functions. The unequated raw score information can be used as an approximation. For…
Descriptors: Raw Scores, Test Format, Comparative Analysis, Test Construction
Whithaus, Carl; Harrison, Scott B.; Midyette, Jeb – Assessing Writing, 2008
This article examines the influence of keyboarding versus handwriting in a high-stakes writing assessment. Conclusions are based on data collected from a pilot project to move Old Dominion University's Exit Exam of Writing Proficiency from a handwritten format into a dual-option format (i.e., the students may choose to handwrite or keyboard the…
Descriptors: Writing Evaluation, Handwriting, Pilot Projects, Writing Tests
Hanson, Bradley A.; Feinstein, Zachary S. – 1995
This paper discusses loglinear models for assessing differential item functioning (DIF). Loglinear and logit models that have been suggested for studying DIF are reviewed, and loglinear formulations of the logit models are given. A polynomial loglinear model for assessing DIF is introduced. Two examples using the polynomial loglinear model for…
Descriptors: Equated Scores, Item Bias, Test Format, Test Items
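For readers unfamiliar with DIF modeling, a common logistic-regression formulation (a generic illustration for context, not necessarily the polynomial loglinear parameterization introduced in this paper) is

\[
\operatorname{logit} P(Y_{ij}=1) \;=\; \beta_0 + \beta_1 X_j + \beta_2 G_j + \beta_3 (X_j G_j),
\]

where \(Y_{ij}\) is examinee \(j\)'s response to item \(i\), \(X_j\) is a matching variable such as total score, and \(G_j\) codes group membership; a nonzero \(\beta_2\) indicates uniform DIF and a nonzero \(\beta_3\) indicates nonuniform DIF.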
Stansfield, Charles W. – 1990
A discussion of the simulated oral proficiency interview (SOPI), a type of semi-direct speaking test that models the format of the oral proficiency interview (OPI), describes its development and research and examines its usefulness. The test used for discussion is a tape-recorded test consisting of six parts, scored by a trained rater using the…
Descriptors: Interviews, Language Proficiency, Language Tests, Simulation
Roe, Andrew G. – Graduating Engineer, 1985
Presents the case for taking the Engineer in Training examination (EIT), also called the Fundamentals of Engineering Examination, and the Graduate Record Examinations (GRE), indicating that they can affect future employment opportunities, career advancement, and post-graduate studies. Includes subject areas tested, test format, and how to prepare…
Descriptors: Engineering, Engineering Education, Higher Education, Test Format
Perry, Devern – Business Education Forum, 1987 (peer reviewed)
Discusses microcomputer-based test types that can measure learning of classroom content: (1) objective examination, (2) hands-on examination, (3) power examination, and (4) take-home examination. Recommends that a combination of power, objective, and take-home examinations achieves representative results with a minimum loss of class time. (CH)
Descriptors: Computer Assisted Testing, Microcomputers, Performance Tests, Test Format
Tanguma, Jesus – 2000
This paper addresses four steps in test construction specification: (1) the purpose of the test; (2) the content of the test; (3) the format of the test; and (4) the pool of items. If followed, such steps not only will assist the test constructor but will also enhance the students' learning. Within the "Content of the Test" section, two…
Descriptors: Test Construction, Test Content, Test Format, Test Items
Hanick, Patricia L.; Huang, Chi-Yu – 2002
The term "equating" refers to a statistical procedure that adjusts test scores on different forms of the same examination so that scores can be interpreted interchangeably. This study examines the impact of equating with fewer items than originally planned when items have been removed from the equating set for a variety of reasons. A…
Descriptors: Equated Scores, Test Format, Test Items, Test Results
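As a point of reference for the equating concept defined above, the following is a minimal sketch of linear (mean-sigma) equating under an equivalent-groups design; the function name, data, and method are illustrative assumptions, not the specific equating procedure examined in this study.

```python
# Illustrative only: linear (mean-sigma) equating under an equivalent-groups design.
# This is NOT the procedure studied above; names and data are hypothetical.
import numpy as np

def linear_equate(form_x_scores: np.ndarray, form_y_scores: np.ndarray, x: float) -> float:
    """Express a raw score x from Form X on the Form Y scale by matching means and SDs."""
    mu_x, sd_x = form_x_scores.mean(), form_x_scores.std(ddof=1)
    mu_y, sd_y = form_y_scores.mean(), form_y_scores.std(ddof=1)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical score distributions for two forms taken by equivalent groups.
rng = np.random.default_rng(0)
form_x = rng.normal(30, 6, size=500).round()
form_y = rng.normal(32, 5, size=500).round()

print(linear_equate(form_x, form_y, 35.0))  # a Form X score of 35 expressed on the Form Y scale
```

The point is only that equating places scores from different forms on a common scale so they can be interpreted interchangeably; the study above asks how robust such adjustments remain when the equating item set is reduced.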
DeMars, Christine E. – Journal of Educational Measurement, 2003 (peer reviewed)
Generated data to simulate multidimensionality resulting from including two or four subtopics on a test. DIMTEST analysis results suggest that including multiple topics, when they are commonly taught together, can lead to conceptual multidimensionality and mathematical multidimensionality. (SLD)
Descriptors: Curriculum, Simulation, Test Construction, Test Format
Harwell, Michael R. – Journal of Counseling and Development, 1988 (peer reviewed)
Discusses several statistical and substantive criteria that can be used to choose between parametric and nonparametric tests. Presents a nonparametric test capable of testing a number of statistical hypotheses using existing computer packages. Provides recommendations encouraging researchers to routinely use nonparametric tests in their data…
Descriptors: Statistical Analysis, Test Format, Test Selection, Test Use
Liou, Michelle; Cheng, Philip E. – Psychometrika, 1995 (peer reviewed)
Different data imputation techniques that are useful for equipercentile equating are discussed, and empirical data are used to evaluate the accuracy of these techniques as compared with chained equipercentile equating. The kernel estimator, the EM algorithm, the EB model, and the iterative moment estimator are considered. (SLD)
Descriptors: Equated Scores, Equations (Mathematics), Estimation (Mathematics), Test Format
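For context, equipercentile equating in its standard textbook form (stated here as background, not as the chained or imputation-based variants compared in the article) maps a Form X score to the Form Y score with the same percentile rank:

\[
e_Y(x) \;=\; F_Y^{-1}\!\bigl(F_X(x)\bigr),
\]

where \(F_X\) and \(F_Y\) are the cumulative score distributions on the two forms; the imputation techniques discussed above bear on how these distributions are estimated when not every examinee takes both forms.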
Holley, Joyce H.; Jenkins, Elizabeth K. – Journal of Education for Business, 1993 (peer reviewed)
The relationship between performance on four test formats (multiple-choice theory, multiple-choice quantitative, open-ended theory, open-ended quantitative) and scores on the Kolb Learning Style Inventory was investigated for 49 accounting students. Learning style was significant for all formats except multiple-choice quantitative. (SK)
Descriptors: Accounting, Cognitive Style, Higher Education, Scores
Hoachlander, E. Gareth – Techniques: Making Education and Career Connections, 1998
Discusses state testing, various types of tests, and whether the increased attention to assessment is contributing to improved student learning. Describes uses of standardized multiple-choice, open-ended constructed response, essay, performance event, and portfolio methods. (JOW)
Descriptors: Academic Achievement, Student Evaluation, Test Format, Test Reliability
Williams, Susan A.; Swanson, Melvin S. – Journal of Continuing Education in Nursing, 2001 (peer reviewed)
Patients with third- to fifth-grade reading ability (n=16) and patients with higher reading ability (n=32) completed nursing care satisfaction questionnaires in either a Likert-scale, yes/no/uncertain, or pictorial format. The yes/no/uncertain and Likert formats elicited the same information. All patients had difficulty with negatively worded items. (Contains 45…
Descriptors: Patients, Questionnaires, Readability, Reading Ability
Netemeyer, Richard G.; Williamson, Donald A.; Burton, Scot; Biswas, Dipayan; Jindal, Supriya; Landreth, Stacy; Mills, Gregory; Primeaux, Sonya – Educational and Psychological Measurement, 2002 (peer reviewed)
Derived shortened versions of the Automatic Thoughts Questionnaire (ATQ) (S. Hollon and P. Kendall, 1980) using samples of 434 and 419 adults. Cross-validation with samples of 163 and 91 adults showed support for the shortened versions. Overall, results suggest that these short forms are useful in measuring cognitions associated with depression.…
Descriptors: Adults, Depression (Psychology), Psychometrics, Test Format

