| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 4 |
| Since 2007 (last 20 years) | 7 |

| Descriptor | Results |
| --- | --- |
| Data Collection | 14 |
| Test Format | 14 |
| Test Construction | 6 |
| Computer Assisted Testing | 4 |
| Equated Scores | 4 |
| High School Students | 3 |
| Questionnaires | 3 |
| Test Items | 3 |
| Achievement Tests | 2 |
| Adults | 2 |
| Foreign Countries | 2 |

| Author | Results |
| --- | --- |
| Arce-Ferrer, Alvaro J. | 1 |
| Baldwin, Peter | 1 |
| Boser, Judith A. | 1 |
| Bulut, Okan | 1 |
| Clauser, Brian E. | 1 |
| Gollwitzer, Mario | 1 |
| Grant, Mary | 1 |
| Haertel, Edward | 1 |
| Hahn-Smith, Stephen | 1 |
| Holmes, Susan E. | 1 |
| Kolen, Michael J. | 1 |

| Publication Type | Results |
| --- | --- |
| Journal Articles | 14 |
| Reports - Research | 9 |
| Reports - Descriptive | 3 |
| Reports - Evaluative | 2 |
| Collected Works - General | 1 |

| Education Level | Results |
| --- | --- |
| High Schools | 3 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Secondary Education | 2 |
| Junior High Schools | 1 |
| Middle Schools | 1 |

| Location | Results |
| --- | --- |
| China | 1 |
| Germany | 1 |
| Mexico | 1 |
| Washington | 1 |

| Assessments and Surveys | Results |
| --- | --- |
| ACT Assessment | 1 |
| Advanced Placement… | 1 |
| College Level Examination… | 1 |
| Digit Span Test | 1 |
| Law School Admission Test | 1 |
| Raven Progressive Matrices | 1 |
| SAT (College Admission Test) | 1 |

Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Provasnik, Stephen – Large-scale Assessments in Education, 2021
This paper presents the concepts and observations in the author's keynote address at the May 2019 "Opportunity versus Challenge: Exploring Usage of Log-File and Process Data in International Large-Scale Assessments" conference in Dublin, Ireland. This paper recaps briefly some key points that emerged at the December 2018 ETS symposium on…
Descriptors: Data Collection, Cognitive Processes, Ethics, Student Evaluation
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Arce-Ferrer, Alvaro J.; Bulut, Okan – Journal of Experimental Education, 2019
This study investigated the performance of four widely used data-collection designs in detecting test-mode effects (i.e., computer-based versus paper-based testing). The experimental conditions included four data-collection designs, two test-administration modes, and the availability of an anchor assessment. The test-level and item-level results…
Descriptors: Data Collection, Test Construction, Test Format, Computer Assisted Testing
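As a point of reference for the kind of comparison such designs support, the sketch below runs a generic mode check under a random-groups setup; the simulated score distributions and the Welch t-test are illustrative assumptions, not the four data-collection designs evaluated in the study.

```python
# Hedged sketch: a generic test-mode check under a random-groups design.
# The score distributions below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
paper = rng.normal(loc=30.0, scale=6.0, size=500)      # paper-based total scores (simulated)
computer = rng.normal(loc=29.2, scale=6.0, size=500)   # computer-based total scores (simulated)

# Welch's t-test on the mode means; a large gap would flag a possible mode effect.
t, p = stats.ttest_ind(computer, paper, equal_var=False)
print(f"mode-effect check: t = {t:.2f}, p = {p:.3f}")
```
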
Raghupathy, Shobana; Hahn-Smith, Stephen – Current Issues in Education, 2013
There has been increasing interest in using web-based surveys--rather than paper-based surveys--for collecting data on alcohol and other drug use in middle and high schools in the US. However, prior research has indicated that respondent confidentiality is an underlying concern with online data collection, especially when computer-assisted…
Descriptors: Intermode Differences, Online Surveys, Alcohol Abuse, Drug Use
Scheu, Ian Edward; Lawrence, Thomas – Journal of Educational Computing Research, 2013
This article details the construction of a computer program to test cognitive processing differences in adolescents engaged in a standard presentation of tests versus a fantasy-based game presentation. The article discusses the challenges of replicating traditional psychological tests in a new medium that holds comparable…
Descriptors: Psychological Testing, Computer Assisted Testing, Games, Adolescents
Puhan, Gautam; Moses, Tim; Grant, Mary; McHale, Fred – ETS Research Report Series, 2008
A single group (SG) equating design with nearly equivalent test forms (SiGNET) was developed by Grant (2006) to equate small-volume tests. The basis of this design is that examinees take two largely overlapping test forms within a single administration. The scored items for the operational form are divided into mini-tests called testlets…
Descriptors: Data Collection, Equated Scores, Item Sampling, Sample Size
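A minimal sketch of the overlapping-forms idea, assuming hypothetical testlet labels and form layouts; these are invented for illustration and are not the operational SiGNET forms from Grant (2006) or the report.

```python
# Illustrative only: testlet labels and form composition are hypothetical,
# not the actual forms described by Grant (2006).
FORM_A = {"T1", "T2", "T3", "T4", "T5"}   # first nearly equivalent form
FORM_B = {"T2", "T3", "T4", "T5", "T6"}   # second form, sharing most testlets

shared = sorted(FORM_A & FORM_B)   # overlap usable for single-group comparisons
unique = sorted(FORM_A ^ FORM_B)   # the material that actually differs across forms
print("shared testlets:", shared)
print("unique testlets:", unique)
```
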
Boser, Judith A. – Evaluation News, 1985
Building as much computer coding as possible into an instrument is recommended to reduce errors in coding information from questionnaires. Specific suggestions are proposed to guide the precoding process for response options, numeric identifiers, and assignment of card columns for mainframe computer data entry. (BS)
Descriptors: Computers, Data Collection, Data Processing, Questionnaires
Wang, Tianyou; Kolen, Michael J. – Applied Psychological Measurement, 1996 (peer reviewed)
A quadratic curve test equating method for equating different test forms under a random-groups data collection design is proposed that equates the first three central moments of the test forms. When applied to real test data, the method performs as well as other equating methods. Procedures for implementing the method are described. (SLD)
Descriptors: Data Collection, Equated Scores, Standardized Tests, Test Construction
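To make the moment-matching idea concrete, here is a minimal numerical sketch, assuming a quadratic transformation fit with a general-purpose root finder; it is not the authors' estimation procedure, only an illustration of matching mean, spread, and skewness across forms.

```python
# Hedged sketch: fit a + b*x + c*x**2 so equated Form X scores match Form Y's
# mean, standard deviation, and skewness. Not Wang and Kolen's own procedure.
import numpy as np
from scipy.optimize import fsolve

def moments(scores):
    """Mean, standard deviation, and skewness of a score array."""
    m, s = scores.mean(), scores.std()
    skew = ((scores - m) ** 3).mean() / s ** 3
    return np.array([m, s, skew])

def quadratic_equate(x_scores, y_scores):
    target = moments(y_scores)

    def gap(coefs):
        a, b, c = coefs
        return moments(a + b * x_scores + c * x_scores ** 2) - target

    # Start from a linear (mean/SD) equating with no curvature.
    x_m, x_s, _ = moments(x_scores)
    b0 = target[1] / x_s
    return fsolve(gap, [target[0] - b0 * x_m, b0, 0.0])

# Simulated random-groups data for demonstration only.
rng = np.random.default_rng(0)
form_x = rng.binomial(40, 0.60, size=2000).astype(float)
form_y = rng.binomial(40, 0.55, size=2000).astype(float)
a, b, c = quadratic_equate(form_x, form_y)
print(f"equated score for a raw 25 on Form X: {a + b * 25 + c * 25 ** 2:.2f}")
```
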
Tram, Jane My Duc; Varnhagen, Connie K. – Alberta Journal of Educational Research, 1998 (peer reviewed)
In a study of questioning formats, children and adults answered spelling questions in an open-ended condition or one of two close-ended conditions where options were likely or unlikely. Participants presented with unlikely-response options generated their own responses more often than participants presented with likely-response options. Children…
Descriptors: Adults, Children, Data Collection, Educational Research
Ory, John C.; And Others – Journal of Educational Psychology, 1980 (peer reviewed)
The study investigated the structural corroboration of instructional evaluation information collected from one source (students) by three different methods: responses to objective questionnaire items, written comments to open-ended questions, and group interview results. The three types of information presented a similar general impression of…
Descriptors: Course Evaluation, Data Collection, Evaluation Methods, Higher Education
Holmes, Susan E. – Evaluation and the Health Professions, 1986 (peer reviewed)
A specific application of test equating is described, namely that of credentialing examination programs in the health professions. Considered are: (1) the role of test equating in the credentialing process; and (2) the issues that must be considered when implementing test equating in a credentialing examination program. (Author/LMO)
Descriptors: Certification, Credentials, Data Collection, Equated Scores
von Davier, Alina A., Ed.; Liu, Mei, Ed. – ETS Research Report Series, 2006
This report builds on and extends existing research on population invariance to new tests and issues. The authors lay the foundation for a deeper understanding of the use of population invariance measures in a wide variety of practical contexts. The invariance of linear, equipercentile, and IRT equating methods is examined using data from five…
Descriptors: Equated Scores, Statistical Analysis, Data Collection, Test Format
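For orientation, one widely cited index in the population-invariance literature, in the style of Dorans and Holland, is the root mean square difference between subgroup and total-group equating functions; the report may use this or related measures, so treat the formula below as background rather than a description of its exact methodology.

```latex
% RMSD population-invariance index (background sketch, not necessarily the
% report's exact statistic): e_j is the equating function computed in
% subgroup j, e is the total-group function, w_j are subgroup weights,
% and \sigma_Y is the score standard deviation on the reference form.
\mathrm{RMSD}(x) = \frac{\sqrt{\sum_{j} w_j \left( e_j(x) - e(x) \right)^{2}}}{\sigma_Y}
```
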
Haertel, Edward – Educational Evaluation and Policy Analysis, 1986 (peer reviewed)
The purposes of this paper are to analyze some problems in using student test scores to evaluate teachers and to propose an achievement-based model for teacher evaluation that is effective, affordable, fair, legally defensible, and politically acceptable. The system is designed for detecting and documenting poor teacher performance. (Auth/JAZ)
Descriptors: Academic Ability, Achievement Tests, Competence, Data Collection

