| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
McGlone, Matthew S.; Aronson, Joshua; Kobrynowicz, Diane – Psychology of Women Quarterly, 2006
Men tend to achieve higher response accuracy than women on surveys of political knowledge. We investigated the possibility that this performance gap is moderated by factors that render the communicative context of a survey intellectually threatening to women and thereby induce stereotype threat. In a telephone survey of college students' political…
Descriptors: Gender Differences, Telephone Surveys, College Students, Interviews
van der Linden, Wim J.; Ariel, Adelaide; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2006
Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Banks
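For readers unfamiliar with the selection problem the van der Linden, Ariel, and Veldkamp abstract describes, the sketch below shows in minimal form how a content constraint can force a CAT algorithm to pass over its most informative items. It is an illustrative toy, not the authors' method: the greedy rule, the pool layout, and names such as `select_next_item` and `content_max` are assumptions; only the 3PL information formula is standard.

```python
import numpy as np

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta (standard IRT result)."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def select_next_item(theta, pool, administered, content_counts, content_max):
    """Greedy maximum-information selection that skips items whose content
    area has already filled its quota."""
    best, best_info = None, -np.inf
    for i, item in enumerate(pool):
        if i in administered:
            continue
        if content_counts.get(item["area"], 0) >= content_max.get(item["area"], np.inf):
            continue  # selecting this item would violate the content constraint
        info = item_information(theta, item["a"], item["b"], item["c"])
        if info > best_info:
            best, best_info = i, info
    return best

# Toy pool: the two most informative items at theta = 0 are both "algebra",
# but the algebra quota is already full, so the algorithm must fall back to
# the less informative geometry item -- the situation the abstract describes.
pool = [{"a": 1.2, "b": 0.0, "c": 0.2, "area": "algebra"},
        {"a": 0.8, "b": -0.5, "c": 0.2, "area": "geometry"},
        {"a": 1.5, "b": 0.3, "c": 0.2, "area": "algebra"}]
print(select_next_item(0.0, pool, administered=set(),
                       content_counts={"algebra": 2}, content_max={"algebra": 2}))  # -> 1
```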
Seethaler, Pamela M.; Fuchs, Lynn S. – Learning Disabilities Research & Practice, 2006
The purpose of this study was to examine the relations of various cognitive abilities and aspects of math performance with computational estimation skill among third graders. Students (n=315) were assessed on language, nonverbal reasoning, concept formation, processing speed, long-term memory, working memory, inattentive behavior, basic reading…
Descriptors: Grade 3, Statistical Analysis, Cognitive Ability, Correlation
Pomplun, Mark; Ritchie, Timothy; Custer, Michael – Educational Assessment, 2006
This study investigated factors related to score differences on computerized and paper-and-pencil versions of a series of primary K-3 reading tests. Factors studied included item and student characteristics. The results suggest that the score differences were more related to student than item characteristics. These student characteristics include…
Descriptors: Reading Tests, Student Characteristics, Response Style (Tests), Socioeconomic Status
Cheong, Yuk Fai – International Journal of Testing, 2006
This article considers and illustrates a strategy to study effects of school context on differential item functioning (DIF) in large-scale assessment. The approach employs a hierarchical generalized linear modeling framework to (a) detect DIF, and (b) identify school-level correlates of the between-group differences in item performance. To…
Descriptors: Context Effect, Test Bias, Causal Models, Educational Assessment
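Cheong's strategy is a two-level hierarchical generalized linear model with school-level predictors; the snippet below sketches only the single-level logistic-regression DIF screen that such models build on, using simulated data. The variable names, the simulated effect sizes, and the use of statsmodels are illustrative assumptions, not the article's procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated item-response data: a correct/incorrect score on one item, a
# total-score matching variable, and a focal/reference group indicator.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)               # 0 = reference, 1 = focal
ability = rng.normal(0.0, 1.0, n)
total = ability + rng.normal(0.0, 0.3, n)   # crude matching criterion
logit = 1.2 * ability - 0.3 - 0.5 * group   # the -0.5 * group term injects uniform DIF
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

df = pd.DataFrame({"y": y, "total": total, "group": group})

# Logistic-regression DIF screen: a significant group effect after
# conditioning on the matching variable flags the item for DIF.
fit = smf.logit("y ~ total + group", data=df).fit(disp=0)
print(fit.params)                      # the group coefficient estimates the DIF effect
print("DIF p-value:", fit.pvalues["group"])
```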
Chapin, June R. – Social Studies, 2006
Data were extracted from the Early Childhood Longitudinal Study (ECLS), a national sample of more than 20,000 kindergartners and first-graders. Fifty-one social studies and science test items were combined into a General Knowledge Test. This test was individually administered to each child with no reading required. General Knowledge Test scores…
Descriptors: Test Items, Kindergarten, African American Children, Social Studies
Winter, Phoebe C.; Kopriva, Rebecca J.; Chen, Chen-Su; Emick, Jessica E. – Learning and Individual Differences, 2006
A cognitive lab technique (n=156) was used to investigate interactions between individual factors and item factors presumed to affect assessment validity for diverse students, including English language learners. Findings support the concept of "access"--an interaction between specific construct-irrelevant item features and individual…
Descriptors: Individual Characteristics, Second Language Learning, Grade 5, Grade 3
Tremblay, Annie – Second Language Research, 2006
This study, a partial replication of Bruhn de Garavito (1999a; 1999b), investigates the second language (L2) acquisition of Spanish reflexive passives and reflexive impersonals by French- and English-speaking adults at an advanced level of proficiency. The L2 acquisition of Spanish reflexive passives and reflexive impersonals by native French and…
Descriptors: Form Classes (Languages), Second Language Learning, Adults, Test Items
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
Freedle, Roy; Kostin, Irene – 1991
The primary goal of this project was to examine the predictability of Scholastic Aptitude Test (SAT) reading item difficulty (equated delta) for main idea items, and the predictability of main idea, inference, and explicit statement item types. A secondary purpose was to contrast the responses of high verbal and low verbal ability examinees.…
Descriptors: College Entrance Examinations, Difficulty Level, High School Students, High Schools
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Henning, Grant – 1991
In order to evaluate the Test of English as a Foreign Language (TOEFL) vocabulary item format and to determine the effectiveness of alternative vocabulary test items, this study investigated the functioning of eight different multiple-choice formats that differed with regard to: (1) length and inference-generating quality of the stem; (2) the…
Descriptors: Adults, Context Effect, Difficulty Level, English (Second Language)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
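The Veerkamp and Berger abstract refers to the fact that, under the three-parameter logistic model, the most discriminating item is not always the most informative one. A small numeric sketch of that point, assuming the standard 3PL information function; the grid values and parameter choices are arbitrary, not the paper's analysis.

```python
import numpy as np

def info_3pl(a, d, c):
    """3PL Fisher information at distance d = theta - b from the item difficulty b."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * d))
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

# For a fixed guessing parameter c and ability-difficulty distance d, scan a
# grid of discrimination values and report the one that maximizes information.
# Away from the item difficulty the optimum is a moderate, interior value of a;
# at d = 0 information keeps rising with a, so no interior optimum appears there.
a_grid = np.linspace(0.1, 8.0, 800)
for c in (0.0, 0.25):
    for d in (-2.0, -1.0, -0.5):
        best_a = a_grid[np.argmax(info_3pl(a_grid, d, c))]
        print(f"c={c:.2f}, theta-b={d:+.1f}: information-maximizing a = {best_a:.2f}")
```

The pattern the loop prints, namely that examinees far from an item's difficulty are measured best by moderately discriminating items, is the relationship the paper characterizes analytically.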
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
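The abstract does not say which item characteristics entered the authors' timing model, so the sketch below only illustrates the general idea of predicting response latency from item features with ordinary least squares; the predictors (word count, difficulty), the simulated coefficients, and the data are hypothetical.

```python
import numpy as np

# Hypothetical item-level data: mean response latency (seconds) regressed on
# two illustrative item characteristics.
rng = np.random.default_rng(1)
n_items = 120
word_count = rng.integers(20, 120, n_items)
difficulty = rng.normal(0.0, 1.0, n_items)
latency = 15 + 0.4 * word_count + 8.0 * difficulty + rng.normal(0.0, 5.0, n_items)

# Ordinary least squares: latency ~ intercept + word_count + difficulty.
X = np.column_stack([np.ones(n_items), word_count, difficulty])
coef, *_ = np.linalg.lstsq(X, latency, rcond=None)
print("intercept, per-word, per-difficulty-unit:", np.round(coef, 2))

# Predicted testing time for a new 60-word item of average difficulty.
print("predicted latency:", round(coef @ np.array([1.0, 60.0, 0.0]), 1), "seconds")
```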
Bennett, Randy Elliot; And Others – 1989
This study examined the relationship of a machine-scorable, constrained free-response computer science item that required the student to debug a faulty program to two other types of items: multiple-choice and free-response requiring production of a computer program. The free-response items were from the College Board's Advanced Placement Computer…
Descriptors: College Students, Computer Science, Computer Software, Debugging (Computers)
