Showing 1 to 15 of 23 results
Peer reviewed
Akhtar, Hanif – International Association for Development of the Information Society, 2022
When examinees perceive a test as low stakes, it is reasonable to assume that some of them will not put forth their maximum effort, which complicates the validity of the test results. Although many studies have investigated motivational fluctuation across tests during a testing session, only a small number of studies have…
Descriptors: Intelligence Tests, Student Motivation, Test Validity, Student Attitudes
Peer reviewed
Sahin, Murat Dogan – International Electronic Journal of Elementary Education, 2020
Advanced Item Response Theory (IRT) practices serve well in understanding the nature of latent variables that have been subject to research in various disciplines. In the current study, the responses of 2,536 children aged 7-12 to the 20-item Visual Sequential Processing Memory (VSPM) sub-test of the Anadolu-Sak Intelligence Scale (ASIS) were analyzed with…
Descriptors: Item Response Theory, Memory, Intelligence Tests, Children
Peer reviewed
Forthmann, Boris; Förster, Natalie; Schütze, Birgit; Hebbecker, Karin; Flessner, Janis; Peters, Martin T.; Souvignier, Elmar – Journal of Intelligence, 2020
Distractors might display discriminatory power with respect to the construct of interest (e.g., intelligence), which was shown in recent applications of nested logit models to the short-form of Raven's progressive matrices and other reasoning tests. In this vein, a simulation study was carried out to examine two effect size measures (i.e., a…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Intelligence Tests
Peer reviewed
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented items of a test, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
Peer reviewed
Sun, Sumin; Schweizer, Karl; Ren, Xuezhu – Journal of Cognition and Development, 2019
This study examined whether there is a developmental difference in the emergence of an item-position effect in intelligence testing. The item-position effect describes the dependency of an item's characteristics on its position within the test and is explained by learning. Data on fluid intelligence measured by Raven's Standard Progressive Matrices…
Descriptors: Intelligence Tests, Test Items, Difficulty Level, Short Term Memory
Peer reviewed
Bastianello, Tamara; Brondino, Margherita; Persici, Valentina; Majorano, Marinella – Journal of Research in Childhood Education, 2023
The present contribution aims to present an assessment tool (i.e., the TALK-assessment) built to evaluate the language development and school readiness of Italian preschoolers before they enter primary school, and its predictive validity for the children's reading and writing skills at the end of the first year of primary school. The early…
Descriptors: Literacy, Computer Assisted Testing, Italian, Language Acquisition
Nguyen, Tutrang; Malone, Lizabeth; Atkins-Burnett, Sally; Larson, Addison; Cannon, Judy – Administration for Children & Families, 2022
The Head Start Family and Child Experiences Survey (FACES) and the American Indian and Alaska Native Head Start Family and Child Experiences Survey (AIAN FACES) are separate studies done successively over time. One goal for these studies is to provide a national picture of children's readiness for school. In this research brief, the authors use…
Descriptors: Cognitive Measurement, Cognitive Ability, School Readiness, Low Income Students
Peer reviewed
Wood, Carla; Hoge, Rachel; Schatschneider, Christopher; Castilla-Earls, Anny – International Journal of Bilingual Education and Bilingualism, 2021
This study examines the response patterns of 288 Spanish-English dual language learners on a standardized test of receptive Spanish vocabulary. Investigators analyzed responses to 54 items on the "Test de Vocabulario en Imagenes" (TVIP) [Dunn, L. M., D. E. Lugo, E. R. Padilla, and L. M. Dunn. 1986. "Test de Vocabulario en Imagenes…
Descriptors: Predictor Variables, Phonology, Item Analysis, Spanish
Peer reviewed
Minear, Meredith; Coane, Jennifer H.; Boland, Sarah C.; Cooney, Leah H.; Albat, Marissa – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2018
The authors examined whether individual differences in fluid intelligence (gF) modulate the testing effect. Participants studied Swahili-English word pairs and repeatedly studied half the pairs or attempted retrieval, with feedback, for the remaining half. Word pairs were easy or difficult to learn. Overall, participants showed a benefit of…
Descriptors: Individual Differences, Intelligence, Information Retrieval, Testing
Agnello, Paul – ProQuest LLC, 2018
Pseudowords (words that are not real but resemble real words in a language) have been used increasingly as a technique to reduce contamination due to construct-irrelevant variance in assessments of verbal fluid reasoning (Gf). However, despite pseudowords being researched heavily in other psychology sub-disciplines, they have received little…
Descriptors: Scores, Intelligence Tests, Difficulty Level, Item Analysis
Peer reviewed
Roivainen, Eka – Journal of Psychoeducational Assessment, 2014
Research on secular trends in mean intelligence test scores shows smaller gains in vocabulary skills than in nonverbal reasoning. One possible explanation is that vocabulary test items become outdated faster compared to nonverbal tasks. The history of the usage frequency of the words on five popular vocabulary tests, the GSS Wordsum, Wechsler…
Descriptors: Vocabulary Skills, Word Frequency, Language Usage, Change
Peer reviewed
Sideridis, Georgios D. – Educational and Psychological Measurement, 2016
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Descriptors: Learning Disabilities, Test Validity, Measures (Individuals), Hierarchical Linear Modeling
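For readers unfamiliar with the Rasch model referenced in the abstract above, a minimal sketch of its item response function (the function name and example values below are illustrative, not taken from the study):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that a person with ability theta answers an item
    of difficulty b correctly under the Rasch (one-parameter logistic)
    model: P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the chance of success is 50%.
p_equal = rasch_probability(theta=0.0, b=0.0)

# Higher ability relative to difficulty raises the success probability.
p_able = rasch_probability(theta=2.0, b=0.0)
```

In studies like the one above, distortions from anxiety or motivation would show up as shifts in the estimated item difficulties `b` rather than in the functional form itself.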
Peer reviewed
Dodonova, Yulia A.; Dodonov, Yury S. – Intelligence, 2013
Using more complex items than those commonly employed within the information-processing approach, but still easier than those used in intelligence tests, this study analyzed how the association between processing speed and accuracy level changes as the difficulty of the items increases. The study involved measuring cognitive ability using Raven's…
Descriptors: Difficulty Level, Intelligence Tests, Cognitive Ability, Accuracy
Peer reviewed
Partchev, Ivailo; De Boeck, Paul – Intelligence, 2012
Responses to items from an intelligence test may be fast or slow. The research issue dealt with in this paper is whether the intelligence involved in fast correct responses differs in nature from the intelligence involved in slow correct responses. There are two questions related to this issue: 1. Are the processes involved different? 2. Are the…
Descriptors: Intelligence, Intelligence Tests, Reaction Time, Accuracy
Peer reviewed
Warne, Russell T.; Doty, Kristine J.; Malbica, Anne Marie; Angeles, Victor R.; Innes, Scott; Hall, Jared; Masterson-Nixon, Kelli – Journal of Psychoeducational Assessment, 2016
"Above-level testing" (also called "above-grade testing," "out-of-level testing," and "off-level testing") is the practice of administering to a child a test that is designed for an examinee population that is older or in a more advanced grade. Above-level testing is frequently used to help educators design…
Descriptors: Test Items, Testing, Academically Gifted, Talent Identification