Showing all 10 results
Peer reviewed
Direct link
Kardanova, Elena; Loyalka, Prashant; Chirikov, Igor; Liu, Lydia; Li, Guirong; Wang, Huan; Enchikova, Ekaterina; Shi, Henry; Johnson, Natalie – Assessment & Evaluation in Higher Education, 2016
Relatively little is known about differences in the quality of engineering education within and across countries because of the lack of valid instruments that allow for the assessment and comparison of engineering students' skill gains. The purpose of our study is to develop and validate instruments that can be used to compare student skill gains…
Descriptors: Foreign Countries, Educational Quality, Engineering Education, Undergraduate Students
Peer reviewed
PDF on ERIC
Hamdi, Syukrul; Kartowagiran, Badrun; Haryanto – International Journal of Instruction, 2018
The purpose of this study was to develop a mathematics testlet instrument for classroom assessment at the elementary school level. A testlet is a group of multiple-choice questions that elicit related information through graded response models. This research was conducted in East Lombok, Indonesia. The design used was research…
Descriptors: Test Items, Models, Elementary School Mathematics, Mathematics Instruction
Peer reviewed
PDF on ERIC
Fulcher, Glenn; Svalberg, Agneta – International Journal of English Studies, 2013
Language testers operate within two frames of reference: norm-referenced (NRT) and criterion-referenced testing (CRT). The former underpins the world of large-scale standardized testing that prioritizes variability and comparison. The latter supports substantive score meaning in formative and domain specific assessment. Some claim that the…
Descriptors: Language Tests, Standardized Tests, Criterion Referenced Tests, Formative Evaluation
Peer reviewed
Direct link
Ferrara, Steve; Duncan, Teresa – Educational Forum, 2011
This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…
Descriptors: Science Achievement, Academic Achievement, Achievement Tests, Test Construction
Peer reviewed
Direct link
Ritvo, Riva Ariella; Ritvo, Edward R.; Guthrie, Donald; Yuwiler, Arthur; Ritvo, Max Joseph; Weisbender, Leo – Journal of Autism and Developmental Disorders, 2008
An empirically based 78-question self-rating scale grounded in DSM-IV-TR and ICD-10 criteria was developed to assist clinicians in diagnosing adults with autism and Asperger's Disorder: the Ritvo Autism and Asperger's Diagnostic Scale (RAADS). It was standardized on 17 autistic subjects, 20 subjects with Asperger's Disorder, and 57 comparison subjects. Both autistic and…
Descriptors: Autism, Asperger Syndrome, Content Validity, Test Reliability
Peer reviewed
Direct link
Murdock, Linda C.; Cost, Hollie C.; Tieso, Carol – Focus on Autism and Other Developmental Disabilities, 2007
The "Social-Communication Assessment Tool" (S-CAT) was created as a direct observation instrument to quantify specific social and communication deficits of children with autism spectrum disorders (ASD) within educational settings. In this pilot study, the instrument's content validity and interrater reliability were investigated to determine the…
Descriptors: Nonverbal Communication, Autism, Content Validity, Test Validity
Hambleton, Ronald K.; And Others – 1987
The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…
Descriptors: Comparative Analysis, Content Validity, Cutting Scores, Difficulty Level
Linn, Robert L. – 1987
When the National Assessment of Educational Progress (NAEP) was designed 20 years ago, comparisons among individual states or localities were not deemed desirable. Today, this lack of information to allow comparison is judged to be a serious weakness of the NAEP, and ways to allow comparisons are actively sought. The focus of this paper is to…
Descriptors: Academic Achievement, Comparative Analysis, Content Validity, Educational Assessment
McFarland, Jacqueline; Wisniewski, Shirley; Vermette, Paul – 1997
While the value of portfolio learning and assessment has gained much support from the educational community, many questions arise as specific implementations are attempted. This study examined one aspect, namely, the content validity of specific requirements, and addressed the question "How do various constituencies (methods students, student…
Descriptors: Comparative Analysis, Content Validity, Correlation, Education Majors
Park, Chung; Allen, Nancy L. – 1994
This study is part of continuing research into the meaning of future National Assessment of Educational Progress (NAEP) science scales. In this study, the test framework, as examined by NAEP's consensus process, and attributes of the items, identified by science experts, cognitive scientists, and measurement specialists, are examined. Preliminary…
Descriptors: Communication (Thought Transfer), Comparative Analysis, Construct Validity, Content Validity