Showing 241 to 255 of 492 results
Peer reviewed
Carretero-Dios, Hugo; De los Santos-Roig, Macarena; Buela-Casal, Gualberto – Learning and Individual Differences, 2008
This study is an item analysis of the Matching Familiar Figures Test-20 (MFFT-20). We examined error scores on the MFFT-20 to determine the influence of test difficulty on the assessment of reflection-impulsivity. The sample included 700 participants aged between 6 and 12 years. The results obtained from the corrected…
Descriptors: Conceptual Tempo, Individual Differences, Item Analysis, Children
Peer reviewed
Costagliola, Gennaro; Fuccella, Vittorio – International Journal of Distance Education Technologies, 2009
To correctly evaluate learners' knowledge, it is important to administer tests composed of good-quality question items. By "quality" we mean the potential of an item to discriminate effectively between skilled and untrained students and to match the tutor's desired difficulty level. This article presents a rule-based e-testing system…
Descriptors: Difficulty Level, Test Items, Computer Assisted Testing, Item Response Theory
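The two qualities this entry names, difficulty and discrimination, are the staple statistics of classical item analysis. As a minimal sketch (in Python, with function names and toy data of our own invention; the authors' rule-based system is not shown), difficulty is the proportion correct and discrimination is the point-biserial correlation between an item and the rest-of-test score:

```python
# Illustrative sketch, not the article's system: classical item statistics.
from statistics import mean, pstdev

def item_statistics(responses):
    """responses[s][i] = 1 if student s answered item i correctly, else 0.
    Returns a list of (difficulty, discrimination) pairs, one per item."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    stats = []
    for i in range(n_items):
        scores = [row[i] for row in responses]
        p = mean(scores)                                 # difficulty: proportion correct
        rest = [t - s for t, s in zip(totals, scores)]   # item-corrected total score
        # point-biserial: r = (M1 - M0) / sd * sqrt(p * (1 - p))
        m1 = mean(r for r, s in zip(rest, scores) if s == 1) if any(scores) else 0.0
        m0 = mean(r for r, s in zip(rest, scores) if s == 0) if not all(scores) else 0.0
        sd = pstdev(rest)
        rpb = (m1 - m0) * ((p * (1 - p)) ** 0.5) / sd if sd > 0 else 0.0
        stats.append((p, rpb))
    return stats

# Toy data: 4 students x 3 items
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
for i, (p, rpb) in enumerate(item_statistics(data)):
    print(f"item {i}: difficulty={p:.2f}, discrimination={rpb:.2f}")
```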
Peer reviewed
Ariel, Robert; Dunlosky, John; Bailey, Heather – Journal of Experimental Psychology: General, 2009
Theories of self-regulated study assume that learners monitor item difficulty when making decisions about which items to select for study. To complement such theories, the authors propose an agenda-based regulation (ABR) model in which learners' study decisions are guided by an agenda that learners develop to prioritize items for study, given…
Descriptors: Test Items, Time Management, Item Analysis, Rewards
Peer reviewed
Cheng, Ying; Chang, Hua-Hua; Yi, Qing – Applied Psychological Measurement, 2007
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
Descriptors: Adaptive Testing, Item Analysis, Computer Assisted Testing, Item Response Theory
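For readers unfamiliar with fixed content balancing, the sketch below shows the general idea under simple assumptions: fixed per-area quotas, a Rasch item pool, and selection of the item whose difficulty is closest to the current ability estimate. It is a generic illustration of the constraint, not the CCAT or MMM procedures the article compares:

```python
# Generic quota-based content balancing for CAT (not the authors' methods).
from dataclasses import dataclass

@dataclass
class Item:
    item_id: int
    area: str      # content area label
    b: float       # Rasch difficulty

def next_item(pool, administered, quotas, theta):
    """pool: list[Item]; administered: set of item_ids already given;
    quotas: dict area -> required count; theta: current ability estimate."""
    given = {a: 0 for a in quotas}
    for it in pool:
        if it.item_id in administered:
            given[it.area] += 1
    # pick the content area furthest behind its fixed quota
    area = max(quotas, key=lambda a: quotas[a] - given[a])
    candidates = [it for it in pool
                  if it.area == area and it.item_id not in administered]
    # most informative Rasch item: difficulty closest to current theta
    return min(candidates, key=lambda it: abs(it.b - theta))

pool = [Item(1, "algebra", -0.5), Item(2, "algebra", 0.8),
        Item(3, "geometry", 0.1), Item(4, "geometry", 1.2)]
print(next_item(pool, {1}, {"algebra": 2, "geometry": 2}, theta=0.0).item_id)  # 3
```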
Peer reviewed
Leighton, Jacqueline P.; Gokiert, Rebecca J. – Educational Assessment, 2008
The purpose of the present investigation was to identify the relationship among different indicators of uncertainty that lead to potential item misalignment. The item-based indicators included ratings of ambiguity and cognitive complexity. The student-based indicators included (a) frequency of cognitive monitoring per item, (b) levels of…
Descriptors: Test Items, Cognitive Processes, Item Analysis, Self Concept
Aleamoni, Lawrence M.; Spencer, Richard E. – Educ Psychol Meas, 1969
Descriptors: Correlation, Difficulty Level, Discriminant Analysis, Item Analysis
Peer reviewed
Rigdon, Steve E.; Tsutakawa, Robert K. – Journal of Educational Statistics, 1987
Estimation of the parameters of the Rasch model is considered when both the item parameters and the ability parameters are considered random quantities. (RB)
Descriptors: Ability, Difficulty Level, Item Analysis, Models
Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele – College Board, 2009
This study presents research validating the four algebra cognitive models of Gierl, Wang, et al., using student response data collected with protocol-analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.
Descriptors: Algebra, Mathematics Tests, College Entrance Examinations, Student Attitudes
Peer reviewed
Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Computational procedures for the RDR are described, and applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
Peer reviewed
Thomas, Conn; Carpenter, Clint – Teacher Education and Practice, 2008
The development of the Texas Assessment of Knowledge and Skills test involves input from educators across the state. The development process attempts to create an assessment that reflects the skills and content understanding of students at the tested grade level. This study examines other factors that can affect student performance on…
Descriptors: Readability, Science Tests, Item Analysis, Reading Processes
Peer reviewed
Martiniello, Maria – Harvard Educational Review, 2008
In this article, Maria Martiniello reports the findings of a study of the linguistic complexity of math word problems that were found to exhibit differential item functioning for English-language learners (ELLs) and non-ELLs taking the Massachusetts Comprehensive Assessment System (MCAS) fourth-grade math test. It builds on prior research showing…
Descriptors: Reading Comprehension, Test Items, Protocol Analysis, Mathematics Tests
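Differential item functioning of the kind studied here is often screened with the Mantel-Haenszel odds ratio, which compares focal-group (e.g., ELL) and reference-group performance within total-score strata. The following is a hedged, generic sketch of that index, not Martiniello's analysis:

```python
# Generic Mantel-Haenszel DIF index; a value near 1 suggests little DIF.
from collections import defaultdict

def mantel_haenszel(records):
    """records: iterable of (total_score, group, correct) with group in
    {'ref', 'focal'} and correct in {0, 1}. Returns the MH odds ratio."""
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    for score, group, correct in records:
        cell = strata[score]
        if group == "ref":
            cell["A" if correct else "B"] += 1   # reference right / wrong
        else:
            cell["C" if correct else "D"] += 1   # focal right / wrong
    num = den = 0.0
    for cell in strata.values():
        t = sum(cell.values())
        if t == 0:
            continue
        num += cell["A"] * cell["D"] / t
        den += cell["B"] * cell["C"] / t
    return num / den if den else float("inf")

recs = [(5, "ref", 1), (5, "focal", 0), (5, "ref", 0), (5, "focal", 1),
        (7, "ref", 1), (7, "focal", 1)]
print(round(mantel_haenszel(recs), 2))  # 1.0: no evidence of DIF in this toy data
```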
Frisbie, David A. – 1980
The development of a new technique, the Relative Difficulty Ratio (RDR), is described, as well as how it can be used to determine the difficulty level of a test so that meaningful inter-test difficulty comparisons can be made. Assumptions made in computing RDR include: 1) each item must be scored dichotomously with only one answer choice keyed as…
Descriptors: Difficulty Level, Item Analysis, Measurement Techniques, Scores
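The exact RDR formula appears in the two Frisbie papers above; as a loudly hypothetical stand-in, the sketch below corrects each p-value for its chance score (1/k for a k-option item) before taking a ratio, which is the kind of chance adjustment the abstracts describe. Do not read this as Frisbie's formula.

```python
# Hypothetical chance-corrected difficulty comparison, in the spirit of the RDR.
def chance_corrected_p(p, n_options):
    """Rescale proportion-correct p to the 0..1 range above chance."""
    c = 1.0 / n_options                 # expected success rate by guessing
    return (p - c) / (1.0 - c)

def relative_difficulty_ratio(p1, k1, p2, k2):
    """Ratio of chance-corrected difficulties for two items/tests whose raw
    p-values are not comparable because they have k1 != k2 answer options."""
    return chance_corrected_p(p1, k1) / chance_corrected_p(p2, k2)

# A 4-option item at p = .70 vs. a true/false item at p = .80:
print(round(relative_difficulty_ratio(0.70, 4, 0.80, 2), 3))  # 1.0: equally hard
```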
Jung, Eunju; Liu, Kimy; Ketterlin-Geller, Leanne R.; Tindal, Gerald – Behavioral Research and Teaching, 2008
The purpose of this study was to develop general outcome measures (GOM) in mathematics so that teachers could focus their instruction on needed prerequisite skills. We describe in detail the manner in which content-related evidence was established and then present a number of statistical analyses conducted to evaluate the technical adequacy of…
Descriptors: Item Analysis, Test Construction, Test Theory, Mathematics Tests
Peer reviewed
Bowling, Nathan A. – Assessment & Evaluation in Higher Education, 2008
Student ratings of teaching effectiveness are widely used to make judgments of faculty teaching performance. Research, however, has found that such ratings may not be accurate indicators of teaching performance because they are contaminated by course easiness. Using student ratings of 9855 professors employed at 79 different colleges and…
Descriptors: Student Evaluation of Teacher Performance, Correlation, Robustness (Statistics), Item Analysis
Rich, Charles E.; Johanson, George A. – 1990
Although little empirical evidence exists for their effectiveness, many techniques have been suggested for writing multiple-choice items. The option "none of the above" (NA) has been widely used, although a recent review of empirical studies of NA suggests that, while generally decreasing the difficulty index, NA also decreases…
Descriptors: Difficulty Level, Item Analysis, Multiple Choice Tests, Test Construction