Showing all 14 results
Peer reviewed
Direct link
Han, Suhwa; Kang, Hyeon-Ah – Journal of Educational Measurement, 2023
The study presents multivariate sequential monitoring procedures for examining test-taking behaviors online. The procedures monitor an examinee's responses and response times and signal aberrancy as soon as a significant change is detected in the test-taking behavior. The study in particular proposes three schemes to track different…
Descriptors: Test Wiseness, Student Behavior, Item Response Theory, Computer Assisted Testing
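The sequential-signalling idea in the abstract above can be illustrated with a minimal, hypothetical sketch (not the authors' actual multivariate procedure): a one-sided CUSUM over standardized residuals that flags an examinee the first time the cumulative upward drift exceeds a threshold. The slack `k` and threshold `h` values are illustrative assumptions.

```python
def cusum_signal(residuals, k=0.5, h=4.0):
    """Return the 1-based index of the first item at which the CUSUM
    statistic exceeds h, or None if no aberrancy is flagged."""
    s = 0.0
    for t, z in enumerate(residuals, start=1):
        s = max(0.0, s + z - k)  # accumulate upward drift beyond slack k
        if s > h:
            return t
    return None

# In-control residuals never trip the alarm...
print(cusum_signal([0.1, -0.2, 0.3, 0.0, -0.1]))  # None
# ...but a sustained shift (e.g., a change in response-time behavior) does.
print(cusum_signal([0.1, 2.0, 2.5, 2.1, 2.4, 2.2]))  # 4
```

In a multivariate setting one would run such statistics jointly over responses and response times, which is where the paper's proposed schemes differ from this toy version.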
Peer reviewed
Direct link
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF[subscript R] statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
Peer reviewed
Direct link
Sinharay, Sandip – Journal of Educational Measurement, 2016
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Descriptors: Sampling, Research Methodology, Error Patterns, Monte Carlo Methods
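The Monte Carlo resampling logic behind Type I error studies like the one above can be sketched in a few lines. This is a hypothetical illustration, not the article's statistic: responses are generated from the assumed Rasch model, so no simulated examinee is truly aberrant, and the fraction flagged by a placeholder rule estimates that rule's empirical Type I error rate.

```python
import random
from math import exp

random.seed(0)

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + exp(-(theta - b)))

def simulate_responses(theta, difficulties):
    return [1 if random.random() < rasch_prob(theta, b) else 0
            for b in difficulties]

difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5] * 8  # 40 illustrative items

def flagged(responses, cutoff=16):
    # Placeholder flagging rule for illustration only: raw score well
    # below the model expectation at theta = 0.
    return sum(responses) < cutoff

reps = 2000
rate = sum(flagged(simulate_responses(0.0, difficulties))
           for _ in range(reps)) / reps
print(f"empirical Type I error rate: {rate:.3f}")
```

Comparing this empirical rate to the nominal level is exactly the kind of check the study performs, albeit with a corrected ability estimate and a proper person-fit statistic.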
Peer reviewed
PDF on ERIC
Yu, Chong Ho; Douglas, Samantha; Lee, Anna; An, Min – Practical Assessment, Research & Evaluation, 2016
This paper aims to illustrate how data visualization could be utilized to identify errors prior to modeling, using an example with multi-dimensional item response theory (MIRT). MIRT combines item response theory and factor analysis to identify a psychometric model that investigates two or more latent traits. While it may seem convenient to…
Descriptors: Visualization, Item Response Theory, Sample Size, Correlation
Peer reviewed
Direct link
Attali, Yigal – Applied Psychological Measurement, 2011
Recently, Attali and Powers investigated the usefulness of providing immediate feedback on the correctness of answers to constructed response questions and the opportunity to revise incorrect answers. This article introduces an item response theory (IRT) model for scoring revised responses to questions when several attempts are allowed. The model…
Descriptors: Feedback (Response), Item Response Theory, Models, Error Correction
Peer reviewed
Direct link
Frame, Laura B.; Vidrine, Stephanie M.; Hinojosa, Ryan – Journal of Psychoeducational Assessment, 2016
The Kaufman Test of Educational Achievement, Third Edition (KTEA-3) is a revised and updated comprehensive academic achievement test (Kaufman & Kaufman, 2014). Authored by Drs. Alan and Nadeen Kaufman and published by Pearson, the KTEA-3 remains an individual achievement test normed for individuals of ages 4 through 25 years, or for those in…
Descriptors: Achievement Tests, Elementary Secondary Education, Test Validity, Test Reliability
Peer reviewed
Direct link
Puhan, Gautam – Applied Measurement in Education, 2009
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It was essential to examine scale drift for this testing program because new forms in this testing program are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…
Descriptors: Testing Programs, Testing, Measurement Techniques, Item Response Theory
Peer reviewed
Meijer, Rob R. – Applied Psychological Measurement, 1994
Through simulation, the power of the U3 statistic was compared with the power of one of the simplest person-fit statistics, the sum of the number of Guttman errors. In most cases, a weighted version of the latter statistic performed as well as the U3 statistic. (SLD)
Descriptors: Error Patterns, Item Response Theory, Nonparametric Statistics, Power (Statistics)
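The person-fit statistic compared above, the number of Guttman errors, is simple enough to sketch directly. In this minimal illustration (not Meijer's code), items are sorted from easiest to hardest, and each pair in which a harder item is answered correctly while an easier item is missed counts as one error.

```python
def guttman_errors(responses, difficulties):
    """Count Guttman errors: pairs (i, j), i easier than j, where the
    easier item is answered incorrectly but the harder one correctly."""
    # Order the 0/1 response vector from easiest to hardest item.
    ordered = [r for _, r in sorted(zip(difficulties, responses))]
    errors = 0
    for i in range(len(ordered)):
        for j in range(i + 1, len(ordered)):
            if ordered[i] == 0 and ordered[j] == 1:
                errors += 1
    return errors

# A perfect Guttman pattern (easy items right, hard items wrong) has 0 errors.
print(guttman_errors([1, 1, 0, 0], [-1.0, -0.5, 0.5, 1.0]))  # 0
print(guttman_errors([0, 1, 0, 1], [-1.0, -0.5, 0.5, 1.0]))  # 3
```

The weighted variant the abstract refers to would weight each such pair, for example by the difficulty gap between the two items, rather than counting every pair equally.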
Peer reviewed
Direct link
Tate, Richard L. – Applied Measurement in Education, 2004
The valid provision of subscores from an item response theory-based test implies a multidimensional test structure. Assuming, in the construction of a new test, that the test features required for a valid and reliable total test score have been specified already, this article describes the resulting subscore performance and the resulting…
Descriptors: Scores, Test Items, Item Response Theory, Test Construction
Peer reviewed
Baker, Frank B. – Applied Psychological Measurement, 1993
Using simulation, the effect that misspecification of elements in the weight matrix has on estimates of basic parameters of the linear logistic test model was studied. Results indicate that, because specifying elements of the weight matrix is a subjective process, it must be done with great care. (SLD)
Descriptors: Error Patterns, Estimation (Mathematics), Item Response Theory, Matrices
Tatsuoka, Kikumi K. – 1991
Diagnosing cognitive errors possessed by examinees can be considered as a pattern classification problem that is designed to classify a sequential input of stimuli into one of several predetermined groups. The sequential inputs in this paper's context are item responses, and the predetermined groups are various states of knowledge resulting from…
Descriptors: Algorithms, Classification, Cognitive Processes, Equations (Mathematics)
Peer reviewed Peer reviewed
Young, John W. – Research in Higher Education, 1990
Predictive validity of preadmissions measures may be understated because of correctable defects in freshman year and cumulative high school grade point averages (GPAs). A study used item response theory (IRT) to develop a more reliable measure of performance and test it using Stanford University data. Results showed increased predictability.…
Descriptors: Admission Criteria, College Admission, Error Patterns, Grade Point Average
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
The main purpose of this study was to illustrate a polytomous IRT-based linking procedure that adjusts for rater variations. Test scores from two administrations of a statewide reading assessment were used. An anchor set of Year 1 students' constructed responses were rescored by Year 2 raters. To adjust for year-to-year rater variation in IRT…
Descriptors: Test Items, Measures (Individuals), Grade 8, Item Response Theory
Bennett, Randy Elliot; And Others – 1991
This exploratory study applied two new cognitively sensitive measurement models to constructed-response quantitative data. The models, intended to produce qualitative characteristics of examinee performance, were fitted to algebra word problem solutions produced by 285 examinees taking the Graduate Record Examinations (GRE) General Test. The two…
Descriptors: Algebra, College Entrance Examinations, College Students, Constructed Response