Showing all 9 results
Peer reviewed
Direct link
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study proposes the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats in which an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
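The listing does not include the model's specification, but a minimal sketch can illustrate the general idea of a random block effect inducing within-block dependence. The sketch below uses a simplified dichotomous 2PL response function with a shared block term; all names (theta, a, b, gamma) are assumed notation, not the authors', and the actual BRB IRT model targets forced-choice blocks rather than right/wrong scoring.

```python
# Hedged sketch, not the authors' code: a 2PL-style response function with an
# additive random block effect, showing how a shared block term makes all
# items in a block shift together (local dependence). Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b, gamma_block):
    """P(correct) for one person: 2PL plus a person-specific block effect."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b + gamma_block)))

theta = rng.normal()              # person ability
gamma = rng.normal(scale=0.5)     # random effect shared by a 3-item block
a = np.array([1.2, 0.9, 1.5])     # item discriminations
b = np.array([-0.3, 0.1, 0.6])    # item difficulties

probs = p_correct(theta, a, b, gamma)   # all three probabilities shift with gamma
responses = rng.random(3) < probs       # simulated block of responses
print(probs, responses)
```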
Peer reviewed
Direct link
Hamhuis, Eva; Glas, Cees; Meelissen, Martina – British Journal of Educational Technology, 2020
Over the last two decades, the educational use of digital devices, including digital assessments, has become a regular feature of teaching in primary education in the Netherlands. However, researchers have not reached a consensus about the so-called "mode effect," which refers to the possible impact of using computer-based tests (CBT)…
Descriptors: Handheld Devices, Elementary School Students, Grade 4, Foreign Countries
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
PDF on ERIC
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Peer reviewed
Direct link
Hudson, Ross D.; Treagust, David F. – Research in Science & Technological Education, 2013
Background: This study developed from observations of apparent achievement differences between male and female chemistry performance in a state university entrance examination. Male students performed more strongly than female students, especially at the higher score levels. Apart from the gender of the students, two other important factors that might…
Descriptors: Chemistry, College Entrance Examinations, State Universities, Gender Differences
Peer reviewed
PDF on ERIC
Hudson, Ross D. – Science Education International, 2012
This research inquires into the effectiveness of the two predominant forms of questions--multiple-choice questions and short-answer questions--used in the State University Entrance Examination for Chemistry, including the relationship between performance and gender. It examines not only the style of question but also the content type examined…
Descriptors: Chemistry, Science Achievement, Gender Differences, College Entrance Examinations
Peer reviewed
Direct link
Pae, Tae-Il – Language Testing, 2012
This study tracked gender differential item functioning (DIF) on the English subtest of the Korean College Scholastic Aptitude Test (KCSAT) over a nine-year period across three data points, using both the Mantel-Haenszel (MH) and item response theory likelihood ratio (IRT-LR) procedures. Further, the study identified two factors (i.e. reading…
Descriptors: Aptitude Tests, Academic Aptitude, Language Tests, Test Items
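For readers unfamiliar with the Mantel-Haenszel procedure named in the abstract above, the sketch below computes the standard MH common odds ratio and the ETS delta transform from per-score-level 2x2 tables; the counts are invented for the demonstration.

```python
# Illustrative Mantel-Haenszel DIF computation: one 2x2 table per matched
# score level, with (A, B, C, D) = (reference correct, reference incorrect,
# focal correct, focal incorrect). Counts are made up for the demo.
import math

tables = [
    (40, 10, 35, 15),
    (30, 20, 22, 28),
    (15, 35, 10, 40),
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
alpha_mh = num / den                    # common odds ratio across score levels
delta_mh = -2.35 * math.log(alpha_mh)   # ETS delta metric; larger |delta| = more DIF
print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {delta_mh:.3f}")
```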
Peer reviewed
Direct link
Einarsdottir, Sif; Rounds, James – Journal of Vocational Behavior, 2009
Item response theory was used to address gender bias in interest measurement. A differential item functioning (DIF) technique, SIBTEST, along with DIMTEST for dimensionality, was applied to the items of the six General Occupational Theme (GOT) and 25 Basic Interest (BI) scales in the Strong Interest Inventory. A sample of 1860 women and 1105 men was used.…
Descriptors: Test Format, Females, Vocational Interests, Construct Validity
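The core of SIBTEST, as mentioned in the abstract above, is a weighted difference in studied-item performance between groups matched on the remaining items. The sketch below computes an uncorrected version of that statistic on simulated data; real SIBTEST adds a regression correction for measurement error in the matching score, and DIMTEST is a separate dimensionality check not shown here.

```python
# Rough, uncorrected version of the SIBTEST core statistic on simulated data:
# a weighted difference in studied-item performance between reference and
# focal groups matched on a remaining-items score. Not the full procedure.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)                # 0 = reference, 1 = focal
match = rng.integers(0, 21, n)               # matching subtest score, 0-20
item = rng.random(n) < (0.7 - 0.1 * group)   # studied item, DIF built in

n_focal = (group == 1).sum()
beta_hat = 0.0
for k in range(21):
    ref = item[(group == 0) & (match == k)]
    foc = item[(group == 1) & (match == k)]
    if len(ref) and len(foc):
        # weight by the focal group's share of examinees at this score level
        beta_hat += (len(foc) / n_focal) * (ref.mean() - foc.mean())
print(f"beta_hat = {beta_hat:.3f}  (positive -> item favors the reference group)")
```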
Peer reviewed
Direct link
Dorans, Neil J.; Liu, Jinghua; Hammond, Shelby – Applied Psychological Measurement, 2008
This exploratory study was built on research spanning three decades. Petersen, Marco, and Stewart (1982) conducted a major empirical investigation of the efficacy of different equating methods. The studies reported in Dorans (1990) examined how different equating methods performed across samples selected in different ways. Recent population…
Descriptors: Test Format, Equated Scores, Sampling, Evaluation Methods
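As background to the equating comparison described above, the sketch below illustrates equipercentile equating, one of the classic methods examined in this line of research. The score data are simulated and the mapping is unsmoothed, so this is a bare-bones illustration rather than an operational procedure.

```python
# Simplified, unsmoothed equipercentile equating on simulated score data:
# each form-X score is mapped to the form-Y score with the same percentile
# rank. Equating studies compare how such mappings behave across samples.
import numpy as np

rng = np.random.default_rng(1)
scores_x = rng.normal(50, 10, 5000).clip(0, 100)   # new form
scores_y = rng.normal(52, 9, 5000).clip(0, 100)    # reference form

def equate_x_to_y(x, x_sample, y_sample):
    pr = (x_sample <= x).mean() * 100    # percentile rank of x on form X
    return np.percentile(y_sample, pr)   # form-Y score at that same rank

for x in (30, 50, 70):
    print(x, "->", round(float(equate_x_to_y(x, scores_x, scores_y)), 2))
```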