Showing 1 to 15 of 317 results
Peer reviewed
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Peer reviewed
Jessie Leigh Nielsen; Rikke Vang Christensen; Mads Poulsen – Journal of Research in Reading, 2024
Background: Studies of syntactic comprehension and reading comprehension use a wide range of syntactic comprehension tests that vary considerably in format. The goal of this study was to examine to what extent different formats of syntactic comprehension tests measure the same construct. Methods: Sixty-nine Grade 4 students completed multiple…
Descriptors: Syntax, Reading Comprehension, Comparative Analysis, Reading Tests
Peer reviewed
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
Peer reviewed
Zhang, Xijuan; Zhou, Linnan; Savalei, Victoria – Educational and Psychological Measurement, 2023
Zhang and Savalei proposed an alternative scale format to the Likert format, called the Expanded format. In this format, response options are presented in complete sentences, which can reduce acquiescence bias and method effects. The goal of the current study was to compare the psychometric properties of the Rosenberg Self-Esteem Scale (RSES) in…
Descriptors: Psychometrics, Self Concept Measures, Self Esteem, Comparative Analysis
Peer reviewed
Tsai, Pei-Chun; Sachdeva, Chhavi; Gilbert, Sam J.; Scarampi, Chiara – Applied Cognitive Psychology, 2023
Saving information onto external resources can improve memory for subsequent information--a phenomenon known as the saving-enhanced memory effect. This article reports two preregistered online experiments investigating (A) whether this effect holds when to-be-remembered information is presented before the saved information and (B) whether people…
Descriptors: Memory, Decision Making, Word Lists, Learning Strategies
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain in a paper-based mode. Insufficient and unequal digital provision across schools is often identified as a major barrier to a full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Peer reviewed
Srikanth Allamsetty; M. V. S. S. Chandra; Neelima Madugula; Byamakesh Nayak – IEEE Transactions on Learning Technologies, 2024
The present study addresses the problem of assessing students through online examinations at higher educational institutes (HEIs). With the COVID-19 outbreak, the majority of educational institutes are conducting online examinations to assess their students, where there would always be a chance that the students go for…
Descriptors: Computer Assisted Testing, Accountability, Higher Education, Comparative Analysis
Peer reviewed
Lishi Liang; W. L. Quint Oga-Baldwin; Kaori Nakao; Luke K. Fryer; Alex Shum – Technology in Language Teaching & Learning, 2024
Phonological processing of written characters has been recognized as a crucial element in acquiring literacy in any language, both native and foreign. This study aimed to assess Japanese primary school students' phoneme-grapheme recognition skills using both paper-based and touch-interface tests. Differences between the two test formats and the…
Descriptors: Phoneme Grapheme Correspondence, Language Tests, Gamification, Elementary School Students
Peer reviewed
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Peer reviewed
Jones, Paul; Tong, Ye; Liu, Jinghua; Borglum, Joshua; Primoli, Vince – Journal of Educational Measurement, 2022
This article studied two methods to detect mode effects in two credentialing exams. In Study 1, we used a "modal scale comparison approach," where the same pool of items was calibrated separately, without transformation, within two TC cohorts (TC1 and TC2) and one OP cohort (OP1) matched on their pool-based scale score distributions. The…
Descriptors: Scores, Credentials, Licensing Examinations (Professions), Computer Assisted Testing
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone to the interpretation of the results of the PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Peer reviewed
Mi-Hyun Bang; Young-Min Lee – Education and Information Technologies, 2024
The Human Resources Development Service of Korea developed a digital exam for five representative engineering categories and conducted a pilot study comparing the findings with the paper-and-pencil exam results from the last three years. This study aimed to compare the test efficiency between digital and paper-and-pencil examinations. A digital…
Descriptors: Engineering Education, Computer Assisted Testing, Foreign Countries, Human Resources
Peer reviewed
Giofrè, D.; Allen, K.; Toffalini, E.; Caviola, S. – Educational Psychology Review, 2022
This meta-analysis reviews 79 studies (N = 46,605) that examined the existence of gender differences in intelligence in school-aged children. To do so, we limited the literature search to works that assessed the construct of intelligence through the Wechsler Intelligence Scales for Children (WISC) batteries, evaluating possible gender differences…
Descriptors: Gender Differences, Cognitive Processes, Children, Intelligence Tests
Peer reviewed
Grajzel, Katalin; Dumas, Denis; Acar, Selcuk – Journal of Creative Behavior, 2022
One of the best-known and most frequently used measures of creative idea generation is the Torrance Test of Creative Thinking (TTCT). The TTCT Verbal, assessing verbal ideation, contains two forms created to be used interchangeably by researchers and practitioners. However, the parallel forms reliability of the two versions of the TTCT Verbal has…
Descriptors: Test Reliability, Creative Thinking, Creativity Tests, Verbal Ability