Publication Date
In 2025: 3
Since 2024: 17
Since 2021 (last 5 years): 66
Since 2016 (last 10 years): 129
Since 2006 (last 20 years): 221
Showing 1 to 15 of 221 results
Peer reviewed
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research that requires researchers to make methodological choices at several steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory model, rest on assumptions that are not always met in psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
Peer reviewed; full text available on ERIC
Mustafa Ilhan; Nese Güler; Gülsen Tasdelen Teker; Ömer Ergenekon – International Journal of Assessment Tools in Education, 2024
This study aimed to examine the effects of reverse items created with different strategies on psychometric properties and respondents' scale scores. To this end, three versions of a 10-item scale were developed: the first form (Form-P) comprised 10 positive items, while the other two combined five positive and five reverse items…
Descriptors: Test Items, Psychometrics, Scores, Measures (Individuals)
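As background to this entry: reverse items are typically rescored so all items point in the same direction before totals are computed. A minimal sketch of the usual transformation, assuming a 1-to-k Likert scale (the item names and responses below are hypothetical, not from the study):

```python
def reverse_score(response: int, k: int = 5) -> int:
    """Map a response on a 1..k Likert scale to its reversed value."""
    return (k + 1) - response

# Example: on a 5-point scale, 1 -> 5, 2 -> 4, ..., 5 -> 1
responses = {"item_1": 4, "item_2_reversed": 2}
total = responses["item_1"] + reverse_score(responses["item_2_reversed"], k=5)
print(total)  # 4 + 4 = 8
```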
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of delaying the scoring of polytomous items on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Utilizing the shadow-test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
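For context on the adaptive-testing machinery this study builds on: a common building block is selecting the next item by maximum Fisher information at the current ability estimate. A minimal sketch under the 2PL model (hypothetical item bank; the shadow-test approach itself adds constrained optimization on top of this and is not shown):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: discrimination (a) and difficulty (b)
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=100)
b = rng.normal(0.0, 1.0, size=100)

theta_hat = 0.3          # current ability estimate
administered = {5, 17}   # items already given
info = item_information(theta_hat, a, b)
info[list(administered)] = -np.inf   # exclude used items
next_item = int(np.argmax(info))
print(next_item)
```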
Peer reviewed
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Peer reviewed
Morgan McCracken; Jonathan D. Bostic; Timothy D. Folger – TechTrends: Linking Research and Practice to Improve Learning, 2024
Assessment is central to teaching and learning, and recently there has been a substantive shift from paper-and-pencil assessments toward technology-delivered assessments such as computer-adaptive tests. Fairness is an important aspect of the assessment process, including design, administration, test-score interpretation, and data utility. The…
Descriptors: Middle School Students, Student Attitudes, Culture Fair Tests, Mathematics Tests
Peer reviewed
Kunz, Tanja; Meitinger, Katharina – Field Methods, 2022
Although list-style open-ended questions generally help us gain deeper insights into respondents' thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our…
Descriptors: Online Surveys, Item Response Theory, Test Items, Test Format
Peer reviewed
Cui, Zhongmin; He, Yong – Measurement: Interdisciplinary Research and Perspectives, 2023
Careful consideration is necessary when choosing an anchor test form from a list of old test forms for equating under the random groups design. The choice of the anchor form potentially affects the accuracy of equated scores on new test forms. Few guidelines, however, can be found in the literature on choosing the anchor form…
Descriptors: Test Format, Equated Scores, Best Practices, Test Construction
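For readers unfamiliar with equating under the random groups design: one of the simplest equating functions matches the means and standard deviations of the two forms. A minimal sketch with simulated scores (hypothetical data, not the paper's; the paper concerns the separate question of which old form to use as anchor):

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Return a function mapping Form X scores onto the Form Y scale
    via linear equating: y*(x) = mu_Y + (sigma_Y / sigma_X) * (x - mu_X)."""
    mu_x, sd_x = np.mean(x_scores), np.std(x_scores)
    mu_y, sd_y = np.mean(y_scores), np.std(y_scores)
    return lambda x: mu_y + (sd_y / sd_x) * (np.asarray(x) - mu_x)

# Hypothetical randomly equivalent groups taking Form X and Form Y
rng = np.random.default_rng(1)
x = rng.normal(30, 6, 2000).round()
y = rng.normal(32, 5, 2000).round()

to_y_scale = linear_equate(x, y)
print(to_y_scale([25, 30, 35]))  # Form X raw scores on the Form Y scale
```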
Peer reviewed
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Peer reviewed
Yigiter, Mahmut Sami; Dogan, Nuri – Measurement: Interdisciplinary Research and Perspectives, 2023
In recent years, computerized multistage testing (MST), with its versatile benefits, has found wide application in large-scale assessments and has grown in popularity. The fact that forms can be assembled before test administration, as in a linear test, and can still be adapted to the test taker's ability…
Descriptors: Programming Languages, Monte Carlo Methods, Computer Assisted Testing, Test Format
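The descriptors suggest this paper involves Monte Carlo simulation of MST designs via a software package. As a language-agnostic illustration of what such a simulation does, here is a minimal sketch of a two-stage 1-2 MST under the Rasch model — the module difficulties, routing rule, and grid-search estimator are all hypothetical, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_rasch(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def simulate(theta, b):
    """Simulate dichotomous responses to items with difficulties b."""
    return (rng.random(b.size) < p_rasch(theta, b)).astype(int)

def mle_theta(resp, b, grid=np.linspace(-4, 4, 321)):
    """Grid-search maximum-likelihood ability estimate."""
    p = p_rasch(grid[:, None], b[None, :])
    loglik = (resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Hypothetical 1-2 panel: medium routing module, then an easy or hard module
routing = rng.normal(0.0, 0.3, 10)
easy, hard = rng.normal(-1.0, 0.3, 10), rng.normal(1.0, 0.3, 10)

def mst_run(theta, cutoff=5):
    r = simulate(theta, routing)
    stage2 = hard if r.sum() >= cutoff else easy   # number-correct routing
    resp = np.concatenate([r, simulate(theta, stage2)])
    b = np.concatenate([routing, stage2])
    return mle_theta(resp, b)

true_thetas = rng.normal(0, 1, 1000)
estimates = np.array([mst_run(t) for t in true_thetas])
print("RMSE:", np.sqrt(np.mean((estimates - true_thetas) ** 2)))
```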
Peer reviewed
Jan Karem Höhne; Achim Goerres – International Journal of Social Research Methodology, 2024
The measurement of political solidarities and related concepts is an important endeavor in numerous scientific disciplines, such as political and social science research. European surveys, such as the Eurobarometer, frequently measure these concepts for people's home country and for Europe, raising questions about the order in which the two should be asked…
Descriptors: Surveys, Attitude Measures, Political Attitudes, Foreign Countries
Peer reviewed
Kmetty, Zoltán; Stefkovics, Ádám – International Journal of Social Research Methodology, 2022
Questionnaire design choices, such as how 'do not know' answers are handled or whether check-all-that-apply questions are used, may affect the level of item and unit nonresponse. We used experimental data from an online panel and questions from the ESS to examine how applying forced answering, offering DK/NA options, and specific question formats (CATA vs. forced…
Descriptors: Item Response Theory, Test Format, Response Rates (Questionnaires), Questionnaires
Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
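For context on the NEAT design this dissertation uses: the common items are what allow separately calibrated forms to be placed on one IRT scale. A minimal sketch of mean/sigma linking (hypothetical anchor-item difficulty estimates; the dissertation itself may use other methods such as Stocking-Lord or concurrent calibration):

```python
import numpy as np

def mean_sigma_link(b_new, b_ref):
    """Mean/sigma linking constants placing new-form parameters on the
    reference scale via the transformation theta* = A * theta + B."""
    A = np.std(b_ref) / np.std(b_new)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

# Hypothetical difficulty estimates for the common (anchor) items,
# as calibrated separately on the new and reference forms
b_new = np.array([-1.2, -0.4, 0.1, 0.7, 1.5])
b_ref = np.array([-1.0, -0.2, 0.3, 0.9, 1.8])

A, B = mean_sigma_link(b_new, b_ref)
# New-form parameters then transform as b* = A*b + B and (for 2PL) a* = a/A
print(A, B)
```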
Peer reviewed; full text available on ERIC
Albert Weideman; Tobie van Dyk – Language Teaching Research Quarterly, 2023
This contribution investigates gains in technical economy in measuring language ability by considering one recurrent interest of JD Brown: cloze tests. In the various versions of the Test of Academic Literacy Levels (TALL), its Sesotho and Afrikaans (Toets van Akademiese Geletterdheidsvlakke -- TAG) counterparts, as well as related other tests…
Descriptors: Language Skills, Language Aptitude, Cloze Procedure, Reading Tests
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Peer reviewed
Fu-Yun Yu – Interactive Learning Environments, 2024
Currently, more than 50 learning systems supporting student question-generation (SQG) activities have been developed. While many of these systems support generating questions of different types, systems allowing students to generate questions around a scenario (i.e., student testlet-generation, STG) are not yet available. Noting the increasing…
Descriptors: Computer Assisted Testing, Test Format, Test Construction, Test Items