Showing 316 to 330 of 3,123 results
Peer reviewed
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Peer reviewed
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Yihua Shen – ProQuest LLC, 2024
This research examined the Chinese National College Entrance Examination (NCEE) in mathematics before and after the Great Proletarian Cultural Revolution, specifically covering the periods 1952-1965 and 1977-1984. The central focus was on the organization, structure, and content of the examinations, as well as their influence on and interaction…
Descriptors: Mathematics Achievement, Mathematics Tests, Foreign Countries, College Entrance Examinations
Peer reviewed
Miki Satori – Language Learning Journal, 2024
This study examines the knowledge representation of Japanese university students assessed using grammaticality judgement tests (GJTs) and a metalinguistic knowledge test (MKT). The study also investigates the role of automatised and non-automatised explicit knowledge in general L2 language proficiency. Participants were 87 late learners of English…
Descriptors: Foreign Countries, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Orthey, Robin; Palena, Nicola; Vrij, Aldert; Meijer, Ewout; Leal, Sharon; Blank, Hartmut; Caso, Letizia – Applied Cognitive Psychology, 2019
We examined the effects of cognitive load on the strategy selection in the forced choice test (FCT) when used to detect hidden crime knowledge. Examinees (N = 120) with and without concealed knowledge from a mock crime were subjected to an FCT either under standard circumstances or cognitive load. Cognitive load was implemented through time…
Descriptors: Stress Variables, Measurement Techniques, Cognitive Processes, Difficulty Level
Peer reviewed
Scanlon, Paul J. – Field Methods, 2019
Web, or online, probing has the potential to supplement existing questionnaire design processes by providing structured cognitive data on a wider sample than typical qualitative-only question evaluation methods can achieve. One of the practical impediments to the further integration of web probing is the concern of survey managers about how the…
Descriptors: Online Surveys, Questionnaires, Response Style (Tests), Test Items
Peer reviewed
Mason, Robert; Huff, Kyle – International Journal of Social Research Methodology, 2019
This article explores the comparability of assessment tools under different format conditions. Prior studies have not considered the interaction of format and device on time to complete an assessment and have instead treated each of them separately with conflicting results. This study assesses, by linear regressions using web-based data, the…
Descriptors: Computer Assisted Testing, Test Format, Questionnaires, Usability
Klein, Michael – ProQuest LLC, 2019
The purpose of the current study was to examine differences in the number and types of administration and scoring errors by administration method (digital/Q-Interactive vs. paper-and-pencil) on the Wechsler Intelligence Scale for Children (WISC-V). WISC-V administration and scoring checklists were developed in order to provide an…
Descriptors: Intelligence Tests, Children, Test Format, Computer Assisted Testing
Peer reviewed
Van Rossum, Tom; Foweather, Lawrence; Hayes, Spencer; Richardson, David; Morley, David – Measurement in Physical Education and Exercise Science, 2021
The aim of this study was to establish the content of a teacher-oriented movement assessment tool (MAT) for children aged 4-7 years. A three-round Delphi poll with an international panel of forty-six academics and practitioners was conducted. Consensus was reached on a selection and number of fundamental movement skills to be assessed with four…
Descriptors: Psychomotor Skills, Basic Skills, Tests, Elementary School Students
Peer reviewed
Merzougui, Wassim H.; Myers, Matthew A.; Hall, Samuel; Elmansouri, Ahmad; Parker, Rob; Robson, Alistair D.; Kurn, Octavia; Parrott, Rachel; Geoghegan, Kate; Harrison, Charlotte H.; Anbu, Deepika; Dean, Oliver; Border, Scott – Anatomical Sciences Education, 2021
Methods of assessment in anatomy vary across medical schools in the United Kingdom (UK) and beyond; common methods include written, spotter, and oral assessment. However, there is limited research evaluating these methods with regard to student performance and perception. The National Undergraduate Neuroanatomy Competition (NUNC) is held annually…
Descriptors: Multiple Choice Tests, Test Format, Medical Students, Foreign Countries
Peer reviewed
Nalbantoglu Yilmaz, Funda – Eurasian Journal of Educational Research, 2021
Purpose: With improvements in computer technology and the growth of computer-based test administration, and given the advantages computer-based implementations offer, comparing the psychometric characteristics of paper-and-pencil tests and computer-based tests, as well as students' success on each, has become inevitable. In computer-based tests,…
Descriptors: Computer Assisted Testing, Test Format, Paper (Material), Computer Literacy
Peer reviewed
Senadheera, Prasad; Kulasekara, Geetha Udayangani – Open Praxis, 2021
The COVID-19 outbreak brought many challenges, including the shift of university assessments to online mode. This study explores the impact of newly designed online formative assessments on students' learning in a Plant Physiology course. The assessments were designed focusing on constructive…
Descriptors: Formative Evaluation, Evaluation Methods, Electronic Learning, Educational Environment
Olney, Andrew M. – Grantee Submission, 2021
In contrast to simple feedback, which provides students with the correct answer, elaborated feedback provides an explanation of the correct answer with respect to the student's error. Elaborated feedback is thus a challenge for AI in education systems because it requires dynamic explanations, which traditionally require logical reasoning and…
Descriptors: Feedback (Response), Error Patterns, Artificial Intelligence, Test Format
Peer reviewed
Cari F. Herrmann Abell – Grantee Submission, 2021
In the last twenty-five years, the discussion surrounding validity evidence has shifted in both language and scope, from the work of Messick and Kane to the updated Standards for Educational and Psychological Testing. However, these discussions have not necessarily focused on best practices for different types of instruments or assessments, taking…
Descriptors: Test Format, Measurement Techniques, Student Evaluation, Rating Scales
Peer reviewed
Rios, Joseph A.; Ihlenfeldt, Samuel D.; Dosedel, Michael; Riegelman, Amy – Educational Measurement: Issues and Practice, 2020
This systematic review investigated the topics studied and reporting practices of published meta-analyses in educational measurement. Our findings indicated that meta-analysis is not a highly utilized methodological tool in educational measurement; on average, less than one meta-analysis has been published per year over the past 30 years (28…
Descriptors: Meta Analysis, Educational Assessment, Test Format, Testing Accommodations