Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 12 |
| Since 2007 (last 20 years) | 28 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Models | 77 |
| Test Construction | 77 |
| Problem Solving | 29 |
| Evaluation Methods | 23 |
| Testing Problems | 22 |
| Test Validity | 17 |
| Foreign Countries | 13 |
| Test Reliability | 13 |
| Educational Assessment | 12 |
| Elementary Secondary Education | 12 |
| Test Items | 12 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 3 |
| Researchers | 3 |
| Policymakers | 1 |
| Teachers | 1 |
Location
| Location | Results |
| --- | --- |
| Germany | 3 |
| Netherlands | 2 |
| Taiwan | 2 |
| United Kingdom (England) | 2 |
| Brazil | 1 |
| China | 1 |
| Hong Kong | 1 |
| Illinois | 1 |
| Indiana | 1 |
| Indonesia | 1 |
| Iran | 1 |
Laws, Policies, & Programs
| Law, Policy, or Program | Results |
| --- | --- |
| Elementary and Secondary… | 1 |
Assessments and Surveys
| Assessment or Survey | Results |
| --- | --- |
| National Assessment of… | 5 |
| California Achievement Tests | 1 |
| Graduate Record Examinations | 1 |
| Program for International… | 1 |
| System of Multicultural… | 1 |
| Thematic Apperception Test | 1 |
Hai Li; Wanli Xing; Chenglu Li; Wangda Zhu; Simon Woodhead – Journal of Learning Analytics, 2025
Knowledge tracing (KT) is a method for evaluating a student's knowledge state (KS) from their historical problem-solving records by predicting the binary correctness of the next answer. Although KT is widely applied to closed-ended questions, it lacks a detailed option tracing (OT) method for assessing multiple-choice questions (MCQs). This paper introduces…
Descriptors: Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing, Problem Solving
Suto, Irenka; Ireland, Jo – International Journal of Assessment Tools in Education, 2021
Errors in examination papers and other assessment instruments can compromise fairness. For example, a history question containing an incorrect historical date could be impossible for students to answer. Incorrect instructions at the start of an examination could lead students to answer the wrong number of questions. As there is little research on…
Descriptors: Testing Problems, Educational Testing, Test Construction, Work Environment
Shroff, Ronnie Homi; Ting, Fridolin Sze Thou; Chan, Chi Lok; Garcia, Raycelle C. C.; Tsang, Wing Ki; Lam, Wai Hung – Australasian Journal of Educational Technology, 2023
This study attempted to conceptualise and measure learners' perceptions of their collaborative problem-based learning and peer assessment strategies in a technology-enabled context. Drawing on the extant literature, we integrate collaborative, problem-based and peer assessment learning strategies and propose a new model, the collaborative…
Descriptors: Cooperative Learning, Problem Based Learning, Peer Evaluation, Educational Strategies
Tsai, Meng-Jung; Liang, Jyh-Chong; Lee, Silvia Wen-Yu; Hsu, Chung-Yuan – Journal of Educational Computing Research, 2022
A prior study developed the Computational Thinking Scale (CTS) for assessing individuals' computational thinking dispositions in five dimensions: decomposition, abstraction, algorithmic thinking, evaluation, and generalization. This study proposed the Developmental Model of Computational Thinking through validating the structural relationships…
Descriptors: Thinking Skills, Problem Solving, Computation, Models
Sipes, Shannon M. – Interdisciplinary Journal of Problem-based Learning, 2017
Few of the papers published in journals and conference proceedings on problem-based learning (PBL) are empirical studies, and most of these use self-report as the measure of PBL (Beddoes, Jesiek, & Borrego, 2010). The current study provides a theoretically derived matrix for coding and classifying PBL that was objectively applied to official…
Descriptors: Data Collection, Problem Based Learning, Program Development, Test Construction
Naqiyah, Mardhiyyatin; Rosana, Dadan; Sukardiyono; Ernasari – International Journal of Instruction, 2020
This research aimed to (1) produce instruments that were feasible for measuring the ability to solve physics problems and nationalism, and (2) determine the quality of the instruments that were developed. The research was conducted through four stages, namely design, test preparation, test trials, and preparation of valid instruments.…
Descriptors: Nationalism, High School Students, Physics, Science Instruction
Warsono; Nursuhud, Puji Iman; Darma, Rio Sandhika; Supahar – International Journal of Instruction, 2020
The study was conducted to analyze test items on high school students' diagram representation ability and to obtain item characteristic curves. The test blueprint was compiled based on competencies and indicators of diagram representation, which were then used to write the items. The test instrument consisted of five items and was validated by…
Descriptors: High School Students, Problem Solving, Visual Aids, Scoring
Razavipour, Kioumars; Mansoori, Mahboobeh; Gooniband Shooshtari, Zohreh – Issues in Educational Research, 2020
Over the last three decades, the study of test preparation and test washback has emerged as an indispensable area of inquiry in language assessment. Yet, how test takers' motivation and perceptions of test design and content might mediate test preparation has not been given sufficient attention. Taking the general English module of a high stakes…
Descriptors: Testing Problems, Second Language Instruction, English (Second Language), Language Tests
Beghetto, Ronald A. – ECNU Review of Education, 2019
Purpose: This article, based on an invited talk, aims to explore the relationship among large-scale assessments, creativity, and personalized learning. Design/Approach/Methods: Starting with working definitions of large-scale assessments, creativity, and personalized learning, this article identifies the paradox of combining these three…
Descriptors: Measurement, Creativity, Problem Solving, Artificial Intelligence
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy; Tazaz, Amanda M. – Grantee Submission, 2017
The following report describes an assessment instrument called the Mathematics Performance and Cognition (MPAC) interview. The MPAC interview was designed to measure two outcomes of interest: first and second graders' mathematics achievement in number, operations, and equality; and to gather…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy – Grantee Submission, 2017
This report provides an overview of the development, implementation, and psychometric properties of a student mathematics interview designed to assess first- and second-grade student achievement and thinking processes. The student interview was conducted with 622 first- or second-grade students in 22 schools located in two public school districts…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Leber, Jasmin; Renkl, Alexander; Nückles, Matthias; Wäschle, Kristin – Learning: Research and Practice, 2018
According to the model of constructive alignment, learners adjust their learning strategies to the announced assessment (backwash effect). Hence, when teaching for understanding, the assessment method should be aligned with this teaching goal to ensure that learners engage in corresponding learning strategies. A quasi-experimental field study with…
Descriptors: Learning Strategies, Testing Problems, Educational Objectives, Learning Motivation
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Ercikan, Kadriye; Oliveri, María Elena – Applied Measurement in Education, 2016
Assessing complex constructs such as those discussed under the umbrella of 21st century constructs highlights the need for a principled assessment design and validation approach. In our discussion, we made a case for three considerations: (a) taking construct complexity into account across various stages of assessment development such as the…
Descriptors: Evaluation Methods, Test Construction, Design, Scaling
Taskinen, Päivi H.; Steimel, Jochen; Gräfe, Linda; Engell, Sebastian; Frey, Andreas – Peabody Journal of Education, 2015
This study examined students' competencies in engineering education at the university level. First, we developed a competency model in one specific field of engineering: process dynamics and control. Then, the theoretical model was used as a frame to construct test items to measure students' competencies comprehensively. In the empirical…
Descriptors: Models, Engineering Education, Test Items, Outcome Measures