Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 4 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 7 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Construct Validity | 9 |
| Difficulty Level | 9 |
| Undergraduate Students | 9 |
| Test Reliability | 6 |
| Test Items | 5 |
| Factor Analysis | 4 |
| Foreign Countries | 4 |
| Cognitive Processes | 3 |
| Electronic Learning | 3 |
| Factor Structure | 3 |
| Item Response Theory | 3 |
Source
| Source | Count |
| --- | --- |
| British Journal of… | 1 |
| Education and Information… | 1 |
| International Journal of… | 1 |
| Journal of Geoscience… | 1 |
| ProQuest LLC | 1 |
| SAGE Open | 1 |
| Turkish Online Journal of… | 1 |
Author
| Author | Count |
| --- | --- |
| Abu Muaili, Zainab Helmy | 1 |
| Akbulut, Yavuz | 1 |
| Aldabbas, Lujayn | 1 |
| Bayyat, Manal | 1 |
| Daday, Jerry | 1 |
| Dönmez, Onur | 1 |
| Erdem, Mukaddes | 1 |
| Harris, Sara E. | 1 |
| Kaptan, Miray | 1 |
| Khoshdel, Fahimeh | 1 |
| McDaniel, Kerrie | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 8 |
| Journal Articles | 6 |
| Dissertations/Theses -… | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 7 |
| Postsecondary Education | 7 |
Location
| Location | Count |
| --- | --- |
| Canada | 1 |
| China (Beijing) | 1 |
| Indiana | 1 |
| Iran | 1 |
| Jordan | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Graduate Record Examinations | 1 |
| SAT (College Admission Test) | 1 |
Bayyat, Manal; Abu Muaili, Zainab Helmy; Aldabbas, Lujayn – Turkish Online Journal of Distance Education, 2022
This study aims to investigate: (1) the construct validity of the "Blended Learners' Online Component Challenges" (BLOCC) scale; (2) the internal reliability of the scale; and (3) differences in blended learners' online component challenges according to socio-demographic variables among Sport Science students. The sample…
Descriptors: Electronic Learning, Blended Learning, Measures (Individuals), Construct Validity
Scribner, Emily D.; Harris, Sara E. – Journal of Geoscience Education, 2020
The Mineralogy Concept Inventory (MCI) is a statistically validated 18-question assessment that can be used to measure learning gains in introductory mineralogy courses. Development of the MCI was an iterative process involving expert consultation, student interviews, assessment deployment, and statistical analysis. Experts at the two universities…
Descriptors: Undergraduate Students, Mineralogy, Introductory Courses, Science Tests
Novak, Elena; McDaniel, Kerrie; Daday, Jerry; Soyturk, Ilker – British Journal of Educational Technology, 2022
e-Textbooks and e-learning technologies have become ubiquitous in college and university courses as faculty seek out ways to provide more engaging, flexible and customizable learning opportunities for students. However, the same technologies that support learning can serve as a source of frustration. Research on frustration with technology is…
Descriptors: Electronic Learning, Electronic Publishing, Textbooks, Student Attitudes
Dönmez, Onur; Akbulut, Yavuz; Telli, Esra; Kaptan, Miray; Özdemir, Ibrahim H.; Erdem, Mukaddes – Education and Information Technologies, 2022
In the current study, we aimed to develop a reliable and valid scale to address individual cognitive load types. Existing scale development studies involved a limited number of items without adequate convergent, discriminant, and criterion validity checks. Through a multistep correlational study, we proposed a three-factor scale with 13 items to…
Descriptors: Test Construction, Content Validity, Construct Validity, Test Reliability
Yunjiu, Luo; Wei, Wei; Zheng, Ying – SAGE Open, 2022
Artificial intelligence (AI) technologies have the potential to reduce the workload of second language (L2) teachers and test developers. We propose two AI distractor-generating methods for creating Chinese vocabulary items: semantic similarity and visual similarity. Semantic similarity refers to antonyms and synonyms, while visual similarity…
Descriptors: Chinese, Vocabulary Development, Artificial Intelligence, Undergraduate Students
Khoshdel, Fahimeh – International Journal of Language Testing, 2017
In the current study, the validity of the C-Test is investigated using the construct identification approach. Based on this approach, the factors deemed to affect item difficulty in C-Test items were identified. To this end, 11 factors were selected to enter into a Linear Logistic Testing Model (LLTM) analysis to…
Descriptors: Cloze Procedure, Language Tests, Test Items, Difficulty Level
Yoon, So Yoon – ProQuest LLC, 2011
Working under classical test theory (CTT) and item response theory (IRT) frameworks, this study investigated psychometric properties of the Revised Purdue Spatial Visualization Tests: Visualization of Rotations (Revised PSVT:R). The original version, the PSVT:R was designed by Guay (1976) to measure spatial visualization ability in…
Descriptors: Undergraduate Students, Test Bias, Guessing (Tests), Construct Validity
Sebrechts, Marc M.; And Others – 1993
The construct validity of algebra word problems for measuring quantitative reasoning was examined, focusing on an analysis of problem attributes and on the analysis of constructed-response solutions. Constructed-response solutions to 20 problems from the Graduate Record Examinations (GRE) General Test were collected from 51 undergraduates.…
Descriptors: Algebra, Cognitive Processes, Construct Validity, Constructed Response
Ward, William C.; And Others – 1986
The keylist format (rather than the conventional multiple-choice format) for item presentation provides a machine-scorable surrogate for a truly free-response test. In this format, the examinee is required to think of an answer, look it up in a long ordered list, and enter its number on an answer sheet. The introduction of keylist items into…
Descriptors: Analogy, Aptitude Tests, Construct Validity, Correlation

