Publication Date
| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Error Patterns | 5 |
| Item Response Theory | 5 |
| Test Format | 5 |
| Test Items | 5 |
| Difficulty Level | 3 |
| Item Analysis | 2 |
| Psychometrics | 2 |
| Statistical Analysis | 2 |
| Academic Ability | 1 |
| Achievement Tests | 1 |
| Artificial Intelligence | 1 |
Source
| Source | Results |
| --- | --- |
| Applied Measurement in Education | 1 |
| Educational and Psychological Measurement | 1 |
| Grantee Submission | 1 |
| Journal of Education and e-Learning Research | 1 |
| Practical Assessment, Research & Evaluation | 1 |
Author
| Author | Results |
| --- | --- |
| Al-Badri, Ahmed | 1 |
| Al-Senaidi, Said | 1 |
| DeMars, Christine E. | 1 |
| Ismail, Mimi | 1 |
| Keller, Lisa A. | 1 |
| Keller, Robert R. | 1 |
| Papanastasiou, Elena C. | 1 |
| Sinharay, Sandip | 1 |
| Socha, Alan | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Reports - Research | 5 |
| Journal Articles | 4 |
Education Level
| Education Level | Results |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 2 |
Location
| Location | Results |
| --- | --- |
| Oman | 1 |
Ismail, Mimi; Al-Badri, Ahmed; Al-Senaidi, Said – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' ability estimates, their standard errors, and the psychometric properties of the test across the two test administration modes (electronic and paper-based). A descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
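In IRT terms, the comparison of ability estimates and their standard errors across administration modes comes down to the test information function, since SE(θ̂) ≈ 1/√I(θ̂). A minimal sketch of that relationship, assuming a 2PL model and hypothetical item parameters rather than anything from this study:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ability_se(theta, a, b):
    """Standard error of an ability estimate from the test information
    function: SE(theta) = 1 / sqrt(I(theta)), where under the 2PL
    I(theta) = sum_i a_i^2 * P_i(theta) * (1 - P_i(theta))."""
    p = p_2pl(theta, a, b)
    info = np.sum(a**2 * p * (1.0 - p))
    return 1.0 / np.sqrt(info)

# Hypothetical item parameters for one administration mode; a mode effect
# would surface as shifted difficulties (b) or discriminations (a).
a_items = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b_items = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])

for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.1f}  SE={ability_se(theta, a_items, b_items):.3f}")
```

If the electronic and paper administrations yielded different item parameter estimates, re-running `ability_se` with each set would show how the mode difference propagates into the standard errors.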
Sinharay, Sandip – Grantee Submission, 2018
Tatsuoka (1984) suggested several extended caution indices and their standardized versions that have been used as person-fit statistics by researchers such as Drasgow, Levine, and McLaughlin (1987), Glas and Meijer (2003), and Molenaar and Hoijtink (1990). However, these indices are only defined for tests with dichotomous items. This paper extends…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Error Patterns
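Tatsuoka's extended caution indices are not reproduced here, but the general shape of a standardized person-fit statistic for dichotomous items can be illustrated with the log-likelihood-based l_z index, a different statistic from the ECIs discussed in the paper; a sketch assuming known 2PL item parameters and a hypothetical response vector:

```python
import numpy as np

def lz_person_fit(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic l_z for dichotomous
    items under a 2PL model (illustrative only; not Tatsuoka's extended
    caution indices).

    responses : 0/1 item scores for one examinee
    theta     : the examinee's ability estimate
    a, b      : item discrimination and difficulty parameters
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    q = 1.0 - p
    logit = np.log(p / q)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(q))
    expected = np.sum(p * np.log(p) + q * np.log(q))
    variance = np.sum(p * q * logit**2)
    return (l0 - expected) / np.sqrt(variance)  # large negative => misfit

# Hypothetical examinee who misses easy items but answers hard ones correctly.
a = np.array([1.0, 1.2, 0.9, 1.1, 1.3])
b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
aberrant = np.array([0, 0, 1, 1, 1])
print(round(lz_person_fit(aberrant, theta=0.0, a=a, b=b), 3))
```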
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on accurate estimation of item characteristics, it is essential that test developers become aware of discrepancies that may exist in item parameters before and after item review. The purpose of this study was to examine the answer-changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, and it has taken on added importance under the accountability systems mandated through Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
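For context on what equating involves, one simple IRT linking approach is the mean/sigma method, which places parameters from one form onto another form's scale through a linear transformation estimated from common (anchor) items; a minimal sketch with hypothetical anchor-item difficulties, not the procedure used by the authors:

```python
import numpy as np

def mean_sigma_linking(b_from, b_to):
    """Mean/sigma IRT scale linking: find A, B such that parameters on the
    'from' scale map onto the 'to' scale via b_to = A * b_from + B."""
    A = np.std(b_to, ddof=1) / np.std(b_from, ddof=1)
    B = np.mean(b_to) - A * np.mean(b_from)
    return A, B

# Hypothetical anchor-item difficulties estimated separately on two forms.
b_form_x = np.array([-1.2, -0.4, 0.1, 0.7, 1.3])
b_form_y = np.array([-1.0, -0.2, 0.3, 0.9, 1.6])

A, B = mean_sigma_linking(b_form_x, b_form_y)
theta_x = 0.5                 # ability reported on Form X's scale
theta_on_y = A * theta_x + B  # same ability expressed on Form Y's scale
print(f"A={A:.3f}, B={B:.3f}, theta on Form Y scale={theta_on_y:.3f}")
```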
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
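As an illustration of what "multidimensional test data" means in this context, the sketch below simulates dichotomous responses from a hypothetical two-dimensional compensatory model; fitting a unidimensional model to data like these is the situation in which the abstract warns of biased item parameter estimates. DIMTEST and the ATFIND sample-splitting procedure themselves are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional compensatory data: each item loads on two
# correlated latent traits.
n_persons, n_items = 1000, 20
trait_corr = 0.4
cov = np.array([[1.0, trait_corr], [trait_corr, 1.0]])
theta = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_persons)

a = rng.uniform(0.5, 1.5, size=(n_items, 2))  # discriminations on each trait
d = rng.normal(0.0, 1.0, size=n_items)        # item intercepts

logits = theta @ a.T + d                      # compensatory MIRT logits
responses = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)
print(responses.shape, responses.mean().round(3))
```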

