Showing all 10 results
Peer reviewed
Direct link
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
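The record above treats each multiple-choice response as a mix of knowledge, blind guessing, and blunder. As an illustration only, and not the authors' model, here is a minimal Python sketch of one such mixture for a single m-option item; the names k (knowledge), b (blunder), and m (number of options) are assumptions introduced for this example.

    # Toy mixture for one m-option multiple-choice item (illustrative only).
    # k = P(examinee knows the answer), b = P(confident blunder);
    # the remaining mass 1 - k - b is guessed uniformly at random.
    def p_correct(k: float, b: float, m: int) -> float:
        """Probability of a correct response under the toy mixture."""
        return k + (1.0 - k - b) / m            # blunders are never correct here

    def p_incorrect(k: float, b: float, m: int) -> float:
        return b + (1.0 - k - b) * (m - 1) / m

    if __name__ == "__main__":
        print(p_correct(0.6, 0.1, 4))           # 0.675 for a 4-option item
        print(p_incorrect(0.6, 0.1, 4))         # 0.325; the two sum to 1

The point of such a mixture is that the same observed proportion correct can arise from different blends of knowledge and guessing, which is what the probabilistic models evaluated in the study aim to disentangle.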
Peer reviewed
Direct link
Jones, W. Paul – Educational and Psychological Measurement, 2014
A study in a university clinic/laboratory investigated adaptive Bayesian scaling as a supplement to interpretation of scores on the Mini-IPIP. A "probability of belonging" in categories of low, medium, or high on each of the Big Five traits was calculated after each item response and continued until all items had been used or until a…
Descriptors: Personality Traits, Personality Measures, Bayesian Statistics, Clinics
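The abstract sketches an adaptive procedure that updates a "probability of belonging" to a low, medium, or high category after each item and stops once the classification is certain enough or the items run out. The Python sketch below illustrates that kind of Bayesian updating under assumptions made up for the example: the per-category endorsement probabilities in LIKE and the 0.95 stopping threshold are hypothetical, not values from the study.

    import numpy as np

    # Hypothetical endorsement probabilities for examinees who are truly
    # low, medium, or high on a trait (values invented for illustration).
    LIKE = {"low": 0.2, "medium": 0.5, "high": 0.8}

    def classify(responses, threshold=0.95):
        """Update P(category | responses) item by item; stop early once any
        posterior clears the threshold, otherwise use every item."""
        cats = list(LIKE)
        post = np.full(len(cats), 1.0 / len(cats))     # uniform prior
        for r in responses:                            # r is 0 or 1
            like = np.array([LIKE[c] if r else 1 - LIKE[c] for c in cats])
            post *= like
            post /= post.sum()
            if post.max() >= threshold:
                break
        return cats[int(post.argmax())], post

    print(classify([1, 1, 0, 1, 1, 1, 1, 1]))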
Peer reviewed
Direct link
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
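The abstract names the Mantel-Haenszel test as the conventional, observed-groups approach that the latent-variable method is contrasted with. For orientation only, here is a sketch of the Mantel-Haenszel common odds ratio for a single item, matching on a stratifying score; the variable names and the simulated data are assumptions of this example.

    import numpy as np

    def mh_odds_ratio(correct, group, strata):
        """Mantel-Haenszel common odds ratio for one item.
        correct: 0/1 responses; group: 0 = reference, 1 = focal;
        strata: matching variable such as a banded total score."""
        num = den = 0.0
        for s in np.unique(strata):
            m = strata == s
            a = np.sum((group[m] == 0) & (correct[m] == 1))   # reference, correct
            b = np.sum((group[m] == 0) & (correct[m] == 0))   # reference, incorrect
            c = np.sum((group[m] == 1) & (correct[m] == 1))   # focal, correct
            d = np.sum((group[m] == 1) & (correct[m] == 0))   # focal, incorrect
            n = a + b + c + d
            if n:
                num += a * d / n
                den += b * c / n
        return num / den if den else float("nan")

    rng = np.random.default_rng(3)
    group = rng.integers(0, 2, 400)
    strata = rng.integers(0, 5, 400)
    correct = rng.binomial(1, 0.6 - 0.05 * group)
    print(mh_odds_ratio(correct, group, strata))    # values near 1 suggest little DIF

This kind of comparison requires the grouping to be observed in advance, which is exactly the limitation the latent-class extension in the article addresses.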
Peer reviewed
PDF on ERIC Download full text
Vaughn, Brandon K. – Journal on School Educational Technology, 2008
This study considers the importance of contextual effects on assessments of item bias and differential item functioning (DIF) in measurement. Often, in educational studies, students are clustered within teachers or schools, and this clustering can affect psychometric properties yet is largely ignored by traditional item analyses. A…
Descriptors: Test Bias, Educational Assessment, Educational Quality, Context Effect
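The record argues that ignoring the clustering of students within teachers or schools can distort item analyses. The sketch below is not the study's multilevel measurement model; it is only a flat logistic-regression DIF check in which classroom dummies soak up cluster differences, run on data simulated here (classroom counts, effect sizes, and variable names are all hypothetical).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Toy clustered data: 20 classrooms of 30 students (all values hypothetical).
    rng = np.random.default_rng(0)
    cluster = np.repeat(np.arange(20), 30)
    score = rng.normal(0, 1, 20)[cluster] + rng.normal(0, 1, cluster.size)
    group = rng.integers(0, 2, cluster.size)       # student-level grouping, e.g. gender
    y = rng.binomial(1, 1 / (1 + np.exp(-(score - 0.3 * group))))

    # Ordinary logistic-regression DIF check, with classroom dummies added so the
    # cluster effect is not misread as a group effect.
    X = pd.DataFrame({"score": score, "group": group})
    X = pd.concat([X, pd.get_dummies(cluster, prefix="class",
                                     drop_first=True, dtype=float)], axis=1)
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(fit.params["group"])                     # DIF effect after conditioning on score

A fuller treatment along the study's lines would model the classroom effect as a random effect rather than a set of fixed dummies.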
Peer reviewed
Kearns, Jack; Meredith, William – Psychometrika, 1975
Examines the question of how large a sample must be in order to produce empirical Bayes estimates that are preferable to other commonly used estimates, such as the observed proportion-correct score. (Author/RC)
Descriptors: Bayesian Statistics, Item Analysis, Probability, Sampling
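For context on what "empirical Bayes estimates" versus the proportion-correct observed score means in practice, here is a generic beta-binomial shrinkage sketch, not Kearns and Meredith's estimator: each examinee's proportion correct is pulled toward the group mean by an amount fitted from the data, and the method-of-moments prior fit is an assumption of this example.

    import numpy as np

    def eb_estimates(correct_counts, n_items):
        """Beta-binomial empirical Bayes domain-score estimates that shrink each
        examinee's proportion correct toward the group mean."""
        p = correct_counts / n_items
        m, v = p.mean(), p.var(ddof=1)
        binom_var = m * (1 - m) / n_items              # rough sampling variance
        t = max(v - binom_var, 1e-9)                   # crude true-score variance
        s = m * (1 - m) / t - 1                        # implied a + b of a Beta prior
        a, b = m * s, (1 - m) * s
        return (correct_counts + a) / (n_items + a + b)

    scores = np.array([3, 7, 9, 5, 10, 2, 6])          # correct answers, 10-item test
    print(scores / 10)                                 # raw proportion-correct scores
    print(eb_estimates(scores, 10))                    # shrunken counterparts

The sample-size question in the article is essentially when estimates like the second line can be trusted to beat the first.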
Peer reviewed
Smith, Jeffrey K. – Educational and Psychological Measurement, 1980
Weber contends that the use of Rasch analysis, principal components analysis, and classical test analysis shows that an instrument designed to measure a "bilevel dimensionality" in probability achievement measures a single latent trait. That interpretation and the use of Rasch and classical analysis to establish unidimensionality are…
Descriptors: Academic Achievement, Bayesian Statistics, Cognitive Processes, Item Analysis
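The exchange concerns whether Rasch, principal components, and classical analyses can establish that a probability-achievement instrument measures a single latent trait. As a neutral illustration of the kind of evidence involved, and not Weber's or Smith's analysis, here is a crude eigenvalue screen of the inter-item correlation matrix on simulated data; the simulation, the eight-item length, and the eigenvalue-ratio summary are assumptions of this sketch, and such screens are themselves part of what the exchange disputes.

    import numpy as np

    def eigenvalue_check(responses):
        """Eigenvalues of the inter-item correlation matrix; a dominant first
        eigenvalue is often read as support for a single latent trait."""
        corr = np.corrcoef(responses, rowvar=False)    # items in columns
        eig = np.sort(np.linalg.eigvalsh(corr))[::-1]
        return eig, eig[0] / eig[1]

    rng = np.random.default_rng(1)
    theta = rng.normal(size=(200, 1))                  # one simulated trait
    responses = (rng.random((200, 8)) <
                 1 / (1 + np.exp(-(theta - rng.normal(size=8))))).astype(int)
    print(eigenvalue_check(responses)[1])              # large ratio: one dominant factor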
Peer reviewed
Hambleton, Ronald K.; And Others – Review of Educational Research, 1978
Topics concerning latent trait theory are addressed: (1) dimensionality of latent space, local independence, and item characteristic curves; (2) models--equations, parameter estimation, testing assumptions, and goodness of fit; (3) applications--test development, item bias, tailored testing, and equating; and (4) advantages over classical…
Descriptors: Ability, Bayesian Statistics, Goodness of Fit, Item Analysis
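One concrete anchor for the topics listed is the item characteristic curve; a standard form covered in reviews of this kind is the three-parameter logistic, with discrimination a, difficulty b, and pseudo-guessing c. The specific parameter values in the example below are hypothetical.

    import numpy as np

    def icc_3pl(theta, a, b, c):
        """Three-parameter logistic item characteristic curve:
        P(correct | theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))."""
        return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

    print(icc_3pl(np.array([-1.0, 0.0, 1.0]), a=1.2, b=0.0, c=0.2))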
Abdel-fattah, Abdel-fattah A. – 1992
A scaling procedure based on item response theory (IRT) is proposed that also fits non-hierarchical test structures. The binary scores of a test of English were used to calculate the probabilities of answering each item correctly. The probability matrix was factor analyzed, and the difficulty intervals or estimates corresponding to the factors…
Descriptors: Bayesian Statistics, Difficulty Level, English, Estimation (Mathematics)
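The abstract compresses the procedure heavily ("the probability matrix was factor analyzed"), so the sketch below is only one plausible reading, not the author's method: bin examinees by total score, tabulate the proportion answering each item correctly within each bin, and take leading components of that matrix via SVD. The binning scheme, bin count, and SVD summary are all assumptions of this example.

    import numpy as np

    def probability_matrix(responses, n_bins=5):
        """Proportion correct per item within total-score bins: a (bins x items)
        probability matrix built from 0/1 response data."""
        total = responses.sum(axis=1)
        edges = np.quantile(total, np.linspace(0, 1, n_bins + 1))
        bins = np.clip(np.digitize(total, edges[1:-1]), 0, n_bins - 1)
        return np.vstack([responses[bins == k].mean(axis=0) for k in range(n_bins)])

    def leading_factors(P, k=2):
        """Leading components of the centered probability matrix via SVD,
        standing in for the factor analysis the abstract mentions."""
        U, s, Vt = np.linalg.svd(P - P.mean(axis=0), full_matrices=False)
        return Vt[:k].T, s[:k]

    rng = np.random.default_rng(2)
    theta = rng.normal(size=(300, 1))
    resp = (rng.random((300, 30)) <
            1 / (1 + np.exp(-(theta - rng.normal(size=30))))).astype(int)
    print(leading_factors(probability_matrix(resp))[1])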
PDF pending restoration
Civil Service Commission, Washington, DC. Personnel Research and Development Center. – 1976
This pamphlet reprints three papers and an invited discussion of them, read at a Division 5 Symposium at the 1975 American Psychological Association Convention. The first paper describes a Bayesian tailored testing process and shows how it demonstrates the importance of using test items with high discrimination, low guessing probability, and a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Oriented Programs, Computer Programs
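The first paper is described as showing why tailored (adaptive) testing favours items with high discrimination and low guessing. As a rough illustration of why, and not the symposium's procedure, here is a sketch that scores candidate 3PL items by Fisher information at the current ability estimate; a fully Bayesian version would work from the posterior for ability, and the item parameters below are hypothetical.

    import numpy as np

    def p3pl(theta, a, b, c):
        return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

    def fisher_info(theta, a, b, c):
        """3PL item information: grows with discrimination a and shrinks as the
        pseudo-guessing parameter c grows."""
        p = p3pl(theta, a, b, c)
        return (1.7 * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

    def next_item(theta_hat, items, administered):
        """Pick the unused item with the most information at the current estimate."""
        info = [fisher_info(theta_hat, *items[j]) if j not in administered else -np.inf
                for j in range(len(items))]
        return int(np.argmax(info))

    items = [(1.5, 0.0, 0.10), (0.6, 0.0, 0.25), (1.2, 1.0, 0.20)]   # (a, b, c)
    print(next_item(0.0, items, administered=set()))                 # picks item 0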
Warm, Thomas A. – 1978
This primer is an introduction to item response theory (also called item characteristic curve theory, or latent trait theory) as it is used most commonly--for scoring multiple choice achievement or aptitude tests. Written for the testing practitioner with minimum training in statistics and psychometrics, it presents and illustrates the basic…
Descriptors: Ability Identification, Achievement Tests, Adaptive Testing, Aptitude Tests
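Since the primer is described as covering the most common use of IRT, scoring multiple-choice achievement or aptitude tests, a minimal scoring sketch may help orient readers: a grid-search maximum-likelihood ability estimate under 3PL items. The item parameters and response pattern below are hypothetical, and the primer itself should be consulted for the estimation methods it actually presents.

    import numpy as np

    def p3pl(theta, a, b, c):
        return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

    def score_ml(responses, items, grid=np.linspace(-4, 4, 161)):
        """Grid-search maximum-likelihood ability estimate: the theta value that
        makes the observed 0/1 response pattern most likely under 3PL items."""
        a, b, c = (np.array(v) for v in zip(*items))
        loglik = [np.sum(responses * np.log(p3pl(t, a, b, c)) +
                         (1 - responses) * np.log(1 - p3pl(t, a, b, c)))
                  for t in grid]
        return grid[int(np.argmax(loglik))]

    items = [(1.0, -1.0, 0.2), (1.2, 0.0, 0.2), (0.8, 1.0, 0.25)]    # (a, b, c)
    print(score_ml(np.array([1, 1, 0]), items))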