Showing 1 to 15 of 20 results
Peer reviewed | PDF on ERIC
Karyssa A. Courey; Frederick L. Oswald; Steven A. Culpepper – Practical Assessment, Research & Evaluation, 2024
Historically, organizational researchers have fully embraced frequentist statistics and null hypothesis significance testing (NHST). Bayesian statistics is an underused alternative paradigm offering numerous benefits for organizational researchers and practitioners: e.g., accumulating direct evidence for the null hypothesis (vs. 'fail to reject…
Descriptors: Bayesian Statistics, Statistical Distributions, Researchers, Institutional Research
Peer reviewed | PDF on ERIC
Meagan Karvonen; Russell Swinburne Romine; Amy K. Clark – Practical Assessment, Research & Evaluation, 2024
This paper describes methods and findings from student cognitive labs, teacher cognitive labs, and test administration observations as evidence evaluated in a validity argument for a computer-based alternate assessment for students with significant cognitive disabilities. Validity of score interpretations and uses for alternate assessments based…
Descriptors: Students with Disabilities, Intellectual Disability, Severe Disabilities, Student Evaluation
Peer reviewed | PDF on ERIC
Shahid A. Choudhry; Timothy J. Muckle; Christopher J. Gill; Rajat Chadha; Magnus Urosev; Matt Ferris; John C. Preston – Practical Assessment, Research & Evaluation, 2024
The National Board of Certification and Recertification for Nurse Anesthetists (NBCRNA) conducted a one-year research study comparing performance on the traditional continued professional certification assessment, administered at a test center or online with remote proctoring, to a longitudinal assessment that required answering quarterly…
Descriptors: Nurses, Certification, Licensing Examinations (Professions), Computer Assisted Testing
Peer reviewed | PDF on ERIC
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Peer reviewed | PDF on ERIC
Khodamoradi, Abolfazl; Maghsoudi, Mojtaba; Saidi, Mavadat – Practical Assessment, Research & Evaluation, 2022
This study aimed to explore the washback effects of implementing online formative assessment (OFA) in Iranian Teacher Education Universities. To this end, a sample of 227 prospective teachers majoring in Teaching English as a Foreign Language and 21 teacher educators were randomly selected. In an explanatory sequential design, their perceptions of…
Descriptors: Foreign Countries, Computer Assisted Testing, Formative Evaluation, Preservice Teachers
Peer reviewed | PDF on ERIC
Barnard, John J. – Practical Assessment, Research & Evaluation, 2018
Measurement specialists strive to shorten assessment time without compromising the precision of scores. Computerized Adaptive Testing (CAT) has rapidly gained ground over the past decades as a means to this goal. However, CAT parameters need to be explored in simulations before implementation so that it can be determined whether…
Descriptors: Computer Assisted Testing, Adaptive Testing, Simulation, Multiple Choice Tests
Peer reviewed | PDF on ERIC
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Peer reviewed | PDF on ERIC
Lee, Dukjae; Buzick, Heather; Sireci, Stephen G.; Lee, Mina; Laitusis, Cara – Practical Assessment, Research & Evaluation, 2021
Although there has been substantial research on the effects of test accommodations on students' performance, there has been far less research on students' use of embedded accommodations and other accessibility supports at the item and whole test level in operational testing programs. Data on embedded accessibility supports from digital logs…
Descriptors: Academic Accommodations (Disabilities), Testing Accommodations, Accessibility (for Disabled), Computer Assisted Testing
Peer reviewed | PDF on ERIC
Bryant, William – Practical Assessment, Research & Evaluation, 2017
As large-scale standardized tests move from paper-based to computer-based delivery, opportunities arise for test developers to make use of items beyond traditional selected and constructed response types. Technology-enhanced items (TEIs) have the potential to provide advantages over conventional items, including broadening construct measurement,…
Descriptors: Standardized Tests, Test Items, Computer Assisted Testing, Test Format
Peer reviewed | PDF on ERIC
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed | PDF on ERIC
Rudner, Lawrence – Practical Assessment, Research & Evaluation, 2016
In the machine learning literature, it is commonly accepted that Naïve Bayes classifiers outperform Logistic Regression classifiers in classification accuracy at small calibration sample sizes, with Logistic Regression overtaking them as sample sizes increase. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…
Descriptors: Accuracy, Bayesian Statistics, Regression (Statistics), Probability
Peer reviewed | PDF on ERIC
Fask, Alan; Englander, Fred; Wang, Zhaobo – Practical Assessment, Research & Evaluation, 2015
There has been a remarkable growth in distance learning courses in higher education. Despite indications that distance learning courses are more vulnerable to cheating behavior than traditional courses, there has been little research studying whether online exams facilitate a relatively greater level of cheating. This article examines this issue…
Descriptors: Distance Education, Introductory Courses, Statistics, Cheating
Peer reviewed | PDF on ERIC
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
Peer reviewed | PDF on ERIC
Huebner, Alan – Practical Assessment, Research & Evaluation, 2012
Computerized classification tests (CCTs) often use sequential item selection which administers items according to maximizing psychometric information at a cut point demarcating passing and failing scores. This paper illustrates why this method of item selection leads to the overexposure of a significant number of items, and the performances of…
Descriptors: Computer Assisted Testing, Classification, Test Items, Sequential Approach
Peer reviewed | Direct link
Thompson, Nathan A.; Weiss, David J. – Practical Assessment, Research & Evaluation, 2011
A substantial amount of research has been conducted over the past 40 years on technical aspects of computerized adaptive testing (CAT), such as item selection algorithms, item exposure controls, and termination criteria. However, there is little literature providing practical guidance on the development of a CAT. This paper seeks to collate some…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Construction, Models