Showing 16 to 30 of 81 results
Peer reviewed
Barry, Carol L.; Jones, Andrew T.; Ibáñez, Beatriz; Grambau, Marni; Buyske, Jo – Educational Measurement: Issues and Practice, 2022
In response to the COVID-19 pandemic, the American Board of Surgery (ABS) shifted from in-person to remote administrations of the oral certifying exam (CE). Although the overall exam architecture remains the same, there are a number of differences in administration and staffing costs, exam content, security concerns, and the tools used to give the…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Verbal Tests
Peer reviewed
William Belzak; J. R. Lockwood; Yigal Attali – Educational Measurement: Issues and Practice, 2024
Remote proctoring, or monitoring test takers through internet-based, video-recording software, has become critical for maintaining test security on high-stakes assessments. The main role of remote proctors is to make judgments about test takers' behaviors and decide whether these behaviors constitute rule violations. Variability in proctor…
Descriptors: Computer Security, High Stakes Tests, English (Second Language), Second Language Learning
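The abstract above is truncated, but its focus is the variability in remote proctors' judgments about rule violations. Purely as an illustrative aside (not taken from the article), one common way to quantify such rater variability is an agreement statistic like Cohen's kappa computed over two proctors' flag/no-flag decisions; the decisions below are hypothetical.

```python
# Illustrative only: quantifying agreement between two proctors'
# flag / no-flag decisions with Cohen's kappa (hypothetical data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same set of cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters judged independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# 1 = flagged as a rule violation, 0 = not flagged (made-up decisions).
proctor_1 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
proctor_2 = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
print(f"kappa = {cohens_kappa(proctor_1, proctor_2):.2f}")
```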
Peer reviewed
Wang, Wenhao; Kingston, Neal M.; Davis, Marcia H.; Tiemann, Gail C.; Tonks, Stephen; Hock, Michael – Educational Measurement: Issues and Practice, 2021
Through the use of item response theory, adaptive tests are more efficient than fixed-length tests and present students with questions tailored to their proficiency level. Although the adaptive algorithm is straightforward, developing a multidimensional computer adaptive test (MCAT) measure is complex. Evidence-centered design…
Descriptors: Evidence Based Practice, Reading Motivation, Adaptive Testing, Computer Assisted Testing
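The entry above describes adaptive testing as tailoring questions to proficiency through item response theory. As a hedged, generic illustration (not the article's multidimensional MCAT measure), the sketch below picks the next item by maximum Fisher information under a unidimensional 2PL model; the item bank and ability estimate are made up.

```python
# Illustrative sketch of IRT-based adaptive item selection (2PL model).
# Not the article's measure; item parameters and theta are hypothetical.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response: P = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: (item_id, discrimination a, difficulty b).
bank = [("item1", 1.2, -1.0), ("item2", 0.8, 0.0),
        ("item3", 1.5, 0.4), ("item4", 1.0, 1.2)]
theta_hat = 0.3  # current provisional ability estimate

# Administer the item that is most informative at the current estimate.
next_item = max(bank, key=lambda it: item_information(theta_hat, it[1], it[2]))
print("next item:", next_item[0])
```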
Peer reviewed
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found to be useful to reduce response biases in personality assessments. However, conventional scoring methods for the MFC items result in ipsative data, hindering the wider applications of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
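The entry above notes that conventional scoring of multidimensional forced-choice (MFC) items yields ipsative data. The minimal sketch below, with made-up blocks and rankings, shows why: rank points within each block sum to a constant, so totals carry only relative, within-person information.

```python
# Illustrative only: why conventional rank scoring of forced-choice blocks
# yields ipsative data. Blocks, traits, and responses are hypothetical.

# Each block asks the respondent to rank three statements, one per trait.
# Conventional scoring assigns 2 / 1 / 0 points for most / middle / least like me.
blocks = [
    # (traits in the block), respondent's ranking from "most like me" to "least"
    (("extraversion", "agreeableness", "conscientiousness"),
     ["agreeableness", "extraversion", "conscientiousness"]),
    (("extraversion", "agreeableness", "conscientiousness"),
     ["conscientiousness", "agreeableness", "extraversion"]),
]

scores = {"extraversion": 0, "agreeableness": 0, "conscientiousness": 0}
for _traits, ranking in blocks:
    for points, trait in zip((2, 1, 0), ranking):
        scores[trait] += points

print(scores)                # per-trait totals
print(sum(scores.values()))  # always 3 * number_of_blocks: only relative information
```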
Peer reviewed
Langenfeld, Thomas – Educational Measurement: Issues and Practice, 2020
The COVID-19 pandemic has accelerated the shift toward online learning solutions, creating the need to develop online assessment solutions. Vendors offer online assessment delivery systems with varying security levels designed to minimize unauthorized behaviors. Combating cheating and securing assessment content, however, is not solely the…
Descriptors: Computer Assisted Testing, Justice, COVID-19, Pandemics
Peer reviewed
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
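The entry above concerns mode comparability between paper and online administrations with random assignment. As a generic, hypothetical illustration (not ACT data or the study's actual analyses), one basic check is a standardized mean difference between the two mode groups:

```python
# Illustrative only: a minimal mode-comparability check comparing scores from
# randomly assigned paper vs. online groups via a standardized mean difference.
# The scores below are made up, not ACT data.
from statistics import mean, stdev

paper_scores = [21, 24, 19, 26, 22, 23, 20, 25, 18, 24]
online_scores = [22, 23, 20, 25, 21, 24, 19, 26, 20, 23]

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5

print(f"mode effect (Cohen's d) = {cohens_d(paper_scores, online_scores):.3f}")
```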
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
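The module above defines automated scoring as computer algorithms that mimic human scoring of open-ended items. The sketch below is a deliberately minimal illustration of that idea, assuming scikit-learn is available; the responses, human scores, and TF-IDF-plus-ridge model are hypothetical stand-ins for the far richer feature sets and validation used in operational engines.

```python
# Illustrative sketch of automated scoring: fit a model to human-assigned scores
# and use it to score a new response. Responses and scores are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

train_responses = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Plants use sunlight to make food.",
    "The sun is bright.",
    "Chlorophyll absorbs light, driving the conversion of CO2 and water into sugar.",
]
human_scores = [3, 2, 0, 3]  # scores assigned by trained human raters

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_responses)
model = Ridge(alpha=1.0).fit(X_train, human_scores)

new_response = ["Plants turn sunlight, water, and carbon dioxide into glucose."]
predicted = model.predict(vectorizer.transform(new_response))[0]
print(f"predicted score: {predicted:.1f}")
```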
Peer reviewed
DiCerbo, Kristen – Educational Measurement: Issues and Practice, 2020
We have the ability to capture data from students' interactions with digital environments as they engage in learning activity. This provides the potential to reimagine assessment as something that becomes part of our natural educational activity and can be used to support learning. These new data allow us to more closely examine the…
Descriptors: Student Diversity, Information Technology, Learning Activities, Learning Processes
Peer reviewed
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Peer reviewed
Schneider, M. Christina; Agrimson, Jared; Veazey, Mary – Educational Measurement: Issues and Practice, 2022
This paper presents results of a score interpretation study for a computer adaptive mathematics assessment. The study purpose was to test the efficacy of item developers' alignment of items to Range Achievement-Level Descriptors (RALDs; Egan et al.) against the empirical achievement-level alignment of items to investigate the use of RALDs as the…
Descriptors: Computer Assisted Testing, Mathematics Tests, Scores, Grade 3
Peer reviewed
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Allalouf, Avi; Gutentag, Tony; Baumer, Michal – Educational Measurement: Issues and Practice, 2017
Quality control (QC) in testing is paramount. QC procedures for tests can be divided into two types. The first type, one that has been well researched, is QC for tests administered to large population groups on few administration dates using a small set of test forms (e.g., large-scale assessment). The second type is QC for tests, usually…
Descriptors: Quality Control, Scoring, Computer Assisted Testing, Error Patterns
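The entry above distinguishes QC for large, few-date administrations from QC for continuously administered tests. Purely as an illustration (not the authors' procedures), a basic continuous-administration check flags dates whose mean score drifts far from historical daily means; the data and the two-standard-deviation threshold are arbitrary choices.

```python
# Illustrative only: a simple quality-control check that flags administration
# dates whose mean score drifts unusually far from historical daily means.
# Data and the 2-standard-deviation threshold are hypothetical choices.
from statistics import mean, stdev

historical_daily_means = [100.2, 99.8, 100.5, 101.1, 99.4, 100.0, 100.7, 99.9]
recent_daily_means = {"2024-03-01": 100.3, "2024-03-02": 99.6, "2024-03-03": 96.2}

center = mean(historical_daily_means)
spread = stdev(historical_daily_means)

for day, m in recent_daily_means.items():
    if abs(m - center) > 2 * spread:
        print(f"{day}: daily mean {m} is an outlier vs. {center:.1f} +/- {2 * spread:.1f}")
```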
Peer reviewed
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
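The entry above concerns automatic item generation (AIG) as a way to ease the item-writing burden. The toy sketch below, with an invented word bank, shows the template-based flavor of such systems; operational AIG relies on item models, corpora, and NLP resources rather than a hard-coded dictionary.

```python
# Illustrative only: a tiny template-based generator for multiple-choice
# vocabulary items. The word bank and distractor pool are made up.
import random

word_bank = {
    "arduous": "requiring great effort",
    "candid": "honest and direct",
    "frugal": "careful about spending money",
    "lucid": "clearly expressed and easy to understand",
}

def generate_item(target, rng):
    correct = word_bank[target]
    distractors = rng.sample([d for w, d in word_bank.items() if w != target], 3)
    options = distractors + [correct]
    rng.shuffle(options)
    stem = f'Which of the following best defines "{target}"?'
    key = "ABCD"[options.index(correct)]
    return stem, options, key

rng = random.Random(7)  # fixed seed so the generated item is reproducible
stem, options, key = generate_item("frugal", rng)
print(stem)
for letter, option in zip("ABCD", options):
    print(f"  {letter}. {option}")
print("Key:", key)
```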
Peer reviewed
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank – Educational Measurement: Issues and Practice, 2019
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper-based assessment (PBA). In the 2015 cycle, computer-based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an…
Descriptors: Reading Comprehension, Computer Assisted Testing, Achievement Tests, Foreign Countries
Peer reviewed
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – Educational Measurement: Issues and Practice, 2018
A multistage adaptive testing (MST) design was implemented for the Programme for the International Assessment of Adult Competencies (PIAAC) starting in 2012 for about 40 countries and has been implemented for the 2018 cycle of the Programme for International Student Assessment (PISA) for more than 80 countries. Using examples from PISA and PIAAC,…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Test Validity
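The entry above concerns multistage adaptive testing (MST), in which test takers are routed to later modules based on earlier performance. The sketch below shows only the generic routing logic; the module names and cut scores are hypothetical, not PISA or PIAAC values.

```python
# Illustrative only: the basic routing logic of a two-stage multistage adaptive
# test (MST). Module names and cut scores are hypothetical.

def route_second_stage(stage1_num_correct, low_cut=4, high_cut=8):
    """Route to an easier, core, or harder stage-2 module based on stage-1 performance."""
    if stage1_num_correct < low_cut:
        return "stage2_easy"
    if stage1_num_correct >= high_cut:
        return "stage2_hard"
    return "stage2_core"

for score in (2, 6, 9):
    print(f"stage-1 correct = {score} -> {route_second_stage(score)}")
```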