Showing 1 to 15 of 39 results
Peer reviewed
Direct link
Hill, Laura G. – International Journal of Behavioral Development, 2020
Retrospective pretests ask respondents to report after an intervention on their aptitudes, knowledge, or beliefs before the intervention. A primary reason to administer a retrospective pretest is that in some situations, program participants may over the course of an intervention revise or recalibrate their prior understanding of program content,…
Descriptors: Pretesting, Response Style (Tests), Bias, Testing Problems
Peer reviewed
Direct link
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results: PISA stopped counting non-reached items at the end of the test as valid responses, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
Peer reviewed
Direct link
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Peer reviewed
PDF on ERIC Download full text
Horák, Tania; Gandini, Elena – Research-publishing.net, 2019
This paper reports on the proposed transfer of a paper-based English proficiency exam to an online platform. We discuss both the potential predetermined advantages, which were the impetus for the project, and also some emergent benefits, which prompted an in-depth analysis and reconceptualisation of the exam's role, which in turn we hope will…
Descriptors: Second Language Learning, Second Language Instruction, Feedback (Response), Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Khamkhien, Attapol – English Language Teaching, 2010
To successfully assess how language learners enhance their performance and achieve language learning goals, the four macro skills of listening, speaking, reading, and writing are usually the most frequently assessed and focused areas. However, speaking, as a productive skill, seems intuitively the most important of all the four language skills…
Descriptors: Foreign Countries, English (Second Language), Second Language Instruction, Speech Instruction
Peer reviewed
PDF on ERIC Download full text
Birjandi, Parviz; Bagherkazemi, Marzieh – English Language Teaching, 2011
The pressing need for English oral communication skills in multifarious contexts today is a compelling impetus behind the large number of studies done on oral proficiency interviewing. Moreover, given the recently articulated concerns with the fairness and social dimension of such interviews, parallel concerns have been raised as to how most fairly…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Oral Language
Peer reviewed
Direct link
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
Peer reviewed
Hanson, Bradley A. – Applied Measurement in Education, 1996
Determining whether score distributions differ on two or more test forms administered to samples of examinees from a single population is explored using three statistical tests based on loglinear models. Examples are presented of applying tests of distribution differences to decide if equating is needed for alternative forms of a test. (SLD)
Descriptors: Equated Scores, Scoring, Statistical Distributions, Test Format
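The record above names loglinear models as the basis for testing whether score distributions differ across alternate forms. As a minimal illustrative sketch only, and not Hanson's actual procedure, the Python snippet below runs a chi-square test of independence on hypothetical score-frequency counts from two forms; all counts and variable names are invented for demonstration.

```python
# Illustrative sketch: a chi-square comparison of observed score
# distributions on two test forms. Not Hanson's (1996) loglinear tests;
# all frequencies below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of examinees at each score level (0-5) for two
# alternate forms administered to samples from one population.
form_a_counts = np.array([12, 30, 55, 48, 25, 10])
form_b_counts = np.array([15, 28, 50, 52, 30, 12])

# Rows = forms, columns = score levels; independence of row and column
# corresponds to identical score distributions across forms.
table = np.vstack([form_a_counts, form_b_counts])
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# A small p-value suggests the distributions differ, i.e., equating of
# the alternate forms may be warranted before scores are used
# interchangeably.
```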
Peer reviewed
Ritter, Leonora – Assessment & Evaluation in Higher Education, 2000
Describes and evaluates a "controlled assessment procedure" as a holistic approach to avoiding problems of administering and evaluating traditional exams. Key characteristics include: the question is known well in advance and is broad and open-ended, students are encouraged to respond in any self-selected written format, and the rationale and…
Descriptors: Alternative Assessment, Higher Education, Student Evaluation, Test Format
Stocking, Martha L.; Lewis, Charles – 1995
In the periodic testing environment associated with conventional paper-and-pencil tests, the frequency with which items are seen by test-takers is tightly controlled in advance of testing by policies that regulate both the reuse of test forms and the frequency with which candidates may take the test. In the continuous testing environment…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Test Construction
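The record above concerns controlling how often items are seen in a continuous adaptive-testing environment. As a hedged sketch of the general idea only (a simplified Sympson-Hetter-style probabilistic filter, not the specific method proposed in the report), the snippet below administers the most informative candidate item only with a probability given by its exposure-control parameter; all item identifiers and parameters are hypothetical.

```python
# Illustrative sketch of probabilistic item-exposure control in adaptive
# testing. This is a simplified Sympson-Hetter-style filter, not the
# method proposed in the Stocking and Lewis report; values are invented.
import random

# Candidate items ranked by information at the current ability estimate,
# each with an exposure-control parameter k in (0, 1].
ranked_items = [
    {"id": "item_17", "exposure_k": 0.4},
    {"id": "item_03", "exposure_k": 0.7},
    {"id": "item_52", "exposure_k": 1.0},
]

def select_item(ranked):
    # Walk down the ranked list; administer an item only with probability
    # equal to its exposure parameter, so the most informative items are
    # not shown to every examinee in a continuous-testing environment.
    for item in ranked:
        if random.random() <= item["exposure_k"]:
            return item["id"]
    return ranked[-1]["id"]  # fall back to the last candidate

print(select_item(ranked_items))
```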
Hambleton, Ronald K.; Bollwark, John – 1991
The validity of results from international assessments depends on the correctness of the test translations. If the tests presented in one language are more or less difficult because of the manner in which they are translated, the validity of any interpretation of the results can be questioned. Many test translation methods exist in the literature,…
Descriptors: Cultural Differences, Educational Assessment, English, Foreign Countries
Stocking, Martha L. – 1994
As adaptive testing moves toward operational implementation in large scale testing programs, where it is important that adaptive tests be as parallel as possible to existing linear tests, a number of practical issues arise. This paper concerns three such issues. First, optimum item pool size is difficult to determine in advance of pool…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Standards
Peer reviewed
Wainer, Howard; And Others – Journal of Educational Measurement, 1994
The comparability of scores on test forms that are constructed through examinee item choice is examined in an item response theory framework. The approach is illustrated with data from the College Board's Advanced Placement Test in Chemistry taken by over 18,000 examinees. (SLD)
Descriptors: Advanced Placement, Chemistry, Comparative Analysis, Constructed Response
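The record above examines, within an item response theory framework, whether test forms built through examinee item choice yield comparable scores. Purely as a hedged illustration, and not the authors' analysis of the Advanced Placement Chemistry data, the sketch below compares expected scores on two hypothetical optional item blocks under a 2PL model across a range of ability values; every item parameter is invented.

```python
# Hedged illustration: expected scores under a 2PL IRT model for two
# hypothetical choice blocks. Parameters are invented and do not come
# from the Advanced Placement Chemistry data cited above.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# (discrimination a, difficulty b) for the items in each optional block.
block_1 = [(1.2, -0.5), (0.9, 0.0), (1.1, 0.4)]
block_2 = [(1.0, 0.2), (1.3, 0.6), (0.8, 1.0)]

for theta in np.linspace(-2, 2, 5):
    e1 = sum(p_correct(theta, a, b) for a, b in block_1)
    e2 = sum(p_correct(theta, a, b) for a, b in block_2)
    # If expected scores diverge at the same ability, the self-selected
    # blocks are not directly comparable without some adjustment.
    print(f"theta={theta:+.1f}  E[score|block 1]={e1:.2f}  "
          f"E[score|block 2]={e2:.2f}")
```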
Peer reviewed
Wainer, Howard – Educational Measurement: Issues and Practice, 1993
Some cautions are sounded for converting a linearly administered test to an adaptive format. Four areas are identified in which practices broadly used in traditionally constructed tests can have adverse effects if thoughtlessly adopted when a test is administered in an adaptive mode. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Practices, Test Construction
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects