Showing 91 to 105 of 336 results
National Assessment Governing Board, 2009
As the ongoing national indicator of what American students know and can do, the National Assessment of Educational Progress (NAEP) in Reading regularly collects achievement information on representative samples of students in grades 4, 8, and 12. The information that NAEP provides about student achievement helps the public, educators, and…
Descriptors: National Competency Tests, Reading Tests, Test Items, Test Format
Peer reviewed
Molina, Maria Teresa Lopez-Mezquita – Indian Journal of Applied Linguistics, 2009
Lexical competence is considered an essential step in the development and consolidation of a student's linguistic ability, and thus the reliable assessment of such competence is a fundamental aspect of this process. The design and construction of vocabulary tests has become an area of special interest, as it may provide teachers…
Descriptors: Student Evaluation, Second Language Learning, Computer Assisted Testing, Foreign Countries
Peer reviewed
Holland, Paul W.; Hoskens, Machteld – Psychometrika, 2003
Gives an account of classical test theory that shows how it can be viewed as a mean and variance approximation to a general version of item response theory and then shows how this approach can give insight into predicting the true score of a test and the true scores of tests not necessarily parallel to the given test. (SLD)
Descriptors: Prediction, Test Format, Test Theory, True Scores
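Holland and Hoskens's general framework is not reproduced in the abstract; one standard classical-test-theory result in this area, which their mean-and-variance treatment builds on, is Kelley's regressed estimate of a true score. A minimal sketch (function name and numbers are illustrative, not from the paper):

```python
def kelley_true_score(observed, mean, reliability):
    """Kelley's regressed true-score estimate under classical test theory:
    shrink the observed score toward the group mean in proportion to
    (1 - reliability)."""
    return reliability * observed + (1 - reliability) * mean

# An observed score of 80 on a test with group mean 70 and
# reliability 0.8 regresses toward the mean (result is about 78).
print(kelley_true_score(80, 70, 0.8))
```

The less reliable the test, the more the estimate is pulled toward the group mean; with reliability 1.0 the observed score is taken at face value.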
Peer reviewed
Direct link
Schumacker, Randall E.; Smith, Everett V., Jr. – Educational and Psychological Measurement, 2007
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Descriptors: Measurement Techniques, Error of Measurement, Item Sampling, Item Response Theory
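As a concrete instance of the internal-consistency reliability the abstract refers to, Cronbach's alpha can be computed directly from an examinee-by-item score matrix. A minimal sketch (the toy data are invented for illustration):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability estimated from
    an examinee x item score matrix (list of per-examinee score lists)."""
    k = len(item_scores[0])  # number of items
    def variance(xs):        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Four examinees, three dichotomous items; alpha is about 0.75 here.
scores = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 0]]
print(cronbach_alpha(scores))
```

Alpha rises when item variances are small relative to the variance of the total score, i.e., when items covary strongly.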
Roe, Andrew G. – Graduating Engineer, 1985
Presents the case for taking the Engineer in Training examination (EIT), also called the Fundamentals of Engineering Examination, and the Graduate Record Examinations (GRE), indicating that they can affect future employment opportunities, career advancement, and post-graduate studies. Includes subject areas tested, test format, and how to prepare…
Descriptors: Engineering, Engineering Education, Higher Education, Test Format
Tanguma, Jesus – 2000
This paper addresses four steps in test construction specification: (1) the purpose of the test; (2) the content of the test; (3) the format of the test; and (4) the pool of items. If followed, these steps will not only assist the test constructor but also enhance students' learning. Within the "Content of the Test" section, two…
Descriptors: Test Construction, Test Content, Test Format, Test Items
Peer reviewed
Direct link
Yi, Hyun Sook; Kim, Seonghoon; Brennan, Robert L. – Applied Psychological Measurement, 2007
Large-scale testing programs involving classification decisions typically have multiple forms available and conduct equating to ensure cut-score comparability across forms. A test developer might be interested in the extent to which an examinee who happens to take a particular form would have a consistent classification decision if he or she had…
Descriptors: Classification, Reliability, Indexes, Computation
Peer reviewed
van der Linden, Wim J.; Adema, Jos J. – Journal of Educational Measurement, 1998
Proposes an algorithm for the assembly of multiple test forms in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. Illustrates how the method can be implemented using 0-1 linear programming and gives two examples. (SLD)
Descriptors: Algorithms, Linear Programming, Test Construction, Test Format
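Van der Linden and Adema's actual algorithm is not reproduced in the abstract; the flavor of 0-1 test assembly, binary decision variables selecting items to maximize information subject to content constraints, can be illustrated with a brute-force toy. Real systems solve this with dedicated 0-1 linear programming solvers; the item pool and values below are invented:

```python
from itertools import combinations

# Toy item pool: (item_id, content_area, information) -- illustrative only.
pool = [
    ("i1", "algebra", 0.9), ("i2", "algebra", 0.7),
    ("i3", "geometry", 0.8), ("i4", "geometry", 0.4),
    ("i5", "algebra", 0.5), ("i6", "geometry", 0.6),
]

def assemble_form(pool, length, min_per_area):
    """Brute-force stand-in for 0-1 decision variables x_i in {0,1}:
    pick `length` items maximizing total information, subject to a
    minimum item count per content area."""
    best, best_info = None, -1.0
    for form in combinations(pool, length):
        areas = [area for _, area, _ in form]
        if any(areas.count(area) < k for area, k in min_per_area.items()):
            continue  # violates a content constraint
        info = sum(i for _, _, i in form)
        if info > best_info:
            best, best_info = form, info
    return [item_id for item_id, _, _ in best]

# A 4-item form with at least 2 algebra and 2 geometry items.
print(assemble_form(pool, 4, {"algebra": 2, "geometry": 2}))
# → ['i1', 'i2', 'i3', 'i6']
```

Enumerating all combinations is exponential in pool size, which is precisely why the paper's reduction of the multiple-form problem to a series of cheaper two-form 0-1 LP problems matters in practice.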
Boser, Judith A. – Evaluation News, 1985
The maximum incorporation of computer coding into an instrument is recommended to reduce errors in coding information from questionnaires. Specific suggestions for guiding the precoding process for response options, numeric identifiers, and assignment of card columns are proposed for mainframe computer data entry. (BS)
Descriptors: Computers, Data Collection, Data Processing, Questionnaires
van der Linden, Wim J.; Vos, Hans J.; Chang, Lei – 2000
In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…
Descriptors: Interrater Reliability, Judges, Probability, Standard Setting
Ministerial Council for Education, Early Childhood Development and Youth Affairs (NJ1), 2008
The information and assessment materials in these resources have been designed to assist teachers to gauge their own students' proficiency in Information and Communication Technologies (ICT) literacy. By examining modules from the National Year 6 and Year 10 ICT Literacy Assessment teachers may be able to design similar tasks and to judge their…
Descriptors: Foreign Countries, National Programs, Testing Programs, National Competency Tests
Peer reviewed
Full text available as PDF on ERIC
Puppin, Leni – English Teaching Forum, 2007
This article describes how the Language Center at the Espirito Santo Federal University changed from traditional pencil-and-paper tests to performance testing based on authentic tasks. The change was prompted by the perception that the existing tests did not reflect a communicative approach to language teaching. The Assessment Project lasted…
Descriptors: Performance Based Assessment, Test Format, Alternative Assessment, Educational Change
Peer reviewed
Full text available as PDF on ERIC
Bloxham, Sue – Practitioner Research in Higher Education, 2008
This paper is a polemical discussion of assessment in teacher education. Working from the proposition that assessment serves a number of important purposes for a range of stakeholders (students, employers, quality assurance agencies, government), it argues that there is considerable potential for conflict between the different purposes. In an age…
Descriptors: Teacher Education, Student Evaluation, Stakeholders, Measurement Objectives
Tauber, Robert T. – 1984
A technique is described for reducing the incidence of cheating on multiple choice exams. One form of the test is used and each item is assigned multiple numbers. Depending upon the instructions given to the class, some students will use the first of each pair of numbers to determine where to place their responses on a separate answer sheet, while…
Descriptors: Answer Sheets, Cheating, Higher Education, Multiple Choice Tests
van der Linden, Wim J. – 2001
This report contains a review of procedures for computerized assembly of linear, sequential, and adaptive tests. The common approach to these test assembly problems is to view them as instances of constrained combinatorial optimization. For each testing format, several potentially useful objective functions and types of constraints are discussed.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Test Format