Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 6
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Showing 1 to 15 of 49 results
Peer reviewed
PDF on ERIC (download full text)
Nokelainen, Petri; Silander, Tomi – Frontline Learning Research, 2014
This commentary on the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of the classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy achieved with empirical data from a complex domain. There is…
Descriptors: Models, Classification, Accuracy, Regression (Statistics)
Peer reviewed
Direct link
Walker, A. Adrienne; Engelhard, George, Jr. – Measurement: Interdisciplinary Research and Perspectives, 2014
"Game-Based Assessments: A Promising Way to Create Idiographic Perspectives" (Adrienne Walker and George Englehard) comments on: "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games" by Russell G. Almond, Yoon Jeon Kim, Gertrudes Velasquez, and Valerie J. Shute. Here, Walker and Englehard write…
Descriptors: Educational Games, Task Analysis, Models, Educational Assessment
Peer reviewed
PDF on ERIC (download full text)
Age, Lars-Johan – Qualitative Report, 2011
Glaserian grounded theory methodology, which has been widely adopted as a scientific methodology in recent decades, has been variously characterised as "hermeneutic" and "positivist." This commentary therefore takes a different approach to characterising grounded theory by undertaking a comprehensive analysis of: (a) the philosophical paradigms of…
Descriptors: Grounded Theory, Critical Theory, Scientific Methodology, Hermeneutics
Peer reviewed
PDF on ERIC (download full text)
Teplovs, Chris – Journal of Learning Analytics, 2015
This commentary reflects on the contributions to learning analytics and theory by a paper that describes how multiple theoretical frameworks were woven together to inform the creation of a new, automated discourse analysis tool. The commentary highlights the contributions of the original paper, provides some alternative approaches, and touches on…
Descriptors: Data Analysis, Data Collection, Theory Practice Relationship, Instructional Design
Peer reviewed
Direct link
Leighton, Jacqueline P. – Measurement: Interdisciplinary Research and Perspectives, 2008
In this commentary, the author asks the analogous question, "where's the psychology?" Not because the authors of the focus article "Unique Characteristics of Diagnostic Classification Models: A Comprehensive Review of the Current State-of-the-Art" have not provided a solid review of the technical aspects of Diagnostic…
Descriptors: Theory Practice Relationship, Classification, Psychology, Children
Peer reviewed
Lynch, Kathleen Bodisch – Educational Evaluation and Policy Analysis, 1983
The current literature is replete with instances of imprecise and conflicting usage of "qualitative" and "quantitative" evaluation. This paper presents a framework for making distinctions among several related concepts to eliminate this potential source of confusion. In the process, the terms "qualitative" and…
Descriptors: Data Collection, Definitions, Evaluation Methods, Models
McAlpine, Lynn – Performance and Instruction, 1992
Presents a model for instructional design that highlights formative evaluation as the central process. Other models of instructional design are discussed; data collection methods are examined; the evaluator's role is considered; and the recursive nature of the design process is emphasized. (seven references) (LRW)
Descriptors: Data Collection, Formative Evaluation, Instructional Design, Models
Aubrecht, Judith D.; Kramer, J. Lance – Continuum, 1982
A series of five questions forms a model for organizing the content of a continuing education staff evaluation system: What purpose will the system serve? What responsibilities should be evaluated? What characteristics are desirable in the position? Who supplies relevant information? Who uses the information? (SK)
Descriptors: Continuing Education, Data Collection, Evaluation Criteria, Higher Education
Horwitz, Jonathan; Kimpel, Howard – Training and Development Journal, 1988
Whenever there is a data-gathering need and a group of people can provide input, the group interview may be a viable approach. The authors offer a step-by-step model for conducting group interviews. (JOW)
Descriptors: Data Collection, Group Discussion, Group Dynamics, Interviews
Gill, Barbara J.; And Others – Educational Technology, 1992
Describes the evaluation of a new model for software evaluation that emphasizes the collection of student performance data to determine the extent to which students learn the knowledge a software package intends to teach. Discussion covers the results of two studies, evaluation criteria and methods, evaluator subjectivity in decision making,…
Descriptors: Computer Software Evaluation, Data Collection, Decision Making, Evaluation Criteria
Owens, Robert G. – 1981
Three issues must be addressed when discussing the standards needed to judge the methodological rigor of naturalistic approaches to administrative research. The first issue involves defining naturalistic inquiry. In contrast to the scientific paradigm, naturalistic inquiry emphasizes, first, the inseparability of variables or events from their…
Descriptors: Credibility, Data Analysis, Data Collection, Field Studies
Peer reviewed
Owens, Robert G. – Educational Administration Quarterly, 1982
Methodological adequacy in naturalistic inquiry is enhanced by understanding the differences between the naturalistic and rationalistic paradigms; by using simultaneous data collection and analysis, with prolonged field research, an audit trail, multiple sources, and referential materials; and by employing "thick description" extensively…
Descriptors: Credibility, Data Analysis, Data Collection, Field Studies
Peer reviewed
Direct link
Bradlow, Eric T. – Journal of Educational and Behavioral Statistics, 2003
In this article, the author comments on an article by Dunn, Kadane, and Garrow, "Comparing Harm Done by Mobility and Class Absence: Missing Students and Missing Data." He believes the research reported in that article should serve as a model for future applications of Bayesian methods in important educational research problems. The author lauds…
Descriptors: Research Problems, Educational Research, Bayesian Statistics, Researchers
Brandt, Ronald S., Ed. – 1981
Evaluation of a middle school humanities program is the focus of this report. It explains how to identify information needs and set priorities, how to obtain information from a variety of sources, and what to do with the data collected in terms of formulating recommendations for the school board. The variety of evaluation approaches presented are…
Descriptors: Curriculum Evaluation, Data Collection, Evaluation Methods, Humanities
Smith, Barry; And Others – Training and Development Journal, 1986
Examines problem areas of training needs analysis (TNA), first by presenting a model of the TNA process, then by listing sources of data available for TNA (based on an analysis of the literature), and finally by proposing a method of choosing appropriate data-gathering techniques for TNA. (CT)
Descriptors: Data Analysis, Data Collection, Educational Needs, Information Sources