Evaluation Methodology Basics introduces evaluation by focusing on the main kinds of 'big picture' questions that evaluations usually need to answer, and on how the nature of such questions is linked to choices of evaluation methodology. The author shows how to identify the right criteria for an evaluation, discusses how to determine objectively which criteria are more important than others, and delves into how to combine a mix of qualitative and quantitative data with 'relevant values' (such as needs) to draw explicitly evaluative conclusions.
In April 1991 BusinessWeek ran a cover story entitled "I Can't Work This ?#!!@ Thing," about the difficulties many people have with consumer products such as cell phones and VCRs. More than 15 years later, the situation is much the same, but at a very different level of scale. The disconnect between people and technology has had society-wide consequences in the form of large-scale system accidents caused by major human error, such as those at Three Mile Island and Chernobyl. To prevent both the individually annoying and the nationally significant consequences, human capabilities and needs must be considered early and throughout system design and development. One challenge for such consideration has been providing the background and data needed for the seamless integration of humans into the design process from various perspectives: human factors engineering, manpower, personnel, training, safety and health, and, in the military, habitability and survivability. This collection of development activities has come to be called human-system integration (HSI). Human-System Integration in the System Development Process reviews in detail more than 20 categories of HSI methods, providing invaluable guidance and information for system designers and developers.
Economic, academic, and social forces are causing undergraduate schools to undertake a fresh examination of teaching effectiveness. Administrators face the complex task of developing equitable, predictable ways to evaluate, encourage, and reward good teaching in science, math, engineering, and technology. Evaluating and Improving Undergraduate Teaching in Science, Technology, Engineering, and Mathematics offers a vision for systematic evaluation of teaching practices and academic programs, with recommendations to the various stakeholders in higher education about how to achieve change. What is good undergraduate teaching? This book discusses how to evaluate undergraduate teaching of science, mathematics, engineering, and technology, and what characterizes effective teaching in these fields. Why has it been difficult for colleges and universities to address the question of teaching effectiveness? The committee explores the implications of differences between the research and teaching cultures, and how practices in rewarding researchers could be transferred to the teaching enterprise. How should administrators approach the evaluation of individual faculty members? And how should evaluation results be used? The committee discusses methodologies, offers practical guidelines, and points out pitfalls. Evaluating and Improving Undergraduate Teaching in Science, Technology, Engineering, and Mathematics provides a blueprint for institutions ready to build effective evaluation programs for teaching in science fields.
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and on partnerships for conducting impact evaluations. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.
Public programs are designed to reach certain goals and beneficiaries. Methods for understanding whether such programs actually work, as well as the level and nature of their impacts on intended beneficiaries, are the main themes of this book.
With insightful discussion of program evaluation and the efforts of the Centers for Disease Control, this book presents a set of clear-cut recommendations to help ensure that the substantial resources devoted to the fight against AIDS will be used most effectively. This expanded edition of Evaluating AIDS Prevention Programs covers evaluation strategies and outcome measurements, including a realistic review of the factors that make evaluation of AIDS programs particularly difficult. Randomized field experiments are examined, focusing on the use of alternative treatments rather than placebo controls. The book also reviews nonexperimental techniques, including a critical examination of evaluation methods that are observational rather than experimental, a necessity when randomized experiments are infeasible.
This text provides an introduction to the theory and practice of internal evaluation. It presents the stages of internal evaluation growth, along with ways of identifying users' needs and selecting appropriate evaluation methods.
Program Evaluation and Performance Measurement: An Introduction to Practice, Second Edition offers an accessible, practical introduction to program evaluation and performance measurement for public and non-profit organizations, and has been extensively updated since the first edition. Using examples, it covers topics in a detailed fashion, making it a useful guide for students as well as practitioners who are participating in program evaluations or constructing and implementing performance measurement systems. Authors James C. McDavid, Irene Huse, and Laura R. L. Hawthorn guide readers through conducting quantitative and qualitative program evaluations, needs assessments, and cost-benefit and cost-effectiveness analyses, as well as through constructing, implementing, and using performance measurement systems. The importance of professional judgment is highlighted throughout the book as an intrinsic feature of evaluation practice.
Now in its third edition, Basic Methods of Policy Analysis and Planning presents quickly applied methods for analyzing and resolving planning and policy issues at the state, regional, and urban levels. The text is divided into two parts: Methods, which presents quick methods in nine chapters organized around the steps in the policy analysis process, and Cases, which presents seven policy cases of varying complexity. Together these give readers the resources they need for effective policy planning and analysis. Quantitative and qualitative methods are systematically combined to address policy dilemmas and urban planning problems. Readers and analysts using this text gain the comprehensive skills and background needed to impact public policy.
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies, many with reflective commentary from the evaluators, reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:
* Examples of large- and small-scale evaluations from multiple disciplines.
* Beginning-of-chapter reflection questions that set the stage for the material covered.
* "Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
* Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
* Boxes offering a closer look at key evaluation concepts and additional studies.
* Checklists for readers to determine if they have followed recommended practice.
* A companion website with resources for further learning.