In October 2016, the National Academies of Sciences, Engineering, and Medicine convened a 1-day public workshop on Principles and Practices for Federal Program Evaluation. The workshop was organized to consider ways to bolster the integrity and protect the objectivity of the evaluation function in federal agencies, a process that is essential for evidence-based policy making. This publication summarizes the presentations and discussions from the workshop.
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies—many with reflective commentary from the evaluators—reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:
* Examples of large- and small-scale evaluations from multiple disciplines.
* Beginning-of-chapter reflection questions that set the stage for the material covered.
* "Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
* Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
* Boxes offering a closer look at key evaluation concepts and additional studies.
* Checklists for readers to determine if they have followed recommended practice.
* A companion website with resources for further learning.
Educators and administrators are increasingly coming to realize the importance of making decisions based on reliable, accurate data. It is not always clear how to gather that data, however, or what to do with the data once they have been collected. Effective Program Evaluation provides a clear and easily implemented blueprint for evaluating academic programs, practices, or strategies. This blueprint will help answer such questions as: Is our math curriculum adequately targeting higher-order thinking skills? What steps could we take to ensure that our response to intervention (RTI) program meets the needs of all students? How well does our classroom instruction in social studies align with standards and learning goals?
A tremendous amount of money is being steered toward personalized learning (PL) initiatives at the federal, state, and local levels, and it is important to understand the return on the investment in students’ futures. It is only through rigorous discussions that educators and policymakers will be able to determine if PL is a passing fad or if it possesses the staying power necessary to show a positive impact on student achievement. Evaluation of Principles and Best Practices in Personalized Learning is a critical scholarly publication that explores the modern push for schools to implement PL environments and the continuing research to understand the best strategies and implementation methods for personalizing education. It seeks to begin creating a standardized language and standardized approach to the PL initiative and to investigate the implications it has for the educational system. Additionally, this book adds to the professional discussion of PL by looking at the advantages and disadvantages of PL, the teacher’s role in PL, scaling a PL program, the role of technology in PL, PL for the special education population, emerging research on PL, and case studies involving PL. Featuring research on a wide range of topics such as blended learning, preservice teachers, and special education, this book is ideal for teachers, administrators, academicians, policymakers, researchers, and students.
Including a new section on evaluation accountability, this Third Edition details 30 standards that offer guidance to those interested in planning, implementing, and using program evaluations.
Program Evaluation and Performance Measurement: An Introduction to Practice, Second Edition offers an accessible, practical introduction to program evaluation and performance measurement for public and non-profit organizations, and has been extensively updated since the first edition. Using examples, it covers topics in a detailed fashion, making it a useful guide for students as well as practitioners who are participating in program evaluations or constructing and implementing performance measurement systems. Authors James C. McDavid, Irene Huse, and Laura R. L. Hawthorn guide readers through conducting quantitative and qualitative program evaluations, needs assessments, cost-benefit and cost-effectiveness analyses, as well as constructing, implementing and using performance measurement systems. The importance of professional judgment is highlighted throughout the book as an intrinsic feature of evaluation practice.
Policymakers and program managers are continually seeking ways to improve accountability in achieving an entity's mission. A key factor in doing so is implementing an effective internal control system, which helps an entity adapt to shifting environments, evolving demands, changing risks, and new priorities. As programs change and entities strive to improve operational processes and implement new technology, management continually evaluates its internal control system to ensure that it remains effective and is updated when necessary. Section 3512 (c) and (d) of Title 31 of the United States Code (commonly known as the Federal Managers' Financial Integrity Act (FMFIA)) requires the Comptroller General to issue standards for internal control in the federal government.
This book begins with the context of an agency-based evaluation and describes the method within that context. Students will gain a more complete understanding of these contextual challenges and will learn techniques for operating in the face of them.
The Human Resources Program-Evaluation Handbook is the first book to present state-of-the-art procedures for evaluating and improving human resources programs. Editors Jack E. Edwards, John C. Scott, and Nambury S. Raju provide a user-friendly yet scientifically rigorous "how to" guide to organizational program-evaluation. Integrating perspectives from a variety of human resources and organizational behavior programs, a wide array of contributing professors, consultants, and governmental personnel successfully link scientific information to practical application. Designed for academics and graduate students in industrial-organizational psychology, human resources management, and business, the handbook is also an essential resource for human resources professionals, consultants, and policy makers.
How can evaluation be used most effectively, and what are the strengths and weaknesses of the various methods? Colin Robson provides guidance in a clear and uncluttered way. The issue of collaboration is examined step-by-step; stakeholder models are compared with techniques such as participatory evaluation and practitioner-centred action research; ethical and political considerations are placed in context; and the best ways of communicating findings are discussed. Each chapter is illustrated with helpful exercises to show the practical application of the issues covered, making this an invaluable introduction for anyone new to evaluation.