This book provides the first comprehensive assessment of non-academic research impact in relation to a marginal field of study, namely tourism studies. Informed by interviews with key informants, ethnographic reflections on the author’s extensive work with trade and professional associations, and various secondary data, it paints a picture of inevitable research policy failure. This conclusion is justified by reference to ill-founded official conceptualisations of practitioner and organisational behaviour, and the orientation and quality of tourism research. The author calls for a more serious consideration of research-informed teaching as a means of creating knowledge flows from universities. Research with greater social and economic impact might then be achievable. This radical assessment will be of interest and value to policy makers, university research managers and tourism scholars.
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and on partnerships for conducting impact evaluations. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.
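As a rough illustration of the simplest design discussed among such impact evaluation methods, a randomized comparison of treatment and control groups, the following Python sketch estimates a program's average impact as the difference in mean outcomes. The data, variable names, and use of NumPy/SciPy are assumptions made here for illustration only and are not drawn from the handbook.

```python
# Minimal sketch of a randomized-comparison impact estimate:
# average outcome of the treatment group minus that of the control group,
# with a two-sample t-test for the difference. Hypothetical data throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=52.0, scale=10.0, size=500)  # outcomes, treated units
control = rng.normal(loc=50.0, scale=10.0, size=500)    # outcomes, control units

impact = treatment.mean() - control.mean()               # estimated average effect
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated impact: {impact:.2f} (p = {p_value:.3f})")
```

In practice such an estimate would be paired with the sampling and data-collection choices the handbook reviews, since the credibility of the comparison rests on how the groups were assigned and measured.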
100 Questions (and Answers) About Action Research by Luke Duesbery and Todd Twyman identifies and answers the essential questions on the process of systematically approaching your practice from an inquiry-oriented perspective, with a focus on improving that practice. This unique text offers progressive instructors an alternative to the research status quo and serves as a reference for readers to improve their practice as advocates for those they serve. The Question and Answer format makes this an ideal supplementary text for traditional research methods courses, and also a helpful guide for practitioners in education, social work, criminal justice, health, business, and other applied disciplines.
The spring of 2020 marked a change in how almost everyone conducted their personal and professional lives, both within science, technology, engineering, mathematics, and medicine (STEMM) and beyond. The COVID-19 pandemic disrupted global scientific conferences and individual laboratories and required people to find space in their homes from which to work. It blurred the boundaries between work and non-work, infusing ambiguity into everyday activities. While adaptations that allowed people to connect became more common, the evidence available at the end of 2020 suggests that the disruptions caused by the COVID-19 pandemic endangered the engagement, experience, and retention of women in academic STEMM, and may roll back some of the achievement gains made by women in the academy to date. Impact of COVID-19 on the Careers of Women in Academic STEMM identifies, names, and documents how the COVID-19 pandemic disrupted the careers of women in academic STEMM during the initial 9-month period since March 2020 and considers how these disruptions - both positive and negative - might shape future progress for women. This publication builds on the 2020 report Promising Practices for Addressing the Underrepresentation of Women in Science, Engineering, and Medicine to develop a comprehensive understanding of the nuanced ways these disruptions have manifested. Impact of COVID-19 on the Careers of Women in Academic STEMM will inform the academic community as it emerges from the pandemic to mitigate any long-term negative consequences for the continued advancement of women in the academic STEMM workforce and build on the adaptations and opportunities that have emerged.
New methods in bibliometrics and alternative metrics provide us with information about research impact at both increasingly granular and global levels. Here, editor Elaine Lasda and a cast of expert contributors present a variety of case studies that demonstrate the practical utilization of these new scholarly metrics.
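As a concrete, if simplified, point of reference for the traditional bibliometrics that such case studies build on (an illustration added here, not material from the book), the h-index can be computed from a list of per-paper citation counts as follows:

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each. Illustrative helper only."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 0]))  # -> 3
```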
This book contributes to the current discussion in society, politics, and higher education on innovation capacity and on financial and non-financial incentives for researchers. The expert contributions deal with the implementation of incentive systems at higher education institutions to foster innovation. The book also discusses the extent to which governance structures from the business sector can be transferred to universities, and how scientific performance can be measured and evaluated. It is essential reading for decision makers in knowledge-intensive organizations and higher education institutions dealing with performance management.
Developmental evaluation (DE) offers a powerful approach to monitoring and supporting social innovations by working in partnership with program decision makers. In this book, eminent authority Michael Quinn Patton shows how to conduct evaluations within a DE framework. Patton draws on insights about complex dynamic systems, uncertainty, nonlinearity, and emergence. He illustrates how DE can be used for a range of purposes: ongoing program development, adapting effective principles of practice to local contexts, generating innovations and taking them to scale, and facilitating rapid response in crisis situations. Students and practicing evaluators will appreciate the book's extensive case examples and stories, cartoons, clear writing style, "closer look" sidebars, and summary tables. The book provides essential guidance for making evaluations useful, practical, and credible in support of social change.
This User’s Guide is a resource for investigators and stakeholders who develop and review observational comparative effectiveness research protocols. It explains how to (1) identify key considerations and best practices for research design; (2) build a protocol based on these standards and best practices; and (3) judge the adequacy and completeness of a protocol. Eleven chapters cover all aspects of research design, including: developing study objectives, defining and refining study questions, addressing the heterogeneity of treatment effect, characterizing exposure, selecting a comparator, defining and measuring outcomes, and identifying optimal data sources. Checklists of guidance and key considerations for protocols are provided at the end of each chapter. The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews. For more information, please consult the Agency website: www.effectivehealthcare.ahrq.gov.
Healthcare decision makers in search of reliable information that compares health interventions increasingly turn to systematic reviews for the best summary of the evidence. Systematic reviews identify, select, assess, and synthesize the findings of similar but separate studies, and can help clarify what is known and not known about the potential benefits and harms of drugs, devices, and other healthcare services. Systematic reviews can be helpful for clinicians who want to integrate research findings into their daily practices, for patients who want to make well-informed choices about their own care, and for professional medical societies and other organizations that develop clinical practice guidelines. Too often, however, systematic reviews are of uncertain or poor quality. There are no universally accepted standards for developing systematic reviews, leading to variability in how conflicts of interest and biases are handled, how evidence is appraised, and the overall scientific rigor of the process. In Finding What Works in Health Care, the Institute of Medicine (IOM) recommends 21 standards for developing high-quality systematic reviews of comparative effectiveness research. The standards address the entire systematic review process, from the initial steps of formulating the topic and building the review team to producing a detailed final report that synthesizes what the evidence shows and where knowledge gaps remain. Finding What Works in Health Care also proposes a framework for improving the quality of the science underpinning systematic reviews. This book will serve as a vital resource for both sponsors and producers of systematic reviews of comparative effectiveness research.
The social sector provides services to a wide range of people throughout the world with the aim of creating social value. While doing good is great, doing it well is even better. These organizations, whether nonprofit, for-profit, or public, increasingly need to demonstrate that their efforts are making a positive impact on the world, especially as competition for funding and other scarce resources increases. This heightened focus on impact is positive: learning whether we are making a difference enhances our ability to address pressing social problems effectively and is critical to wise stewardship of resources. Yet demonstrating efficacy remains a big hurdle for most organizations. The Goldilocks Challenge provides a parsimonious framework for measuring the strategies and impact of social sector organizations. A good data strategy starts with a sound theory of change that helps organizations decide what elements they should monitor and measure. With a theory of change providing a solid underpinning, the Goldilocks framework then puts forward four key principles, the CART principles: Credible data are high quality and analyzed appropriately; Actionable data actually influence future decisions; Responsible data create more benefits than costs; and Transportable data build knowledge that can be used in the future and by others. Mary Kay Gugerty and Dean Karlan combine their extensive experience working with nonprofits, for-profits, and governments with their understanding of measuring effectiveness in this insightful guide to thinking about and implementing evidence-based change. The book is an invaluable asset for nonprofit, social enterprise, and government leaders, managers, and funders (including anyone considering making a charitable contribution to a nonprofit), helping them ensure that these organizations get it "just right" by knowing what data to collect, how to collect it, how to analyze it, and how to draw implications from the analysis. Everyone who wants to make positive change should focus on the top priority: using data to learn, innovate, and improve program implementation over time. Gugerty and Karlan show how.