Some central questions in the natural and social sciences can't be answered by controlled laboratory experiments, often considered the hallmark of the scientific method. This impossibility holds for any science concerned with the past. In addition, many manipulative experiments, while possible, would be immoral or illegal. One has to devise other methods of observing, describing, and explaining the world. In the historical disciplines, a fruitful approach has been to use natural experiments or the comparative method. This book consists of eight comparative studies drawn from history, archeology, economics, economic history, geography, and political science. The studies cover a spectrum of approaches, ranging from a non-quantitative narrative style in the early chapters to quantitative statistical analyses in the later chapters. They range from a simple two-way comparison of Haiti and the Dominican Republic, which share the island of Hispaniola, to comparisons of 81 Pacific islands and of 233 areas of India. The societies discussed include contemporary ones, literate societies of recent centuries, and non-literate past societies. Geographically, they include the United States, Mexico, Brazil, western Europe, tropical Africa, India, Siberia, Australia, New Zealand, and other Pacific islands. In an Afterword, the editors discuss how to cope with methodological problems common to these and other natural experiments of history.
"This book is a must for learning about experimental design. From forming a research question to interpreting the results, this text covers it all." – Sarah El Sayed, University of Texas at Arlington. Designing Experiments for the Social Sciences: How to Plan, Create, and Execute Research Using Experiments is a practical, applied text for courses in experimental design. The text assumes that students have only a basic knowledge of the scientific method, and no statistics background is required. With its focus on how to design experiments effectively, rather than how to analyze them, the book concentrates on the stage at which researchers make decisions about procedural aspects of the experiment, before interventions and treatments are given. Renita Coleman walks readers step by step through planning and executing experiments: choosing and collecting a sample, creating the stimuli and questionnaire, conducting a manipulation check or pre-test, analyzing the data, and understanding and interpreting the results. Guidelines for deciding which elements are best used in creating a particular kind of experiment are also given. This title offers rich pedagogy, ethical considerations, and examples pertinent to all social science disciplines.
This book is designed to introduce doctoral and graduate students to the process of conducting scientific research in the social sciences, business, education, public health, and related disciplines. It is a one-stop, comprehensive, and compact source for foundational concepts in behavioral research, and can serve as a stand-alone text or as a supplement to research readings in any doctoral seminar or research methods class. This book is currently used as a research text at universities on six continents and will shortly be available in nine different languages.
Educational policy-makers around the world constantly make decisions about how to use scarce resources to improve the education of children. Unfortunately, their decisions are rarely informed by evidence on the consequences of these initiatives in other settings. Nor are decisions typically accompanied by well-formulated plans to evaluate their causal impacts. As a result, knowledge about what works in different situations has been very slow to accumulate. Over the last several decades, advances in research methodology, administrative record keeping, and statistical software have dramatically increased the potential for researchers to conduct compelling evaluations of the causal impacts of educational interventions, and the number of well-designed studies is growing. Written in clear, concise prose, Methods Matter: Improving Causal Inference in Educational and Social Science Research offers essential guidance for those who evaluate educational policies. Using numerous examples of high-quality studies that have evaluated the causal impacts of important educational interventions, the authors go beyond the simple presentation of new analytical methods to discuss the controversies surrounding each study and to provide heuristic explanations that are accessible to a broad audience. Murnane and Willett offer strong methodological insights on causal inference while also examining the consequences of a wide variety of educational policies implemented in the U.S. and abroad. A unique contribution to the literature on educational research, this landmark text will be invaluable for students and researchers in education and public policy, as well as those interested in social science more broadly.
The feminist philosopher and social scientist shows how "gendering" has affected the social and natural sciences as she reconciles the long-standing dichotomy between the quantitative and qualitative methods and demonstrates the tandem use of both experimental and intuitive approaches.
Throughout the world, voters lack access to information about politicians, government performance, and public services. Efforts to remedy these informational deficits are numerous. Yet do informational campaigns influence voter behavior and increase democratic accountability? Through the first project of the Metaketa Initiative, sponsored by the Evidence in Governance and Politics (EGAP) research network, this book addresses this substantive question and at the same time introduces a new model for cumulative learning that increases coordination among otherwise independent research teams. It presents the overall results (using meta-analysis) from six independently conducted but coordinated field experimental studies, the results from each individual study, and the findings from a related evaluation of whether practitioners use this information as expected. It also discusses lessons learned from EGAP's efforts to coordinate field experiments, increase replication of theoretically important studies across contexts, and increase the external validity of field experimental research.