Myoung-jae Lee reviews the three most popular methods (and their extensions) in applied economics and other social sciences: matching, regression discontinuity, and difference in differences. This book introduces the underlying econometric and statistical ideas, shows what is identified and how the identified parameters are estimated, and illustrates how they are applied with real empirical examples. Lee emphasizes how to implement the three methods with data: data and programs are provided in a useful online appendix. All readers, whether theoretical econometricians and statisticians, applied economists and social scientists, or researchers and students, will find something useful in the book from their own perspectives.
This book discusses the nature of exogeneity, a central concept in standard econometrics texts, and shows how to test for it through numerous substantive empirical examples from around the world, including the UK, Argentina, Denmark, Finland, and Norway. Part I defines terms and provides the necessary background; Part II contains applications to models of expenditure, money demand, inflation, wages and prices, and exchange rates; and Part III extends various tests of constancy and forecast accuracy, which are central to testing super exogeneity.

About the Series: Advanced Texts in Econometrics is a distinguished and rapidly expanding series in which leading econometricians assess recent developments in such areas as stochastic probability, panel and time series data analysis, modeling, and cointegration. Published in both hardback and affordable paperback, each volume explains the nature and applicability of a topic in greater depth than is possible in introductory textbooks or single journal articles. Each definitive work is formatted to be as accessible and convenient as possible for readers who are not familiar with the detailed primary literature.
In many disciplines of science it is vital to know the effect of a 'treatment' on a response variable of interest, the effect being known as the 'treatment effect'. Here, the treatment can be a drug, an education program, or an economic policy, and the response variable can be an illness, academic achievement, or GDP. Once the effect is found, it is possible to intervene to adjust the treatment and attain a desired level of the response variable.

A basic way to measure the treatment effect is to compare two groups, one of which received the treatment and the other of which did not. If the two groups are homogeneous in all aspects other than their treatment status, then the difference between their response outcomes is the desired treatment effect. But if they differ in some aspects in addition to the treatment status, the difference in the response outcomes may be due to the combined influence of more than one factor. In non-experimental data, where the treatment is not randomly assigned but self-selected, the subjects tend to differ in observed or unobserved characteristics. It is therefore imperative that the comparison be carried out with subjects similar in their characteristics. This book explains how this problem can be overcome so that the attributable effect of the treatment can be found.

This book brings to the fore recent advances in econometrics for treatment effects. Its purpose is to put together various economic treatment effect models in a coherent fashion, make clear which parameters are of interest, and show how they can be identified and estimated under weak assumptions. The emphasis throughout the book is on semi- and non-parametric estimation methods, but traditional parametric approaches are also discussed. This book is ideally suited to researchers and graduate students with a basic knowledge of econometrics.
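The comparison logic described above can be sketched in a few lines: when treatment is randomly assigned, the two groups are homogeneous on average, so the difference in group means identifies the treatment effect. This is a minimal illustrative simulation, not code from the book; the variable names and the simulated effect size are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: treatment assigned at random, so the two groups
# are homogeneous on average in everything except treatment status.
n = 10_000
treated = rng.integers(0, 2, size=n)          # 1 = received treatment
true_effect = 2.0                              # assumed effect for the simulation
outcome = 1.0 + true_effect * treated + rng.normal(0.0, 1.0, size=n)

# Under random assignment, the difference in mean outcomes between
# the treated and untreated groups estimates the treatment effect.
estimate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
```

With self-selected (non-experimental) treatment, this simple difference would mix the treatment effect with pre-existing differences between the groups, which is exactly the problem the book's matching and related methods address.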
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.
The book describes and illustrates many advances that have taken place in a number of areas in theoretical and applied econometrics over the past four decades.
Nowadays applied work in business and economics requires a solid understanding of econometric methods to support decision-making. Combining a solid exposition of econometric methods with an application-oriented approach, this rigorous textbook provides students with a working understanding and hands-on experience of current econometrics. Taking a 'learning by doing' approach, it covers basic econometric methods (statistics, simple and multiple regression, nonlinear regression, maximum likelihood, and generalized method of moments) and addresses the creative process of model building with due attention to diagnostic testing and model improvement. Its last part is devoted to two major application areas: the econometrics of choice data (logit and probit, multinomial and ordered choice, truncated and censored data, and duration data) and the econometrics of time series data (univariate time series, trends, volatility, vector autoregressions, and a brief discussion of SUR models, panel data, and simultaneous equations).

· Real-world text examples and practical exercise questions stimulate active learning and show how econometrics can solve practical questions in modern business and economic management.
· Focuses on the core of econometrics, regression, and covers two major advanced topics: choice data, with applications in marketing and micro-economics, and time series data, with applications in finance and macro-economics.
· Learning-support features include concise, manageable sections of text, frequent cross-references to related and background material, summaries, computational schemes, keyword lists, suggested further reading, exercise sets, and online data sets and solutions.
· Derivations and theory exercises are clearly marked for students in advanced courses.
This textbook is perfect for advanced undergraduate students, new graduate students, and applied researchers in econometrics, business, and economics, and for researchers in other fields that draw on modern applied econometrics.
An accessible, contemporary introduction to the methods for determining cause and effect in the Social Sciences “Causation versus correlation has been the basis of arguments—economic and otherwise—since the beginning of time. Causal Inference: The Mixtape uses legit real-world examples that I found genuinely thought-provoking. It’s rare that a book prompts readers to expand their outlook; this one did for me.”—Marvin Young (Young MC) Causal inference encompasses the tools that allow social scientists to determine what causes what. In a messy world, causal inference is what helps establish the causes and effects of the actions being studied—for example, the impact (or lack thereof) of increases in the minimum wage on employment, the effects of early childhood education on incarceration later in life, or the influence on economic growth of introducing malaria nets in developing regions. Scott Cunningham introduces students and practitioners to the methods necessary to arrive at meaningful answers to the questions of causation, using a range of modeling techniques and coding instructions for both the R and the Stata programming languages.
Panel data is a data type increasingly used in research in economics, social sciences, and medicine. Its primary characteristic is that the data variation goes jointly over space (across individuals, firms, countries, etc.) and time (over years, months, etc.). Panel data allow examination of problems that cannot be handled by cross-section data or time-series data alone. Panel data analysis is a core field in modern econometrics and multivariate statistics, and studies based on such data occupy a growing part of the field in many other disciplines. The book is intended as a text for master's and advanced undergraduate courses. It may also be useful for PhD students writing theses in empirical and applied economics and for readers conducting empirical work on their own. The book attempts to take the reader gradually from simple models and methods in scalar (simple vector) notation to more complex models in matrix notation. A distinctive feature is that more attention is given to unbalanced panel data, the measurement error problem, random coefficient approaches, the interface between panel data and aggregation, and the interface between unbalanced panels and truncated and censored data sets. The 12 chapters are intended to be largely self-contained, although there is also a natural progression. Most of the chapters contain commented examples based on genuine data, mainly taken from panel data applications in economics. Although the book, inter alia through its use of examples, is aimed primarily at students of economics and econometrics, it may also be useful for readers in social sciences, psychology, and medicine, provided they have a sufficient background in statistics, notably basic regression analysis and elementary linear algebra.
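The defining feature noted above, variation jointly across individuals and over time, is what lets panel methods remove time-constant individual heterogeneity that cross-section data cannot. A minimal sketch of the standard within (fixed-effects) transformation, using simulated data with assumed parameter values (not an example from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated balanced panel: N individuals observed over T periods.
N, T = 500, 6
alpha = rng.normal(0.0, 2.0, size=N)                 # unobserved individual effects
x = rng.normal(0.0, 1.0, size=(N, T)) + alpha[:, None]  # regressor correlated with alpha
beta = 1.5                                           # assumed true coefficient
y = alpha[:, None] + beta * x + rng.normal(0.0, 1.0, size=(N, T))

# Within transformation: subtracting each individual's time mean
# eliminates the time-constant effect alpha_i.
x_w = x - x.mean(axis=1, keepdims=True)
y_w = y - y.mean(axis=1, keepdims=True)

# OLS on the demeaned data is the fixed-effects (within) estimator.
beta_fe = (x_w * y_w).sum() / (x_w ** 2).sum()

# Pooled OLS that ignores alpha is biased here, because x correlates with alpha.
beta_pooled = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
```

Because the within estimator uses only the time variation inside each individual, it is consistent even though the regressor is correlated with the individual effect, while the pooled estimator that mixes the cross-section and time variation is not.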