Economic Modeling and Inference takes econometrics to a new level by demonstrating how to combine modern economic theory with the latest statistical inference methods to get the most out of economic data. This graduate-level textbook draws applications from both microeconomics and macroeconomics, paying special attention to financial and labor economics, with an emphasis throughout on what observations can tell us about stochastic dynamic models of rational optimizing behavior and equilibrium. Bent Jesper Christensen and Nicholas Kiefer show how parameters often thought estimable in applications are not identified even in simple dynamic programming models, and they investigate the roles that extensions such as measurement error, imperfect control, and random utility shocks play in inference. When all implications of optimization and equilibrium are imposed in the empirical procedures, the resulting estimation problems are often nonstandard, and the estimators exhibit nonregular asymptotic behavior such as short-ranked covariance, superconsistency, and non-Gaussianity. Christensen and Kiefer explore these properties in detail, covering areas including job search models of the labor market, asset pricing, option pricing, marketing, and retirement planning. Ideal for researchers and practitioners as well as students, Economic Modeling and Inference uses real-world data to illustrate how to derive the best results using a combination of theory and cutting-edge econometric techniques.

- Covers identification and estimation of dynamic programming models
- Treats sources of error: measurement error, random utility, and imperfect control
- Features financial applications, including asset pricing, option pricing, and optimal hedging
- Describes labor applications, including job search, equilibrium search, and retirement
- Illustrates the wide applicability of the approach with micro, macro, and marketing examples
This book covers important recent developments in econometrics: methods for efficient estimation in models defined by unconditional and conditional moment restrictions, inference in misspecified models, generalized empirical likelihood estimators, and alternative asymptotic approximations. The first chapter provides a general overview of established nonparametric and parametric approaches to estimation and of conventional frameworks for statistical inference. The next several chapters focus on the estimation of models based on moment restrictions implied by economic theory. The final chapters cover nonconventional asymptotic tools that lead to improved finite-sample inference.
This volume includes papers delivered at the Fourth World Congress of the Econometric Society. It will interest economic theorists and econometricians working in universities, government, and business and financial institutions.
The aim of this book is to present the main statistical tools of econometrics. It covers almost all modern econometric methodology and unifies the approach by relying on a small number of estimation techniques, many derived from generalized method of moments (GMM) estimation. The work is in four parts: Part I sets forth statistical methods, Part II covers regression models, Part III investigates dynamic models, and Part IV treats a set of problems specific to structural econometrics, namely identification and overidentification, simultaneity, and unobservability. Many theoretical examples illustrate the discussion and can be treated as application exercises.
The Econometric Analysis of Network Data serves as an entry point for advanced students, researchers, and data scientists seeking to perform effective analyses of networks, especially inference problems. It introduces the key results and ideas in an accessible yet rigorous way. Although it is a multi-contributor reference, the work is tightly focused and disciplined, accommodating varied specialties while maintaining a single authorial voice.
This book examines the consequences of misspecification for the interpretation of likelihood-based methods of statistical estimation and inference. The analysis concludes with an examination of methods by which the possibility of misspecification can be empirically investigated.
Econometric Models for Industrial Organization focuses on the specification and estimation of econometric models for research in industrial organization. In recent decades, empirical work in industrial organization has moved toward dynamic and equilibrium models, involving econometric methods with features distinct from those used in other areas of applied economics. These lecture notes, aimed at a first- or second-year PhD course, motivate and explain these econometric methods, starting from simple models and building to models with the complexity observed in typical research papers. Topics covered include discrete-choice demand analysis, models of dynamic behavior and dynamic games, multiple equilibria in entry games and partial identification, and auction models.
This book is a full-scale exposition of Charles Manski's new methodology for analyzing empirical questions in the social sciences. He recommends that researchers first ask what can be learned from data alone, and then ask what can be learned when data are combined with credible weak assumptions. Inferences predicated on weak assumptions, he argues, can achieve wide consensus, while those that require strong assumptions are almost inevitably subject to sharp disagreement. Building on the foundation laid in the author's Identification Problems in the Social Sciences (Harvard, 1995), the book's fifteen chapters are organized in three parts. Part I studies prediction with missing or otherwise incomplete data. Part II concerns the analysis of treatment response, which aims to predict outcomes when alternative treatment rules are applied to a population. Part III studies prediction of choice behavior. Each chapter juxtaposes developments of methodology with empirical or numerical illustrations. The book employs simple notation and a basic mathematical apparatus, using only elementary probability theory.
The author draws on examples from a range of disciplines to provide social and behavioral scientists with a toolkit for finding bounds when predicting behavior on the basis of experimental and nonexperimental data.