This volume, edited by Jeffrey Racine, Liangjun Su, and Aman Ullah, contains the latest research on nonparametric and semiparametric econometrics and statistics. Chapters by leading international econometricians and statisticians highlight the interface between econometric and statistical methods for nonparametric and semiparametric procedures.
The majority of empirical research in economics ignores the potential benefits of nonparametric methods, while the majority of advances in nonparametric theory ignore the problems faced in applied econometrics. This book helps bridge this gap between applied economists and theoretical nonparametric econometricians. It discusses basic to advanced nonparametric methods in depth, in terms that a reader with only one year of graduate econometrics can understand. The analysis starts with density estimation and motivates the procedures through methods that should be familiar to the reader. It then moves on to kernel regression, estimation with discrete data, and advanced methods such as estimation with panel data and instrumental variables models. The book pays close attention to the issues that arise in programming, computing speed, and application. In each chapter, the methods discussed are applied to actual data, with attention paid to the presentation of results and potential pitfalls.
Interest in nonparametric methodology has grown considerably over the past few decades, stemming in part from vast improvements in computer hardware and the availability of new software that allows practitioners to take full advantage of these numerically intensive methods. This book is written for advanced undergraduate students, intermediate graduate students, and faculty, and provides a complete teaching and learning course at a more accessible level of theoretical rigor than Racine's earlier book co-authored with Qi Li, Nonparametric Econometrics: Theory and Practice (2007). The open source R platform for statistical computing and graphics is used throughout in conjunction with the R package np. Recent developments in reproducible research are emphasized throughout, with appendices devoted to helping the reader get up to speed with R, R Markdown, TeX, and Git.
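For readers unfamiliar with the np package, the following minimal sketch shows the kind of data-driven kernel density estimation the book builds on; it follows the package's standard usage patterns and is not code drawn from the book itself.

library(np)

data(faithful)  # Old Faithful geyser data shipped with R

# Data-driven (likelihood cross-validated) bandwidth selection
bw <- npudensbw(~ eruptions, data = faithful)

# Unconditional kernel density estimate at the selected bandwidth
fhat <- npudens(bws = bw)

summary(fhat)   # bandwidth and fit summary
plot(fhat)      # visualize the estimated density

The same select-a-bandwidth-then-estimate pattern carries over to the package's regression routines (npregbw, npreg).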
This volume presents advanced techniques for modeling markets, covering a wide spectrum of topics, including advanced individual demand models, time series analysis, state space models, spatial models, structural models, mediation, models of competition, and diffusion models. It is intended as a follow-on and companion to Modeling Markets (2015), in which the authors presented the basics of modeling markets along the classical steps of the model-building process: specification, data collection, estimation, validation, and implementation. This volume builds on the concepts presented in Modeling Markets, with an emphasis on advanced methods used to specify, estimate, and validate marketing models, including structural equation models, partial least squares, mixture models, and hidden Markov models, as well as the generalized method of moments, Bayesian analysis, non/semi-parametric estimation, and endogeneity issues. Specific attention is given to big data. The market environment is changing rapidly and constantly. Models that provide information about the sensitivity of market behavior to marketing activities such as advertising, pricing, promotions, and distribution are now routinely used by managers to identify changes in marketing programs that can improve brand performance. In today's environment of information overload, the challenge is to make sense of data that arrives globally, in real time, from thousands of sources. Although marketing models are now widely accepted, the quality of marketing decisions depends critically on the quality of the models on which those decisions are based. This volume provides an authoritative and comprehensive review, with each chapter including:
· an introduction to the method/methodology
· a numerical example/application in marketing
· references to other marketing applications
· suggestions about software.
Featuring contributions from top authors in the field, this volume explores current and future aspects of modeling markets, providing relevant and timely research and techniques to scientists, researchers, students, academics, and practitioners in marketing, management, and economics.
The Oxford Handbook of Panel Data examines new developments in the theory and applications of panel data. It includes basic topics like non-stationary panels, co-integration in panels, multifactor panel models, panel unit roots, measurement error in panels, incidental parameters and dynamic panels, spatial panels, nonparametric panel data, random coefficients, treatment effects, sample selection, count panel data, limited dependent variable panel models, unbalanced panel models with interactive effects, and influential observations in panel data. Contributors to the Handbook explore applications of panel data to a wide range of topics in economics, including health, labor, marketing, trade, productivity, and macro applications in panels. This Handbook is an informative and comprehensive guide both for those who are relatively new to the field and for those wishing to extend their knowledge to the frontier. It is a trusted and definitive source on panel data, having been edited by Professor Badi Baltagi, widely recognized as one of the foremost econometricians in the area of panel data econometrics. Professor Baltagi has successfully recruited an all-star cast of experts for each of the well-chosen topics in the Handbook.
Financial, Macro and Micro Econometrics Using R, Volume 42, provides state-of-the-art information on important topics in econometrics, including multivariate GARCH, stochastic frontiers, fractional responses, specification testing and model selection, exogeneity testing, causal analysis and forecasting, GMM models, asset bubbles and crises, corporate investments, classification, nonstandard problems, cointegration, and financial market jumps and co-jumps, among other topics.
Volume 40B of Advances in Econometrics examines innovations in stochastic frontier analysis, nonparametric and semiparametric modeling and estimation, A/B experiments, big-data analysis, and quantile regression.
This book offers a cogent and concise treatment of modern econometric theory and methods, along with the underlying ideas from statistics, probability theory, linear algebra, and computer programming. It emphasizes foundations and general principles, but also features many solved exercises, worked examples, and code listings. After mastering the material presented, readers will be ready to take on more advanced work in different areas of quantitative economics and to understand papers from the econometrics literature. The book can be used in graduate-level courses on foundational aspects of econometrics or on fundamental statistical principles. It will also be a valuable reference for independent study. One distinctive aspect of the text is its integration of traditional topics from statistics and econometrics with modern ideas from data science and machine learning; readers will encounter ideas that are driving the current development of statistics and increasingly filtering into econometric methodology. The text treats programming not only as a way to work with data but also as a technique for building intuition via simulation. Many proofs are followed by a simulation that shows the theory in action. As a primer, the book offers readers an entry point into the field, allowing them to see econometrics as a whole rather than as a profusion of apparently unrelated ideas.
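As a flavor of that "proof followed by simulation" style, here is a generic illustration (written in R as an assumption; it is not the book's own language or code) that checks the central limit theorem in a few lines:

set.seed(42)
n <- 500; reps <- 10000

# Standardized sample means of Exp(1) draws (mean 1, sd 1),
# so sqrt(n) * (xbar - 1) should be approximately N(0, 1)
z <- replicate(reps, sqrt(n) * (mean(rexp(n, rate = 1)) - 1))

hist(z, breaks = 50, freq = FALSE,
     main = "CLT by simulation", xlab = "standardized sample mean")
curve(dnorm(x), add = TRUE, lwd = 2)  # overlay the limiting N(0,1) density

The histogram of simulated standardized means lines up with the normal curve, turning an asymptotic theorem into something the reader can see.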
Handbook of Industrial Organization, Volume Four highlights new advances in the field, presenting chapters written by an international board of expert authors.
- Presents authoritative surveys and reviews of advances in theory and econometrics
- Reviews recent research on capital raising methods and institutions
- Includes discussions on developing countries
Info-metrics is the science of modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. It sits at the intersection of information theory, statistical inference, and decision-making under uncertainty. It plays an important role in helping make informed decisions even when information is inadequate or incomplete, because it provides a framework for processing available information with minimal reliance on assumptions that cannot be validated. In this pioneering book, Amos Golan, a leader in info-metrics, focuses on unifying information processing, modeling, and inference within a single constrained optimization framework. Foundations of Info-Metrics provides an overview of modeling and inference, rather than a problem-specific model, and progresses from the simple premise that information is often insufficient to provide a unique answer for decisions we wish to make. Each decision, or solution, is derived from the available input information along with a choice of inferential procedure. The book contains numerous multidisciplinary applications and case studies, which demonstrate the simplicity and generality of the framework in real-world settings. Examples include initial diagnosis at an emergency room, optimal dose decisions, election forecasting, network and information aggregation, weather pattern analyses, portfolio allocation, strategy inference for interacting entities, incorporation of prior information, option pricing, and modeling an interacting social system. Graphical representations illustrate how results can be visualized, while exercises and problem sets facilitate extensions. The book is designed to be accessible to researchers, graduate students, and practitioners across the disciplines.
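As a concrete instance of that constrained optimization view, consider the classical maximum entropy problem that underlies much of info-metrics (a standard formulation, not an excerpt from the book): given m observed moments of an unknown distribution p = (p_1, ..., p_K) over outcomes x_1, ..., x_K, choose the distribution that is maximally noncommittal while remaining consistent with what is known,

\max_{p}\; -\sum_{i=1}^{K} p_i \log p_i
\quad \text{subject to} \quad
\sum_{i=1}^{K} p_i\, g_j(x_i) = \bar{g}_j,\; j = 1, \dots, m,
\qquad \sum_{i=1}^{K} p_i = 1,

whose solution takes the exponential form p_i \propto \exp\!\big(-\sum_{j} \lambda_j\, g_j(x_i)\big), where the Lagrange multipliers \lambda_j summarize the informational content of the constraints.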