Large Dimensional Factor Analysis provides a survey of the main theoretical results for large dimensional factor models, emphasizing results that have implications for empirical work. The authors focus on the development of static factor models and on the use of estimated factors in subsequent estimation and inference. Large Dimensional Factor Analysis discusses how to determine the number of factors, how to conduct inference when estimated factors are used in regressions, how to assess the adequacy of observed variables as proxies for latent factors, how to exploit the estimated factors to conduct unit root tests and test for common trends, and how to estimate panel cointegration models.
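The principal-components estimation of static factors described above can be sketched in a few lines. The following is a minimal, self-contained simulation (not from the book); all dimensions and the noise scale are illustrative assumptions, and the final regression is only a crude check that the estimated factors span the true factor space up to rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, r = 50, 200, 2              # series, time periods, true number of factors

# Simulate a static factor model: X_t = Lambda F_t + e_t, stacked as X (T x N)
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))

# Principal-components estimator: sqrt(T) times the top-r left singular vectors
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]     # normalization: F_hat' F_hat / T = I

# Estimated factors are identified only up to rotation, so regress the true
# factors on the estimated ones and report a crude overall R^2
Fc = F - F.mean(axis=0)
beta, *_ = np.linalg.lstsq(F_hat, Fc, rcond=None)
resid = Fc - F_hat @ beta
r2 = 1 - resid.var() / Fc.var()
print("R^2 of true factors on estimated factors:", round(r2, 3))
```

With both N and T large, the estimated factors are close enough to the true factor space that they can be treated as data in second-stage regressions, which is the premise of the inference results the book surveys.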
This book aims to fill the gap between panel data econometrics textbooks and the latest developments in 'big data', especially large-dimensional panel data econometrics. It introduces important research questions in large panels, including testing for cross-sectional dependence, estimation of factor-augmented panel data models, structural breaks in panels, and group patterns in panels. To tackle these high-dimensional issues, some techniques used in machine learning approaches are also illustrated. Monte Carlo experiments and empirical examples are used to show how to implement these new inference methods. Large-Dimensional Panel Data Econometrics: Testing, Estimation and Structural Changes also introduces new research questions and results from the recent literature in this field.
High-dimensional data appear in many fields, and their analysis has become increasingly important in modern statistics. However, it has long been observed that several well-known methods in multivariate analysis become inefficient, or even misleading, when the data dimension p is larger than, say, several tens. A seminal example is the well-known inefficiency of Hotelling's T²-test in such cases. This example shows that classical large sample limits may no longer hold for high-dimensional data; statisticians must seek new limiting theorems in these instances. Thus, the theory of random matrices (RMT) serves as a much-needed and welcome alternative framework. Based on the authors' own research, this book provides a first-hand introduction to new high-dimensional statistical methods derived from RMT. The book begins with a detailed introduction to useful tools from RMT, and then presents a series of high-dimensional problems with solutions provided by RMT methods.
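The inefficiency of Hotelling's T²-test when p approaches n can be seen directly by simulation. The sketch below (not from the book) compares the test's Monte Carlo power at p = 5 and p = 90 with n = 100, holding the total signal ||mu||² fixed; the function name `t2_power` and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)

def t2_power(n, p, total_shift, reps=300):
    """Monte Carlo power of Hotelling's T^2 test of H0: mu = 0 (Sigma unknown)."""
    crit = f_dist.ppf(0.95, p, n - p)           # exact F(p, n-p) critical value
    mu = np.full(p, total_shift / np.sqrt(p))   # same signal ||mu||^2 for every p
    rejections = 0
    for _ in range(reps):
        X = rng.standard_normal((n, p)) + mu
        xbar = X.mean(axis=0)
        S = np.cov(X, rowvar=False)             # sample covariance (p x p)
        t2 = n * xbar @ np.linalg.solve(S, xbar)
        # (n - p) / (p (n - 1)) * T^2 is F(p, n-p) under H0 for Gaussian data
        rejections += t2 * (n - p) / (p * (n - 1)) > crit
    return rejections / reps

n = 100
low_dim = t2_power(n, 5, 1.0)     # p << n: near-perfect power
high_dim = t2_power(n, 90, 1.0)   # p close to n: power deteriorates sharply
print("power, p=5:", low_dim, "  power, p=90:", high_dim)
```

The deterioration comes from the sample covariance matrix: with p close to n it is nearly singular, and inverting it inflates the variability of the statistic. Quantifying this regime, where p/n converges to a positive constant, is exactly what random matrix theory provides.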
Greater data availability, coupled with developments in statistical and economic theory, has allowed more elaborate and complicated models to be entertained. These include factor models, DSGE models, restricted vector autoregressions, and non-linear models.
Latent factor analysis models are an effective type of machine learning model for addressing high-dimensional and sparse matrices, which are encountered in many big-data-related industrial applications. The performance of a latent factor analysis model relies heavily on appropriate hyper-parameters. However, most hyper-parameters are data-dependent, and using grid search to tune them is laborious and computationally expensive. Hence, how to achieve efficient hyper-parameter adaptation for latent factor analysis models has become a significant question. This is the first book to focus on how particle swarm optimization can be incorporated into latent factor analysis for efficient hyper-parameter adaptation, an approach that offers high scalability in real-world industrial applications. The book will help students, researchers and engineers fully understand the basic methodologies of hyper-parameter adaptation via particle swarm optimization in latent factor analysis models. Further, it will enable them to conduct extensive research and experiments on the real-world applications of the content discussed.
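The combination described above can be sketched on a toy problem: standard particle swarm optimization tuning the regularization strength and learning rate of an SGD-trained latent factor model on a sparse matrix. This is a generic illustration, not the book's method; `lfa_rmse`, the search bounds, and the PSO coefficients are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse data: observe ~10% of the entries of a low-rank matrix
m, n, k = 40, 30, 3
R = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
observed = rng.random((m, n)) < 0.10
train = observed & (rng.random((m, n)) < 0.8)
valid = observed & ~train

def lfa_rmse(lam, lr, epochs=20):
    """Fit a regularized latent factor model by SGD; return validation RMSE."""
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    rows, cols = np.where(train)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            e = R[i, j] - P[i] @ Q[j]
            pi = P[i].copy()
            P[i] += lr * (e * Q[j] - lam * pi)
            Q[j] += lr * (e * pi - lam * Q[j])
    rmse = np.sqrt(np.mean((R - P @ Q.T)[valid] ** 2))
    return rmse if np.isfinite(rmse) else np.inf   # guard against divergence

# Standard PSO over (regularization lambda, learning rate)
lo, hi = np.array([1e-4, 1e-3]), np.array([0.5, 0.05])
n_particles, iters = 5, 6
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([lfa_rmse(*p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([lfa_rmse(*p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
print("best (lambda, lr):", gbest, "validation RMSE:", round(pbest_val.min(), 3))
```

Each particle is one hyper-parameter candidate, and the swarm is attracted toward each particle's own best and the global best, so the number of model fits grows with the swarm size and iteration count rather than exponentially with the number of hyper-parameters as in grid search.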
A comprehensive introduction to the statistical and econometric methods for analyzing high-frequency financial data.

High-frequency trading is an algorithm-based computerized trading practice that allows firms to trade stocks in milliseconds. Over the last fifteen years, the use of statistical and econometric methods for analyzing high-frequency financial data has grown exponentially. This growth has been driven by the increasing availability of such data, the technological advancements that make high-frequency trading strategies possible, and the need for practitioners to analyze these data. This comprehensive book introduces readers to these emerging methods and tools of analysis. Yacine Aït-Sahalia and Jean Jacod cover the mathematical foundations of stochastic processes, describe the primary characteristics of high-frequency financial data, and present the asymptotic concepts that their analysis relies on. Aït-Sahalia and Jacod also deal with estimation of the volatility portion of the model, including methods that are robust to market microstructure noise, and address estimation and testing questions involving the jump part of the model. As they demonstrate, the practical importance and relevance of jumps in financial data are universally recognized, but only recently have econometric methods become available to rigorously analyze jump processes. Aït-Sahalia and Jacod approach high-frequency econometrics with a distinct focus on the financial side of matters while maintaining technical rigor, which makes this book invaluable to researchers and practitioners alike.
Presents new models, methods, and techniques and considers important real-world applications in political science, sociology, economics, marketing, and finance.

Emphasizing interdisciplinary coverage, Bayesian Inference in the Social Sciences builds upon the recent growth in Bayesian methodology and examines an array of topics in model formulation, estimation, and applications. The book presents recent and trending developments in a diverse, yet closely integrated, set of research topics within the social sciences and facilitates the transmission of new ideas and methodology across disciplines while maintaining manageability, coherence, and a clear focus. Bayesian Inference in the Social Sciences features innovative methodology and novel applications in addition to new theoretical developments and modeling approaches, including the formulation and analysis of models with partial observability, sample selection, and incomplete data. Additional areas of inquiry include a Bayesian derivation of empirical likelihood and method of moment estimators, and the analysis of treatment effect models with endogeneity. The book emphasizes practical implementation, reviews and extends estimation algorithms, and examines innovative applications in a multitude of fields. Time series techniques and algorithms are discussed for stochastic volatility, dynamic factor, and time-varying parameter models.
Additional features include:

- Real-world applications and case studies that highlight asset pricing under fat-tailed distributions, price indifference modeling and market segmentation, analysis of dynamic networks, ethnic minorities and civil war, school choice effects, and business cycles and macroeconomic performance
- State-of-the-art computational tools and Markov chain Monte Carlo algorithms, with related materials available via the book's supplemental website
- Interdisciplinary coverage from well-known international scholars and practitioners

Bayesian Inference in the Social Sciences is an ideal reference for researchers in economics, political science, sociology, and business, as well as an excellent resource for academic, government, and regulatory agencies. The book is also useful for graduate-level courses in applied econometrics, statistics, mathematical modeling and simulation, numerical methods, computational analysis, and the social sciences.