Large Covariance and Autocovariance Matrices

Author: Arup Bose

Publisher: CRC Press

Published: 2018-07-03

Total Pages: 272

ISBN-13: 1351398164

Large Covariance and Autocovariance Matrices brings together a collection of recent results on sample covariance and autocovariance matrices in high-dimensional models, along with novel ideas on how to use them for statistical inference in one or more high-dimensional time series models. The prerequisites include knowledge of elementary multivariate analysis, basic time series analysis, and basic results in stochastic convergence. Part I covers different methods of estimating large covariance and autocovariance matrices and the properties of these estimators. Part II covers the relevant material on random matrix theory and non-commutative probability. Part III provides results on limit spectra and asymptotic normality of traces of symmetric matrix polynomial functions of sample autocovariance matrices in high-dimensional linear time series models. These are used to develop graphical and significance tests for different hypotheses involving one or more independent high-dimensional linear time series. The book should be of interest to people in econometrics and statistics (large covariance matrices and high-dimensional time series), mathematics (random matrices and free probability), and computer science (wireless communication). Parts of it can be used in post-graduate courses on high-dimensional statistical inference, high-dimensional random matrices, and high-dimensional time series models. It should be particularly attractive to researchers developing statistical methods in high-dimensional time series models.

Arup Bose is a professor at the Indian Statistical Institute, Kolkata, India. He is a distinguished researcher in mathematical statistics and has been working on high-dimensional random matrices for the last fifteen years. He has been editor of Sankhyā for several years and has served on the editorial boards of several other journals. He is a Fellow of the Institute of Mathematical Statistics, USA, and of all three national science academies of India, as well as the recipient of the S.S. Bhatnagar Award and the C.R. Rao Award. His first book, Patterned Random Matrices, was also published by Chapman & Hall. He has a forthcoming graduate text, U-statistics, M-estimates and Resampling (with Snigdhansu Chatterjee), to be published by Hindustan Book Agency.

Monika Bhattacharjee is a post-doctoral fellow at the Informatics Institute, University of Florida. After graduating from St. Xavier's College, Kolkata, she obtained her master's in 2012 and PhD in 2016 from the Indian Statistical Institute. Her thesis on high-dimensional covariance and autocovariance matrices, written under the supervision of Dr. Bose, has received high acclaim.


High-Dimensional Covariance Matrix Estimation

Author: Aygul Zagidullina

Publisher: Springer Nature

Published: 2021-10-29

Total Pages: 123

ISBN-13: 3030800652

This book presents covariance matrix estimation and related aspects of random matrix theory. It focuses on the sample covariance matrix estimator and provides a holistic description of its properties under two asymptotic regimes: the traditional one, and the high-dimensional regime that better fits the big data context. It draws attention to the deficiencies of standard statistical tools when used in the high-dimensional setting, and introduces the basic concepts and major results related to spectral statistics and random matrix theory under high-dimensional asymptotics in an understandable and reader-friendly way. The aim of this book is to inspire applied statisticians, econometricians, and machine learning practitioners who analyze high-dimensional data to apply the recent developments in their work.
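
To see the deficiency in question concretely: even when the population covariance is the identity, the eigenvalues of the sample covariance matrix spread out over the Marchenko-Pastur support once p/n is not negligible. A minimal NumPy sketch (an illustration of this standard fact, not code from the book):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 100                     # concentration ratio c = p/n = 0.5
    X = rng.standard_normal((n, p))     # population covariance = identity
    S = X.T @ X / n                     # sample covariance matrix
    eig = np.linalg.eigvalsh(S)

    c = p / n
    print("population eigenvalues are all 1.0")
    print(f"sample eigenvalues range over [{eig.min():.2f}, {eig.max():.2f}]")
    # Marchenko-Pastur support for the identity population:
    print(f"predicted support: [{(1 - c**0.5)**2:.2f}, {(1 + c**0.5)**2:.2f}]")

The sample eigenvalues fill roughly [0.09, 2.91] rather than concentrating near 1, which is exactly the kind of distortion the high-dimensional regime introduces.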


High-Dimensional Covariance Estimation

Author: Mohsen Pourahmadi

Publisher: John Wiley & Sons

Published: 2013-05-28

Total Pages: 204

ISBN-13: 1118573668

Methods for estimating sparse and large covariance matrices.

Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields, including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices, as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning. Recently, the classical sample covariance methodologies have been modified and improved upon to meet the needs of statisticians and researchers dealing with large correlated datasets. High-Dimensional Covariance Estimation focuses on the methodologies based on shrinkage, thresholding, and penalized likelihood, with applications to Gaussian graphical models, prediction, and mean-variance portfolio management. The book relies heavily on regression-based ideas and interpretations to connect and unify many existing methods and algorithms for the task. High-Dimensional Covariance Estimation features chapters on:

- Data, Sparsity, and Regularization
- Regularizing the Eigenstructure
- Banding, Tapering, and Thresholding Covariance Matrices
- Sparse Gaussian Graphical Models
- Multivariate Regression

The book is an ideal resource for researchers in statistics, mathematics, business and economics, computer sciences, and engineering, as well as a useful text or supplement for graduate-level courses in multivariate analysis, covariance estimation, statistical learning, and high-dimensional data analysis.
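
For flavor, the banding and thresholding regularizers covered in the book amount to very short procedures; a minimal NumPy sketch of both (illustrative only, not the book's code):

    import numpy as np

    def band(S, k):
        """Keep entries within k diagonals of the main diagonal; zero the rest."""
        i, j = np.indices(S.shape)
        return np.where(np.abs(i - j) <= k, S, 0.0)

    def soft_threshold(S, lam):
        """Soft-threshold the off-diagonal entries of S at level lam."""
        T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
        np.fill_diagonal(T, np.diag(S))   # leave the variances untouched
        return T

Neither operation guarantees a positive definite result in finite samples, which is part of what motivates the eigenstructure and penalized-likelihood chapters.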


A Novel Two-Stage Adaptive Method for Estimating Large Covariance and Precision Matrices

Author: Rajanikanth Rajendran

Publisher:

Published: 2019

Total Pages: 73

ISBN-13:

Estimating large covariance and precision (inverse covariance) matrices has become increasingly important in high-dimensional statistics because of its wide applications. The estimation problem is challenging not only theoretically, due to the positive-definiteness constraint, but also computationally, because of the curse of dimensionality. Many types of estimators have been proposed, such as thresholding under a sparsity assumption on the target matrix, and banding or tapering the sample covariance matrix. However, these estimators are not always guaranteed to be positive definite, especially for finite samples, and the sparsity assumption is rather restrictive. We propose a novel two-stage adaptive method based on the Cholesky decomposition of a general covariance matrix. By banding the precision matrix in the first stage and adapting those estimates in the second-stage estimation, we develop a computationally efficient and statistically accurate method for estimating high-dimensional precision matrices. We demonstrate the finite-sample performance of the proposed method by simulations from autoregressive, moving average, and long-range dependent processes. We illustrate its wide applicability by analyzing financial data, such as the S&P 500 index and IBM stock returns, and electric power consumption of individual households. The theoretical properties of the proposed method are also investigated within a large class of covariance matrices.
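
The Cholesky route mentioned in the abstract can be sketched concretely. For a fixed order of variables, the modified Cholesky decomposition writes the precision matrix as Omega = T' D^{-1} T, where row j of the unit lower-triangular matrix T holds the negated coefficients from regressing the j-th variable on its predecessors, and D collects the residual variances; banding restricts each regression to at most k immediate predecessors. A minimal sketch of such a first-stage banded estimator (assuming centered data; the bandwidth choice and the thesis's second-stage adaptation are not reproduced here):

    import numpy as np

    def banded_cholesky_precision(X, k):
        """First-stage banded precision estimate via sequential regressions.
        X: n-by-p matrix of centered observations; k: band width."""
        n, p = X.shape
        T = np.eye(p)                  # unit lower-triangular Cholesky factor
        d = np.empty(p)
        d[0] = X[:, 0] @ X[:, 0] / n
        for j in range(1, p):
            lo = max(0, j - k)         # regress x_j on its k predecessors only
            Z = X[:, lo:j]
            beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
            T[j, lo:j] = -beta
            resid = X[:, j] - Z @ beta
            d[j] = resid @ resid / n
        return T.T @ np.diag(1.0 / d) @ T   # Omega-hat = T' D^{-1} T

By construction the result is positive definite whenever all residual variances are positive, which is the appeal of the Cholesky parametrization over direct banding of the sample covariance matrix.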


Large Sample Covariance Matrices and High-Dimensional Data Analysis

Author: Jianfeng Yao

Publisher: Cambridge University Press

Published: 2015-03-26

Total Pages: 0

ISBN-13: 9781107065178

High-dimensional data appear in many fields, and their analysis has become increasingly important in modern statistics. However, it has long been observed that several well-known methods in multivariate analysis become inefficient, or even misleading, when the data dimension p is larger than, say, several tens. A seminal example is the well-known inefficiency of Hotelling's T²-test in such cases. This example shows that classical large-sample limits may no longer hold for high-dimensional data; statisticians must seek new limiting theorems in these instances. Thus, the theory of random matrices (RMT) serves as a much-needed and welcome alternative framework. Based on the authors' own research, this book provides a first-hand introduction to new high-dimensional statistical methods derived from RMT. The book begins with a detailed introduction to useful tools from RMT, and then presents a series of high-dimensional problems with solutions provided by RMT methods.
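
For orientation, the best-known of these new limiting theorems is the Marchenko-Pastur law (a standard RMT result, stated here for reference): if the entries of the n x p data matrix are i.i.d. with mean zero and unit variance, and p/n -> c in (0, 1], then the empirical spectral distribution of the sample covariance matrix converges to the density

    f_c(x) = \frac{1}{2\pi c x}\sqrt{(b - x)(x - a)}, \qquad a = (1 - \sqrt{c})^2, \quad b = (1 + \sqrt{c})^2,

supported on [a, b]. Classical fixed-p asymptotics correspond to c -> 0, where the density collapses to a point mass at 1.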


Estimation of Autocovariance Matrices for Infinite Dimensional Vector Linear Process

Author: Monika Bhattacharjee

Publisher:

Published: 2014

Total Pages: 0

ISBN-13:

Consider an infinite-dimensional vector linear process. Under suitable assumptions on the parameter space, we provide consistent estimators of the autocovariance matrices. In particular, under causality, this includes the infinite-dimensional vector autoregressive (IVAR) process. In that case, we obtain consistent estimators for the parameter matrices. An explicit expression for the estimators is obtained for IVAR(1) under a fairly realistic parameter space. We also show that, under some mild restrictions, the consistent estimator of the marginal large-dimensional variance-covariance matrix has the same convergence rate as in the case of i.i.d. samples.
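
For orientation, in a finite-dimensional causal VAR(1), x_t = A x_{t-1} + e_t, the autocovariances satisfy Gamma_1 = A Gamma_0, so a Yule-Walker-type plug-in recovers the parameter matrix from estimated autocovariances. A minimal finite-dimensional sketch (it ignores the regularization that makes such estimators consistent in the infinite-dimensional setting studied here):

    import numpy as np

    def sample_autocov(X, h):
        """Lag-h sample autocovariance of a centered p-dim series X (T-by-p)."""
        T = X.shape[0]
        return X[h:].T @ X[:T - h] / T

    def var1_yule_walker(X):
        """Plug-in estimate of A in x_t = A x_{t-1} + e_t via Gamma_1 Gamma_0^{-1}."""
        G0 = sample_autocov(X, 0)
        G1 = sample_autocov(X, 1)
        return G1 @ np.linalg.inv(G0)

When the dimension grows with the sample size, the estimated Gamma_0 becomes ill-conditioned and this naive inversion breaks down, which is precisely the regime the paper addresses.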


Spectral Analysis of Large Dimensional Random Matrices

Author: Zhidong Bai

Publisher: Springer Science & Business Media

Published: 2009-12-10

Total Pages: 560

ISBN-13: 1441906614

The aim of the book is to introduce basic concepts, main results, and widely applied mathematical tools in the spectral analysis of large dimensional random matrices. The core of the book focuses on results established under moment conditions on random variables using probabilistic methods, and is thus easily applicable to statistics and other areas of science. The book introduces fundamental results, most of them investigated by the authors, such as the semicircular law of Wigner matrices, the Marchenko-Pastur law, the limiting spectral distribution of the multivariate F matrix, limits of extreme eigenvalues, spectrum separation theorems, convergence rates of empirical distributions, central limit theorems of linear spectral statistics, and the partial solution of the famous circular law. While deriving the main results, the book simultaneously emphasizes the ideas and methodologies of the fundamental mathematical tools, among them truncation techniques, matrix identities, moment convergence theorems, and the Stieltjes transform. Its treatment is especially fitting to the needs of mathematics and statistics graduate students and beginning researchers who have a basic knowledge of matrix theory and a graduate-level understanding of probability theory, and who wish to learn the concepts and tools for solving problems in this area. It can also serve as a detailed handbook on results of large dimensional random matrices for practical users.

This second edition includes two additional chapters: one on the authors' results on the limiting behavior of eigenvectors of sample covariance matrices, and another on applications to wireless communications and finance. While bringing this edition up to date on recent work, it also provides summaries of other areas which are typically considered part of the general field of random matrix theory.
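
The semicircular law that opens the book is easy to reproduce numerically; a minimal NumPy sketch (a standard illustration, not code from the book):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)   # symmetric Wigner matrix, scaled by 1/sqrt(n)
    eig = np.linalg.eigvalsh(W)

    # Compare the eigenvalue histogram to the semicircular density
    # f(x) = sqrt(4 - x^2) / (2*pi) on [-2, 2].
    hist, edges = np.histogram(eig, bins=50, range=(-2.0, 2.0), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    semicircle = np.sqrt(np.maximum(4 - mid**2, 0)) / (2 * np.pi)
    print("max deviation from semicircle:", np.abs(hist - semicircle).max())

Even at n = 1000 the histogram tracks the semicircle closely, the kind of universality phenomenon the book develops rigorously under moment conditions.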


Large Dimensional Covariance Matrix Estimation with Decomposition-based Regularization

Author:

Publisher:

Published: 2014

Total Pages: 129

ISBN-13:

Estimation of population covariance matrices from samples of multivariate data is of great importance. When the dimension of a covariance matrix is large but the sample size is limited, it is well known that the sample covariance matrix is unsatisfactory. However, improving covariance matrix estimation is not straightforward, mainly because of the positive-definiteness constraint. This thesis considers decomposition-based methods to circumvent this primary difficulty. It includes two ways of estimating a covariance matrix by regularizing the factor matrices obtained from decompositions: one approach relies on the modified Cholesky decomposition of Pourahmadi, while the other, via the matrix exponential or matrix logarithm, is closely related to the spectral decomposition.

We explore covariance matrix estimation by imposing L1 regularization on the entries of Cholesky factor matrices, and find that the estimates from this approach are not sensitive to the order of variables. A given order of variables is a prerequisite for applying the modified Cholesky decomposition, while in practice information on the order of variables is often unavailable. We take advantage of this insensitivity to remove the requirement of order information, and propose an order-invariant covariance matrix estimate obtained by refining the estimates corresponding to different orders of variables. The refinement not only guarantees the positive definiteness of the estimated covariance matrix, but is also applicable in general situations where no order of variables is pre-specified. The refined estimate can be approximated by combining only a moderate number of representative estimates. Numerical simulations are conducted to evaluate the performance of the proposed method in comparison with several other estimates.

By applying the matrix exponential technique, the problem of estimating positive definite covariance matrices is transformed into one of estimating symmetric matrices. There are close connections between covariance matrices and their logarithm matrices, and thus pursuing a matrix logarithm with certain properties helps restore the original covariance matrix. The covariance matrix estimate obtained by applying L1 regularization to the entries of the matrix logarithm is compared with other estimates in simulation studies and real data analysis.
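
The matrix-logarithm route in the second approach can be sketched directly: the logarithm of a positive definite matrix is an unconstrained symmetric matrix, so one may regularize there and exponentiate back, recovering positive definiteness by construction. A minimal sketch with soft thresholding in log space (illustrative, not the thesis implementation; assumes S is positive definite):

    import numpy as np
    from scipy.linalg import expm, logm

    def logm_threshold_cov(S, lam):
        """Soft-threshold the off-diagonal entries of log(S), then map back."""
        L = logm(S).real                 # matrix logarithm of a PD matrix is real
        T = np.sign(L) * np.maximum(np.abs(L) - lam, 0.0)
        np.fill_diagonal(T, np.diag(L))  # keep the diagonal of log(S)
        T = (T + T.T) / 2                # enforce exact symmetry
        return expm(T)                   # positive definite by construction

Because the exponential of any symmetric matrix is positive definite, the positive-definiteness constraint that complicates direct regularization disappears entirely in log space.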


Shrinkage Estimation of Large Covariance Matrices

Author: Olivier Ledoit

Publisher:

Published: 2019

Total Pages:

ISBN-13:

Under rotation-equivariant decision theory, sample covariance matrix eigenvalues can be optimally shrunk by recombining sample eigenvectors with a (potentially nonlinear) function of the unobservable population covariance matrix. The optimal shape of this function reflects the loss/risk that is to be minimized. We introduce a broad family of covariance matrix estimators that can handle all regular functional transformations of the population covariance matrix under large-dimensional asymptotics. We solve the problem of optimal covariance matrix estimation under a variety of loss functions motivated by statistical precedent, probability theory, and differential geometry. The key statistical ingredient of our nonlinear shrinkage methodology is a new estimator of the angle between sample and population eigenvectors, without making strong assumptions on the population eigenvalues. We also compare our methodology to two simpler ones from the literature, linear shrinkage and shrinkage based on the spiked covariance model, via both Monte Carlo simulations and an empirical application.
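
The rotation-equivariant structure described in the abstract has a simple skeleton: keep the sample eigenvectors, replace the sample eigenvalues by shrunk ones, and recombine; all of the statistical content lives in the shrinkage function. A minimal skeleton with a naive linear pull toward the grand mean as a placeholder (the paper's loss-specific nonlinear shrinkers are not reproduced here):

    import numpy as np

    def equivariant_shrinkage(S, shrink):
        """Rotation-equivariant estimator: sample eigenvectors, shrunk eigenvalues."""
        lam, U = np.linalg.eigh(S)          # sample spectral decomposition
        return U @ np.diag(shrink(lam)) @ U.T

    def linear_toward_mean(lam, rho=0.5):
        """Placeholder shrinker: pull each eigenvalue toward the grand mean."""
        return (1 - rho) * lam + rho * lam.mean()

    # Usage: Sigma_hat = equivariant_shrinkage(S, linear_toward_mean)

The nonlinear estimators in the paper replace this placeholder with functions calibrated, via the angles between sample and population eigenvectors, to the loss being minimized.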