Exploration of Regularized Covariance Estimates with Analytical Shrinkage Intensity for Producing Invertible Covariance Matrices in High Dimensional Hyperspectral Data

Published: 2007

Total Pages: 18

Removing background from hyperspectral scenes is a common step in the search for materials of interest. Some approaches to background subtraction use spectral library data and require an invertible covariance matrix for each member of the library. This is challenging because, although the covariance matrix can always be calculated, standard methods for estimating its inverse require that the data set for each library member contain many more spectral measurements than spectral channels, which is rarely the case. An alternative approach, shrinkage estimation, is investigated as a way of producing an invertible covariance matrix estimate when the number of spectral measurements is less than the number of spectral channels. The approach analytically derives a target matrix and a shrinkage parameter that together modify the existing covariance matrix of the data to make it invertible. The underlying theory is used to develop several estimates, which are then computed and inspected on a set of hyperspectral data. This technique shows some promise for arriving at an invertible covariance estimate for small hyperspectral data sets.
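The shrinkage idea described above can be sketched as follows. This is a minimal illustration, not the paper's analytic method: the target is a scaled identity and the intensity `alpha` is supplied by hand rather than derived analytically (Ledoit-Wolf-style formulas give closed-form choices).

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Linear shrinkage estimate S* = (1 - alpha) * S + alpha * T,
    where S is the sample covariance and T = mu * I is a scaled
    identity target (mu = average sample variance).

    For 0 < alpha <= 1 the estimate is positive definite, and
    hence invertible, even when samples n < channels p.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)      # p x p sample covariance (rank < p here)
    mu = np.trace(S) / p             # average variance across channels
    T = mu * np.eye(p)               # shrinkage target
    return (1.0 - alpha) * S + alpha * T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))    # n=20 measurements, p=50 spectral channels
S_shrunk = shrinkage_covariance(X, alpha=0.3)
# The regularized estimate can be inverted even though n < p.
S_inv = np.linalg.inv(S_shrunk)
```

The sample covariance alone is singular here (rank at most n - 1 = 19 for a 50 x 50 matrix); blending in any positive multiple of the identity lifts every eigenvalue away from zero.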


High-Dimensional Covariance Matrix Estimation: Shrinkage Toward a Diagonal Target

Author: Mr. Sakai Ando

Publisher: International Monetary Fund

Published: 2023-12-08

Total Pages: 32

This paper proposes a novel shrinkage estimator for high-dimensional covariance matrices by extending the Oracle Approximating Shrinkage (OAS) of Chen et al. (2009) to target the diagonal elements of the sample covariance matrix. We derive the closed-form solution of the shrinkage parameter and show by simulation that, when the diagonal elements of the true covariance matrix exhibit substantial variation, our method reduces the Mean Squared Error, compared with the OAS that targets an average variance. The improvement is larger when the true covariance matrix is sparser. Our method also reduces the Mean Squared Error for the inverse of the covariance matrix.
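The diagonal-target idea can be sketched as follows; this is a minimal illustration with a hand-picked intensity `rho`, and does not reproduce the paper's closed-form OAS-style formula for the shrinkage parameter.

```python
import numpy as np

def shrink_to_diagonal(X, rho):
    """Shrink the sample covariance toward its own diagonal:
    S* = (1 - rho) * S + rho * diag(S).

    Unlike shrinkage toward an average-variance identity target,
    this preserves heterogeneous diagonal entries exactly while
    damping every off-diagonal entry by (1 - rho).
    """
    S = np.cov(X, rowvar=False)
    return (1.0 - rho) * S + rho * np.diag(np.diag(S))

rng = np.random.default_rng(1)
# Variables with substantially different true variances (1, 2, ..., 10):
X = rng.standard_normal((30, 10)) * np.sqrt(np.arange(1, 11))
S_star = shrink_to_diagonal(X, rho=0.5)
```

Preserving the diagonal is exactly what distinguishes this target from the average-variance target: when true variances vary a lot, shrinking every variance toward their common mean introduces bias that the diagonal target avoids.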


Large Dimensional Covariance Matrix Estimation with Decomposition-based Regularization

Published: 2014

Estimation of population covariance matrices from samples of multivariate data is of great importance. When the dimension of a covariance matrix is large but the sample size is limited, it is well known that the sample covariance matrix is unsatisfactory. Improving covariance matrix estimation is not straightforward, however, mainly because of the constraint of positive definiteness. This thesis considers decomposition-based methods to circumvent this primary difficulty, covering two ways of estimating a covariance matrix by regularizing the factor matrices of a decomposition. One approach relies on the modified Cholesky decomposition of Pourahmadi; the other, based on the matrix exponential and matrix logarithm, is closely related to the spectral decomposition. We explore covariance matrix estimation with L1 regularization imposed on the entries of the Cholesky factor matrices, and find that the resulting estimates are not sensitive to the order of the variables. A given order of variables is a prerequisite for applying the modified Cholesky decomposition, yet in practice the order of variables is often unknown. We take advantage of this insensitivity to remove the requirement of order information, and propose an order-invariant covariance matrix estimate by refining the estimates corresponding to different variable orders. The refinement not only guarantees positive definiteness of the estimated covariance matrix but is also applicable in general situations where no variable order is pre-specified. The refined estimate can be approximated by combining only a moderate number of representative estimates. Numerical simulations evaluate the performance of the proposed method in comparison with several other estimates.
By applying the matrix exponential technique, the problem of estimating positive definite covariance matrices is transformed into one of estimating symmetric matrices. Because covariance matrices and their matrix logarithms are closely connected, pursuing a matrix logarithm with certain properties helps restore the original covariance matrix. The covariance matrix estimate obtained by applying L1 regularization to the entries of the matrix logarithm is compared with other estimates in simulation studies and real data analysis.
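The matrix-logarithm route can be illustrated as follows. This is a sketch of the general mechanism only, using soft-thresholding of the log-domain off-diagonals as the L1-style shrinkage, and is not the thesis's exact estimator.

```python
import numpy as np

def soft_threshold(A, lam):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def logm_regularized_cov(S, lam):
    """Map a covariance matrix to its matrix logarithm via the spectral
    decomposition, soft-threshold the off-diagonal entries of the
    (unconstrained symmetric) log matrix, and map back with the matrix
    exponential. The result is symmetric positive definite by
    construction, with no explicit positive-definiteness constraint.
    """
    w, V = np.linalg.eigh(S)
    w = np.maximum(w, 1e-10)                  # guard against zero eigenvalues
    L = V @ np.diag(np.log(w)) @ V.T          # matrix logarithm of S
    off = L - np.diag(np.diag(L))
    L_reg = np.diag(np.diag(L)) + soft_threshold(off, lam)
    w2, V2 = np.linalg.eigh(L_reg)
    return V2 @ np.diag(np.exp(w2)) @ V2.T    # matrix exponential back

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 6))
S_hat = logm_regularized_cov(np.cov(X, rowvar=False), lam=0.1)
```

The point of the transformation is visible in the last step: whatever symmetric matrix the regularization produces, exponentiating it always yields a valid covariance matrix.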


Estimation of Covariance Matrix for High-dimensional Data and High-frequency Data

Author: Changgee Chang

Published: 2012

Total Pages: 86

ISBN-13: 9781267601360

The second part of this thesis concerns multivariate volatility estimation for high-frequency data. I propose an estimator that extends the realized kernel method, which was introduced for univariate data. Viewing the estimator from a different angle suggests a natural extension, and several asymptotic properties are discussed. I also investigate the optimal kernels and provide a regularization method that produces a positive-definite covariance matrix. A simulation study verifies the asymptotic theory and assesses the finite-sample performance of the proposed method.


Regularized Semiparametric Estimation of High Dimensional Dynamic Conditional Covariance Matrices

Author: Claudio Morana

Published: 2019

Total Pages: 58

This paper proposes a three-step estimation strategy for dynamic conditional correlation models. In the first step, conditional variances for individual and aggregate series are estimated by quasi-maximum likelihood (QML), equation by equation. In the second step, conditional covariances are estimated by means of the polarization identity, and conditional correlations by their usual normalization. In the third step, the two-step conditional covariance and correlation matrices are regularized by a new non-linear shrinkage procedure and used as the starting value for maximizing the joint likelihood of the model, yielding the final, smoothed third-step estimates of the conditional covariance and correlation matrices. Thanks to its light computational burden, the proposed strategy makes it feasible to estimate high-dimensional conditional covariance and correlation matrices. An application to a global minimum variance portfolio is also provided, confirming that SP-DCC is a simple and viable alternative to existing DCC models.
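The polarization identity used in the second step expresses a covariance purely in terms of variances, Cov(X, Y) = [Var(X + Y) - Var(X - Y)] / 4, so pairwise covariances can be recovered from univariate variance estimates alone. A quick check with sample moments:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)
y = 0.6 * x + 0.8 * rng.standard_normal(10_000)   # true Cov(x, y) = 0.6

# Direct sample covariance:
cov_direct = np.cov(x, y, ddof=1)[0, 1]

# Via the polarization identity, using only univariate variances:
cov_polar = (np.var(x + y, ddof=1) - np.var(x - y, ddof=1)) / 4.0
```

With the same degrees-of-freedom convention on both sides, the identity holds exactly for sample moments (not just asymptotically), which is what lets the second step reuse the equation-by-equation variance machinery from the first step.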


Conditional Covariance Estimation for Dimension Reduction and Sensitivity Analysis

Author: Maikol Solís

Published: 2014

Total Pages: 137

This thesis focuses on the estimation of conditional covariance matrices and their applications, in particular to dimension reduction and sensitivity analysis. In Chapter 2, we work in a high-dimensional nonlinear regression setting. The main objective is to use the sliced inverse regression methodology. Using a functional operator that depends on the joint density, we apply a Taylor expansion around a preliminary estimator. We prove two things: our estimator is asymptotically normal with a variance depending only on the linear part, and this variance is efficient in the Cramér-Rao sense. In Chapter 3, we study the estimation of conditional covariance matrices, first coordinate-wise, where the parameters depend on the unknown joint density, which we replace by a kernel estimator. We prove that the mean squared error of the nonparametric estimator attains a parametric rate of convergence if the joint distribution belongs to a class of smooth functions; otherwise, we obtain a slower rate depending on the regularity of the model. For the estimator of the whole matrix, we apply a regularization of "banding" type. Finally, in Chapter 4, we apply our results to estimate the Sobol (sensitivity) indices. These indices measure the influence of the inputs on the output in complex models. The advantage of our implementation is that the Sobol indices can be estimated without resorting to computationally expensive Monte Carlo methods. Illustrations in the chapter show the capabilities of our estimator.
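The "banding" regularization mentioned for the whole-matrix estimator keeps only entries within a fixed distance of the main diagonal and zeroes the rest. A minimal sketch, where the bandwidth `k` is a hypothetical tuning choice:

```python
import numpy as np

def band_matrix(S, k):
    """Banding regularization: keep entries within k diagonals of the
    main diagonal and set all others to zero. Appropriate when the
    variables have a natural ordering and dependence decays with
    distance between indices.
    """
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, S, 0.0)

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 8))
S_banded = band_matrix(np.cov(X, rowvar=False), k=1)
# Only the main diagonal and the first off-diagonals survive.
```

Note that banding, unlike the decomposition-based methods discussed earlier in this listing, does not by itself guarantee positive definiteness; it controls estimation error by exploiting the ordering of the variables.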