Optimality Guarantees for Non-convex Low Rank Matrix Recovery Problems


Author: Christopher Dale White


Published: 2015

Total Pages: 196



Low rank matrices lie at the heart of many techniques in scientific computing and machine learning. In this thesis, we examine various scenarios in which we seek to recover an underlying low rank matrix from compressed or noisy measurements. Specifically, we consider the recovery of a rank r positive semidefinite matrix XXᵀ ∈ ℝ^(n×n) from m scalar measurements of the form [mathematical equation] via minimization of the natural ℓ2 loss function [mathematical equation]; we also analyze the quadratic nonnegative matrix factorization (QNMF) approach to clustering, where the matrix to be factorized is the transition matrix of a reversible Markov chain. In all of these instances, the optimization problem we wish to solve has many local optima and is highly non-convex. Instead of analyzing convex relaxations, which tend to be complicated and computationally expensive, we operate directly on the natural non-convex problems and prove both local and global optimality guarantees for a family of algorithms.
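The factored, non-convex approach described above can be made concrete with a toy sketch. The instance below is a hedged illustration only; the sensing matrices, sizes, initialization, and step size are assumptions for demonstration, not taken from the thesis. It recovers a rank-1 PSD matrix uuᵀ from quadratic measurements bᵢ = uᵀAᵢu by running plain gradient descent on the non-convex ℓ2 loss.

```python
# Toy sketch: recover a rank-1 PSD matrix u u^T from quadratic measurements
# b_i = u^T A_i u by gradient descent on the natural least-squares loss
# f(v) = sum_i (v^T A_i v - b_i)^2. All data here is illustrative.

u_true = [1.0, 2.0]                       # hypothetical ground-truth factor

# three symmetric 2x2 sensing matrices (hand-picked for the example)
As = [[[1.0, 0.0], [0.0, 1.0]],
      [[1.0, 0.0], [0.0, 0.0]],
      [[0.0, 1.0], [1.0, 0.0]]]

def quad(A, v):                           # v^T A v
    return sum(A[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

b = [quad(A, u_true) for A in As]         # measurements: [5, 1, 4]

def loss(v):
    return sum((quad(A, v) - bi) ** 2 for A, bi in zip(As, b))

v = [1.5, 1.5]                            # non-convex problem: init matters
step = 0.01
for _ in range(5000):
    # gradient of f: sum_i 4 (v^T A_i v - b_i) A_i v, for symmetric A_i
    g = [0.0, 0.0]
    for A, bi in zip(As, b):
        r = quad(A, v) - bi
        for i in range(2):
            g[i] += 4.0 * r * sum(A[i][j] * v[j] for j in range(2))
    v = [v[i] - step * g[i] for i in range(2)]
```

On this tiny instance, gradient descent from the chosen initialization drives the loss to zero and recovers the true factor (up to the inherent sign ambiguity of uuᵀ), which is the flavor of guarantee the thesis establishes under suitable conditions.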


Large Scale Matrix Factorization with Guarantees


Author: Venkata Sesha Pavana Srinadh Bhojanapalli


Published: 2015

Total Pages: 422



Low rank matrix factorization is an important step in many high dimensional machine learning algorithms. Traditional factorization algorithms do not scale well with growing data sizes, and there is a need for faster, scalable algorithms. In this dissertation we explore the following two major themes to design scalable factorization algorithms for matrix completion, low rank approximation (PCA), and semi-definite optimization. (a) Sampling: We develop the optimal way to sample entries of any matrix while preserving its spectral properties. Using this sparse sketch (the set of sampled entries) instead of the entire matrix gives rise to scalable algorithms with good approximation guarantees. (b) Bi-linear factorization structure: We design algorithms that operate explicitly on the factor space instead of on the matrix. While the bi-linear structure of the factorization leads, in general, to a non-convex optimization problem, we show that under appropriate conditions these algorithms indeed recover the solution for the above problems. Both techniques (individually or in combination) lead to algorithms with lower computational complexity and memory usage. Finally, we extend these ideas of sampling and explicit factorization to design algorithms for higher order tensors.
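The sampling theme in (a) can be illustrated with a classical element-wise scheme (the precise sampling distribution in the dissertation may differ; the matrix and sampling budget below are illustrative assumptions): keep entry (i, j) with probability roughly proportional to its squared magnitude, and rescale the survivors by 1/p so the sparse sketch is an unbiased estimate of the original matrix.

```python
import random
random.seed(1)

# Element-wise sampling sketch: keep entry (i, j) independently with
# probability p_ij proportional to M_ij^2 (capped at 1), rescaled by 1/p_ij
# so that E[sketch] = M. Matrix and budget are illustrative.

M = [[4.0, 0.1, 0.0],
     [0.1, 3.0, 0.2],
     [0.0, 0.2, 2.0]]

fro2 = sum(x * x for row in M for x in row)   # squared Frobenius norm
budget = 6.0                                  # expected number of sampled entries

sketch = {}                                   # sparse representation: (i, j) -> value
for i in range(3):
    for j in range(3):
        p = min(1.0, budget * M[i][j] ** 2 / fro2)
        if p > 0 and random.random() < p:
            sketch[(i, j)] = M[i][j] / p      # rescale => unbiased estimator
```

Note that large entries get sampling probability capped at 1 (they are always kept), so the sketch concentrates its budget on the entries that carry most of the spectral mass, which is what makes downstream algorithms on the sketch both cheap and accurate.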


Non-convex Optimization for Machine Learning


Author: Prateek Jain

Publisher: Foundations and Trends in Machine Learning

Published: 2017-12-04

Total Pages: 218

ISBN-13: 9781680833683


Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. The monograph is as self-contained as possible without losing focus on its main topic of non-convex optimization techniques. It opens with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts, and concludes with four interesting applications in machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. For each of the topics discussed, the monograph also contains exercises and figures designed to engage the reader, as well as extensive bibliographic notes pointing toward classical works and recent advances. It can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning; it is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course. Courses in machine learning, optimization, and signal processing may benefit from the inclusion of such topics.


Non-convex Optimization Methods for Sparse and Low-rank Reconstruction


Author: Penghang Yin


Published: 2016

Total Pages: 93

ISBN-13: 9781339830124


An algorithmic framework, based on the difference of convex functions algorithm (DCA), is proposed for minimizing the difference of ℓ1 and ℓ2 norms (ℓ1-2 minimization), as well as a wide class of concave sparse metrics, for compressed sensing problems. The resulting algorithm iterates a sequence of ℓ1 minimization problems. An exact sparse recovery theory is established to show that the proposed framework always improves on basis pursuit (ℓ1 minimization) and inherits its robustness. Numerical examples on success rates of sparse solution recovery further illustrate that, unlike most existing non-convex compressed sensing solvers in the literature, our method always outperforms basis pursuit, no matter how ill-conditioned the measurement matrix is. As the counterpart of ℓ1-2 minimization for low-rank matrix recovery, we present a phase retrieval method via minimization of the difference of the trace and Frobenius norms, which we call PhaseLiftOff. The associated least squares minimization with this penalty as regularization is equivalent to the original rank-one least squares problem under a mild condition on the measurement noise. Numerical results show that PhaseLiftOff outperforms the convex PhaseLift and its non-convex variant (log-determinant regularization), and successfully recovers signals near the theoretical lower limit on the number of measurements in the noiseless case.
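A quick way to see why the ℓ1-2 metric promotes sparsity: ‖x‖₁ = ‖x‖₂ exactly when x has at most one nonzero entry, so the penalty vanishes on 1-sparse vectors and grows as the mass spreads across more coordinates. A minimal check (the helper name is ours, chosen for illustration):

```python
import math

# l1 - l2 penalty: zero on 1-sparse vectors, positive otherwise, and larger
# the more evenly the mass is spread out, which is why minimizing it drives
# solutions toward sparsity more aggressively than l1 alone.

def l1_minus_l2(x):
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return l1 - l2
```

For example, a 1-sparse vector like [3, 0, 0] has penalty 0, while [1, 1, 0] has penalty 2 − √2 ≈ 0.59, and spreading the same mass further only increases it.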


Ultra-Dense Networks


Author: Haijun Zhang

Publisher: Cambridge University Press

Published: 2020-11-26

Total Pages: 335

ISBN-13: 1108497934


Understand the theory, key technologies and applications of UDNs with this authoritative survey.


Handbook of Robust Low-Rank and Sparse Matrix Decomposition


Author: Thierry Bouwmans

Publisher: CRC Press

Published: 2016-09-20

Total Pages: 510

ISBN-13: 1315353539


Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.


Handbook of Robust Low-Rank and Sparse Matrix Decomposition


Author: Thierry Bouwmans

Publisher: CRC Press

Published: 2016-05-27

Total Pages: 553

ISBN-13: 1498724639


Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.


Compressed Sensing and Its Applications


Author: Holger Boche

Publisher: Birkhäuser

Published: 2019-08-13

Total Pages: 305

ISBN-13: 3319730746


The chapters in this volume highlight the state of the art of compressed sensing and are based on talks given at the third international MATHEON conference on the same topic, held December 4-8, 2017 at the Technical University of Berlin. In addition to methods in compressed sensing, chapters provide insights into cutting-edge applications of deep learning in data science, highlighting the overlapping ideas and methods that connect the fields of compressed sensing and deep learning. Specific topics covered include: quantized compressed sensing; classification; machine learning; oracle inequalities; non-convex optimization; image reconstruction; and statistical learning theory. This volume will be a valuable resource for graduate students and researchers in mathematics, computer science, and engineering, as well as other applied scientists exploring potential applications of compressed sensing.


Nonconvex Matrix Completion


Author: Ji Chen


Published: 2020




Techniques of matrix completion aim to impute a large portion of missing entries in a data matrix from a small portion of observed ones, with broad machine learning applications including collaborative filtering, system identification, and global positioning. This dissertation analyzes the nonconvex matrix completion problem from geometric and algorithmic perspectives. The first part of the dissertation (Chapters 2 and 3) focuses on the geometric perspective. Geometric analysis has been conducted in the past few years on various low-rank recovery problems, including phase retrieval, matrix factorization, and matrix completion. Taking matrix completion as an example, under assumptions on the underlying matrix and the sampling rate, all local minima of the nonconvex objective function were shown to be global minima, i.e., nonconvex optimization can recover the underlying matrix exactly. In Chapter 2, we propose a model-free framework for nonconvex matrix completion: we characterize how well local-minimum based low-rank factorization approximates the underlying matrix without any assumption on it. As an implication, a corollary of our main theorem improves the state-of-the-art sampling rate required for nonconvex matrix completion to rule out spurious local minima. In practice, additional structure is usually employed to improve the accuracy of matrix completion. Examples include subspace constraints formed by side information in collaborative filtering, and skew symmetry in pairwise ranking. Chapter 3 performs a unified geometric analysis of nonconvex matrix completion with linearly parameterized factorization, which covers the aforementioned examples as special cases. Uniform upper bounds on estimation errors are established for all local minima, provided assumptions on the sampling rate and the underlying matrix are satisfied.
The second part of the dissertation (Chapter 4) focuses on algorithmic analysis of nonconvex matrix completion. Row-wise projection/regularization has become a widely adopted assumption due to its convenience for analysis, though it has been observed to be unnecessary in numerical simulations. Recently, the gap between theory and practice has been closed for positive semidefinite matrix completion via so-called leave-one-out analysis. In Chapter 4, we extend the leave-one-out analysis to the rectangular case and, more significantly, improve the sampling rate required for convergence guarantees.
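The factorized view of matrix completion analyzed here can be sketched on a toy rank-1 instance (the data, initialization, step size, and iteration count are illustrative assumptions, not taken from the dissertation): fit u vᵀ to the observed entries by gradient descent in the factor space, then use the fit to impute the missing entry.

```python
# Toy sketch of nonconvex matrix completion in the factor space: fit a
# rank-1 factorization u v^T to observed entries of M by gradient descent,
# then impute the unobserved entry. All data here is illustrative.

M = [[1.0, -1.0, 2.0],
     [2.0, -2.0, 4.0]]                  # true rank-1 matrix: [1,2] x [1,-1,2]
omega = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)]   # observed; (1, 2) missing

u, v = [1.0, 1.0], [1.0, 1.0, 1.0]      # initialization of the two factors
step = 0.01
for _ in range(10000):
    gu, gv = [0.0, 0.0], [0.0, 0.0, 0.0]
    for i, j in omega:
        r = u[i] * v[j] - M[i][j]       # residual on an observed entry
        gu[i] += 2.0 * r * v[j]
        gv[j] += 2.0 * r * u[i]
    u = [u[k] - step * gu[k] for k in range(2)]
    v = [v[k] - step * gv[k] for k in range(3)]

imputed = u[1] * v[2]                   # prediction for the unobserved entry
```

On this instance the training loss over the observed entries is driven to zero and the missing entry is imputed correctly, mirroring (in miniature) the exact-recovery guarantees the dissertation establishes under sampling-rate and incoherence-type assumptions.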


Computer Vision – ECCV 2020


Author: Andrea Vedaldi

Publisher: Springer Nature

Published: 2020-11-18

Total Pages: 829

ISBN-13: 3030585832


The 30-volume set, comprising LNCS volumes 12346 through 12375, constitutes the refereed proceedings of the 16th European Conference on Computer Vision, ECCV 2020, which was planned to be held in Glasgow, UK, during August 23-28, 2020. The conference was held virtually due to the COVID-19 pandemic. The 1360 revised papers presented in these proceedings were carefully reviewed and selected from a total of 5025 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.