Nonconvex Matrix Completion

Author: Ji Chen

Published: 2020

Techniques of matrix completion aim to impute a large portion of missing entries in a data matrix from a small portion of observed ones, with broad machine learning applications including collaborative filtering, system identification, and global positioning. This dissertation analyzes the nonconvex matrix completion problem from geometric and algorithmic perspectives. The first part of the dissertation, i.e., Chapters 2 and 3, analyzes nonconvex matrix completion from the geometric perspective. In recent years, geometric analysis has been conducted on various low-rank recovery problems, including phase retrieval, matrix factorization, and matrix completion. Taking matrix completion as an example: under assumptions on the underlying matrix and the sampling rate, all local minima of the nonconvex objective function were shown to be global minima, i.e., nonconvex optimization recovers the underlying matrix exactly. In Chapter 2, we propose a model-free framework for nonconvex matrix completion: we characterize how well local-minimum-based low-rank factorization approximates the underlying matrix without any assumption on it. As an implication, a corollary of our main theorem improves the state-of-the-art sampling rate required for nonconvex matrix completion to rule out spurious local minima. In practice, additional structure is often exploited to improve the accuracy of matrix completion; examples include subspace constraints formed by side information in collaborative filtering, and skew-symmetry in pairwise ranking. Chapter 3 performs a unified geometric analysis of nonconvex matrix completion with linearly parameterized factorization, which covers the aforementioned examples as special cases. Uniform upper bounds on estimation errors are established for all local minima, provided assumptions on the sampling rate and the underlying matrix are satisfied.
The second part of the dissertation (Chapter 4) focuses on algorithmic analysis of nonconvex matrix completion. Row-wise projection/regularization has become a widely adopted assumption due to its convenience for analysis, though numerical simulations suggest it is unnecessary. Recently, this gap between theory and practice was closed for positive semidefinite matrix completion via so-called leave-one-out analysis. In Chapter 4, we extend the leave-one-out analysis to the rectangular case and, more significantly, improve the sampling rate required for a convergence guarantee.
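The factored (low-rank) formulation underlying this line of work can be illustrated with a minimal numerical sketch: gradient descent on the squared loss over observed entries, started from a spectral initialization. This is a generic toy example, not the dissertation's algorithms; the dimensions, sampling rate, step size, and iteration count below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 60, 2, 0.5                          # matrix size, rank, sampling rate

# Ground-truth rank-r matrix and a Bernoulli(p) observation mask
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p

# Spectral initialization from the zero-filled, rescaled observations
U0, s0, V0t = np.linalg.svd(mask * M / p, full_matrices=False)
U = U0[:, :r] * np.sqrt(s0[:r])
V = V0t[:r].T * np.sqrt(s0[:r])

# Gradient descent on f(U, V) = ||P_Omega(U V^T - M)||_F^2
step = 0.2 / s0[0]
for _ in range(1000):
    R = mask * (U @ V.T - M)                  # residual on observed entries only
    U, V = U - step * (R @ V), V - step * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

With roughly half the entries observed and a well-conditioned rank-2 target, the iterates typically recover the full matrix to small relative error, consistent with the benign-landscape results the dissertation studies.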


Handbook of Robust Low-Rank and Sparse Matrix Decomposition

Author: Thierry Bouwmans

Publisher: CRC Press

Published: 2016-05-27

Total Pages: 553

ISBN-13: 1498724639

Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.
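As a toy illustration of the low-rank-plus-sparse idea at the heart of the book (not one of its actual algorithms), a simple alternating scheme can separate a synthetic rank-one "background" from sparse "foreground" spikes. The matrix size, spike magnitude, and threshold below are arbitrary assumptions chosen so the two components are well separated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
L_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 "background"
S_true = np.zeros((n, n))
S_true.flat[rng.choice(n * n, 25, replace=False)] = 50.0           # sparse "foreground" spikes
M = L_true + S_true

# Alternate: hard-threshold the residual to get the sparse part,
# then truncate the remainder to rank 1 to get the low-rank part
L = np.zeros_like(M)
for _ in range(10):
    R = M - L
    S = np.where(np.abs(R) > 25.0, R, 0.0)
    U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = sv[0] * np.outer(U[:, 0], Vt[0])

rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

In video-surveillance terms, L plays the role of the static background and S the moving foreground; the book's chapters develop far more robust versions of this decomposition.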


Non-negative Matrix Factorization Techniques

Author: Ganesh R. Naik

Publisher: Springer

Published: 2015-09-25

Total Pages: 200

ISBN-13: 3662483319

This book collects new results, concepts, and further developments of NMF. The open problems discussed include, for example, NMF and its extensions applied in bioinformatics to gene expression, sequence analysis, the functional characterization of genes, clustering, and text mining. Research results previously scattered across different scientific journals and conference proceedings are methodically collected and presented in a unified form. While readers can read the book chapters sequentially, each chapter is also self-contained. This book can serve as a good reference work for researchers and engineers interested in NMF, and can also be used as a handbook for students and professionals seeking to gain a better understanding of the latest applications of NMF.
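As a quick reminder of what NMF computes, the classic Lee-Seung multiplicative updates for the Frobenius-norm objective can be sketched in a few lines. This is a generic textbook sketch, not code from the book; the dimensions and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 30, 5
V = rng.random((m, k)) @ rng.random((k, n))    # nonnegative data with exact rank-k structure

# Lee-Seung multiplicative updates for min ||V - WH||_F^2 subject to W, H >= 0
W = rng.random((m, k))
H = rng.random((k, n))
eps = 1e-9                                      # guard against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates multiply nonnegative quantities, W and H stay elementwise nonnegative automatically, which is the defining constraint of NMF.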


Matrix Methods in Data Mining and Pattern Recognition

Author: Lars Elden

Publisher: SIAM

Published: 2007-07-12

Total Pages: 226

ISBN-13: 0898716268

Several very powerful numerical linear algebra techniques are available for solving problems in data mining and pattern recognition. This application-oriented book describes how modern matrix methods can be used to solve these problems, gives an introduction to matrix theory and decompositions, and provides students with a set of tools that can be modified for a particular application.

Matrix Methods in Data Mining and Pattern Recognition is divided into three parts. Part I gives a short introduction to a few application areas before presenting linear algebra concepts and matrix decompositions that students can use in problem-solving environments such as MATLAB®. Some mathematical proofs that emphasize the existence and properties of the matrix decompositions are included. In Part II, linear algebra techniques are applied to data mining problems. Part III is a brief introduction to eigenvalue and singular value algorithms. The applications discussed by the author are: classification of handwritten digits, text mining, text summarization, PageRank computations related to the Google search engine, and face recognition. Exercises and computer assignments are available on a Web page that supplements the book.

Audience: The book is intended for undergraduate students who have previously taken an introductory scientific computing/numerical analysis course. Graduate students in various data mining and pattern recognition areas who need an introduction to linear algebra techniques will also find the book useful.

Contents: Preface; Part I: Linear Algebra Concepts and Matrix Decompositions. Chapter 1: Vectors and Matrices in Data Mining and Pattern Recognition; Chapter 2: Vectors and Matrices; Chapter 3: Linear Systems and Least Squares; Chapter 4: Orthogonality; Chapter 5: QR Decomposition; Chapter 6: Singular Value Decomposition; Chapter 7: Reduced-Rank Least Squares Models; Chapter 8: Tensor Decomposition; Chapter 9: Clustering and Nonnegative Matrix Factorization; Part II: Data Mining Applications. Chapter 10: Classification of Handwritten Digits; Chapter 11: Text Mining; Chapter 12: Page Ranking for a Web Search Engine; Chapter 13: Automatic Key Word and Key Sentence Extraction; Chapter 14: Face Recognition Using Tensor SVD. Part III: Computing the Matrix Decompositions. Chapter 15: Computing Eigenvalues and Singular Values; Bibliography; Index.
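The central tool of the book's Part I, the singular value decomposition, obeys the Eckart-Young theorem: truncating the SVD gives the best rank-k approximation, with spectral-norm error equal to the first discarded singular value. This is easy to verify numerically; the sketch below is a generic illustration, not code from the book, with arbitrary matrix dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
k = 5

# Truncated SVD: the best rank-k approximation of A in both spectral and Frobenius norm
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

spec_err = np.linalg.norm(A - A_k, 2)      # equals s[k], the (k+1)-st singular value
fro_err = np.linalg.norm(A - A_k, "fro")   # equals sqrt(s[k]^2 + ... + s[-1]^2)
```

This identity is what makes truncated SVD the workhorse for the book's applications such as reduced-rank least squares and face recognition.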


Non-convex Optimization for Machine Learning

Author: Prateek Jain

Publisher: Foundations and Trends in Machine Learning

Published: 2017-12-04

Total Pages: 218

ISBN-13: 9781680833683

Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. The monograph is as self-contained as possible without losing focus on its main topic of non-convex optimization techniques. It opens with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts, and concludes with a look at four interesting applications in machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. For each of the topics discussed, the monograph also contains exercises and figures designed to engage the reader, as well as extensive bibliographic notes pointing toward classical works and recent advances. Non-convex Optimization for Machine Learning can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning; it is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course. Courses in machine learning, optimization, and signal processing may benefit from the inclusion of such topics.
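One of the applications mentioned, sparse recovery, admits a simple non-convex procedure: iterative hard thresholding, which alternates a gradient step on the least-squares loss with projection onto the set of s-sparse vectors. The sketch below is a generic illustration with arbitrary problem sizes and step size, not the monograph's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 100, 60, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix with ~unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                 # noiseless measurements

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

# Iterative hard thresholding: gradient step on ||y - Ax||^2, then project to s-sparse
x = np.zeros(n)
step = 0.5
for _ in range(500):
    x = hard_threshold(x + step * A.T @ (y - A @ x), s)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The projection step is non-convex (the set of s-sparse vectors is a union of subspaces), yet with enough well-conditioned measurements the iterates typically converge to the true sparse signal, which is the kind of guarantee the monograph develops rigorously.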


Generalized Low Rank Models

Author: Madeleine Udell

Published: 2015

Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. This dissertation extends the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.
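The core computational pattern in such models, alternating minimization over the two low-rank factors, can be sketched for the simplest quadratically regularized case, where each subproblem is a ridge regression solved in closed form. This generic sketch uses arbitrary dimensions and regularization strength and is not code from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, gamma = 40, 25, 3, 0.1
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))   # low-rank data table

def objective(X, Y):
    return (np.linalg.norm(A - X @ Y) ** 2
            + gamma * (np.linalg.norm(X) ** 2 + np.linalg.norm(Y) ** 2))

# Alternating ridge regressions: each subproblem is convex and solved exactly,
# so the overall objective is nonincreasing across iterations
X = rng.standard_normal((m, k))
Y = rng.standard_normal((k, n))
obj_start = objective(X, Y)
for _ in range(50):
    X = A @ Y.T @ np.linalg.inv(Y @ Y.T + gamma * np.eye(k))    # solve for X with Y fixed
    Y = np.linalg.inv(X.T @ X + gamma * np.eye(k)) @ X.T @ A    # solve for Y with X fixed
obj_end = objective(X, Y)

rel_err = np.linalg.norm(A - X @ Y) / np.linalg.norm(A)
```

Generalized low rank models swap the quadratic loss and regularizers for losses suited to Boolean, categorical, or ordinal columns, but the alternating structure stays the same.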


The Birth of Numerical Analysis

Author: Adhemar Bultheel

Publisher: World Scientific

Published: 2010

Total Pages: 240

ISBN-13: 9812836268

The 1947 paper by John von Neumann and Herman Goldstine, "Numerical Inverting of Matrices of High Order" (Bulletin of the AMS, Nov. 1947), is considered the birth certificate of numerical analysis. Since its publication, the evolution of this domain has been enormous. This book is a unique collection of contributions by researchers who have lived through this evolution, testifying about their personal experiences and sketching the evolution of their respective subdomains since the early years. Contents: Some Pioneers of Extrapolation Methods (C Brezinski); Very Basic Multidimensional Extrapolation Quadrature (J N Lyness); Numerical Methods for Ordinary Differential Equations: Early Days (J C Butcher); Interview with Herbert Bishop Keller (H M Osinga); A Personal Perspective on the History of the Numerical Analysis of Fredholm Integral Equations of the Second Kind (K Atkinson); Memoirs on Building a General Purpose Numerical Algorithms Library (B Ford); Recent Trends in High Performance Computing (J J Dongarra et al.); Nonnegativity Constraints in Numerical Analysis (D-H Chen & R J Plemmons); On Nonlinear Optimization Since 1959 (M J D Powell); The History and Development of Numerical Analysis in Scotland: A Personal Perspective (G Alistair Watson); Remembering Philip Rabinowitz (P J Davis & A S Fraenkel); My Early Experiences with Scientific Computation (P J Davis); Applications of Chebyshev Polynomials: From Theoretical Kinematics to Practical Computations (R Piessens). Readership: Mathematicians in numerical analysis and mathematicians who are interested in the history of mathematics.


Optimality Guarantees for Non-convex Low Rank Matrix Recovery Problems

Author: Christopher Dale White

Published: 2015

Total Pages: 196

Low rank matrices lie at the heart of many techniques in scientific computing and machine learning. In this thesis, we examine various scenarios in which we seek to recover an underlying low rank matrix from compressed or noisy measurements. Specifically, we consider the recovery of a rank-r positive semidefinite matrix XX^T ∈ R^(n×n) from m scalar measurements of the form [mathematical equation] via minimization of the natural ℓ2 loss function [mathematical equation]; we also analyze the quadratic nonnegative matrix factorization (QNMF) approach to clustering, where the matrix to be factorized is the transition matrix for a reversible Markov chain. In all of these instances, the optimization problem we wish to solve has many local optima and is highly non-convex. Instead of analyzing convex relaxations, which tend to be complicated and computationally expensive, we operate directly on the natural non-convex problems and prove both local and global optimality guarantees for a family of algorithms.
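The first setting described, recovering XX^T from linear measurements by gradient descent on a factored ℓ2 loss, can be sketched as a generic matrix-sensing toy. The dimensions, measurement count, measurement ensemble, step size, and iteration count below are all arbitrary assumptions for illustration; this is not the thesis's algorithm or measurement model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 1000
X_true = rng.standard_normal((n, r))
M_true = X_true @ X_true.T                        # rank-r PSD target

# Symmetric Gaussian measurement matrices A_i and measurements y_i = <A_i, X X^T>
G = rng.standard_normal((m, n, n))
As = (G + G.transpose(0, 2, 1)) / 2
y = np.einsum("ijk,jk->i", As, M_true)

# Spectral initialization: top-r eigenpairs of (1/m) sum_i y_i A_i (~ M_true in expectation)
Y_mat = np.einsum("i,ijk->jk", y, As) / m
w, V = np.linalg.eigh(Y_mat)
X = V[:, -r:] * np.sqrt(np.maximum(w[-r:], 0.0))

# Gradient descent on the l2 loss f(X) = (1/m) sum_i (<A_i, X X^T> - y_i)^2
step = 0.05 / w[-1]
for _ in range(800):
    resid = np.einsum("ijk,jk->i", As, X @ X.T) - y
    X -= step * (4.0 / m) * np.einsum("i,ijk->jk", resid, As) @ X

rel_err = np.linalg.norm(X @ X.T - M_true) / np.linalg.norm(M_true)
```

Even though the factored loss is non-convex (X is only determined up to rotation), with enough measurements the iterates typically converge to the global optimum, which is the flavor of guarantee the thesis establishes.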