"Contains numerous simple examples and illustrative diagrams....For anyone seeking information about eigenvalue inclusion theorems, this book will be a great reference." --Mathematical Reviews This book studies the original results, and their extensions, of the Russian mathematician S.A. Geršgorin who wrote a seminal paper in 1931 on how to easily obtain estimates of all n eigenvalues (characteristic values) of any given n-by-n complex matrix.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets, and downloadable MATLAB code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.
Handbook of Sinc Numerical Methods presents an ideal road map for handling general numeric problems. Reflecting the author's advances with Sinc since 1995, the text most notably provides a detailed exposition of the Sinc separation of variables method for numerically solving the full range of partial differential equations (PDEs) of interest to scientists and engineers.
The first in-depth, complete, and unified theoretical discussion of the two most important classes of algorithms for solving matrix eigenvalue problems: QR-like algorithms for dense problems and Krylov subspace methods for sparse problems. The author discusses the theory of the generic GR algorithm, including special cases (for example, QR, SR, HR), and the development of Krylov subspace methods. This book also addresses a generic Krylov process and the Arnoldi and various Lanczos algorithms, which are obtained as special cases. Theoretical and computational exercises guide students, step by step, to the results. Downloadable MATLAB programs, compiled by the author, are available on a supplementary Web site. Readers of this book are expected to be familiar with the basic ideas of linear algebra and to have had some experience with matrix computations. Ideal for graduate students, or as a reference book for researchers and users of eigenvalue codes.
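To make the kind of generic Krylov process described above concrete, here is a minimal Arnoldi iteration sketch in Python/NumPy. It is an illustrative reimplementation of the standard algorithm under names chosen here, not the author's downloadable MATLAB programs:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis Q of the Krylov subspace
    span{b, A b, ..., A^(k-1) b} and the (k+1) x k upper Hessenberg
    matrix H satisfying A @ Q[:, :k] = Q @ H (barring early breakdown)."""
    n = b.shape[0]
    Q = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = np.vdot(Q[:, i], w)
            w = w - H[i, j] * Q[:, i]
        beta = np.linalg.norm(w)
        H[j + 1, j] = beta
        if beta < 1e-12:                    # invariant subspace found; stop early
            return Q[:, : j + 1], H[: j + 2, : j + 1]
        Q[:, j + 1] = w / beta
    return Q, H

# The eigenvalues of the leading k x k block of H (the Ritz values)
# approximate well-separated, extremal eigenvalues of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
Q, H = arnoldi(A, rng.standard_normal(200), 30)
ritz_values = np.linalg.eigvals(H[:-1, :])
```

When A is Hermitian, H becomes tridiagonal and the long recurrence collapses to a three-term one, which is the specialization that yields the Lanczos algorithms mentioned above.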
The essential reference book on matrices, now fully updated and expanded with new material on scalar and vector mathematics. Since its initial publication, this book has become the essential reference for users of matrices in all branches of engineering, science, and applied mathematics. In this revised and expanded edition, Dennis Bernstein combines extensive material on scalar and vector mathematics with the latest results in matrix theory to make this the most comprehensive, current, and easy-to-use book on the subject. Each chapter describes relevant theoretical background followed by specialized results. Hundreds of identities, inequalities, and facts are stated clearly and rigorously, with cross-references, citations to the literature, and helpful comments. Beginning with preliminaries on sets, logic, relations, and functions, this unique compendium covers all the major topics in matrix theory, such as transformations and decompositions, polynomial matrices, generalized inverses, and norms. Additional topics include graphs, groups, convex functions, polynomials, and linear systems. The book also features a wealth of new material on scalar inequalities, geometry, combinatorics, series, integrals, and more. Now more comprehensive than ever, Scalar, Vector, and Matrix Mathematics includes a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page references, and an exhaustive subject index. Fully updated and expanded with new material on scalar and vector mathematics, this edition covers the latest results in matrix theory, provides a list of symbols and a summary of conventions for easy and precise use, and includes an extensive bibliography with back-referencing plus an author index.
This IMA Volume in Mathematics and its Applications, COMBINATORIAL AND GRAPH-THEORETICAL PROBLEMS IN LINEAR ALGEBRA, is based on the proceedings of a workshop that was an integral part of the 1991-92 IMA program on "Applied Linear Algebra." We are grateful to Richard Brualdi, George Cybenko, Alan George, Gene Golub, Mitchell Luskin, and Paul Van Dooren for planning and implementing the year-long program. We especially thank Richard Brualdi, Shmuel Friedland, and Victor Klee for organizing this workshop and editing the proceedings. The financial support of the National Science Foundation made the workshop possible. Avner Friedman and Willard Miller, Jr. PREFACE: The 1991-1992 program of the Institute for Mathematics and its Applications (IMA) was Applied Linear Algebra. As part of this program, a workshop on Combinatorial and Graph-theoretical Problems in Linear Algebra was held on November 11-15, 1991. The purpose of the workshop was to bring together in an informal setting the diverse group of people who work on problems in linear algebra and matrix theory in which combinatorial or graph-theoretic analysis is a major component. Many of the participants of the workshop enjoyed the hospitality of the IMA for the entire fall quarter, in which the emphasis was discrete matrix analysis.
Each chapter in this book describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly with cross-references, citations to the literature, and illuminating remarks.
In the recently published book of E. Bodewig, Matrix Calculus, some results of the earlier parts of this paper are mentioned. It is stated there that they are of theoretical interest but have no practical value. In this paper it will be shown that they can easily be used for practical computations.