This book is a revised version of the first edition, which is regarded as a classic in its field. The revision incorporates newer research results in some places and, in others, adds new material to the chapters in the form of up-to-date references and recent theorems that give readers new directions to pursue.
Iterative Solution of Large Linear Systems describes the systematic development of a substantial portion of the theory of iterative methods for solving large linear systems, with emphasis on practical techniques. The focal point of the book is an analysis of the convergence properties of the successive overrelaxation (SOR) method as applied to a linear system whose matrix is "consistently ordered". Comprising 18 chapters, this volume begins by showing how the solution of a certain partial differential equation by finite difference methods leads to a large linear system with a sparse matrix. The next chapter reviews matrix theory and the properties of matrices, and states several theorems of matrix theory without proof. A number of iterative methods, including the SOR method, are then considered. Convergence theorems are also given for various iterative methods under certain assumptions on the matrix A of the system. Subsequent chapters deal with the eigenvalues of the SOR method for consistently ordered matrices, the optimum relaxation factor, nonstationary linear iterative methods, and semi-iterative methods. This book will be of interest to students and practitioners in the fields of computer science and applied mathematics.
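To make the method at the center of this book concrete, here is a minimal, purely illustrative sketch of the component-wise SOR sweep (not code from the book); the test matrix, the relaxation factor omega, and the stopping tolerance are arbitrary choices for demonstration.

```python
import numpy as np

def sor(A, b, omega=1.1, tol=1e-10, max_iter=10_000):
    """One possible successive overrelaxation (SOR) sweep for Ax = b."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Off-diagonal part of row i: new values to the left, old values to the right.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

# Small symmetric, diagonally dominant test system (chosen only for illustration).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor(A, b))
```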
Iterative Methods for Linear Systems offers a mathematically rigorous introduction to fundamental iterative methods for systems of linear algebraic equations. The book distinguishes itself from other texts on the topic by providing a straightforward yet comprehensive analysis of the Krylov subspace methods, approaching the development and analysis of algorithms from various algorithmic and mathematical perspectives, and going beyond the standard description of iterative methods by connecting them in a natural way to the idea of preconditioning.
C. F. Gauss, in a letter of Dec. 26, 1823 to Gerling: "I recommend this method to you for imitation. You will hardly ever again eliminate directly, at least not when you have more than two unknowns. The indirect procedure can be carried out while half asleep, or while thinking about other things." [C. F. Gauss: Werke vol. 9, Göttingen, p. 280, 1903]

What difference exists between solving large and small systems of equations? The standard methods well known to any student of linear algebra are applicable to all systems, whether large or small. The necessary amount of work, however, increases dramatically with the size, so one has to search for algorithms that most efficiently and accurately solve systems of 1000, 10,000, or even one million equations. The choice of algorithms depends on the special properties the matrices encountered in practice have. An important class of large systems arises from the discretisation of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeros) and well suited to iterative algorithms. Because of the background in partial differential equations, this book is closely connected with the author's Theory and Numerical Treatment of Elliptic Differential Equations, whose English translation has also been published by Springer-Verlag. This book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics.
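As a purely illustrative sketch of the preface's point (not taken from the book), the following assembles the standard one-dimensional Poisson matrix by finite differences, shows that it is sparse, and applies some Gauss-Seidel sweeps, the kind of "indirect" procedure Gauss's letter praises; the grid size and sweep count are arbitrary choices.

```python
import numpy as np
import scipy.sparse as sp

# Finite-difference discretization of -u'' = f on (0, 1) with zero boundary values:
# n interior points yield an n x n tridiagonal (hence very sparse) matrix.
n = 20
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr") / h**2
f = np.ones(n)

# Gauss-Seidel sweeps: update each unknown in turn using the latest available values.
x = np.zeros(n)
diag = A.diagonal()
for _ in range(500):
    for i in range(n):
        start, end = A.indptr[i], A.indptr[i + 1]
        cols, vals = A.indices[start:end], A.data[start:end]
        sigma = vals @ x[cols] - diag[i] * x[i]   # off-diagonal part of row i
        x[i] = (f[i] - sigma) / diag[i]

print("nonzeros per row:", A.nnz / n)
print("residual norm:", np.linalg.norm(A @ x - f))
```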
This comprehensive book is presented in two parts; the first part introduces the basics of matrix analysis necessary for matrix computations, and the second part presents representative methods and the corresponding theories in matrix computations. Among the key features of the book are the extensive exercises at the end of each chapter. Matrix Analysis and Computations provides readers with the matrix theory necessary for matrix computations, especially for direct and iterative methods for solving systems of linear equations. It includes systematic methods and rigorous theory on matrix splitting iteration methods and Krylov subspace iteration methods, as well as current results on preconditioning and iterative methods for solving standard and generalized saddle-point linear systems. This book can be used as a textbook for graduate students as well as a self-study tool and reference for researchers and engineers interested in matrix analysis and matrix computations. It is appropriate for courses in numerical analysis, numerical optimization, data science, and approximation theory, among other topics.
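Since the description highlights Krylov subspace iteration methods, a generic textbook sketch of one representative, the conjugate gradient method for symmetric positive definite systems, is shown below; this is an illustrative formulation, not code from the book, and the small test system is arbitrary.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Unpreconditioned conjugate gradient (a Krylov subspace method) for SPD A."""
    n = len(b)
    max_iter = max_iter if max_iter is not None else n
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # exact solution is [1/11, 7/11]
```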
Iterative methods use successive approximations to obtain more accurate solutions. This book gives an introduction to iterative methods and preconditioning for solving discretized elliptic partial differential equations and optimal control problems governed by the Laplace equation, for which the use of matrix-free procedures is crucial. All methods are explained and analyzed starting from the historical ideas of the inventors, which are often quoted from their seminal works. Iterative Methods and Preconditioners for Systems of Linear Equations grew out of a set of lecture notes that were improved and enriched over time, resulting in a clear focus for the teaching methodology, which derives complete convergence estimates for all methods, illustrates and provides MATLAB codes for all methods, and studies and tests all preconditioners first as stationary iterative solvers. This textbook is appropriate for undergraduate and graduate students who want an overview or deeper understanding of iterative methods. Its focus on both analysis and numerical experiments allows the material to be taught with very little preparation, since all the arguments are self-contained, and makes it appropriate for self-study as well. It can be used in courses on iterative methods, Krylov methods and preconditioners, and numerical optimal control. Scientists and engineers interested in new topics and applications will also find the text useful.
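As the description notes, the book first studies each preconditioner as a stationary iterative solver. The following is a minimal, purely illustrative sketch of that viewpoint (written in Python rather than the book's MATLAB, and not the book's code): a preconditioned Richardson-type stationary iteration, where `M_solve` is a hypothetical callback that applies the preconditioner, here a simple Jacobi (diagonal) choice.

```python
import numpy as np

def stationary_iteration(A, b, M_solve, max_iter=500, tol=1e-10):
    """Stationary iteration x_{k+1} = x_k + M^{-1} (b - A x_k);
    M_solve(r) applies the action of the preconditioner's inverse to a residual."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x += M_solve(r)
    return x

# Jacobi (diagonal) preconditioner used as a stationary solver; test data is arbitrary.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
jacobi = lambda r: r / np.diag(A)
print(stationary_iteration(A, b, jacobi))
```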