The era of interior point methods (IPMs) was initiated by N. Karmarkar’s 1984 paper, which triggered an intense wave of research and reshaped almost all areas of optimization theory and computational practice. This book offers comprehensive coverage of IPMs, detailing the main results of more than a decade of IPM research. Numerous exercises are provided to aid in understanding the material.
The approach to linear optimization (LO) in this book is new in many respects; in particular, the IPM-based development of duality theory is surprisingly elegant. The algorithmic parts of the book contain a complete discussion of many algorithmic variants, including predictor-corrector methods, partial updating, higher-order methods, and sensitivity and parametric analysis.
In the past decade, primal-dual algorithms have emerged as the most important and useful algorithms from the interior-point class. This book presents the major primal-dual algorithms for linear programming in straightforward terms: path-following algorithms (short- and long-step, predictor-corrector), potential-reduction algorithms, and infeasible-interior-point algorithms. A thorough description of the theoretical properties of these methods is given, along with a discussion of practical and computational aspects and a summary of current software. A unified treatment of superlinear convergence, finite termination, and detection of infeasible problems is presented. Issues relevant to practical implementation are also discussed, including sparse linear algebra and a complete specification of Mehrotra's predictor-corrector algorithm. Also treated are extensions of primal-dual algorithms to more general problems such as monotone complementarity, semidefinite programming, and general convex programming. This is an excellent, timely, and well-written work.
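To make the common core of these primal-dual methods concrete, here is an illustrative sketch in generic notation (not taken from the book's own development). For the standard-form primal-dual pair

\[
\text{(P)}\ \min_x\ c^T x\ \ \text{s.t. } Ax = b,\ x \ge 0,
\qquad
\text{(D)}\ \max_{y,s}\ b^T y\ \ \text{s.t. } A^T y + s = c,\ s \ge 0,
\]

a feasible path-following iteration chooses a centering parameter $\sigma \in [0,1]$, sets the duality measure $\mu = x^T s / n$, and computes the Newton direction $(\Delta x, \Delta y, \Delta s)$ from

\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^T & I \\ S & 0 & X \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \sigma \mu e - X S e \end{pmatrix},
\]

where $X = \operatorname{diag}(x)$, $S = \operatorname{diag}(s)$, and $e = (1,\dots,1)^T$; the iterate is then updated with a step length that keeps $x$ and $s$ strictly positive. Short-step, long-step, predictor-corrector, and infeasible variants differ mainly in how $\sigma$, the step length, and the residuals are handled.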
This book describes the rapidly developing field of interior point methods (IPMs). An extensive analysis is given of path-following methods for linear programming, quadratic programming and convex programming. These methods, which form a subclass of interior point methods, follow the central path, an analytic curve defined by the problem. Relatively simple and elegant proofs of polynomiality are given, and the theory is illustrated with several explicit examples. Moreover, an overview of other classes of IPMs is given, showing that all of them rely on the same notion as the path-following methods: they use the central path, implicitly or explicitly, as a reference path to the optimum. The book is intended for specialists in IPMs as well as those seeking an introduction to the field, and is accessible to any mathematician with basic mathematical programming knowledge.
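For orientation, the central path itself can be described in generic notation (an illustrative sketch, not the book's formulation, and assuming strictly feasible primal and dual points exist). For the linear program $\min\{c^T x : Ax = b,\ x \ge 0\}$ it is the curve

\[
x(\mu) = \arg\min_x \Big\{\, c^T x - \mu \sum_{i=1}^n \ln x_i \ :\ Ax = b,\ x > 0 \,\Big\}, \qquad \mu > 0,
\]

or, equivalently, the set of solutions $(x(\mu), y(\mu), s(\mu))$ of the perturbed optimality conditions $Ax = b$, $A^T y + s = c$, $x_i s_i = \mu$ for all $i$, with $x, s > 0$. As $\mu \downarrow 0$ the path converges to an optimal solution, and path-following methods track it approximately while driving $\mu$ to zero.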
Operations research and mathematical programming would not be as advanced today without the many advances in interior point methods during the last decade. These methods can now solve, very efficiently and robustly, the large-scale linear, nonlinear and combinatorial optimization problems that arise in various practical applications. The main ideas underlying interior point methods have influenced virtually all areas of mathematical programming, including the analysis and solution of linear and nonlinear programming problems, sensitivity analysis, complexity analysis, the analysis of Newton's method, decomposition methods, and polynomial approximation for combinatorial problems. This book covers the implications of interior techniques for the entire field of mathematical programming, bringing together many results in a uniform and coherent way. For the topics mentioned above, the book provides theoretical as well as computational results, explains the intuition behind the main ideas, gives examples as well as proofs, and contains an extensive, up-to-date bibliography. Audience: The book is intended for students, researchers and practitioners with a background in operations research, mathematics, mathematical programming, or statistics.
This book provides practitioners, as well as students of this general methodology, with an easily accessible introduction to the class of algorithms known as interior-point methods for linear programming.
Here is a book devoted to well-structured and thus efficiently solvable convex optimization problems, with emphasis on conic quadratic and semidefinite programming. The authors present the basic theory underlying these problems as well as their numerous applications in engineering, including synthesis of filters, Lyapunov stability analysis, and structural design. They also discuss complexity issues and provide an overview of the basic theory of state-of-the-art polynomial-time interior point methods for linear, conic quadratic, and semidefinite programming. The book's focus on well-structured convex problems in conic form allows for a unified theoretical and algorithmic treatment of a wide spectrum of important optimization problems arising in applications.
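The "conic form" emphasized here can be illustrated generically (a sketch in standard notation, not necessarily the authors' own): one writes

\[
\min_x\ c^T x \quad \text{s.t. } Ax = b,\ x \in K,
\]

where $K$ is a closed convex cone. Choosing $K$ to be the nonnegative orthant gives linear programming, a product of second-order (Lorentz) cones $\{(t,u) : t \ge \|u\|_2\}$ gives conic quadratic programming, and the cone of symmetric positive semidefinite matrices gives semidefinite programming; this shared structure is what permits the unified treatment described above.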
Specialists working in the areas of optimization, mathematical programming, or control theory will find this book invaluable for studying interior-point methods for linear and quadratic programming, polynomial-time methods for nonlinear convex programming, and efficient computational methods for control problems and variational inequalities. A background in linear algebra and mathematical programming is necessary to understand the book. The detailed proofs and lack of "numerical examples" might suggest that the book is of limited value to the reader interested in the practical aspects of convex optimization, but nothing could be further from the truth. An entire chapter is devoted to potential reduction methods precisely because of their great efficiency in practice.
The starting point of this volume was a conference entitled "Progress in Mathematical Programming," held at the Asilomar Conference Center in Pacific Grove, California, March 1-4, 1987. The main topic of the conference was developments in the theory and practice of linear programming since Karmarkar's algorithm. There were thirty presentations and approximately fifty people attended. Presentations included new algorithms, new analyses of algorithms, reports on computational experience, and some other topics related to the practice of mathematical programming. Interestingly, most of the progress reported at the conference was on the theoretical side. Several new polynomial algorithms for linear programming were presented (Barnes-Chopra-Jensen, Goldfarb-Mehrotra, Gonzaga, Kojima-Mizuno-Yoshise, Renegar, Todd, Vaidya, and Ye). Other algorithms presented were by Betke-Gritzmann, Blum, Gill-Murray-Saunders-Wright, Nazareth, Vial, and Zikan-Cottle. Efforts in the theoretical analysis of algorithms were also reported (Anstreicher, Bayer-Lagarias, Imai, Lagarias, Megiddo-Shub, Lagarias, Smale, and Vanderbei). Computational experiences were reported by Lustig, Tomlin, Todd, Tone, Ye, and Zikan-Cottle. Of special interest, although not in the main direction discussed at the conference, was the report by Rinaldi on the practical solution of some large traveling salesman problems. At the time of the conference, it was still not clear whether the new algorithms developed since Karmarkar's algorithm would replace the simplex method in practice. Alan Hoffman presented results on conditions under which linear programming problems can be solved by greedy algorithms.
The book is an introductory textbook mainly for students of computer science and mathematics. Our guiding phrase is "what every theoretical computer scientist should know about linear programming". A major focus is on applications of linear programming, both in practice and in theory. The book is concise, but at the same time the main results are covered with complete proofs and in sufficient detail, ready for presentation in class. The only prerequisite is basic linear algebra, which is summarized in an appendix. One of its main goals is to help the reader see linear programming "behind the scenes".