Specialists working in the areas of optimization, mathematical programming, or control theory will find this book invaluable for studying interior-point methods for linear and quadratic programming, polynomial-time methods for nonlinear convex programming, and efficient computational methods for control problems and variational inequalities. A background in linear algebra and mathematical programming is necessary to understand the book. The detailed proofs and lack of "numerical examples" might suggest that the book is of limited value to the reader interested in the practical aspects of convex optimization, but nothing could be further from the truth. An entire chapter is devoted to potential reduction methods precisely because of their great efficiency in practice.
In the past decade, primal-dual algorithms have emerged as the most important and useful algorithms from the interior-point class. This book presents the major primal-dual algorithms for linear programming in straightforward terms. It gives a thorough description of the theoretical properties of these methods, together with a discussion of practical and computational aspects and a summary of current software. This is an excellent, timely, and well-written work. The major primal-dual algorithms covered in this book are path-following algorithms (short- and long-step, predictor-corrector), potential-reduction algorithms, and infeasible-interior-point algorithms. A unified treatment of superlinear convergence, finite termination, and detection of infeasible problems is presented. Issues relevant to practical implementation are also discussed, including sparse linear algebra and a complete specification of Mehrotra's predictor-corrector algorithm. Also treated are extensions of primal-dual algorithms to more general problems such as monotone complementarity, semidefinite programming, and general convex programming problems.
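To give a flavor of the path-following framework these algorithms share, the sketch below (Python with NumPy, not taken from the book) performs one damped Newton step of a basic primal-dual method for a standard-form LP. It is a simplified illustration, not Mehrotra's predictor-corrector algorithm; the centering parameter, the dense KKT solve, and the step-length rule are illustrative assumptions.

```python
# A minimal sketch of one Newton step of a primal-dual path-following method
# for the LP  min c'x  s.t.  Ax = b, x >= 0,  with dual  A'y + s = c, s >= 0.
import numpy as np

def primal_dual_step(A, b, c, x, y, s, sigma=0.1):
    """One centering Newton step toward the target duality measure sigma*mu."""
    m, n = A.shape
    mu = x @ s / n                      # current duality measure
    rb = b - A @ x                      # primal residual
    rc = c - A.T @ y - s                # dual residual
    rxs = sigma * mu - x * s            # complementarity residual

    # Assemble and solve the full Newton (KKT) system; production codes instead
    # reduce this to the normal equations and use sparse Cholesky factorization.
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([rc, rb, rxs])
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

    # Step length keeping x and s strictly positive (fraction-to-boundary rule).
    alpha = 0.9995 * min(
        1.0,
        (-x[dx < 0] / dx[dx < 0]).min() if np.any(dx < 0) else 1.0,
        (-s[ds < 0] / ds[ds < 0]).min() if np.any(ds < 0) else 1.0,
    )
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```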
Optimization is an important tool in decision science and in the analysis of physical systems in engineering. Its roots can be traced to the calculus of variations and the work of Euler and Lagrange. This book takes a natural and reasonable approach to mathematical programming, covering numerical methods for finite-dimensional optimization problems. It begins with very simple ideas and progresses through more complicated concepts, concentrating on methods for both unconstrained and constrained optimization.
In model predictive control (MPC), an optimization problem has to be solved at each time step, which in real-time applications makes it important to solve these problems efficiently and to have good upper bounds on the worst-case solution time. For linear MPC problems, the optimization problem in question is often a quadratic program (QP) that depends on parameters such as system states and reference signals. A popular class of methods for solving such QPs is active-set methods, in which a sequence of linear systems of equations is solved. The primary contribution of this thesis is a method that determines which sequence of subproblems a popular class of such active-set algorithms needs to solve, for every possible QP instance that might arise from a given linear MPC problem (i.e., for every possible state and reference signal). Knowing these sequences yields worst-case bounds on the number of iterations and floating-point operations, and ultimately on the maximum solution time, that these active-set algorithms require to compute a solution, which is important when, e.g., linear MPC is used in safety-critical applications. After establishing this complexity certification method, its applicability is extended by showing how it can be used indirectly to certify the complexity of another, efficient type of active-set QP algorithm that reformulates the QP as a nonnegative least-squares problem. Finally, the proposed complexity certification method is extended further to situations in which enhancements of the active-set algorithms are used, namely when they are terminated early (to save computations) and when outer proximal-point iterations are performed (to improve numerical stability).
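To illustrate the kind of iteration whose worst-case behavior is being certified, the sketch below (Python with NumPy) shows a textbook primal active-set method for a dense, strictly convex QP, where each iteration solves one KKT linear system over the current working set. It is a generic illustration under the assumption of a feasible starting point, not the specific algorithm class analyzed in the thesis.

```python
# A minimal sketch of a primal active-set method for the dense QP
#   min 0.5 x'Hx + f'x   s.t.  Ax <= b   (H positive definite),
# highlighting the "sequence of linear systems over working sets" structure.
import numpy as np

def active_set_qp(H, f, A, b, x, max_iter=100, tol=1e-9):
    # x is assumed feasible; the initial working set holds the constraints active at x.
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < tol]
    for _ in range(max_iter):
        Aw = A[W] if W else np.zeros((0, len(x)))
        # KKT system of the equality-constrained subproblem on the working set.
        K = np.block([[H, Aw.T], [Aw, np.zeros((len(W), len(W)))]])
        rhs = np.concatenate([-(H @ x + f), np.zeros(len(W))])
        sol = np.linalg.solve(K, rhs)
        p, lam = sol[:len(x)], sol[len(x):]
        if np.linalg.norm(p) < tol:                  # no progress possible on W
            if len(lam) == 0 or lam.min() >= -tol:   # all multipliers nonnegative: optimal
                return x, W
            W.pop(int(np.argmin(lam)))               # drop the most negative multiplier
        else:
            # Step to the nearest blocking constraint outside W (or take the full step).
            alpha, block = 1.0, None
            for i in set(range(len(b))) - set(W):
                denom = A[i] @ p
                if denom > tol:
                    a = (b[i] - A[i] @ x) / denom
                    if a < alpha:
                        alpha, block = a, i
            x = x + alpha * p
            if block is not None:
                W.append(block)
    return x, W
```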
This introductory textbook adopts a practical and intuitive approach, rather than emphasizing mathematical rigor. Computationally oriented books in this area generally either present algorithms alone, expecting readers to perform computations by hand, or rely on programs written in traditional computer languages such as Basic, Fortran, or Pascal. This book, on the other hand, is the first text to use Mathematica to develop a thorough understanding of optimization algorithms, fully exploiting Mathematica's symbolic, numerical, and graphic capabilities.
Surveys the theory and history of the alternating direction method of multipliers, and discusses its applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others.
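For concreteness, the sketch below (Python with NumPy, not taken from the survey) applies ADMM to the lasso, one of the applications mentioned; the penalty parameter rho and the fixed iteration count are simple illustrative choices rather than the survey's recommendations.

```python
# A minimal sketch of ADMM for the lasso problem
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# using the standard splitting x = z with a scaled dual variable u.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor (A'A + rho*I) once; each x-update then needs only triangular solves.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
        z = soft_threshold(x + u, lam / rho)                               # z-update (l1 prox)
        u = u + x - z                                                      # scaled dual update
    return z
```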
Analyses Lagrange multiplier theory and demonstrates its impact on the development of numerical algorithms for variational problems in function spaces.
In the late forties, Mathematical Programming became a scientific discipline in its own right. Since then it has experienced tremendous growth. Beginning with economic and military applications, it is now among the most important fields of applied mathematics, with extensive use in engineering, the natural sciences, economics, and the biological sciences. The lively activity in this area is demonstrated by the fact that as early as 1949 the first "Symposium on Mathematical Programming" took place in Chicago. Since then, mathematical programmers from all over the world have gathered at the international symposia of the Mathematical Programming Society roughly every three years to present their recent research, to exchange ideas with their colleagues, and to learn about the latest developments in their own and related fields. In 1982, the XI. International Symposium on Mathematical Programming was held at the University of Bonn, West Germany, from August 23 to 27. It was organized by the Institut für Ökonometrie und Operations Research of the University of Bonn in collaboration with the Sonderforschungsbereich 21 of the Deutsche Forschungsgemeinschaft. This volume constitutes part of the outgrowth of this symposium and documents its scientific activities. Part I of the book contains information about the symposium: welcoming addresses, lists of committees and sponsors, and a brief review of the Fulkerson Prize and the Dantzig Prize, which were awarded during the opening ceremony.