Optimization is an important tool in decision science and in the analysis of the physical systems studied in engineering. One can trace its roots to the Calculus of Variations and the work of Euler and Lagrange. This natural and reasonable approach to mathematical programming covers numerical methods for finite-dimensional optimization problems. It begins with very simple ideas and progresses through more complicated concepts, concentrating on methods for both unconstrained and constrained optimization.
Papers from a workshop held at Cornell University in October 1989 and sponsored by Cornell's Mathematical Sciences Institute.
This book presents tools and methods for large-scale and distributed optimization. Because many methods in "Big Data" fields rely on solving large-scale optimization problems, often in distributed fashion, this topic has become increasingly important over the last decade. In addition to specific coverage of this active research field, the book serves as a valuable source of information for practitioners and theoreticians alike. Large-Scale and Distributed Optimization is a unique combination of contributions from leading experts in the field, who were speakers at the LCCC Focus Period on Large-Scale and Distributed Optimization, held in Lund, 14th–16th June 2017. A source of information and innovative ideas for current and future research, this book will appeal to researchers, academics, and students who are interested in large-scale optimization.
An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
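As a concrete illustration of one theme listed above, the sketch below (not taken from the book) shows a proximal gradient step for l1-regularized least squares, the lasso problem minimize ||Ax - b||^2/2 + lam*||x||_1; it combines a gradient step on the smooth loss with the soft-thresholding proximal operator of the l1 penalty. The problem data, dimensions, and parameter choices are purely illustrative assumptions.

```python
import numpy as np

# A minimal sketch of a proximal gradient (ISTA-style) iteration for the lasso:
#   minimize ||Ax - b||^2 / 2 + lam * ||x||_1.
# Random problem data, illustrative only.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
lam = 0.5

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part, L = sigma_max(A)^2
x = np.zeros(8)
for _ in range(300):
    # Gradient step on the smooth term, then the prox of the l1 term.
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

print(x)  # a sparse solution: the l1 prox drives many entries exactly to zero
```

The splitting here, handling the smooth and nonsmooth terms separately, is exactly the kind of proximal/splitting technique the blurb refers to.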
This book reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, currently the forefront of much research. Individual chapters, contributed by eminent authorities, provide an up-to-date overview of the field from different and complementary standpoints, including theoretical analysis, algorithmic development, implementation issues, and applications.
This book provides an introduction to the mathematical theory of optimization. It emphasizes the convergence theory of nonlinear optimization algorithms and applications of nonlinear optimization to combinatorial optimization. Mathematical Theory of Optimization includes recent developments in global convergence, the Powell conjecture, semidefinite programming, and relaxation techniques for designing approximate solutions to combinatorial optimization problems.
On February 15–17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA, and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications, and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, the Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored the attendance of thirteen graduate students from universities in the United States and abroad. Accurate modeling of scientific problems often leads to the formulation of large-scale optimization problems involving thousands of continuous and/or discrete variables. Large-scale optimization has seen a dramatic increase in activity in the past decade, a natural consequence of new algorithmic developments and of the increased power of computers. For example, decomposition ideas proposed by G. Dantzig and P. Wolfe in the 1960s are now implementable in distributed processing systems, and today many optimization codes have been implemented on parallel machines.
The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 150 illustrations and an extended bibliography. It will benefit students, scientists and researchers in applied mathematics. Engineers and computer scientists working in imaging will also find this handbook useful.
LANCELOT is a software package for solving large-scale nonlinear optimization problems. This book is our attempt to provide a coherent overview of the package and its use, including details of how one might present examples to the package, how the algorithm tries to solve these examples, and various technical issues which may be useful to implementors of the software. We hope this book will be of use to both researchers and practitioners in nonlinear programming. Although the book is primarily concerned with a specific optimization package, the issues discussed have much wider implications for the design and implementation of large-scale optimization algorithms. In particular, the book contains a proposal for a standard input format for large-scale optimization problems. This proposal is at the heart of the interface between a user's problem and the LANCELOT optimization package. Furthermore, a large collection of over five hundred test examples has already been written in this format and will shortly be available to those who wish to use them. We would like to thank the many people and organizations who supported us in our enterprise. We first acknowledge the support provided by our employers, namely the Facultés Universitaires Notre-Dame de la Paix (Namur, Belgium), Harwell Laboratory (UK), IBM Corporation (USA), Rutherford Appleton Laboratory (UK), and the University of Waterloo (Canada). We are grateful for the support we obtained from NSERC (Canada), NATO, and AMOCO (UK).
The primary goal of this book is to provide a self-contained, comprehensive study of the main first-order methods that are frequently used in solving large-scale problems. First-order methods exploit information on values and gradients/subgradients (but not Hessians) of the functions composing the model under consideration. With the increase in the number of applications that can be modeled as large or even huge-scale optimization problems, there has been a revived interest in simple methods that require low iteration cost as well as low memory storage. The author has gathered, reorganized, and synthesized (in a unified manner) many results that are currently scattered throughout the literature, many of which cannot typically be found in optimization books. First-Order Methods in Optimization offers a comprehensive study of first-order methods with theoretical foundations; provides plentiful examples and illustrations; emphasizes rates of convergence and complexity analysis of the main first-order methods used to solve large-scale problems; and covers both variable and functional decomposition methods.
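To make the "values and gradients only" idea concrete, here is a minimal sketch of the simplest first-order method, gradient descent with a fixed 1/L step size on a least-squares objective. The data, dimensions, and iteration count are illustrative assumptions, not taken from the book.

```python
import numpy as np

# A minimal first-order method: gradient descent on f(x) = ||Ax - b||^2.
# Only function gradients are used, never Hessians. Random data, illustrative only.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def grad(x):
    # Gradient of ||Ax - b||^2 is 2 A^T (Ax - b).
    return 2.0 * A.T @ (A @ x - b)

# Fixed step size 1/L, where L = 2 * sigma_max(A)^2 is the Lipschitz constant
# of the gradient; this is the standard choice for smooth problems.
L = 2.0 * np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(500):
    x = x - grad(x) / L

print(np.linalg.norm(A @ x - b))  # residual after 500 iterations
```

Each iteration costs only two matrix-vector products and stores a single vector, which is precisely the low per-iteration cost and low memory footprint that makes first-order methods attractive at large scale; rate-of-convergence analyses of schemes like this one are the book's central subject.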