Sample-Efficient Nonconvex Optimization Algorithms in Machine Learning and Reinforcement Learning

Author: Pan Xu

Publisher:

Published: 2021

Total Pages: 246

ISBN-13:

Machine learning and reinforcement learning have achieved tremendous success in solving problems in various real-world applications. Many modern learning problems boil down to a nonconvex optimization problem, where the objective function is the average or the expectation of some loss function over a finite or infinite dataset. Solving such nonconvex optimization problems, in general, can be NP-hard. Thus one often tackles such a problem through incremental steps based on the nature and the goal of the problem: finding a first-order stationary point, finding a second-order stationary point (or a local optimum), and finding a global optimum. With the size and complexity of machine learning datasets rapidly increasing, it has become a fundamental challenge to design efficient and scalable learning algorithms that improve accuracy while keeping the number of samples required low. Though many algorithms based on stochastic gradient descent have been developed and widely studied, both theoretically and empirically, for nonconvex optimization, it has remained an open problem whether we can achieve the optimal sample complexity for finding first-order stationary points and local optima. In this thesis, we start with the stochastic nested variance reduced gradient (SNVRG) algorithm, which is developed based on stochastic gradient descent methods and variance reduction techniques. We prove that SNVRG achieves a near-optimal convergence rate among algorithms of its type for finding a first-order stationary point of a nonconvex function. We further build algorithms that efficiently find a local optimum of a nonconvex objective function by examining the curvature information at the stationary point found by SNVRG.
With the ultimate goal of finding the global optimum in nonconvex optimization, we then provide a unified framework to analyze the global convergence of stochastic gradient Langevin dynamics-based algorithms for a nonconvex objective function. In the second part of this thesis, we generalize the aforementioned sample-efficient stochastic nonconvex optimization methods to reinforcement learning problems, including policy gradient, actor-critic, and Q-learning. For these problems, we propose novel algorithms and prove that they enjoy state-of-the-art theoretical guarantees on the sample complexity. The works presented in this thesis form an incomplete collection of the recent advances and developments of sample-efficient nonconvex optimization algorithms for both machine learning and reinforcement learning.
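The variance-reduction idea behind SNVRG can be illustrated with its simpler single-loop precursor, SVRG: anchor each stochastic gradient to a periodically recomputed full gradient, so the estimator stays unbiased while its variance shrinks as the iterates approach the snapshot. Below is a minimal sketch on a toy finite-sum least-squares problem (this is generic SVRG, not the thesis's nested algorithm; all names and parameter values are illustrative):

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.01, epochs=50):
    """SVRG: stochastic gradients anchored to a periodic full gradient.
    grad_i(x, i) returns the gradient of the i-th loss term at x."""
    rng = np.random.default_rng(0)
    x = x0.astype(float)
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot anchors the variance-reduced estimate.
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(n):
            i = rng.integers(n)
            # Unbiased estimate whose variance vanishes as x -> snapshot -> x*.
            g = grad_i(x, i) - grad_i(snapshot, i) + full_grad
            x -= step * g
    return x

# Toy finite-sum problem: f(x) = (1/n) * sum_i (a_i @ x - b_i)**2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
grad_term = lambda x, i: 2.0 * (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad_term, np.zeros(5), n=50)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

On this convex toy problem the iterates converge to the least-squares solution; the nonconvex analysis in the thesis controls the same variance term to reach a stationary point instead.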


First-order and Stochastic Optimization Methods for Machine Learning

Author: Guanghui Lan

Publisher: Springer Nature

Published: 2020-05-15

Total Pages: 591

ISBN-13: 3030395685

This book covers not only foundational material but also the most recent progress made over the past few years in the area of machine learning algorithms. Despite intensive research and development in this area, there has been no systematic treatment introducing the fundamental concepts and recent progress on machine learning algorithms, especially those based on stochastic optimization methods, randomized algorithms, nonconvex optimization, distributed and online learning, and projection-free methods. This book will benefit a broad audience in the machine learning, artificial intelligence, and mathematical programming communities by presenting these recent developments in a tutorial style, starting from the basic building blocks and progressing to the most carefully designed and complicated algorithms for machine learning.


Efficient Second-order Methods for Machine Learning

Author: Peng Xu

Publisher:

Published: 2018

Total Pages:

ISBN-13:

Due to the large-scale nature of many modern machine learning applications, including but not limited to deep learning problems, much effort has gone into studying and developing efficient optimization algorithms. Most of these are first-order methods, which use only gradient information. The conventional wisdom in the machine learning community has been that second-order methods, which use Hessian information, are inappropriate because they cannot be made efficient at scale. In this thesis, we consider second-order optimization methods: we develop new sub-sampled Newton-type algorithms for both convex and non-convex optimization problems; we prove that they are efficient and scalable; and we provide a detailed empirical evaluation of their scalability and usefulness. In the convex setting, we present a sub-sampled Newton-type algorithm (SSN) that exploits non-uniform subsampling of the Hessian as well as inexact updates to reduce the computational complexity. Theoretically, we show that our algorithms achieve a linear-quadratic convergence rate, and empirically we demonstrate the efficiency of our methods on several real datasets. In addition, we extend our methods to a distributed setting and propose a distributed Newton-type method, the Globally Improved Approximate NewTon method (GIANT). Theoretically, we show that GIANT is highly communication-efficient compared with existing distributed optimization algorithms; empirically, we demonstrate its scalability and efficiency in Spark. In the non-convex setting, we consider two classic non-convex Newton-type methods: the Trust Region method (TR) and the Cubic Regularization method (CR). We relax the Hessian approximation condition assumed in existing work on using inexact Hessians in these algorithms, and show that under the relaxed condition the worst-case iteration complexities to converge to an approximate second-order stationary point are retained for both methods.
Using the same subsampling idea as in SSN, we present sub-sampled TR and CR methods along with the sampling complexities needed to achieve the Hessian approximation condition. To understand the empirical performance of these methods, we conduct an extensive empirical study on several non-convex machine learning problems and showcase the efficiency and robustness of these Newton-type methods under various settings.
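The sub-sampling idea can be sketched in a few lines: keep the full gradient but estimate the Hessian from a random subset of data points, so each Newton step costs far less than forming the exact Hessian. A minimal illustration on ridge-regularized least squares follows (a generic sketch in the spirit of SSN, not the thesis's algorithm; it uses uniform rather than non-uniform sampling, and all parameter choices are illustrative):

```python
import numpy as np

def subsampled_newton(A, b, lam=0.1, sample=30, iters=15, seed=0):
    """Newton-type iteration with the Hessian estimated from a row subsample.
    Objective: f(x) = (1/n)||Ax - b||^2 + lam * ||x||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        # Exact gradient of the full objective.
        grad = (2.0 / n) * A.T @ (A @ x - b) + 2.0 * lam * x
        idx = rng.choice(n, size=sample, replace=False)
        # Hessian approximated from `sample` rows instead of all n.
        H = (2.0 / sample) * A[idx].T @ A[idx] + 2.0 * lam * np.eye(d)
        x -= np.linalg.solve(H, grad)
    return x

rng = np.random.default_rng(2)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
x_hat = subsampled_newton(A, b)
# Exact minimizer of the same ridge objective, for comparison.
x_star = np.linalg.solve((2.0 / 100) * A.T @ A + 2 * 0.1 * np.eye(5),
                         (2.0 / 100) * A.T @ b)
```

As long as the sampled Hessian stays spectrally close to the true one, each step contracts the error, which is the mechanism behind the linear-quadratic rates discussed above.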


Non-convex Optimization for Machine Learning

Author: Prateek Jain

Publisher: Foundations and Trends in Machine Learning

Published: 2017-12-04

Total Pages: 218

ISBN-13: 9781680833683

Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. The monograph is as self-contained as possible without losing focus on its main topic of non-convex optimization techniques. It opens with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts, and concludes with a look at four interesting applications in machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. For each of the topics discussed, the monograph also contains exercises and figures designed to engage the reader, along with extensive bibliographic notes pointing to classical works and recent advances. Non-convex Optimization for Machine Learning can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning; it is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course. Courses in machine learning, optimization, and signal processing may all benefit from the inclusion of such topics.


Non-convex Optimization in Machine Learning

Author: Majid Janzamin

Publisher:

Published: 2016

Total Pages: 351

ISBN-13: 9781339835105

In the last decade, machine learning algorithms have been substantially developed and have gained tremendous empirical success, but there is limited theoretical understanding of this success. Most real learning problems can be formulated as non-convex optimization problems, which are difficult to analyze due to the existence of several locally optimal solutions. In this dissertation, we provide simple and efficient algorithms for learning certain probabilistic models, with provable guarantees on performance. We particularly focus on analyzing tensor methods, which entail non-convex optimization, and our main focus is on challenging overcomplete models. Although many existing approaches for learning probabilistic models fail in the challenging overcomplete regime, we provide scalable algorithms for learning such models with low computational and statistical complexity.

In probabilistic modeling, the underlying structure that describes the observed variables can be represented by latent variables. In overcomplete models, these hidden underlying structures live in a higher dimension than that of the observed variables. A wide range of applications, such as speech and images, are well described by overcomplete models. In this dissertation, we propose and analyze overcomplete tensor decomposition methods and exploit them for learning several latent representations and latent variable models in the unsupervised setting. These include models such as the multiview mixture model, Gaussian mixtures, independent component analysis, and sparse coding (dictionary learning). Since latent variables are not observed, latent variable modeling and the characterization of latent representations also raise the issue of identifiability; we propose sufficient conditions for the identifiability of overcomplete topic models.
In addition to the unsupervised setting, we adapt the tensor techniques to the supervised setting for learning neural networks and mixtures of generalized linear models.
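The workhorse behind many such tensor methods is the tensor power update, which generalizes matrix power iteration to symmetric third-order tensors: repeatedly map x to T(I, x, x) and renormalize, recovering one component of an orthogonally decomposable tensor. A minimal sketch is below (orthogonal, noiseless case only; the dissertation's overcomplete setting requires substantially more machinery, and this code is purely illustrative):

```python
import numpy as np

def tensor_power_iteration(T, iters=100, seed=0):
    """Recover one component of a symmetric 3rd-order tensor via the update
    x <- T(I, x, x) / ||T(I, x, x)||."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)   # contraction T(I, x, x)
        x = y / np.linalg.norm(y)
    # Recovered eigenvalue (component weight) for the converged direction.
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return x, lam

# Orthogonal rank-2 tensor with weights 3 and 1 on basis directions e1, e2.
e1, e2 = np.eye(4)[0], np.eye(4)[1]
T = 3.0 * np.einsum('i,j,k->ijk', e1, e1, e1) \
  + 1.0 * np.einsum('i,j,k->ijk', e2, e2, e2)
x, lam = tensor_power_iteration(T)
```

For orthogonal tensors the iteration converges (at a quadratic rate) to the component favored by the random initialization; further components are found by deflating T and repeating.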


Algorithms for Reinforcement Learning

Author: Csaba Szepesvári

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 89

ISBN-13: 3031015517

Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system; thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, note a large number of state-of-the-art algorithms, and discuss their theoretical properties and limitations. Table of Contents: Markov Decision Processes / Value Prediction Problems / Control / For Further Exploration
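The dynamic-programming core such algorithms build on is the Bellman optimality backup; value iteration simply applies it until the values stop changing. A minimal sketch on a hypothetical two-state MDP (illustrative code, not from the book):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP. P[a, s, s'] are transition
    probabilities, R[a, s] expected rewards. Returns optimal state
    values and the greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s, a) = R(a, s) + gamma * E[V(s')].
        Q = R + gamma * P @ V          # shape (n_actions, n_states)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Two-state MDP: action 0 stays put (reward 0); action 1 moves to the
# other state (reward 1 from state 0, reward 0 from state 1).
P = np.array([[[1., 0.], [0., 1.]],
              [[0., 1.], [1., 0.]]])
R = np.array([[0., 0.],
              [1., 0.]])
V, pi = value_iteration(P, R)
```

Here the optimal policy bounces between the two states to collect the reward repeatedly, giving V(0) = 1/(1 - 0.81) and V(1) = 0.9 V(0); the learning algorithms cataloged in the book estimate these same quantities from sampled transitions instead of a known model.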


Efficient Reinforcement Learning Using Gaussian Processes

Author: Marc Peter Deisenroth

Publisher: KIT Scientific Publishing

Published: 2010

Total Pages: 226

ISBN-13: 3866445695

This book examines Gaussian processes in both model-based reinforcement learning (RL) and inference in nonlinear dynamic systems. First, we introduce PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available. PILCO takes model uncertainties consistently into account during long-term planning to reduce model bias. Second, we propose principled algorithms for robust filtering and smoothing in GP dynamic systems.


Evaluation Complexity of Algorithms for Nonconvex Optimization

Author: Coralia Cartis

Publisher: SIAM

Published: 2022-07-06

Total Pages: 549

ISBN-13: 1611976995

A popular way to assess the “effort” needed to solve a problem is to count how many evaluations of the problem functions (and their derivatives) are required; in many cases, this is the dominating computational cost. Given an optimization problem satisfying reasonable assumptions, and given access to problem-function values and derivatives of various degrees, how many evaluations might be required to approximately solve the problem? Evaluation Complexity of Algorithms for Nonconvex Optimization: Theory, Computation, and Perspectives addresses this question for nonconvex optimization problems, those that may have local minimizers and appear most often in practice. This is the first book on complexity to cover topics such as composite and constrained optimization, derivative-free optimization, subproblem solution, and optimal (lower and sharpness) bounds for nonconvex problems. It is also the first to address the disadvantages of traditional optimality measures, to propose useful surrogates leading to algorithms that compute approximate high-order critical points, and to compare traditional and new methods, highlighting the advantages of the latter from a complexity point of view. This is the go-to book for those interested in solving nonconvex optimization problems. It is suitable for advanced undergraduate and graduate students in courses on advanced numerical analysis, data science, numerical optimization, and approximation theory.
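The bookkeeping behind evaluation complexity is easy to make concrete: wrap the objective in an oracle that counts every function and gradient call, then ask how many calls a method needs to reach an approximate stationary point. A small illustrative sketch (gradient descent on a toy nonconvex function; everything here is hypothetical and not taken from the book):

```python
class CountingOracle:
    """Wraps an objective so every function/gradient evaluation is counted."""
    def __init__(self, f, grad):
        self.f, self.grad = f, grad
        self.f_evals = 0
        self.g_evals = 0

    def value(self, x):
        self.f_evals += 1
        return self.f(x)

    def gradient(self, x):
        self.g_evals += 1
        return self.grad(x)

def gradient_descent(oracle, x0, step=0.01, eps=1e-4, max_iter=100000):
    """Run GD until |gradient| < eps; the oracle's counters then give the
    evaluation complexity of this particular run."""
    x = x0
    for _ in range(max_iter):
        g = oracle.gradient(x)
        if abs(g) < eps:
            break
        x -= step * g
    return x

# Toy nonconvex objective x^4 - 2x^2 with minimizers at x = -1 and x = +1.
oracle = CountingOracle(f=lambda x: x**4 - 2 * x**2,
                        grad=lambda x: 4 * x**3 - 4 * x)
x = gradient_descent(oracle, x0=2.0)
```

Complexity theory of the kind the book develops bounds such counters in the worst case, as a function of the target accuracy eps and the problem class.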


Optimization for Machine Learning

Author: Suvrit Sra

Publisher: MIT Press

Published: 2012

Total Pages: 509

ISBN-13: 026201646X

An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.