Distributionally Robust Learning

Author: Ruidi Chen

Published: 2020

Total Pages: 252

ISBN-13: 9781680837735

This monograph provides insight into a technique that has attracted considerable recent interest: developing robust supervised learning solutions founded on sound mathematical principles. It will be enlightening for researchers, practitioners, and students working on the optimization of machine learning systems.


Wasserstein Distributionally Robust Learning

Author: Soroosh Shafieezadeh-Abadeh

Published: 2020

Total Pages: 195

Author keywords: distributionally robust optimization; Wasserstein distance; regularization; supervised learning; inverse optimization; Kalman filter; Frank-Wolfe algorithm.


Robust Optimization

Author: Aharon Ben-Tal

Publisher: Princeton University Press

Published: 2009-08-10

Total Pages: 565

ISBN-13: 1400831059

Robust optimization is still a relatively new approach to optimization problems affected by uncertainty, but it has already proved so useful in real applications that it is difficult to tackle such problems today without considering this powerful methodology. Written by the principal developers of robust optimization, and describing the main achievements of a decade of research, this is the first book to provide a comprehensive and up-to-date account of the subject. Robust optimization is designed to meet some major challenges associated with uncertainty-affected optimization problems: to operate under lack of full information on the nature of uncertainty; to model the problem in a form that can be solved efficiently; and to provide guarantees about the performance of the solution. The book starts with a relatively simple treatment of uncertain linear programming, proceeding with a deep analysis of the interconnections between the construction of appropriate uncertainty sets and the classical chance constraints (probabilistic) approach. It then develops the robust optimization theory for uncertain conic quadratic and semidefinite optimization problems and dynamic (multistage) problems. The theory is supported by numerous examples and computational illustrations. An essential book for anyone working on optimization and decision making under uncertainty, Robust Optimization also makes an ideal graduate textbook on the subject.
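
The robust-counterpart idea at the heart of the book can be illustrated with the simplest case it opens with: an uncertain linear constraint. The sketch below (illustrative numbers, not taken from the book) checks numerically that for box uncertainty the worst case of a @ x is attained at a corner of the box and equals the well-known closed form abar @ x + delta @ |x|.

```python
import itertools

import numpy as np

# Uncertain constraint a @ x <= b where each coefficient a_i is known only to
# lie in the box [abar_i - delta_i, abar_i + delta_i].
abar = np.array([1.0, -2.0, 0.5])
delta = np.array([0.3, 0.1, 0.2])
x = np.array([2.0, 1.0, -1.5])   # a candidate decision

# Robust counterpart: the worst case of a @ x over the box has the closed
# form abar @ x + delta @ |x|, so "a @ x <= b for every a in the box" becomes
# a single deterministic constraint.
worst_closed_form = abar @ x + delta @ np.abs(x)

# Brute-force check: the worst case is attained at one of the 2^3 corners.
worst_enumerated = max((abar + np.array(s) * delta) @ x
                       for s in itertools.product([-1.0, 1.0], repeat=3))

assert abs(worst_closed_form - worst_enumerated) < 1e-12
```

Feasibility of x against every realization in the box thus reduces to one tractable constraint; this is the pattern the book generalizes to conic quadratic and semidefinite problems.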


Data Analysis and Applications 3

Author: Andreas Makrides

Publisher: John Wiley & Sons

Published: 2020-03-31

Total Pages: 262

ISBN-13: 1119721822

Data analysis as an area of importance has grown exponentially, especially during the past couple of decades. This can be attributed to a rapidly growing computer industry and the wide applicability of computational techniques, in conjunction with new advances of analytic tools. This being the case, the need for literature that addresses this is self-evident. New publications are appearing, covering the need for information from all fields of science and engineering, thanks to the universal relevance of data analysis and statistics packages. This book is a collective work by a number of leading scientists, analysts, engineers, mathematicians and statisticians who have been working at the forefront of data analysis. The chapters included in this volume represent a cross-section of current concerns and research interests in these scientific areas. The material is divided into two parts: Computational Data Analysis, and Classification Data Analysis, with methods for both - providing the reader with both theoretical and applied information on data analysis methods, models and techniques and appropriate applications.


Reliable Machine Learning Via Distributional Robustness

Author: Hongseok Namkoong

Published: 2019

As machine learning systems are increasingly applied in high-stakes domains such as autonomous vehicles and medical diagnosis, it is imperative that they maintain good performance when deployed. Modeling assumptions rarely hold due to noisy inputs, shifts in environment, unmeasured confounders, and even adversarial attacks on the system. The standard machine learning paradigm that optimizes average performance is brittle to even small amounts of noise and exhibits poor performance on underrepresented minority groups. We study distributionally robust learning procedures that explicitly protect against potential shifts in the data-generating distribution. Instead of doing well just on average, distributionally robust methods learn models that do well on a range of scenarios different from the training distribution. In the first part of the thesis, we show that robustness to small perturbations in the data allows better generalization by optimally trading off approximation and estimation error. We show that robust solutions provide asymptotically exact confidence intervals and finite-sample guarantees for stochastic optimization problems. In the second part of the thesis, we focus on notions of distributional robustness that correspond to uniform performance across different subpopulations. We build procedures that balance tail performance alongside classical notions of average performance. To trade off these multiple goals optimally, we show fundamental trade-offs (lower bounds) and develop efficient procedures that achieve these limits (upper bounds). We then extend our formulation to study partial covariate shifts, where we are interested in marginal distributional shifts on a subset of the feature vector. We provide convex procedures for these robust formulations and characterize their non-asymptotic convergence properties.
In the final part of the thesis, we develop and analyze distributionally robust approaches using Wasserstein distances, which allow models to generalize to distributions whose support differs from that of the training distribution. We show that for smooth neural networks, our robust procedure guarantees performance under imperceptible adversarial perturbations. Extending such notions to protect against shifts in distributions defined on learned feature spaces, we show these models can also improve performance across unseen domains.
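
The "uniform performance across subpopulations" idea admits a minimal numerical sketch. The following is illustrative only (synthetic group losses and a chi-square uncertainty set, not the thesis's exact procedures): reweighting groups adversarially within a divergence ball interpolates between the average loss and the worst-group loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-subpopulation expected losses and empirical group frequencies
# (illustrative numbers, not taken from the thesis).
losses = np.array([0.10, 0.12, 0.55])   # the rare group does much worse
p = np.array([0.60, 0.35, 0.05])

avg = p @ losses                         # classical average-performance objective
var = p @ (losses - avg) ** 2

# Worst-case expected loss over the chi-square ball
#   { q : sum_i (q_i - p_i)^2 / p_i <= rho },
# which has the closed form avg + sqrt(rho * var) whenever the maximizing
# reweighting q stays nonnegative (it does for these numbers).
rho = 0.2
robust = avg + np.sqrt(rho * var)

# Monte Carlo sanity check: no feasible reweighting of the groups beats the
# closed form.
q = rng.dirichlet(np.ones(3), size=200_000)
feasible = (((q - p) ** 2) / p).sum(axis=1) <= rho
sampled_worst = (q[feasible] @ losses).max()

assert sampled_worst <= robust + 1e-12
```

Here the average loss hides the minority group's poor performance, while the robust objective charges the model for it, which is the trade-off the thesis quantifies with matching lower and upper bounds.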


Distributionally Robust Optimization and Its Applications in Machine Learning

Author: Yang Kang

Published: 2017

Optimal transport costs include as a special case the so-called Wasserstein distance, which is popular in various statistical applications. The use of optimal transport costs is advantageous relative to divergence-based formulations because the region of distributional uncertainty contains distributions that place mass outside the support of the empirical measure, thereby explaining why many machine learning algorithms have the ability to improve generalization. Moreover, the DRO representations that we use to unify the previously mentioned machine learning algorithms provide a clear interpretation of the so-called regularization parameter, which is known to play a crucial role in controlling generalization error. As we establish, the regularization parameter corresponds exactly to the size of the distributional uncertainty region. Another contribution of this dissertation is the development of statistical methodology to study data-driven DRO formulations based on optimal transport costs.
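
The correspondence between the radius of the uncertainty region and the regularization parameter can be checked numerically in a standard simple setting from this literature: absolute-loss linear regression with a type-1 Wasserstein ball under Euclidean transport cost on (x, y). The data below are synthetic, and this is a sketch of the known duality, not the dissertation's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data and a fixed candidate model beta.
n, d = 200, 3
X = rng.normal(size=(n, d))
beta = np.array([1.5, -0.7, 0.2])
y = X @ beta + rng.normal(scale=0.1, size=n)

emp = np.abs(y - X @ beta).mean()   # empirical absolute loss
eps = 0.05                          # radius of the Wasserstein ball

# Known duality: the worst-case expected loss over a type-1 Wasserstein ball
# of radius eps equals the empirical loss plus eps * ||(beta, -1)||_2, so the
# radius acts exactly like a regularization parameter.
dual_norm = np.sqrt(beta @ beta + 1.0)
closed_form = emp + eps * dual_norm

# The bound is attained by shifting every sample by eps along the direction
# (-beta, 1) / ||(beta, -1)||, signed to push its residual away from zero;
# the average transport cost of this shift is exactly eps.
signs = np.sign(y - X @ beta)
signs[signs == 0] = 1.0
dX = np.outer(signs, -beta) * (eps / dual_norm)
dy = signs * (eps / dual_norm)
attained = np.abs((y + dy) - (X + dX) @ beta).mean()

assert abs(attained - closed_form) < 1e-9
```

The worst-case loss is the empirical loss plus a norm penalty whose weight is the ball's radius, which is the "regularization parameter equals size of the uncertainty region" statement in miniature.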


Distributionally Robust Optimization and Its Applications in Mathematical Finance, Statistics, and Reinforcement Learning

Author: Zhengqing Zhou

Published: 2021

Distributionally robust optimization (DRO) is a zero-sum game between a decision-maker and an adversarial player. The decision-maker aims to minimize the expected loss, while the adversarial player wishes the loss to be maximized by replacing the underlying probability measure with another measure within a distributional uncertainty set. DRO has emerged as an important paradigm for machine learning, statistics, and operations research. DRO produces powerful insights in terms of statistical interpretability, performance guarantees, and parameter tuning. In this thesis, we apply DRO to three different topics: martingale optimal transport, convex regression, and offline reinforcement learning. We show how the DRO formulations/techniques improve the existing results in the literature.
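
The adversary's inner maximization in that zero-sum game can often be solved in closed form for divergence-based uncertainty sets. A small self-contained sketch (a KL-divergence ball with illustrative numbers, not any specific set used in the thesis): the adversary's optimal move is an exponential tilt of the empirical measure toward high losses, and its value matches the standard convex dual.

```python
import numpy as np

# Empirical distribution (uniform over 5 sample losses) and a KL budget.
p = np.full(5, 0.2)
loss = np.array([0.1, 0.2, 0.3, 0.9, 1.4])
delta = 0.1

def tilted(lam):
    """Adversary's exponential tilt q_i proportional to p_i * exp(loss_i / lam)."""
    w = p * np.exp(loss / lam)
    q = w / w.sum()
    kl = (q * np.log(q / p)).sum()
    return q, kl

lams = np.linspace(0.5, 5.0, 4001)

# Inner maximization (adversary): best reweighting within the KL ball,
# searched over the one-parameter family of tilts.
best = p @ loss
for lam in lams:
    q, kl = tilted(lam)
    if kl <= delta:
        best = max(best, q @ loss)

# Dual of the inner problem: min_{lam > 0} lam*log E_p[exp(loss/lam)] + lam*delta.
dual = min(lam * np.log(p @ np.exp(loss / lam)) + lam * delta for lam in lams)

assert best <= dual + 1e-9      # weak duality
assert dual - best < 1e-3       # strong duality, up to grid resolution
```

The reweighted measure concentrates on the worst samples exactly as far as the divergence budget allows, which is the adversarial player's half of the game described above.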


Decision and Game Theory for Security

Author: Quanyan Zhu

Publisher: Springer Nature

Published: 2020-12-21

Total Pages: 518

ISBN-13: 3030647935

This book constitutes the refereed proceedings of the 11th International Conference on Decision and Game Theory for Security, GameSec 2020, held in College Park, MD, USA, in October 2020. Due to the COVID-19 pandemic, the conference was held virtually. The 21 full papers presented together with 2 short papers were carefully reviewed and selected from 29 submissions. The papers focus on machine learning and security; cyber deception; cyber-physical systems security; security of network systems; theoretic foundations of security games; and emerging topics.