The Gamma Function

Author: Emil Artin

Publisher: Courier Dover Publications

Published: 2015-01-28

Total Pages: 52

ISBN-13: 0486803007

This brief monograph on the gamma function was designed by the author to fill what he perceived as a gap in the literature of mathematics, which often treated the gamma function in a manner he described as both sketchy and overly complicated. Author Emil Artin, one of the twentieth century's leading mathematicians, wrote in his Preface to this book, "I feel that this monograph will help to show that the gamma function can be thought of as one of the elementary functions, and that all of its basic properties can be established using elementary methods of the calculus." Generations of teachers and students have benefitted from Artin's masterly arguments and precise results. Suitable for advanced undergraduates and graduate students of mathematics, his treatment examines functions, the Euler integrals and the Gauss formula, large values of x and the multiplication formula, the connection with sin x, applications to definite integrals, and other subjects.
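
For orientation on the topics listed above, the standard identities in question can be written out: the Euler integral defining the gamma function, its functional equation, the Gauss limit formula, and the reflection formula giving the connection with sin x. These are well-known results stated here only as a reminder of the notation, not as an excerpt from Artin's text.

    \[ \Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt, \qquad x > 0 \]
    \[ \Gamma(x+1) = x\,\Gamma(x), \qquad \Gamma(n+1) = n! \]
    \[ \Gamma(x) = \lim_{n \to \infty} \frac{n!\, n^x}{x(x+1)\cdots(x+n)} \quad \text{(Gauss's formula)} \]
    \[ \Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin \pi x} \]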


Data-Driven Evolutionary Optimization

Author: Yaochu Jin

Publisher: Springer Nature

Published: 2021-06-28

Total Pages: 393

ISBN-13: 3030746402

Intended for researchers and practitioners alike, this book covers carefully selected yet broad topics in optimization, machine learning, and metaheuristics. Written by world-leading academic researchers with extensive experience in industrial applications, this self-contained book is the first of its kind to provide comprehensive background knowledge, practical guidelines, and state-of-the-art techniques. New algorithms are carefully explained and further elaborated with pseudocode or flowcharts, and full working source code is made freely available. This is followed by a presentation of a variety of data-driven single- and multi-objective optimization algorithms that seamlessly integrate modern machine learning, such as deep learning and transfer learning, with evolutionary and swarm optimization algorithms. Applications of data-driven optimization, ranging from aerodynamic design and optimization of industrial processes to deep neural architecture search, are included.
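
To make the idea of data-driven (surrogate-assisted) optimization concrete, the following minimal Python sketch evolves a population while a regression model trained on previously evaluated designs pre-screens offspring, so the expensive objective is called only for the most promising candidates. It is a generic illustration of the approach described above, not code from the book: the objective is a placeholder for a costly simulation, the random-forest surrogate could equally be a Gaussian process or RBF network, and all parameter choices are arbitrary.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def expensive_objective(x):
        # Placeholder for a costly simulation (e.g., a CFD run); here a simple sphere function.
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    dim, pop_size, generations, n_real_evals = 5, 20, 30, 5
    lower, upper = -5.0, 5.0

    # Initial population, evaluated with the true (expensive) objective.
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.array([expensive_objective(x) for x in pop])
    archive_X, archive_y = pop.copy(), fit.copy()

    for gen in range(generations):
        # Train the surrogate on every point evaluated so far.
        surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
        surrogate.fit(archive_X, archive_y)

        # Generate offspring by Gaussian mutation of the current population.
        offspring = np.clip(pop + rng.normal(0.0, 0.5, size=pop.shape), lower, upper)

        # Pre-screen offspring with the cheap surrogate, then spend real
        # evaluations only on the few most promising candidates.
        promising = np.argsort(surrogate.predict(offspring))[:n_real_evals]
        for i in promising:
            y = expensive_objective(offspring[i])
            archive_X = np.vstack([archive_X, offspring[i]])
            archive_y = np.append(archive_y, y)
            worst = int(np.argmax(fit))  # replace the worst individual if improved
            if y < fit[worst]:
                pop[worst], fit[worst] = offspring[i], y

    print("best objective value found:", fit.min())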


Regularization, Optimization, Kernels, and Support Vector Machines

Author: Johan A.K. Suykens

Publisher: CRC Press

Published: 2014-10-23

Total Pages: 528

ISBN-13: 1482241390

Regularization, Optimization, Kernels, and Support Vector Machines offers a snapshot of the current state of the art of large-scale machine learning, providing a single multidisciplinary source for the latest research and advances in regularization, sparsity, compressed sensing, convex and large-scale optimization, kernel methods, and support vector machines. Consisting of 21 chapters authored by leading researchers in machine learning, this comprehensive reference:

- Covers the relationship between support vector machines (SVMs) and the Lasso
- Discusses multi-layer SVMs
- Explores nonparametric feature selection, basis pursuit methods, and robust compressive sensing
- Describes graph-based regularization methods for single- and multi-task learning
- Considers regularized methods for dictionary learning and portfolio selection
- Addresses non-negative matrix factorization
- Examines low-rank matrix and tensor-based models
- Presents advanced kernel methods for batch and online machine learning, system identification, domain adaptation, and image processing
- Tackles large-scale algorithms including conditional gradient methods, (non-convex) proximal techniques, and stochastic gradient descent (see the sketch following this listing)

Regularization, Optimization, Kernels, and Support Vector Machines is ideal for researchers in machine learning, pattern recognition, data mining, signal processing, statistical learning, and related areas.
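
As a small illustration of two items in the list above, the Lasso can be solved with one of the proximal techniques mentioned in the final bullet: the sketch below implements plain proximal gradient descent (ISTA) with the soft-thresholding operator in NumPy. It is a generic textbook procedure, not code from the book; the step size, regularization weight, and synthetic data are assumed values chosen only for demonstration.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(A, y, lam, n_iter=500):
        """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient (ISTA)."""
        # Step size 1/L, where L is the Lipschitz constant of the smooth part's gradient.
        L = np.linalg.norm(A, ord=2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)  # gradient of the least-squares term
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Tiny synthetic example: a sparse ground truth recovered from noisy measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = lasso_ista(A, y, lam=0.1)
    print("non-zeros found:", np.flatnonzero(np.abs(x_hat) > 0.1))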


Non-convex Optimization for Machine Learning

Author: Prateek Jain

Publisher: Foundations and Trends in Machine Learning

Published: 2017-12-04

Total Pages: 218

ISBN-13: 9781680833683

Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. The monograph is as self-contained as possible without losing focus on its main topic of non-convex optimization techniques. It opens with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts, and concludes with a look at four applications in machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. For each topic discussed, the monograph also contains exercises and figures designed to engage the reader, as well as extensive bibliographic notes pointing toward classical works and recent advances. Non-convex Optimization for Machine Learning can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning. It is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course; courses in machine learning, optimization, and signal processing may benefit from the inclusion of such topics.
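
As a taste of the sparse recovery topic mentioned above, one of the simplest non-convex procedures of the kind such a treatment analyzes is iterative hard thresholding, which enforces a hard sparsity constraint by keeping only the k largest entries after each gradient step. The NumPy sketch below is a generic illustration with an assumed step size, sparsity level, and synthetic data, not the monograph's own code.

    import numpy as np

    def hard_threshold(v, k):
        # Keep the k largest-magnitude entries and zero out the rest (a non-convex projection).
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    def iht(A, y, k, step=None, n_iter=200):
        """Approximate argmin ||A x - y||^2 subject to ||x||_0 <= k via iterative hard thresholding."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, ord=2) ** 2  # conservative gradient step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = hard_threshold(x + step * A.T @ (y - A @ x), k)
        return x

    # Small synthetic demo: recover a 3-sparse vector from 40 random measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 80)) / np.sqrt(40)
    x_true = np.zeros(80)
    x_true[[5, 20, 60]] = [2.0, -1.0, 1.5]
    y = A @ x_true
    x_hat = iht(A, y, k=3)
    print("recovered support:", np.flatnonzero(x_hat))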


Discovery in Physics

Author: Katharina Morik

Publisher: Walter de Gruyter GmbH & Co KG

Published: 2022-12-31

Total Pages: 364

ISBN-13: 311078596X

Machine learning has been part of Artificial Intelligence since its beginning. Certainly, only a perfect being could show intelligent behavior without learning; all others, be they humans or machines, need to learn in order to enhance their capabilities. In the eighties of the last century, learning from examples and modeling human learning strategies were investigated in concert. The formal statistical basis of many learning methods was put forward later and is still an integral part of machine learning. Neural networks have always been in the toolbox of methods, and integrating all the pre-processing, exploitation of kernel functions, and transformation steps of a machine learning process into the architecture of a deep neural network increased the performance of this model type considerably.

Modern machine learning is challenged on the one hand by the amount of data and on the other hand by the demand for real-time inference. This leads to an interest in computing architectures and modern processors. For a long time, machine learning research could take the von Neumann architecture for granted: all algorithms were designed for the classical CPU, and issues of implementation on a particular architecture were ignored. This is no longer possible; the time for investigating machine learning and computing architecture independently is over. Computing architecture has experienced a similarly rampant development, from mainframes and personal computers in the last century to very large compute clusters on the one hand and ubiquitous computing in embedded systems of the Internet of Things on the other.

The sensors of cyber-physical systems produce huge amounts of streaming data that need to be stored and analyzed, and their actuators need to react in real time. This clearly establishes a close connection with machine learning. Cyber-physical systems and systems in the Internet of Things consist of diverse components, heterogeneous in both hardware and software. Modern multi-core systems, graphics processors, memory technologies, and hardware/software co-design offer opportunities for better implementations of machine learning models. Machine learning and embedded systems together now form a field of research that tackles leading-edge problems in machine learning, algorithm engineering, and embedded systems.

Machine learning today needs to make the resource demands of learning and inference meet the resource constraints of the computing architectures and platforms in use. A large variety of algorithms for the same learning method and, moreover, diverse implementations of an algorithm for particular computing architectures optimize learning with respect to resource efficiency while keeping some guarantees of accuracy. The trade-off between decreased energy consumption and an increased error rate, to give just one example, needs to be shown theoretically for both training a model and model inference. Pruning and quantization are ways of reducing the resource requirements by either compressing or approximating the model (see the sketch after this description). In addition to memory and energy consumption, timeliness is an important issue, since many embedded systems are integrated into larger products that interact with the physical world; if results are delivered too late, they may have become useless. As a result, real-time guarantees are needed for such systems. To efficiently utilize the available resources, e.g., processing power, memory, and accelerators, with respect to response time, energy consumption, and power dissipation, different scheduling algorithms and resource management strategies need to be developed.

This book series addresses machine learning under resource constraints as well as the application of the described methods in various domains of science and engineering. Turning big data into smart data requires many steps of data analysis: methods for extracting and selecting features, filtering and cleaning the data, joining heterogeneous sources, aggregating the data, and learning predictions need to scale up. The algorithms are challenged on the one hand by high-throughput data and gigantic data sets, as in astrophysics, and on the other hand by high dimensionality, as in genetic data. Resource constraints are given by the relation between the demands for processing the data and the capacity of the computing machinery. The resources are runtime, memory, communication, and energy. Novel machine learning algorithms are optimized with regard to minimal resource consumption, and learned predictions are applied to program executions in order to save resources. The three volumes have the following subtopics:

Volume 1: Machine Learning under Resource Constraints - Fundamentals
Volume 2: Machine Learning and Physics under Resource Constraints - Discovery
Volume 3: Machine Learning under Resource Constraints - Applications

Volume 2 is about machine learning for knowledge discovery in particle and astroparticle physics. Instruments such as particle accelerators and telescopes gather petabytes of data. Here, machine learning is necessary not only to process the vast amounts of data and to detect the relevant examples efficiently, but also as part of the knowledge discovery process itself. The physical knowledge is encoded in simulations that are used to train the machine learning models, and at the same time the interpretation of the learned models serves to expand the physical knowledge. This results in a cycle of theory enhancement supported by machine learning.
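
To make the pruning and quantization remark above concrete, the short NumPy sketch below applies magnitude pruning and uniform 8-bit quantization to a single weight matrix and reports the resulting approximation error. It is a generic illustration of the compression-versus-accuracy trade-off with arbitrary sparsity and bit-width choices, not a method taken from this volume.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 256)).astype(np.float32)  # a dense layer's weights

    # Magnitude pruning: zero out the 90% of weights with the smallest absolute value.
    sparsity = 0.9
    threshold = np.quantile(np.abs(W), sparsity)
    W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

    # Uniform 8-bit quantization: map the remaining weights to integer levels in [-127, 127].
    scale = np.abs(W_pruned).max() / 127.0
    W_int8 = np.round(W_pruned / scale).astype(np.int8)   # what would be stored
    W_dequant = W_int8.astype(np.float32) * scale          # what inference would use

    # The resource/accuracy trade-off: fewer stored values and bits, but some error.
    kept = np.count_nonzero(W_pruned) / W.size
    err = np.linalg.norm(W - W_dequant) / np.linalg.norm(W)
    print(f"kept weights: {kept:.1%}, relative approximation error: {err:.3f}")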


Multi-Objective Optimization using Evolutionary Algorithms

Author: Kalyanmoy Deb

Publisher: John Wiley & Sons

Published: 2001-07-05

Total Pages: 540

ISBN-13: 9780471873396

Optimization with multiple objectives, evolutionary algorithms: this book is aimed primarily at beginners, as hardly any prior knowledge is assumed. It provides all the fundamentals needed to apply the theory to problems in engineering, forecasting, and planning. The author also gives an outlook on future research tasks.


Machine Learning under Resource Constraints - Applications

Author: Katharina Morik

Publisher: Walter de Gruyter GmbH & Co KG

Published: 2022-12-31

Total Pages: 497

ISBN-13: 3110786141

Machine Learning under Resource Constraints, a three-volume series, addresses novel machine learning algorithms that are challenged by high-throughput data, high dimensionality, or complex data structures. Resource constraints are given by the relation between the demands for processing the data and the capacity of the computing machinery. The resources are runtime, memory, communication, and energy. Hence, modern computer architectures play a significant role. Novel machine learning algorithms are optimized with regard to minimal resource consumption, and learned predictions are executed on diverse architectures to save resources. The series provides a comprehensive overview of novel approaches to machine learning research that consider resource constraints, as well as the application of the described methods in various domains of science and engineering. Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems, and it provides numerous specific application examples. In the areas of health and medicine, it is demonstrated how machine learning can improve risk modelling, diagnosis, and treatment selection for diseases. Diverse real-time applications in electronics and steel production as well as milling show how machine-learning-supported quality control during the manufacturing process allows a factory to reduce material and energy costs and to save testing time. Additional application examples show how machine learning can make traffic, logistics, and smart cities more efficient and sustainable. Finally, mobile communications can benefit substantially from machine learning, for example by uncovering hidden characteristics of the wireless channel.