Machine Learning Algorithm and System Co-design for Hardware Efficiency

Author: Cheng Fu

Publisher:

Published: 2023

Total Pages: 0

ISBN-13:

Deep Neural Networks (DNNs) are increasingly adopted in various fields due to their unprecedented performance, yet the computational overhead of DNN evaluation and training continues to grow exponentially. Enormous system-level advances have recently improved the efficiency of DNN computation, and many efficient DNN algorithms have been proposed to reduce its cost. However, the status quo is far from optimal: most current DNN computation systems do not fully exploit efficient ML algorithms, while ML algorithms fail to consider the novel systems on which they are deployed. This thesis focuses on designing efficient DNN algorithms and DNN computation systems that together enable fast DNN training and evaluation. Instead of creating new DNN algorithms or new systems in isolation, we propose a co-design approach, exploring DNN models that are system-aware and DNN computation systems that are algorithm-aware. By leveraging this co-design approach, we effectively advance the Pareto frontier between task accuracy and execution efficiency in various application domains. We present a set of works that explore the co-design method. First, we present an algorithm-aware DNN compiler for quantized DNNs. By leveraging the weight-repetition property of this efficient DNN algorithm, we greatly reduce the computation overhead of DNN inference on both CPUs and GPUs; this work illustrates how algorithm-aware system design can push the Pareto frontier. Second, we discuss a hardware-aware DNN algorithm with enhanced model parallelism. Previous works design efficient DNNs for single-device platforms, and such models can hardly be parallelized across multiple devices; by customizing the DNN design for a multi-device system, we reduce inference latency by a large margin. Third, we present a hardware-friendly transfer learning framework for natural language processing tasks. Existing transfer learning frameworks carry substantial computation redundancy when deployed on current systems; by reusing computation across different transfer learning models, we greatly reduce this overhead as well. Lastly, we introduce a novel training method that reduces the computational cost of DNN training and of the DNN design process. The key idea is to initialize large models from small pretrained weights, as the implicit knowledge in the pretrained models facilitates faster convergence of the large models; because only the initialization phase of training changes, no extra computation overhead is introduced to existing training systems, and the method can also accelerate the design of system-aware DNN models. As Moore's Law slows down and the computational capacity of current DNN systems plateaus, this thesis sheds light on how to overcome this limitation by designing domain-specific DNN algorithms and computation systems.
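
To make the weight-repetition idea of the first work concrete, here is a minimal sketch under assumptions of our own: in a quantized layer, every weight takes one of a few codebook values, so a dot product can be factored by summing the activations that share each weight value and then multiplying once per distinct value. The function name quantized_dot and the 2-bit codebook are illustrative, not from the thesis.

    import numpy as np

    def quantized_dot(codebook, codes, x):
        # Dot product w . x where w = codebook[codes] has few distinct values.
        # Instead of one multiply per element, group-sum the activations that
        # share each weight value, then multiply once per codebook entry.
        sums = np.zeros(len(codebook))
        np.add.at(sums, codes, x)        # scatter-add activations by weight index
        return float(codebook @ sums)    # one multiply per distinct weight

    rng = np.random.default_rng(0)
    codebook = np.array([-1.0, -0.5, 0.5, 1.0])  # 2-bit quantized weight values
    codes = rng.integers(0, 4, size=1024)        # per-weight codebook indices
    x = rng.standard_normal(1024)
    assert np.isclose(quantized_dot(codebook, codes, x), codebook[codes] @ x)

With a 2-bit codebook, 1024 multiplications shrink to four; savings of this kind are what an algorithm-aware compiler can exploit on CPUs and GPUs.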


Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Author: Sudeep Pasricha

Publisher: Springer Nature

Published: 2023-10-09

Total Pages: 481

ISBN-13: 3031399323

This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting new and interesting use cases of machine learning in innovative application domains, exploring the hardware design of efficient machine learning accelerators and memory optimization techniques, illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms, and understanding hardware-software co-design techniques for achieving even greater energy, reliability, and performance benefits. Discusses efficient implementation of machine learning in embedded, CPS, IoT, and edge computing; offers comprehensive coverage of hardware design, software design, and hardware/software co-design and co-optimization; describes real applications to demonstrate how embedded, CPS, IoT, and edge applications benefit from machine learning.


Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Author: Sudeep Pasricha

Publisher: Springer Nature

Published: 2023-11-01

Total Pages: 418

ISBN-13: 303119568X

This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting new and interesting use cases of machine learning in innovative application domains, exploring the hardware design of efficient machine learning accelerators and memory optimization techniques, illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms, and understanding hardware-software co-design techniques for achieving even greater energy, reliability, and performance benefits.


Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Author: Nan Zheng

Publisher: John Wiley & Sons

Published: 2019-10-18

Total Pages: 296

ISBN-13: 1119507391

Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications. This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, providing co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers the fundamentals and essentials of neural networks (e.g., deep learning), as well as the hardware implementation of neural networks. The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks, followed by an introduction to the various options for executing neural networks, ranging from general-purpose processors to specialized hardware and from digital accelerators to analog accelerators. A design example of an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithms of the previous chapter. The book concludes with an outlook on the future of neural network hardware. Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms; covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency; focuses on the co-design of algorithms and hardware, which is especially critical for neuromorphic computing with emerging devices such as traditional or diffusive memristors. Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-increasing requirements on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students on the latest generation of neural networks with powerful learning capabilities.


Hardware-aware Algorithms for Efficient Machine Learning

Author: Tri Dao Phuc Quang

Publisher:

Published: 2023

Total Pages: 0

ISBN-13:

Machine learning (ML) training will continue to consume more compute cycles, ML inference will proliferate on more kinds of devices, and ML capabilities will be used in more domains. Central to this future are the goals of making ML models efficient, so they remain practical to train and deploy, and of unlocking new application domains with new capabilities. We describe some recent developments in hardware-aware algorithms that improve the efficiency-quality tradeoff of ML models and equip them with long context. In Chapter 2, we focus on structured sparsity, a natural approach to mitigating the extensive compute and memory cost of large ML models. We describe a line of work on learnable fast transforms that, thanks to their expressiveness and efficiency, yields some of the first sparse training methods to speed up large models in wall-clock time (2x) without compromising their quality. In Chapter 3, we focus on efficient Transformer training and inference for long sequences. We describe FlashAttention, a fast and memory-efficient algorithm that computes attention with no approximation. By carefully accounting for reads and writes between different levels of the memory hierarchy, FlashAttention is 2-4x faster and uses 10-20x less memory than the best existing attention implementations, allowing us to train higher-quality Transformers with 8x longer context. FlashAttention is now widely used in some of the largest research labs and companies. In Chapter 4, we examine state-space models, a promising architecture designed for long-range memory. Seeking to understand why early state-space models did not perform well on language modeling tasks, we propose simple multiplicative interactions that expand their expressiveness, and we design hardware-friendly algorithms to train them. As a result, we are able to train state-space models to multi-billion-parameter scale, demonstrating a new kind of model competitive with the dominant Transformers in language modeling. We conclude with some exciting directions in ML and systems, such as software-hardware co-design, structured sparsity for scientific AI, and long context for new AI workflows and modalities.
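
The key mechanism behind FlashAttention, streaming over key/value tiles with an online softmax so the full attention matrix never materializes in slow memory, can be sketched in NumPy. This is a minimal single-head illustration of the tiling idea only, not the actual GPU kernel; the function name tiled_attention and the block size are our own illustrative choices.

    import numpy as np

    def tiled_attention(Q, K, V, block=64):
        # Streaming softmax(Q K^T / sqrt(d)) V over key/value tiles.
        # Only a running max m, normalizer l, and an output accumulator
        # are kept, so the (n x n) score matrix is never formed at once.
        n, d = Q.shape
        out = np.zeros_like(Q)
        m = np.full(n, -np.inf)              # running row-wise max of scores
        l = np.zeros(n)                      # running softmax normalizer
        for s in range(0, K.shape[0], block):
            Kb, Vb = K[s:s + block], V[s:s + block]
            scores = Q @ Kb.T / np.sqrt(d)            # one (n, block) tile
            m_new = np.maximum(m, scores.max(axis=1))
            p = np.exp(scores - m_new[:, None])       # stable tile weights
            scale = np.exp(m - m_new)                 # rescale old state
            l = l * scale + p.sum(axis=1)
            out = out * scale[:, None] + p @ Vb
            m = m_new
        return out / l[:, None]

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
    S = Q @ K.T / np.sqrt(32)
    P = np.exp(S - S.max(axis=1, keepdims=True))
    assert np.allclose(tiled_attention(Q, K, V), (P / P.sum(axis=1, keepdims=True)) @ V)

Because each tile's contribution is rescaled by exp(m - m_new) whenever a new running maximum appears, the result matches exact softmax attention, which is why no approximation is needed.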


Efficient Processing of Deep Neural Networks

Author: Vivienne Sze

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 254

ISBN-13: 3031017668

This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics, such as energy efficiency, throughput, and latency, without sacrificing accuracy or increasing hardware costs are critical to the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as a formalization and organization of key concepts from contemporary work that provides insights that may spark new ideas.


Deep Learning Systems

Author: Andres Rodriguez

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 245

ISBN-13: 3031017692

This book describes deep learning systems: the algorithms, compilers, and processor components needed to efficiently train and deploy deep learning models for commercial applications. The exponential growth in computational power is slowing at a time when the amount of compute consumed by state-of-the-art deep learning (DL) workloads is rapidly growing. Model size, serving latency, and power constraints are a significant challenge in the deployment of DL models for many applications. Therefore, it is imperative to co-design algorithms, compilers, and hardware to accelerate advances in this field with holistic system-level and algorithm solutions that improve performance, power, and efficiency. Advancing DL systems generally involves three types of engineers: (1) data scientists who utilize and develop DL algorithms in partnership with domain experts, such as medical, economic, or climate scientists; (2) hardware designers who develop specialized hardware to accelerate the components in DL models; and (3) performance and compiler engineers who optimize software to run more efficiently on given hardware. Hardware engineers should be aware of the characteristics and components of production and academic models likely to be adopted by industry, to guide design decisions impacting future hardware. Data scientists should be aware of deployment platform constraints when designing models. Performance engineers should support optimizations across diverse models, libraries, and hardware targets. The purpose of this book is to provide a solid understanding of (1) the design, training, and applications of DL algorithms in industry; (2) the compiler techniques that map deep learning code to hardware targets; and (3) the critical hardware features that accelerate DL systems. This book aims to facilitate co-innovation for the advancement of DL systems. It is written for engineers working in one or more of these areas who seek to understand the entire system stack in order to better collaborate with engineers working in other parts of it. The book details advancements and adoption of DL models in industry, explains the training and deployment process, describes the essential hardware architectural features needed for today's and future models, and details advances in DL compilers to efficiently execute algorithms across various hardware targets. Unique to this book are the holistic exposition of the entire DL system stack, the emphasis on commercial applications, and the practical techniques for designing models and accelerating their performance. The author is fortunate to work with hardware, software, data science, and research teams across many high-technology companies with hyperscale data centers; these companies employ many of the examples and methods provided throughout the book.


Algorithm and Hardware Co-design for Learning On-a-chip

Author: Zihan Xu

Publisher:

Published: 2017

Total Pages: 0

ISBN-13:

Machine learning technology has achieved incredible results in recent years, rivalling or exceeding human performance in many intellectual tasks, including image recognition, face detection, and the game of Go. Many machine learning algorithms require huge amounts of computation, such as the multiplication of large matrices. As silicon technology has scaled into the sub-14nm regime, simply scaling down devices can no longer provide enough speed-up; new device technologies and system architectures are needed to improve computing capacity, and hardware designed specifically for machine learning is in high demand. Efforts must be made on the joint design and optimization of both hardware and algorithms. For machine learning acceleration, traditional SRAM- and DRAM-based systems suffer from low capacity, high latency, and high standby power. Emerging memories, such as Phase Change Random Access Memory (PRAM), Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM), and Resistive Random Access Memory (RRAM), are instead promising candidates offering low standby power, high data density, fast access, and excellent scalability. This dissertation proposes a hierarchical memory modeling framework and models PRAM and STT-MRAM at four different levels of abstraction. With the proposed models, various simulations are conducted to investigate performance, optimization, variability, reliability, and scalability. Emerging memory devices such as RRAM can work as a 2-D crosspoint array to speed up the multiplication and accumulation in machine learning algorithms. This dissertation proposes a new parallel programming scheme to achieve in-memory learning with an RRAM crosspoint array. The programming circuitry is designed and simulated in TSMC 65nm technology, showing a 900X speedup over CPU performance for the dictionary learning task. From the algorithm perspective, inspired by the high accuracy and low power of the brain, this dissertation proposes a bio-plausible feedforward inhibition spiking neural network with a Spike-Rate-Dependent Plasticity (SRDP) learning rule. It achieves more than 95% accuracy on the MNIST dataset, which is comparable to the sparse coding algorithm but requires far fewer computations. The role of inhibition in this network is systematically studied and shown to improve hardware efficiency in learning.
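
The analog multiply-accumulate performed by such a crosspoint array follows from Ohm's and Kirchhoff's laws: each cell's conductance encodes a weight, inputs are applied as row voltages, and the current collected on each column is a dot product. Below is a minimal idealized NumPy sketch that ignores wire resistance, device variation, and ADC quantization (effects the dissertation's device models do capture); the mapping constants g_min and g_max are illustrative assumptions.

    import numpy as np

    def crosspoint_mac(weights, x, g_min=1e-6, g_max=1e-4):
        # Idealized RRAM crosspoint multiply-accumulate.
        # Weights in [0, 1] map to conductances in [g_min, g_max] siemens;
        # inputs map to row voltages; each column current I_j = sum_i G_ij * V_i
        # is the analog dot product, decoded by removing the offset and scale.
        G = g_min + weights * (g_max - g_min)    # program cell conductances
        V = x                                    # encode inputs as read voltages
        I = V @ G                                # Kirchhoff column currents
        offset = g_min * V.sum()                 # contribution of the g_min floor
        return (I - offset) / (g_max - g_min)    # recover weights^T x

    rng = np.random.default_rng(0)
    W = rng.uniform(0, 1, size=(128, 16))        # 128 rows x 16 columns
    x = rng.uniform(0, 0.2, size=128)            # small read voltages
    assert np.allclose(crosspoint_mac(W, x), x @ W)

Because all column currents sum in parallel in the analog domain, a full matrix-vector product costs a single array read, which is the source of the large speedups reported over CPU execution.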


Explainable Machine Learning Models and Architectures

Author: Suman Lata Tripathi

Publisher: John Wiley & Sons

Published: 2023-10-03

Total Pages: 277

ISBN-13: 1394185847

This cutting-edge new volume covers hardware architecture implementation, software implementation approaches, and efficient hardware for machine learning applications. Machine learning and deep learning modules are now an integral part of many smart and automated systems where signal processing is performed at different levels. Signal processing of text, images, or video requires large data computational operations at the desired data rate and accuracy, and large data requires more integrated circuit (IC) area, with embedded bulk memories that increase the IC area further. Trade-offs between power consumption, delay, and IC area are a constant concern of designers and researchers. New hardware architectures and accelerators are needed to explore and experiment with efficient machine learning models. Many real-time applications, such as processing biomedical data in healthcare, smart transportation, satellite image analysis, and IoT-enabled systems, have much scope for improvement in terms of accuracy, speed, computational power, and overall power consumption. This book deals with efficient machine learning and deep learning models that support high-speed processors with reconfigurable architectures, such as graphics processing units (GPUs) and field programmable gate arrays (FPGAs), or any hybrid system. Whether for the veteran engineer or scientist working in the field or laboratory, or the student or academic, this is a must-have for any library.


Algorithm-accelerator Co-design for High-performance and Secure Deep Learning

Author: Weizhe Hua

Publisher:

Published: 2022

Total Pages: 0

ISBN-13:

Deep learning has emerged as a new engine for many of today's artificial intelligence/machine learning systems, leading to several recent breakthroughs in vision and natural language processing tasks. However, as we move into the era of deep learning models with billions and even trillions of parameters, meeting the computational and memory requirements to train and serve state-of-the-art models has become extremely challenging. Optimizing the computational cost and memory footprint of deep learning models for better system performance is critical to the widespread deployment of deep learning. Moreover, a massive amount of sensitive and private user data is exposed to the deep learning system during training or serving. It is therefore essential to investigate potential vulnerabilities in existing deep learning hardware and to design secure deep learning systems that provide strong privacy guarantees for user data and for the models that learn from the data. In this dissertation, we propose to co-design deep learning algorithms and hardware architectural techniques to improve both the performance and the security/privacy of deep learning systems. On high-performance deep learning, we first introduce the channel gating neural network (CGNet), which exploits the dynamic sparsity of specific inputs to reduce the computation of convolutional neural networks. We also co-develop an ASIC accelerator for CGNet that turns the theoretical FLOP reduction into wall-clock speedup. Second, we present Fast Linear Attention with a Single Head (FLASH), a state-of-the-art language model specifically designed for Google's TPU that achieves transformer-level quality with linear complexity in the sequence length. In our empirical studies on masked language modeling, auto-regressive language modeling, and fine-tuning for question answering, FLASH achieves quality at least similar to, if not better than, the augmented transformer, while being significantly faster (e.g., up to 12 times faster). On the security of deep learning, we study the side-channel vulnerabilities of existing deep learning accelerators. We then introduce a secure accelerator architecture for privacy-preserving deep learning, named GuardNN. GuardNN provides a trusted execution environment (TEE) with specialized protection for deep learning, achieving both a small trusted computing base and low protection overhead. An FPGA prototype of GuardNN incurs a maximum performance overhead of 2.4% across four modern DNN models for ImageNet.
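
The channel gating idea behind CGNet, computing a cheap partial sum over a subset of input channels and skipping the remaining channels wherever that partial response is weak, can be sketched as follows. This is a simplified NumPy illustration of the gating decision for a 1x1 convolution; the function name, the gated fraction, and the zero threshold are our own assumptions, not the dissertation's exact policy.

    import numpy as np

    def channel_gated_conv(x, w, frac=0.25, threshold=0.0):
        # Channel gating for a 1x1 conv, simplified.
        # x: (C_in, H, W) input; w: (C_out, C_in) weights.
        # A partial sum over the first frac*C_in channels decides, per output
        # position, whether the remaining channels are worth computing.
        c_in = x.shape[0]
        k = max(1, int(frac * c_in))
        xs = x.reshape(c_in, -1)                      # flatten spatial dims
        partial = w[:, :k] @ xs[:k]                   # cheap partial sums
        gate = partial > threshold                    # keep strong responses
        rest = w[:, k:] @ xs[k:]                      # dense here for clarity;
                                                      # hardware skips gated-off work
        out = np.where(gate, partial + rest, partial) # gated-off: partial only
        skipped = 1.0 - gate.mean()                   # fraction of work saved
        return out.reshape(-1, *x.shape[1:]), skipped

    rng = np.random.default_rng(0)
    x = rng.standard_normal((64, 8, 8))
    w = rng.standard_normal((128, 64)) / 8.0
    y, saved = channel_gated_conv(x, w)
    print(f"skipped {saved:.0%} of the expensive channel computation")

With random data roughly half the positions fall below the zero threshold; a co-designed accelerator like the CGNet ASIC is what converts such skipped work into actual wall-clock speedup.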