Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of both the research community and industry. The large energy-efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, in which the architecture can be adapted to the workload. In this Synthesis Lecture, we present an overview of and introduction to recent developments in energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to discussing the general techniques and classifying the different approaches used in each area, we also highlight and illustrate some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state of the art in research and development on customizable architectures and serves as a useful reference for further research, design, and implementation toward large-scale deployment in future computing systems.
Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption for solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next, we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use these tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. Because high-performance hardware was so instrumental in making machine learning a practical solution, this part of the book recounts a variety of recently proposed optimizations intended to further improve future designs. Finally, we present a review of recent research published in the area as well as a taxonomy to help readers understand how the various contributions fit into context.
This book targets computer scientists and engineers who are familiar with concepts in classical computer systems but are curious to learn the general architecture of quantum computing systems. It gives a concise presentation of this new paradigm of computing from a computer systems point of view, without assuming any background in quantum mechanics. The book is divided into two parts. The first part provides a gentle overview of the fundamental principles of quantum theory and their implications for computing. The second part is devoted to state-of-the-art research in designing practical quantum programs, building a scalable software systems stack, and controlling quantum hardware components. Most chapters end with a summary and an outlook on future directions. This book celebrates the remarkable progress that scientists across disciplines have made over the past decades and reveals what roles computer scientists and engineers can play in enabling practical-scale quantum computing.
By using computer simulations in research and development, computational science and engineering (CSE) enables empirical inquiry where traditional experimentation and methods of investigation are difficult, inefficient, or prohibitively expensive. The Handbook of Research on Computational Science and Engineering: Theory and Practice is a reference for interested researchers and decision-makers who want a timely introduction to the possibilities in CSE to advance their ongoing research and applications or to discover new resources and cutting-edge developments. Rather than reporting results obtained using CSE models, this comprehensive survey captures the architecture of the cross-disciplinary field, explores the long-term implications of technology choices, alerts readers to the hurdles facing CSE, and identifies trends in future development.
The fields of computer vision and image processing are constantly evolving as new research and applications in these areas emerge. Staying abreast of the most recent developments in these fields is necessary in order to promote further research and to apply those developments in real-world settings. Computer Vision and Image Processing in Intelligent Systems and Multimedia Technologies features timely and informative research on the design and development of computer vision and image processing applications in intelligent agents as well as in multimedia technologies. Covering a diverse set of research in these areas, this publication is ideally suited for academicians, technology professionals, students, and researchers interested in uncovering the latest innovations in the field.
This book discusses uncertain threats: attacks that exploit unknown vulnerabilities or backdoors in the software and hardware of information systems and control devices. It presents a generalized robust control architecture and the mimic defense mechanisms built on it, which could change the “easy-to-attack and difficult-to-defend” asymmetry of cyberspace. The endogenous uncertainty that this architecture introduces into the behavior of the protected software and hardware produces a “mimic defense fog” that suppresses, in a normalized manner, both random disturbances caused by physical or logical elements and the non-probabilistic disturbances brought about by uncertain security threats.
Although progress has been made in current cyberspace defense theories and various types of security technologies have come into being, their effectiveness typically depends on how much prior knowledge the defender has of the attacker and on how timely and accurate the acquired information about the attacker’s behavior is. As a result, there is no efficient active defense against uncertain security threats from the unknown. Even when bottom-line defense technologies such as encrypted verification are adopted, the security of hardware and software products cannot be quantitatively designed, verified, or measured. Because of the “loose coupling” between the defender and the protected target and the reliance on perimeter defense, insurmountable theoretical and technological challenges remain in protecting both against the exploitation of internal vulnerabilities or backdoors and against attack scenarios that coordinate backdoor activation from inside and outside, no matter how many protective measures are added or accumulated. It is therefore urgent to break out of the stereotyped thinking of conventional defense theories and technologies, to find new theories and methods that effectively reduce the exploitability of a target’s vulnerabilities and backdoors without relying on prior knowledge and feature information, and to develop new technological means to offset uncertain threats based on unknown vulnerabilities and backdoors from an innovative perspective.
This book provides a solution, in both theory and engineering implementation, to the difficult problem of how to avoid losing control of product security in the face of globalized markets, COTS components, and software/hardware from untrusted sources. It has been shown that this revolutionary enabling technology endows software and hardware products in IT/ICT/CPS with endogenous security functions and overturns attack theories and methods based on hardware/software design defects or resident malicious code. The book is designed for educators, theoretical and technological researchers in cyber security and autonomous control, engineers developing a new generation of software and hardware products with endogenous security enabling technologies, and users of such products. Postgraduates in IT/ICT/CPS/ICS will discover that, as long as the principle that “structure determines the nature and architecture determines the security” is properly applied, software/hardware design defects and embedded malicious code need no longer be the Achilles’ heel of informationization or the Pandora’s box of cyberspace. Security and openness, technological advancement and controllability may seem contradictory, but theoretically and technologically unified solutions to the problem are possible.
Cloud-based Intelligent Informative Engineering for Society 5.0 is a model for the dissemination of cutting-edge technological innovation and assistive devices for people with physical impairments. This book showcases cloud-based, high-performance information systems and informatics-based solutions for verifying the information support requirements of modern engineering, healthcare, business, organizational, and academic communities. It covers a broad variety of methodologies and technical developments that improve research in informative engineering; explores the Internet of Things (IoT), blockchain technology, deep learning, data analytics, and cloud computing; and highlights cloud-based, high-performance information systems and informatics-based solutions. The book is beneficial for graduate students and researchers in computer science, cloud computing, and related subject areas.
This book gathers the proceedings of the International Conference on Advanced Technologies for Humanity (ICATH’2021), held on November 26-27, 2021, at INSEA, Rabat, Morocco. ICATH’2021 was jointly organized by the National Institute of Statistics and Applied Economics (INSEA) in collaboration with the Moroccan School of Engineering Sciences (EMSI), the Hassan II Institute of Agronomy and Veterinary Medicine (IAV-Hassan II), the National Institute of Posts and Telecommunications (INPT), the National School of Mineral Industry (ENSMR), the Faculty of Sciences of Rabat (UM5-FSR), the National School of Applied Sciences of Kenitra (ENSAK), and the Future University in Egypt (FUE). ICATH’2021 was devoted to practical models and industrial applications related to advanced technologies for humanity, and it served as a meeting point for researchers and practitioners seeking to bring advanced information technologies into various industries. This book is helpful for PhD students as well as researchers. The 48 full papers were carefully reviewed and selected from 105 submissions. The papers in this volume are organized in topical sections on synergies between (i) smart and sustainable cities, (ii) communication systems, signal and image processing for humanity, (iii) cybersecurity, database and language processing for human applications, (iv) renewable and sustainable energies, (v) civil engineering and structures for sustainable constructions, (vi) materials and smart buildings, and (vii) Industry 4.0 for smart factories. All contributions were subject to a double-blind review, and the review process was highly competitive: the 105 submissions came from 12 countries, and a team of over 100 program committee members and reviewers did this terrific job. Our special thanks go to all of them.