This publication examines the complex performance evaluation of typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Using real applications as examples, it demonstrates the various influences at work during modelling and performance evaluation, and the consequences these have for distributed parallel implementations.
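As a brief illustration of the metrics on which such an evaluation typically rests (these definitions are standard textbook material, not taken from the publication itself), speedup and efficiency on p processors, together with Amdahl's bound for a sequential fraction f, are:

```latex
% Standard parallel performance metrics (illustrative, not from the book).
% T_1: best sequential running time; T_p: running time on p processors.
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
S(p) \le \frac{1}{f + (1-f)/p} \quad \text{(Amdahl's law)}
```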
From the preface, page xv: "[...] My goal in writing Parallel and Distributed Simulation Systems is to give an in-depth treatment of technical issues concerning the execution of discrete event simulation programs on computing platforms composed of many processors interconnected through a network."
This text is based on a simple and fully reactive computational model that allows for intuitive comprehension and logical design. The principles and techniques presented can be applied to any distributed computing environment (distributed systems, communication networks, data networks, grid networks, the Internet, etc.). The text provides a wealth of unique material for learning how to design algorithms and protocols that perform tasks efficiently in a distributed computing environment.
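As a rough illustration of the reactive, message-driven style such a model encourages, here is a minimal sketch of the classic flooding broadcast; the simulation harness, topology, and names are invented for the example and are not taken from the text:

```python
# Minimal simulated flooding broadcast in a reactive message-passing model.
# Each node reacts to the first copy of a message by forwarding it to its
# other neighbours; later copies are ignored. Topology is illustrative only.
from collections import deque

def flood(topology, source, message):
    """Deliver `message` from `source` to every reachable node by flooding."""
    delivered = {source: message}       # node -> message it has received
    queue = deque((nbr, source) for nbr in topology[source])
    while queue:
        node, sender = queue.popleft()
        if node in delivered:
            continue                    # react only to the first copy
        delivered[node] = message       # "receive" event triggers forwarding
        queue.extend((nbr, node) for nbr in topology[node] if nbr != sender)
    return delivered

if __name__ == "__main__":
    ring_with_chord = {0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3],
                       3: [2, 4], 4: [0, 3]}
    print(sorted(flood(ring_with_chord, source=0, message="wake-up")))
```

Each node's behaviour here is purely reactive: it does nothing until a message arrives, which is the intuition the book's computational model builds on.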
This highly acclaimed work, first published by Prentice Hall in 1989, is a comprehensive and theoretically sound treatment of parallel and distributed numerical methods. It focuses on algorithms that are naturally suited to massive parallelization, and it explores the fundamental convergence, rate-of-convergence, communication, and synchronization issues associated with such algorithms. This is an extensive book which, aside from its focus on parallel and distributed algorithms, contains a wealth of material on a broad variety of computation and optimization topics. It is an excellent supplement to several of our other books, including Convex Optimization Algorithms (Athena Scientific, 2015), Nonlinear Programming (Athena Scientific, 1999), Dynamic Programming and Optimal Control (Athena Scientific, 2012), Neuro-Dynamic Programming (Athena Scientific, 1996), and Network Optimization (Athena Scientific, 1998). The online edition of the book contains a 95-page solutions manual.
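To make the flavour of such naturally parallelizable iterations concrete, here is a minimal sketch of the Jacobi method for Ax = b, whose component updates are mutually independent; the code, tolerance, and example system are illustrative choices of mine, not material from the book:

```python
# Minimal Jacobi iteration for Ax = b. Every component of x_new depends only
# on the previous iterate x, so all n updates can run in parallel -- the
# structure whose convergence books like this one analyze. Assumes A is
# strictly diagonally dominant, which guarantees convergence.
import numpy as np

def jacobi(A, b, tol=1e-10, max_iters=10_000):
    D = np.diag(A)                      # diagonal of A
    R = A - np.diagflat(D)              # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iters):
        x_new = (b - R @ x) / D         # all components update independently
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; is A strictly diagonally dominant?")

if __name__ == "__main__":
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b))                 # agrees with np.linalg.solve(A, b)
```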
Designed for introductory parallel computing courses at the advanced undergraduate or beginning graduate level, Elements of Parallel Computing presents the fundamental concepts of parallel computing not from the point of view of hardware, but from a more abstract view of algorithmic and implementation patterns. The aim is to facilitate the teaching of parallel programming by surveying some key algorithmic structures and programming models, together with an abstract representation of the underlying hardware. The presentation is friendly and informal. The content of the book is language-neutral, using pseudocode that represents common programming language models. The first five chapters present core concepts in parallel computing. SIMD, shared memory, and distributed memory machine models are covered, along with a brief discussion of what their execution models look like. The book also discusses decomposition as a fundamental activity in parallel algorithmic design, starting with a naive example and continuing with a discussion of some key algorithmic structures. Important programming models are presented in depth, as are important concepts of performance analysis, including work-depth analysis of task graphs, communication analysis of distributed memory algorithms, key performance metrics, and a discussion of barriers to obtaining good performance. The second part of the book presents three case studies that reinforce the concepts of the earlier chapters. One feature of these chapters is that they contrast different solutions to the same problem, using select problems that aren't discussed frequently in parallel computing textbooks. They include the Single Source Shortest Path Problem, the Eikonal equation, and a classical computational geometry problem: computation of the two-dimensional convex hull. After presenting the problem and sequential algorithms, each chapter first discusses the sources of parallelism and then surveys parallel algorithms.
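As a taste of the work-depth analysis mentioned above (a standard textbook example, reconstructed here rather than quoted from the book): summing n numbers by pairwise reduction in a balanced tree gives

```latex
% Work-depth of parallel reduction (standard example, not quoted from the book).
% W = total operations performed; D = length of the critical path.
W(n) = n - 1 = O(n), \qquad D(n) = \lceil \log_2 n \rceil = O(\log n)
% Brent's bound on the running time with p processors:
T_p \le \frac{W(n)}{p} + D(n)
```

so the reduction achieves near-linear speedup as long as p is well below n / log n.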
I wish to welcome all of you to the International Symposium on High Performance Computing 2000 (ISHPC 2000) in the megalopolis of Tokyo. After two great successes with ISHPC’97 (Fukuoka, November 1997) and ISHPC’99 (Kyoto, May 1999), many people requested that the symposium be held in the capital of Japan, and we agreed. I am very pleased to serve as Conference Chair at a time when high performance computing (HPC) has a significant influence on computer science and technology. In particular, HPC has had and will continue to have a significant impact on the advanced technologies of the “IT” revolution. The many conferences and symposiums held on the subject around the world are an indication of the importance of this area and the interest of the research community. One of the goals of this symposium is to provide a forum for the discussion of all aspects of HPC (from system architecture to real applications) in a more informal and personal fashion. Today we are delighted to have this symposium, which includes excellent invited talks, tutorials, and workshops, as well as high-quality technical papers.
Here, authors from academia and practice provide practitioners, scientists, and graduates with basic methods and paradigms, as well as important issues and trends across the spectrum of parallel and distributed processing. In particular, they cover such fundamental topics as efficient parallel algorithms, languages for parallel processing, parallel operating systems, the architecture of parallel and distributed systems, resource management, tools for parallel computing, parallel database systems and multimedia object servers, as well as the relevant networking aspects. Individual chapters are dedicated to parallel and distributed scientific computing, high-performance computing in molecular sciences, and multimedia applications for parallel and distributed systems.