This book presents the refereed proceedings of the Second International Workshop on Applied Parallel Computing in Physics, Chemistry and Engineering Science, PARA'95, held in Lyngby, Denmark, in August 1995. The 60 revised full papers included have been contributed by physicists, chemists, and engineers, as well as by computer scientists and mathematicians, and document the successful cooperation of different scientific communities in the booming area of computational science and high performance computing. Many widely-used numerical algorithms and their applications on parallel computers are treated in detail.
The book provides a practical guide for computational scientists and engineers, helping them advance their research by exploiting the power of supercomputers with many processors and complex networks. It focuses on the design and analysis of basic parallel algorithms, the key building blocks for composing larger packages for a wide range of applications.
Although the last decade has witnessed significant advances in control theory for finite- and infinite-dimensional systems, the stability and control of time-delay systems have not been fully investigated. Many problems in this field remain unresolved, and the numerical methods available tend to be either too general or too specific to be applied accurately across a range of problems. This monograph brings together the latest trends and new results in this field, with the aim of presenting methods covering a large range of techniques. Particular emphasis is placed on methods that can be applied directly to specific problems. The resulting book will be of value to both researchers and practitioners.
Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
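To make the described dynamics concrete, here is a minimal illustrative sketch of such a content-addressable memory in Python (a hedged example, not the paper's own code; it assumes NumPy, and the function names, network size of 100 units, and number of update sweeps are arbitrary choices). Memories are stored with a Hebbian outer-product rule, and retrieval uses asynchronous updates in which units are visited one at a time in random order and set to the sign of their local field.

import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    # Hebbian outer-product rule; the diagonal is zeroed so no unit couples to itself.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, sweeps=5):
    # Asynchronous updates: each sweep visits the units in a random order,
    # flipping each one to the sign of its local field.
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(state.size):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Usage: store two random +/-1 patterns, then retrieve one from a corrupted copy.
patterns = rng.choice([-1, 1], size=(2, 100))
W = store(patterns)
probe = patterns[0].copy()
probe[:30] = rng.choice([-1, 1], size=30)   # damage a subpart of the memory
print(np.array_equal(recall(W, probe), patterns[0]))

The random visiting order stands in for the asynchronous parallel processing mentioned above, and recovery from the damaged probe illustrates the error-correcting, content-addressable behavior; the other emergent properties noted in the abstract (generalization, familiarity recognition, categorization) are not modeled in this sketch.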
This volume summarizes the state of the art in supercomputing, with special emphasis on the industrial relevance of the presented results and methods. The book showcases innovative use of state-of-the-art modeling, novel numerical algorithms, and leading-edge high-performance computing systems in a grid-like environment.
The research and its outcomes presented in this collection focus on various aspects of high-performance computing (HPC) software and its development, which faces numerous challenges as today's supercomputer technology heads towards exascale computing. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The collection thereby highlights pioneering research findings as well as innovative concepts in exascale software development that have been conducted under the umbrella of the priority programme "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG) and that have been presented at the SPPEXA Symposium, January 25-27, 2016, in Munich. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.
The papers in this volume were presented at PARA 2000, the Fifth International Workshop on Applied Parallel Computing. PARA 2000 was held in Bergen, Norway, June 18-21, 2000. The workshop was organized by Parallab and the Department of Informatics at the University of Bergen. The general theme for PARA 2000 was "New paradigms for HPC in industry and academia", focusing on: high-performance computing applications in academia and industry; the use of Java in high-performance computing; grid and meta computing; directions in high-performance computing and networking; and education in computational science. The workshop included 9 invited presentations and 39 contributed presentations. The PARA 2000 meeting began with a one-day tutorial on OpenMP programming led by Timothy Mattson. This was followed by a three-day workshop. The first three PARA workshops were held at the Technical University of Denmark (DTU), Lyngby (1994, 1995, and 1996). Following PARA'96, an international steering committee for the PARA meetings was appointed, and the committee decided that a workshop should take place every second year in one of the Nordic countries. The 1998 workshop was held at Umeå University, Sweden. One important aim of these workshops is to strengthen the ties between HPC centers, academia, and industry in the Nordic countries as well as worldwide. The University of Bergen organized the 2000 workshop, and the next workshop in the year 2002 will take place at the Helsinki University of Technology, Espoo, Finland.
This two-volume set (LNCS 8384 and 8385) constitutes the refereed proceedings of the 10th International Conference on Parallel Processing and Applied Mathematics, PPAM 2013, held in Warsaw, Poland, in September 2013. The 143 revised full papers presented in both volumes were carefully reviewed and selected from numerous submissions. The papers cover important fields of parallel/distributed/cloud computing and applied mathematics, such as numerical algorithms and parallel scientific computing; parallel non-numerical algorithms; tools and environments for parallel/distributed/cloud computing; applications of parallel computing; and applied mathematics, evolutionary computing, and metaheuristics.