Parallel Computing: Methods, Algorithms and Applications presents a collection of original papers from the international meeting on parallel processing methods, algorithms, and applications held in Verona, Italy, in September 1989.
This introduction to parallel programming explores the fundamentals of parallelism, parallel system architectures (MIMD and SIMD), and parallel programming languages, and presents methods for designing parallel algorithms, writing efficient parallel programs, and measuring and evaluating performance.
The Parallel Computing '91 International Conference was a continuation of the series of conferences held in 1983, 1985 and 1989. The aim of this proceedings volume is to provide an overview of new and recent developments, applications and trends in parallel computing. The emphasis is on applications, with the invited lectures covering active topics including artificial intelligence, neural networks, parallel computer performance, and parallel numerical and non-numerical algorithms. Contributed papers address a wider variety of topics. Main Features: Surveys of recent work in parallel computing involving computer architectures, parallel software and algorithms, and applications. Recent work in parallel computing presented by active researchers. Information on parallel computing activities.
These proceedings are devoted to communicating significant developments in all areas pertinent to Parallel Symbolic Computation. The scope includes algorithms, languages, software systems, and applications in any area of parallel symbolic computation, where parallelism is interpreted broadly to include concurrent, distributed, and cooperative schemes, and so forth.
Parallel architectures are no longer pure research vehicles, as they were some years ago. There are now many commercial systems competing for market segments in scientific computing. The 1990s are likely to become the decade of parallel processing. CONPAR 90 - VAPP IV is the joint successor meeting of two highly successful international conference series in the field of vector and parallel processing. This volume contains the 79 papers presented at the conference. The various topics of the papers include hardware, software and application issues. Some of the session titles best reflect the contents: new models of computation, logic programming, large-grain data flow, interconnection networks, communication issues, reconfigurable and scalable systems, novel architectures and languages, high performance systems and accelerators, performance prediction / analysis / measurement, performance monitoring and debugging, compile-time analysis and restructurers, load balancing, process partitioning and concurrency control, visualization and runtime analysis, parallel linear algebra, architectures for image processing, efficient use of vector computers, transputer tools and applications, array processors, algorithmic studies for hypercube-type systems, systolic arrays and algorithms. The volume gives a comprehensive view of the state of the art in a field of current interest.
A comprehensive overview of OpenMP, the standard application programming interface for shared memory parallel computing—a reference for students and professionals. "I hope that readers will learn to use the full expressibility and power of OpenMP. This book should provide an excellent introduction to beginners, and the performance section should help those with some experience who want to push OpenMP to its limits." —from the foreword by David J. Kuck, Intel Fellow, Software and Solutions Group, and Director, Parallel and Distributed Solutions, Intel Corporation OpenMP, a portable programming interface for shared memory parallel computers, was adopted as an informal standard in 1997 by computer scientists who wanted a unified model on which to base programs for shared memory systems. OpenMP is now used by many software developers; it offers significant advantages over both hand-threading and MPI. Using OpenMP offers a comprehensive introduction to parallel programming concepts and a detailed overview of OpenMP. Using OpenMP discusses hardware developments, describes where OpenMP is applicable, and compares OpenMP to other programming interfaces for shared and distributed memory parallel architectures. It introduces the individual features of OpenMP, provides many source code examples that demonstrate the use and functionality of the language constructs, and offers tips on writing an efficient OpenMP program. It describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, discussing several case studies in detail, and offers in-depth troubleshooting advice. It explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance. Finally, Using OpenMP considers trends likely to influence OpenMP development, offering a glimpse of the possibilities of a future OpenMP 3.0 from the vantage point of the current OpenMP 2.5. With multicore computer use increasing, the need for a comprehensive introduction and overview of the standard interface is clear. Using OpenMP provides an essential reference not only for students at both undergraduate and graduate levels but also for professionals who intend to parallelize existing codes or develop new parallel programs for shared memory computer architectures.
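To make the kind of language construct the description refers to concrete, here is a minimal illustrative sketch (not taken from the book) of an OpenMP work-sharing loop in C; the file name, loop bounds, and printed output are illustrative assumptions, and it should compile with any OpenMP-capable compiler (e.g. gcc -fopenmp).

```c
/* Illustrative sketch only, not an example from Using OpenMP:
 * a parallel loop with a reduction clause, compiled with an
 * OpenMP-capable compiler such as gcc -fopenmp. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Work-sharing construct: loop iterations are divided among the
       threads of the team, and reduction(+:sum) combines each
       thread's partial sum into a single result. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("max threads: %d, harmonic sum: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```

The directive-based style shown here, where a sequential loop is annotated rather than rewritten around explicit threads, is the contrast with hand-threading and MPI that the description draws.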
Parallel computer architectures are now moving into real applications! This fact is demonstrated by the large number of application areas covered in this book (see the section on applications of parallel computer architectures). The applications range from image analysis to quantum mechanics and databases. Still, the use of parallel architectures poses serious problems and requires the development of new techniques and tools. This book is a collection of the best papers presented at the first workshop on two major research activities at the Universität Erlangen-Nürnberg and the Technische Universität München. At both universities, more than 100 researchers are working in the field of multiprocessor systems and network configurations and on methods and tools for parallel systems. Indeed, the German Science Foundation (Deutsche Forschungsgemeinschaft) has been sponsoring the projects under grant numbers SFB 182 and SFB 342. Research grants in the form of a Sonderforschungsbereich are given to selected German universities in three-year installments following a thorough reviewing process. The overall duration of such a research grant is restricted to 12 years. The initiative at Erlangen-Nürnberg was started in 1987 and has been headed since then by Prof. Dr. H. Wedekind. Work at TU München began in 1990; the head of this initiative is Prof. Dr. A. Bode. The authors of this book are grateful to the Deutsche Forschungsgemeinschaft for its continuing support of research on parallel processing. The first section of the book is devoted to hardware aspects of parallel systems.
This book presents the proceedings of the First International EURO-PAR Conference on Parallel Processing, held in Stockholm, Sweden in August 1995. EURO-PAR is the merger of the former PARLE and CONPAR-VAPP conference series; the aim of this merger is to create the premier annual scientific conference on parallel processing in Europe. The book presents 50 revised full research papers and 11 posters selected from a total of 196 submissions on the basis of 582 reviews. The scope of the contributions spans the full spectrum of parallel processing, ranging from theory through design to applications; thus the volume is a "must" for anybody interested in the scientific aspects of parallel processing or its advanced applications.