UNDERSTANDING PARALLEL SUPERCOMPUTING is an exhaustive, applications-oriented survey of the world's largest and fastest computers. Beginning with the recent evolution of parallel supercomputing technology, author R. Michael Hord goes on to illustrate architectural concepts and implementations at the very center of today's cutting-edge technology. Topics featured include: technology benefits and drawbacks, software tools and programming languages, major programming concepts, sample parallel programs, algorithmic methods, and both SIMD and MIMD architectures. This carefully written text will be of interest to engineers, scientists, and program managers involved in geologic exploration, aircraft design, image processing, weather modeling, operations research, chemical synthesis, and medical applications. It will also be of practical use to computer specialists.
Parallel Supercomputing in MIMD Architectures is devoted to supercomputing on a wide variety of Multiple-Instruction-Multiple-Data (MIMD)-class parallel machines. This book describes architectural concepts, commercial and research hardware implementations, major programming concepts, algorithmic methods, representative applications, and benefits and drawbacks. Commercial machines described include Connection Machine 5, NCUBE, Butterfly, Meiko, Intel iPSC, iPSC/2 and iWarp, DSP3, Multimax, Sequent, and Teradata. Research machines covered include the J-Machine, PAX, Concert, and ASP. Operating systems, languages, translating sequential programs to parallel, and semiautomatic parallelizing are aspects of MIMD software addressed in Parallel Supercomputing in MIMD Architectures. MIMD issues such as scalability, partitioning, processor utilization, and heterogeneous networks are discussed as well. Packed with important information and richly illustrated with diagrams and tables, Parallel Supercomputing in MIMD Architectures is an essential reference for computer professionals, program managers, applications system designers, scientists, engineers, and students in the computer sciences.
A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics, and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers, with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop algorithms for frequently used mathematical computations. They also devise performance models, measure the performance characteristics of several computers, and create a high-performance computing facility based exclusively on parallel computers. By addressing all issues involved in scientific problem solving, Parallel Computing Works! provides valuable insight into computational science for large-scale parallel architectures. For those in the sciences, the findings reveal the usefulness of an important experimental tool. Anyone in supercomputing and related computational fields will gain a new perspective on the potential contributions of parallelism. Includes over 30 full-color illustrations.
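The hypercube emphasis translates into a concrete communication pattern: in a d-dimensional hypercube, the neighbor along dimension k is the node whose binary address differs in bit k, so a global sum can be formed by exchanging and combining along each dimension in turn. The following minimal C/MPI sketch of that all-reduce pattern is illustrative only; it is not taken from the book and assumes the number of ranks is a power of two.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double value = (double)rank;  /* each node's local contribution */

        /* Flip one address bit per step: after log2(size) exchanges,
           every node holds the global sum. Assumes size is 2^d. */
        for (int mask = 1; mask < size; mask <<= 1) {
            int partner = rank ^ mask;
            double recv;
            MPI_Sendrecv(&value, 1, MPI_DOUBLE, partner, 0,
                         &recv,  1, MPI_DOUBLE, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            value += recv;
        }

        printf("rank %d: global sum = %g\n", rank, value);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and run under mpirun, each of the 2^d ranks finishes with the same sum after d nearest-neighbor exchanges, the kind of fine-grained, architecture-aware algorithm the book examines.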
Weather forecasting and climatology have traditionally been users of the world's fastest supercomputers. The recent emergence of massively parallel supercomputers as likely successors to current vector supercomputers has created an acute need to convert weather and climate models to suit parallel supercomputers with thousands of processors. Several major efforts are underway worldwide to accomplish this. ECMWF has established itself as the central venue at which operational weather forecasters, climate researchers, and parallel computer manufacturers meet every second year to share their experience of these efforts. Recent dramatic developments in supercomputer manufacturing made the 1992 ECMWF Workshop more timely than ever.
A complete source of information on almost all aspects of parallel computing, from introductory material through architectures, programming paradigms, and algorithms, to programming standards. It covers traditional computer science algorithms, scientific computing algorithms, and data-intensive algorithms.
Supercomputers play a significant and growing role in a variety of areas important to the nation. They are used to address challenging science and technology problems. In recent years, however, progress in supercomputing in the United States has slowed. Japan's development of the Earth Simulator supercomputer raised concerns that the United States could lose its competitive advantage and, more importantly, the national competence needed to achieve national goals. In the wake of this development, the Department of Energy asked the NRC to assess the state of U.S. supercomputing capabilities and relevant R&D. Subsequently, the Senate directed DOE in S. Rpt. 107-220 to ask the NRC to evaluate the Advanced Simulation and Computing program of the National Nuclear Security Administration at DOE in light of the development of the Earth Simulator. This report provides an assessment of the current status of supercomputing in the United States, including a review of current demand and technology, infrastructure and institutions, and international activities. The report also presents a number of recommendations to enable the United States to meet current and future needs for capability supercomputers.
An overview of the most prominent contemporary parallel processing programming models, written in a unique tutorial style. With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today. The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations. Contributors: Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng
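As an illustrative taste of the on-node, shared-memory style covered in the book's later chapters, here is a minimal OpenMP sketch in C. It is a toy example of the model, not drawn from the book; the loop bound and the quantity being summed are arbitrary choices.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* The runtime splits the iterations across threads; the
           reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1.0);  /* partial harmonic series */
        }

        printf("up to %d threads: sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }

Compiled with a flag such as -fopenmp, the same source runs serially or in parallel, which reflects the incremental, directive-based approach that distinguishes OpenMP from the message-passing and task-oriented models described above.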
Programming Massively Parallel Processors: A Hands-on Approach, Second Edition, teaches students how to program massively parallel processors. It offers a detailed discussion of various techniques for constructing parallel programs. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. This guide shows students and professionals alike the basic concepts of parallel programming and GPU architecture. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. This revised edition contains more parallel programming examples, commonly used libraries such as Thrust, and explanations of the latest tools. It also provides new coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more; increased coverage of related technology, including OpenCL, and new material on algorithm patterns, GPU clusters, host programming, and data parallelism; and two new case studies (on MRI reconstruction and molecular visualization) that explore the latest applications of CUDA and GPUs for scientific research and high-performance computing. This book should be a valuable resource for advanced students, software engineers, programmers, and hardware engineers.
- New coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more
- Increased coverage of related technology, including OpenCL, and new material on algorithm patterns, GPU clusters, host programming, and data parallelism
- Two new case studies (on MRI reconstruction and molecular visualization) that explore the latest applications of CUDA and GPUs for scientific research and high-performance computing
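To give a flavor of the CUDA style the book teaches, here is a minimal vector-addition sketch in CUDA C. It is illustrative only; the vector length and block size are arbitrary choices, and error checking is omitted for brevity.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each thread computes one element of c = a + b. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        int threads = 256;
        vecAdd<<<(n + threads - 1) / threads, threads>>>(d_a, d_b, d_c, n);

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expect 3.0)\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

The kernel launch maps a familiar serial loop onto thousands of hardware threads; the computational-thinking step the book emphasizes is exactly this decomposition of a problem into independent, index-addressed units of work.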
This uniquely comprehensive book brings together the vast amount of technical, economic, and political information and the analyses of supercomputing that have hitherto been buried in the frequently inaccessible "gray literature." Seventy-nine distinguished participants in the second Frontiers of Supercomputing conference offer perceptive and often controversial views on the emerging computing environment in the United States. This title is part of UC Press's Voices Revived program, which commemorates University of California Press’s mission to seek out and cultivate the brightest minds and give them voice, reach, and impact. Drawing on a backlist dating to 1893, Voices Revived makes high-quality, peer-reviewed scholarship accessible once again using print-on-demand technology. This title was originally published in 1994.