This book introduces new compilation techniques, based on the polyhedron model, for the resource-adaptive parallel execution of loop programs on massively parallel processor arrays. The authors show how to compute optimal symbolic assignments and parallel schedules of loop iterations at compile time for cases where the number of available cores becomes known only at runtime. The compile-time/runtime symbolic parallelization approach they describe significantly reduces runtime overhead compared to dynamic or just-in-time compilation. The book also describes a new, on-demand fault-tolerant loop processing approach that protects parallel loop-nest execution against soft errors.
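As a rough, hypothetical illustration of the underlying idea (not the authors' actual technique), the following C sketch computes a block schedule whose bounds are closed-form expressions in the trip count n and the core count p; the formula can be fixed at compile time even though p itself is supplied only at runtime. The function and variable names are invented for this example.

```c
#include <stdio.h>

/* Minimal sketch: a block schedule parameterized symbolically in the
 * trip count n and the core count p. The closed-form bounds are derived
 * once, "at compile time"; only the value of p is bound at runtime. */
static void iteration_range(long n, long p, long core,
                            long *lo, long *hi)
{
    long chunk = (n + p - 1) / p;               /* ceil(n / p) */
    *lo = core * chunk;
    *hi = (*lo + chunk < n) ? *lo + chunk : n;  /* clamp the last block */
}

int main(void)
{
    long n = 100;   /* loop trip count */
    long p = 4;     /* imagine this value arrives only at runtime */
    for (long core = 0; core < p; core++) {
        long lo, hi;
        iteration_range(n, p, core, &lo, &hi);
        printf("core %ld executes iterations [%ld, %ld)\n", core, lo, hi);
    }
    return 0;
}
```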
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination, and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables, which covers most, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time, and with remarkable results, to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
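For readers unfamiliar with these transformations, here is a minimal, self-contained C example (constructed for this description, not taken from the book) of the kind of rewrite that induction variable analysis enables: the multiplication i * 4 is recognized as a linear induction expression and replaced by a cheaper additive update.

```c
#include <stdio.h>

int main(void)
{
    enum { N = 8 };
    int a[N], b[N];

    /* Before: the multiply i * 4 is evaluated on every iteration. */
    for (int i = 0; i < N; i++)
        a[i] = i * 4;

    /* After strength reduction: analysis identifies i * 4 as a linear
     * induction expression with step 4, so the multiply becomes an
     * additive update of a new induction variable t. */
    for (int i = 0, t = 0; i < N; i++, t += 4)
        b[i] = t;

    for (int i = 0; i < N; i++)
        printf("%d %d\n", a[i], b[i]);  /* both columns agree */
    return 0;
}
```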
Containing over 300 entries in an A-Z format, the Encyclopedia of Parallel Computing provides easy, intuitive access to relevant information for professionals and researchers seeking information on any aspect of the broad field of parallel computing. Topics for this comprehensive reference were selected, written, and peer-reviewed by an international pool of distinguished researchers in the field. The Encyclopedia is broad in scope, covering machine organization, programming languages, algorithms, and applications. Within each area, concepts, designs, and specific implementations are presented. The highly structured essays in this work comprise synonyms, a definition and discussion of the topic, bibliographies, and links to related literature. Extensive cross-references to other entries within the Encyclopedia support efficient, user-friendly searches for immediate access to useful information. Key concepts presented in the Encyclopedia of Parallel Computing include: laws and metrics; specific numerical and non-numerical algorithms; asynchronous algorithms; libraries of subroutines; benchmark suites; applications; sequential consistency and cache coherency; machine classes such as clusters, shared-memory multiprocessors, special-purpose machines, and dataflow machines; specific machines such as Cray supercomputers, IBM's Cell processor, and Intel's multicore machines; race detection and automatic parallelization; and parallel programming languages, synchronization primitives, collective operations, message passing libraries, checkpointing, and operating systems. Topics covered: speedup, efficiency, isoefficiency, redundancy, Amdahl's law, computer architecture concepts, parallel machine designs, benchmarks, parallel programming concepts and design, algorithms, and parallel applications. This authoritative reference will be published in two formats: print and online. The online edition features hyperlinks to cross-references and to additional significant research. Related subjects: supercomputing, high-performance computing, distributed computing
This book contains papers selected for presentation at the Sixth Annual Workshop on Languages and Compilers for Parallel Computing. The workshop was hosted by the Oregon Graduate Institute of Science and Technology. All the major research efforts in parallel languages and compilers are represented in this workshop series. The 36 papers in the volume are grouped under nine headings: dynamic data structures, parallel languages, High Performance Fortran, loop transformation, logic and dataflow language implementations, fine-grain parallelism, scalar analysis, parallelizing compilers, and analysis of parallel programs. The book represents a valuable snapshot of the state of research in the field in 1993.
Scalable parallel systems, or more generally distributed memory systems, offer a challenging model of computing and pose fascinating problems for compiler optimization, ranging from language design to runtime systems. Research in this area is foundational to many challenges, from memory hierarchy optimization to communication optimization. This unique, handbook-like monograph assesses the state of the art in the area in a systematic and comprehensive way. The 21 coherent chapters by leading researchers provide complete and competent coverage of all relevant aspects of compiler optimization for scalable parallel systems. The book is divided into five parts on languages, analysis, communication optimizations, code generation, and runtime systems. It will serve as a landmark source of education, information, and reference for students, practitioners, professionals, and researchers interested in updating their knowledge of parallel computing or active in the field.
These proceedings are devoted to communicating significant developments in all areas pertinent to parallel symbolic computation. The scope includes algorithms, languages, software systems, and applications in any area of parallel symbolic computation, where parallelism is interpreted broadly to include concurrent, distributed, and cooperative schemes.
This three-volume work presents a compendium of current and seminal papers on parallel/distributed processing offered at the 22nd International Conference on Parallel Processing, held August 16-20, 1993 in Chicago, Illinois. Topics include processor architectures; mapping algorithms to parallel systems; performance evaluation; fault diagnosis, recovery, and tolerance; cube networks; portable software; synchronization; compilers; hypercube computing; and image processing and graphics. Computer professionals in parallel processing, distributed systems, and software engineering will find this book essential to their complete computer reference library.
This book presents the refereed proceedings of the Eighth Annual Workshop on Languages and Compilers for Parallel Computing, held in Columbus, Ohio in August 1995. The 38 revised full papers presented were carefully selected for inclusion in the proceedings and reflect the state of the art of research and advanced applications in parallel languages, restructuring compilers, and runtime systems. The papers are organized in sections on fine-grain parallelism, interprocedural analysis, program analysis, Fortran 90 and HPF, loop parallelization for HPF compilers, tools and libraries, loop-level optimization, automatic data distribution, compiler models, irregular computation, and object-oriented and functional parallelism.
This two-volume set (LNCS 7203 and 7204) constitutes the refereed proceedings of the 9th International Conference on Parallel Processing and Applied Mathematics, PPAM 2011, held in Torun, Poland, in September 2011. The 130 revised full papers presented in both volumes were carefully reviewed and selected from numerous submissions. The papers address issues such as parallel/distributed architectures and mobile computing; numerical algorithms and parallel numerics; parallel non-numerical algorithms; tools and environments for parallel/distributed/grid computing; applications of parallel/distributed computing; applied mathematics, neural networks, and evolutionary computing; and the history of computing.