High-performance system architectures are increasingly exploiting heterogeneity. The HipHaC workshop aims at combining new aspects of parallel, heterogeneous, and reconfigurable microprocessor technologies with concepts of high-performance computing and, particularly, numerical solution methods. Compute- and memory-intensive applications can benefit from the full hardware potential only if all features on all levels are taken into account in a holistic approach.
Graph partitioning and graph clustering are ubiquitous subtasks in many applications where graphs play an important role. Generally speaking, both techniques aim at the identification of vertex subsets with many internal and few external edges. To name only a few, problems addressed by graph partitioning and graph clustering algorithms are: What are the communities within an (online) social network? How do I speed up a numerical simulation by mapping it efficiently onto a parallel computer? How must components be organized on a computer chip such that they can communicate efficiently with each other? What are the segments of a digital image? For which functions are certain genes (most likely) responsible? The 10th DIMACS Implementation Challenge Workshop was devoted to determining realistic performance of algorithms where worst-case analysis is overly pessimistic and probabilistic models are too unrealistic. Articles in the volume describe and analyze various experimental data with the goal of getting insight into realistic algorithm performance in situations where analysis fails.
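To make the shared objective concrete, here is a minimal sketch (in Python, not taken from the workshop volume) of the quantity graph partitioning typically minimizes: the number of edges whose endpoints fall into different blocks of the partition. The function and variable names are illustrative assumptions.

```python
# Minimal sketch: count the "cut" edges of a partition, i.e., edges whose
# endpoints lie in different blocks. Partitioning seeks to minimize this
# while keeping blocks balanced; clustering likewise favors many internal
# and few external edges.

def edge_cut(edges, block_of):
    """edges: iterable of (u, v) pairs; block_of: maps vertex -> block id."""
    return sum(1 for u, v in edges if block_of[u] != block_of[v])

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2),   # triangle inside block 0
         (3, 4), (4, 5), (3, 5),   # triangle inside block 1
         (2, 3)]                   # the one edge crossing the cut
block_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(edge_cut(edges, block_of))   # -> 1
```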
Although the highly anticipated petascale computers of the near future will perform at an order of magnitude faster than today's quickest supercomputer, the scaling up of algorithms and applications for this class of computers remains a tough challenge. The challenges range from scalable algorithm design for massive concurrency to performance analysis and scientific visualization.
This volume presents a compelling collection of state-of-the-art work in algorithmic computational biology, honoring the legacy of Professor Bernard M.E. Moret in this field. Reflecting the wide-ranging influences of Prof. Moret’s research, the coverage encompasses such areas as phylogenetic tree and network estimation, genome rearrangements, cancer phylogeny, species trees, divide-and-conquer strategies, and integer linear programming. Each self-contained chapter provides an introduction to a cutting-edge problem of particular computational and mathematical interest. Topics and features: addresses the challenges in developing accurate and efficient software for the NP-hard maximum likelihood phylogeny estimation problem; describes the inference of species trees, covering strategies to scale phylogeny estimation methods to large datasets, and the construction of taxonomic supertrees; discusses the inference of ultrametric distances from additive distance matrices, and the inference of ancestral genomes under genome rearrangement events; reviews different techniques for inferring evolutionary histories in cancer, from the use of chromosomal rearrangements to tumor phylogenetics approaches; examines problems in phylogenetic networks, including questions relating to discrete mathematics, and issues of statistical estimation; highlights how evolution can provide a framework within which to understand comparative and functional genomics; provides an introduction to Integer Linear Programming and its use in computational biology, including its application to the Traveling Salesman Problem. Offering an invaluable source of insights for computer scientists, applied mathematicians, and statisticians, this illuminating volume will also prove useful for graduate courses on computational biology and bioinformatics.
The ability of parallel computing to process large data sets and handle time-consuming operations has resulted in unprecedented advances in biological and scientific computing, modeling, and simulations. Exploring these recent developments, the Handbook of Parallel Computing: Models, Algorithms, and Applications provides comprehensive coverage of parallel models, algorithms, and applications.
Every area of science and engineering today has to process voluminous data sets. Using exact, or even approximate, algorithms to solve intractable problems in critical areas, such as computational biology, takes time that is exponential in some of the underlying parameters. Parallel computing addresses this issue and has become increasingly affordable.
The current exponential growth in graph data has forced a shift to parallel computing for executing graph algorithms. Implementing parallel graph algorithms and achieving good parallel performance have proven difficult. This book addresses these challenges by exploiting the well-known duality between a canonical representation of graphs as abstract collections of vertices and edges and a sparse adjacency matrix representation. This linear algebraic approach is widely accessible to scientists and engineers who may not be formally trained in computer science. The authors show how to leverage existing parallel matrix computation techniques and the large amount of software infrastructure that exists for these computations to implement efficient and scalable parallel graph algorithms. The benefits of this approach are reduced algorithmic complexity, ease of implementation, and improved performance.
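As a concrete illustration of that duality, the sketch below runs breadth-first search as repeated sparse matrix-vector products over the adjacency matrix. The use of SciPy and the name bfs_levels are assumptions made for illustration, not code from the book.

```python
# Minimal sketch: one BFS step == one sparse matrix-vector product.
import numpy as np
from scipy.sparse import csr_matrix

def bfs_levels(adj, source):
    """adj: symmetric CSR adjacency matrix; returns each vertex's BFS level
    from `source`, or -1 if the vertex is unreachable."""
    n = adj.shape[0]
    levels = np.full(n, -1, dtype=int)
    frontier = np.zeros(n, dtype=np.int8)
    frontier[source] = 1
    level = 0
    while frontier.any():
        levels[frontier > 0] = level
        # Expand the frontier with a sparse matrix-vector product ...
        reached = adj @ frontier
        # ... and keep only vertices not yet assigned a level.
        frontier = ((reached > 0) & (levels < 0)).astype(np.int8)
        level += 1
    return levels

# Toy path graph 0-1-2-3 stored as a sparse adjacency matrix.
rows = [0, 1, 1, 2, 2, 3]
cols = [1, 0, 2, 1, 3, 2]
adj = csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))
print(bfs_levels(adj, 0))  # -> [0 1 2 3]
```

Because each traversal step is an ordinary matrix-vector product, the same structure maps directly onto existing parallel sparse-matrix libraries, which is the leverage the book describes.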
No single solution applied at one particular layer can help applications solve all performance-related issues with communication services. Instead, this book shows that a coordinated effort is needed among the layers. It covers many different types of technologies and layers across the stack, from the architectural features of the hardware, through the protocols and their implementation in operating system kernels, to the manner in which application services and middleware are using underlying platforms. The book also describes key developments in high-end platforms, high performance interconnection fabrics and communication libraries, and multi- and many-core systems.
Combinatorial Scientific Computing explores the latest research on creating algorithms and software tools to solve key combinatorial problems on large-scale high-performance computing architectures. It includes contributions from international researchers who are pioneers in designing software and applications for high-performance computing systems. The book offers a state-of-the-art overview of the latest research, tool development, and applications. It focuses on load balancing and parallelization on high-performance computers, large-scale optimization, algorithmic differentiation of numerical simulation code, sparse matrix software tools, and combinatorial challenges and applications in large-scale social networks. The authors unify these seemingly disparate areas through a common set of abstractions and algorithms based on combinatorics, graphs, and hypergraphs. Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations and their importance continues to grow with the demands of new applications and advanced architectures. By addressing current challenges in the field, this volume sets the stage for the accelerated development and deployment of fundamental enabling technologies in high-performance scientific computing.