Describes several useful paradigms for the design and implementation of efficient external memory (EM) algorithms and data structures. The problem domains considered include sorting, permuting, FFT, scientific computing, computational geometry, graphs, databases, geographic information systems, and text and string processing.
Massive modern datasets make traditional data structures and algorithms grind to a halt. This fun and practical guide introduces cutting-edge techniques that can reliably handle even the largest distributed datasets. In Algorithms and Data Structures for Massive Datasets you will learn:
- Probabilistic sketching data structures for practical problems
- Choosing the right database engine for your application
- Evaluating and designing efficient on-disk data structures and algorithms
- Understanding the algorithmic trade-offs involved in massive-scale systems
- Deriving basic statistics from streaming data
- Correctly sampling streaming data
- Computing percentiles with limited space resources

Algorithms and Data Structures for Massive Datasets reveals a toolbox of new methods that are perfect for handling modern big data applications. You'll explore the novel data structures and algorithms that underpin Google, Facebook, and other enterprise applications that work with truly massive amounts of data. These effective techniques can be applied to any discipline, from finance to text analysis. Graphics, illustrations, and hands-on industry examples make complex ideas practical to implement in your projects, and there are no mathematical proofs to puzzle over. Work through this one-of-a-kind guide, and you'll find the sweet spot of saving space without sacrificing your data's accuracy.

About the technology: Standard algorithms and data structures may become slow, or fail altogether, when applied to large distributed datasets. Choosing algorithms designed for big data saves time, increases accuracy, and reduces processing cost. This unique book distills cutting-edge research papers into practical techniques for sketching, streaming, and organizing massive datasets on disk and in the cloud.

About the book: Algorithms and Data Structures for Massive Datasets introduces processing and analytics techniques for large distributed data. Packed with industry stories and entertaining illustrations, this friendly guide makes even complex concepts easy to understand. You'll explore real-world examples as you learn to map powerful algorithms like Bloom filters, Count-min sketch, HyperLogLog, and LSM-trees to your own use cases.

What's inside:
- Probabilistic sketching data structures
- Choosing the right database engine
- Designing efficient on-disk data structures and algorithms
- Algorithmic tradeoffs in massive-scale systems
- Computing percentiles with limited space resources

About the reader: Examples in Python, R, and pseudocode.

About the author: Dzejla Medjedovic earned her PhD in the Applied Algorithms Lab at Stony Brook University, New York. Emin Tahirovic earned his PhD in biostatistics from the University of Pennsylvania. Illustrator Ines Dedovic earned her PhD at the Institute for Imaging and Computer Vision at RWTH Aachen University, Germany.

Table of Contents:
1 Introduction
PART 1 HASH-BASED SKETCHES
2 Review of hash tables and modern hashing
3 Approximate membership: Bloom and quotient filters
4 Frequency estimation and count-min sketch
5 Cardinality estimation and HyperLogLog
PART 2 REAL-TIME ANALYTICS
6 Streaming data: Bringing everything together
7 Sampling from data streams
8 Approximate quantiles on data streams
PART 3 DATA STRUCTURES FOR DATABASES AND EXTERNAL MEMORY ALGORITHMS
9 Introducing the external memory model
10 Data structures for databases: B-trees, Bε-trees, and LSM-trees
11 External memory sorting
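For a flavor of the sketching structures this book covers, here is a minimal Bloom filter sketched in Java. The class name and the double-hashing scheme are illustrative assumptions, not the book's own implementation (its examples are in Python, R, and pseudocode); the point is the trade-off such structures make: membership queries in tiny space, at the cost of occasional false positives and never false negatives.

```java
import java.util.BitSet;

// A minimal, illustrative Bloom filter for strings: k bit positions per item,
// derived by double hashing from the string's hash code.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash probes per item

    public SimpleBloomFilter(int numBits, int numHashes) {
        this.m = numBits;
        this.k = numHashes;
        this.bits = new BitSet(numBits);
    }

    // The i-th probe position via double hashing: (h1 + i * h2) mod m.
    private int position(String item, int i) {
        int h1 = item.hashCode();
        int h2 = (h1 >>> 16) | 1;          // force the step to be odd so probes spread out
        return Math.floorMod(h1 + i * h2, m);
    }

    public void add(String item) {
        for (int i = 0; i < k; i++) {
            bits.set(position(item, i));
        }
    }

    // May return false positives, never false negatives.
    public boolean mightContain(String item) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(item, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1 << 16, 4);
        filter.add("alice@example.com");
        System.out.println(filter.mightContain("alice@example.com")); // true
        System.out.println(filter.mightContain("bob@example.com"));   // almost certainly false
    }
}
```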
Algorithms that process large data sets must take into account that the cost of a memory access depends on where the data is stored. Traditional algorithm design is based on the von Neumann model, in which all memory accesses have uniform cost. Actual machines increasingly deviate from this model: while waiting for a memory access, a modern microprocessor can in principle execute on the order of 1,000 register additions, and for hard disk accesses this factor can reach six orders of magnitude. The 16 coherent chapters in this monograph-like tutorial book introduce and survey algorithmic techniques used to achieve high performance on memory hierarchies; the emphasis is on methods that are interesting from a theoretical point of view as well as important from a practical one.
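As a rough, self-contained illustration of this non-uniform cost (not an example from the book; absolute timings depend entirely on the machine and JVM), the following Java program sums the same array once in sequential order and once in a random order:

```java
import java.util.Random;

// Sums the same array sequentially (cache-friendly) and in a shuffled order
// (cache-unfriendly) to illustrate non-uniform memory access cost.
public class MemoryAccessDemo {
    public static void main(String[] args) {
        int n = 1 << 24;                    // 16M ints, about 64 MB per array
        int[] data = new int[n];
        int[] order = new int[n];
        Random rng = new Random(42);
        for (int i = 0; i < n; i++) {
            data[i] = i;
            order[i] = i;
        }
        // Fisher-Yates shuffle of the visiting order to defeat prefetching.
        for (int i = n - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        long t0 = System.nanoTime();
        long seqSum = 0;
        for (int i = 0; i < n; i++) seqSum += data[i];
        long t1 = System.nanoTime();

        long rndSum = 0;
        for (int i = 0; i < n; i++) rndSum += data[order[i]];
        long t2 = System.nanoTime();

        System.out.printf("sequential: %d ms, random: %d ms (sums %d / %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, seqSum, rndSum);
    }
}
```

On typical hardware the random-order pass is several times slower than the sequential one, even though both perform exactly the same additions; this gap is the motivation for the memory-hierarchy-aware techniques the book surveys.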
This practical text contains fairly "traditional" coverage of data structures with a clear and complete use of algorithm analysis, and some emphasis on file processing techniques as relevant to modern programmers. It fully integrates OO programming with these topics, as part of the detailed presentation of OO programming itself. Chapter topics include lists, stacks, and queues; binary and general trees; graphs; file processing and external sorting; searching; indexing; and limits to computation. For programmers who need a good reference on data structures.
Dmytro's study of software engineering, particularly software economics and developer productivity, influenced this book's emphasis on simplicity and preference for solution methods applicable to a variety of problems. --Back cover
Algorithms are at the heart of every nontrivial computer application, and algorithmics is a modern and active area of computer science. Every computer scientist and every professional programmer should know about the basic algorithmic toolbox: structures that allow efficient organization and retrieval of data, frequently used algorithms, and basic techniques for modeling, understanding, and solving algorithmic problems. This book is a concise introduction addressed to students and professionals familiar with programming and basic mathematical language. Individual chapters cover arrays and linked lists, hash tables and associative arrays, sorting and selection, priority queues, sorted sequences, graph representation, graph traversal, shortest paths, minimum spanning trees, and optimization. The algorithms are presented in a modern way, with explicitly formulated invariants and comments on recent trends such as algorithm engineering, memory hierarchies, algorithm libraries, and certifying algorithms. The authors use pictures, words, and high-level pseudocode to explain the algorithms, and then present more detail on efficient implementations using real programming languages like C++ and Java. The authors have extensive experience teaching these subjects to undergraduates and graduates, and they offer a clear presentation, with examples, pictures, informal explanations, exercises, and some linkage to the real world. Most chapters have the same basic structure: a motivation for the problem, comments on the most important applications, and then simple solutions presented as informally as possible and as formally as necessary. For the more advanced issues, this approach leads to a more mathematical treatment, including some theorems and proofs. Finally, each chapter concludes with a section on further findings, providing views on the state of research, generalizations, and advanced solutions.
This comprehensive treatment focuses on the creation of efficient data structures and algorithms and on the selection or design of the data structure best suited to a specific problem. This edition uses Java as the programming language.
The papers in this volume were presented at the Sixth Workshop on Algorithms and Data Structures (WADS '99). The workshop took place August 11-14, 1999, in Vancouver, Canada. The workshop alternates with the Scandinavian Workshop on Algorithm Theory (SWAT), continuing the tradition of SWAT and WADS starting with SWAT'88 and WADS'89. In response to the program committee's call for papers, 71 papers were submitted. From these submissions, the program committee selected 32 papers for presentation at the workshop. In addition to these submitted papers, the program committee invited the following researchers to give plenary lectures at the workshop: C. Leiserson, N. Magnenat-Thalmann, M. Snir, U. Vazirani, and J. Vitter. On behalf of the program committee, we would like to express our appreciation to the six plenary lecturers who accepted our invitation to speak, to all the authors who submitted papers to WADS'99, and to the Pacific Institute for Mathematical Sciences for their sponsorship. Finally, we would like to express our gratitude to all the people who reviewed papers at the request of the program committee. August 1999, F. Dehne, A. Gupta, J.-R. Sack, R. Tamassia. Conference Chair: A. Gupta. Program Committee Chairs: F. Dehne, A. Gupta, J.-R. Sack, R. Tamassia. Program Committee: A. Andersson, A. Apostolico, G. Ausiello, G. Bilardi, K. Clarkson, R. Cleve, M. Cosnard, L. Devroye, P. Dymond, M. Farach-Colton, P. Fraigniaud, M. Goodrich, A.
The design and analysis of efficient data structures has long been recognized as a key component of the Computer Science curriculum. Goodrich, Tamassia, and Goldwasser's approach to this classic topic is based on the object-oriented paradigm as the framework of choice for the design of data structures. For each ADT presented in the text, the authors provide an associated Java interface. Concrete data structures realizing the ADTs are provided as Java classes implementing the interfaces. The Java code implementing fundamental data structures in this book is organized in a single Java package, net.datastructures. This package forms a coherent library of data structures and algorithms in Java specifically designed for educational purposes in a way that is complementary to the Java Collections Framework.
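The interface-plus-implementation pattern described above can be sketched as follows; the names StackADT and SimpleArrayStack are hypothetical simplifications for illustration, not the actual interfaces or classes from the net.datastructures package.

```java
// An ADT expressed as a Java interface, with a concrete class implementing it.
interface StackADT<E> {
    int size();
    boolean isEmpty();
    void push(E element);
    E top();      // returns, but does not remove, the top element
    E pop();      // removes and returns the top element
}

class SimpleArrayStack<E> implements StackADT<E> {
    private Object[] items = new Object[16];
    private int count = 0;

    public int size() { return count; }
    public boolean isEmpty() { return count == 0; }

    public void push(E element) {
        if (count == items.length) {                  // grow the backing array when full
            Object[] bigger = new Object[2 * items.length];
            System.arraycopy(items, 0, bigger, 0, count);
            items = bigger;
        }
        items[count++] = element;
    }

    @SuppressWarnings("unchecked")
    public E top() {
        if (isEmpty()) throw new IllegalStateException("stack is empty");
        return (E) items[count - 1];
    }

    public E pop() {
        E element = top();
        items[--count] = null;                        // clear the slot to help garbage collection
        return element;
    }

    public static void main(String[] args) {
        StackADT<String> stack = new SimpleArrayStack<>();
        stack.push("external");
        stack.push("memory");
        System.out.println(stack.pop() + " " + stack.top()); // prints: memory external
    }
}
```

Clients are written against the interface type (StackADT) rather than the concrete class, which is what lets the same code run unchanged against any implementation of the ADT.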