This Festschrift volume, published in honour of J. Ian Munro, contains contributions written by some of his colleagues, former students, and friends. In celebration of his 66th birthday, the colloquium "Conference on Space Efficient Data Structures, Streams and Algorithms" was held in Waterloo, ON, Canada, on August 15-16, 2013. The articles presented herein cover some of the main topics of Ian's research interests. Together they give a good overall perspective on the last 40 years of research in algorithms and data structures.
Massive modern datasets make traditional data structures and algorithms grind to a halt. This fun and practical guide introduces cutting-edge techniques that can reliably handle even the largest distributed datasets. In Algorithms and Data Structures for Massive Datasets you will learn:

- Probabilistic sketching data structures for practical problems
- Choosing the right database engine for your application
- Evaluating and designing efficient on-disk data structures and algorithms
- Understanding the algorithmic trade-offs involved in massive-scale systems
- Deriving basic statistics from streaming data
- Correctly sampling streaming data
- Computing percentiles with limited space resources

Algorithms and Data Structures for Massive Datasets reveals a toolbox of new methods that are perfect for handling modern big data applications. You'll explore the novel data structures and algorithms that underpin Google, Facebook, and other enterprise applications that work with truly massive amounts of data. These effective techniques can be applied to any discipline, from finance to text analysis. Graphics, illustrations, and hands-on industry examples make complex ideas practical to implement in your projects, and there are no mathematical proofs to puzzle over. Work through this one-of-a-kind guide, and you'll find the sweet spot of saving space without sacrificing your data's accuracy.

About the technology: Standard algorithms and data structures may become slow, or fail altogether, when applied to large distributed datasets. Choosing algorithms designed for big data saves time, increases accuracy, and reduces processing cost. This unique book distills cutting-edge research papers into practical techniques for sketching, streaming, and organizing massive datasets on disk and in the cloud.

About the book: Algorithms and Data Structures for Massive Datasets introduces processing and analytics techniques for large distributed data. Packed with industry stories and entertaining illustrations, this friendly guide makes even complex concepts easy to understand. You'll explore real-world examples as you learn to map powerful algorithms like Bloom filters, Count-min sketch, HyperLogLog, and LSM-trees to your own use cases.

What's inside:

- Probabilistic sketching data structures
- Choosing the right database engine
- Designing efficient on-disk data structures and algorithms
- Algorithmic tradeoffs in massive-scale systems
- Computing percentiles with limited space resources

About the reader: Examples in Python, R, and pseudocode.

About the author: Dzejla Medjedovic earned her PhD in the Applied Algorithms Lab at Stony Brook University, New York. Emin Tahirovic earned his PhD in biostatistics from the University of Pennsylvania. Illustrator Ines Dedovic earned her PhD at the Institute for Imaging and Computer Vision at RWTH Aachen University, Germany.

Table of Contents:
1 Introduction
PART 1 HASH-BASED SKETCHES
2 Review of hash tables and modern hashing
3 Approximate membership: Bloom and quotient filters
4 Frequency estimation and count-min sketch
5 Cardinality estimation and HyperLogLog
PART 2 REAL-TIME ANALYTICS
6 Streaming data: Bringing everything together
7 Sampling from data streams
8 Approximate quantiles on data streams
PART 3 DATA STRUCTURES FOR DATABASES AND EXTERNAL MEMORY ALGORITHMS
9 Introducing the external memory model
10 Data structures for databases: B-trees, Bε-trees, and LSM-trees
11 External memory sorting
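As a taste of the sketching structures listed above, here is a minimal Bloom filter in Python (the book's examples are in Python, but this sketch is illustrative and not taken from the book; the bit-array size, the number of hash functions, and the use of SHA-256 are assumptions made for the example):

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: a bit array plus k hash functions.

        add() sets k bits per item; might_contain() reports True only if
        all k bits are set.  False positives are possible (two items can
        collide on all k bits), false negatives are not.
        """

        def __init__(self, num_bits=8192, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8 + 1)

        def _positions(self, item):
            # Derive k bit positions by salting the hash with an index.
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.num_bits

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("alice@example.com")
    print(bf.might_contain("alice@example.com"))  # True
    print(bf.might_contain("bob@example.com"))    # False with high probability

The space/accuracy trade-off the book centres on is visible here: the filter uses a fixed 1 KB of bits no matter how many items are added, at the cost of a tunable false-positive rate.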
This book constitutes the proceedings of the 13th International Conference and Workshop on Algorithms and Computation, WALCOM 2019, held in Guwahati, India, in February/March 2019. The 30 full papers presented were carefully reviewed and selected from 100 submissions. The papers are organized under topical headings: the facility location problem; computational geometry; graph drawing; graph algorithms; approximation algorithms; miscellaneous; data structures; parallel and distributed algorithms; and packing and covering.
This volume constitutes the refereed proceedings of the 26th International Symposium on String Processing and Information Retrieval, SPIRE 2019, held in Segovia, Spain, in October 2019. The 28 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 59 submissions. They cover topics such as: data compression; information retrieval; string algorithms; algorithms; computational biology; indexing and compression; and compressed data structures.
This book constitutes the refereed proceedings of the 13th International Conference on Advanced Data Mining and Applications, ADMA 2017, held in Singapore in November 2017. The 20 full and 38 short papers presented in this volume were carefully reviewed and selected from 118 submissions. The papers were organized in topical sections named: database and distributed machine learning; recommender system; social network and social media; machine learning; classification and clustering methods; behavior modeling and user profiling; bioinformatics and medical data analysis; spatio-temporal data; natural language processing and text mining; data mining applications; applications; and demos.
This book constitutes the proceedings of the 24th International Conference on Computing and Combinatorics, COCOON 2018, held in Qingdao, China, in July 2018. The 62 papers presented in this volume were carefully reviewed and selected from 120 submissions. They deal with the areas of algorithms, theory of computation, computational complexity, and combinatorics related to computing.
Comprehensive Coverage of the Entire Area of Classification. Research on the problem of classification tends to be fragmented across such areas as pattern recognition, databases, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying algorithms of classification as well as their applications.
This textbook provides a rigorous introduction to online algorithms for graduate and senior undergraduate students. In-depth coverage of most of the important topics is presented with special emphasis on elegant analysis. A wide range of solved examples and practice exercises are included, allowing hands-on exposure to the basic concepts.
Describes several useful paradigms for the design and implementation of efficient external memory (EM) algorithms and data structures. The problem domains considered include sorting, permuting, FFT, scientific computing, computational geometry, graphs, databases, geographic information systems, and text and string processing.
Streaming problems are algorithmic problems characterized chiefly by their massive input streams. Because the length of the input stream generally exceeds the available storage, algorithms for these problems are forced to be space-efficient. In this thesis, the two streaming problems most frequent item and number of distinct items are studied in detail with respect to their algorithmic complexity, and we ask whether verifying a hypothesized solution has lower complexity than computing a solution from the data stream. For this analysis, we introduce concepts for proving space complexity lower bounds in an approximate setting and for hypothesis verification.

For the most frequent item problem, which consists of identifying the item with the highest number of occurrences in the data stream, we prove a linear space lower bound in both the deterministic and the probabilistic setting. This implies that, in practice, the problem cannot be solved satisfactorily, since every algorithm must exceed any reasonable storage limit. For some settings, the upper and lower bounds are almost tight, which implies that our algorithm is almost optimal. Even for small approximation ratios we can prove a linear lower bound, but not for larger ones; nevertheless, we are not able to design an algorithm that solves the most frequent item problem space-efficiently for large approximation ratios. Furthermore, if we want to verify whether a hypothesized highest frequency count is correct, we obtain exactly the same space lower bounds, which leads to the conclusion that we are unlikely to profit from a stated hypothesis.

The number of distinct items problem asks for the number of different elements in the input stream. Solving this problem exactly (deterministically or probabilistically), or approximately with a deterministic algorithm, again requires linear storage, matching the upper bound. For the approximate probabilistic setting, however, we enhance a known space-efficient algorithm so that it works for arbitrarily small approximation ratios and arbitrarily good success probabilities. Hypothesis verification leads once again to the same lower bounds. Still, some streaming problems, such as the median problem, can profit from additional information such as a hypothesis.
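To make the approximate probabilistic setting for distinct counting concrete, here is a minimal k-minimum-values sketch in Python. It is an illustrative stand-in for the kind of space-efficient estimator discussed above, not the thesis's own algorithm; the class name, the use of SHA-256 as the hash, and k = 256 are assumptions made for the example:

    import bisect
    import hashlib

    def hash01(item):
        """Map an item to a pseudo-random float in [0, 1)."""
        digest = hashlib.sha256(str(item).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2.0**64

    class KMVSketch:
        """k-minimum-values sketch for approximate distinct counting.

        Keeps only the k smallest hash values seen, so space is O(k)
        regardless of the stream length.  If hashes are uniform in [0, 1),
        the k-th smallest value is about k/n for n distinct items, giving
        the estimator (k - 1) / (k-th smallest value).
        """

        def __init__(self, k=256):
            self.k = k
            self.mins = []  # sorted list of at most k smallest hash values

        def add(self, item):
            v = hash01(item)
            i = bisect.bisect_left(self.mins, v)
            if i < len(self.mins) and self.mins[i] == v:
                return                 # duplicate item: nothing changes
            if len(self.mins) < self.k:
                self.mins.insert(i, v)
            elif v < self.mins[-1]:
                self.mins.insert(i, v)
                self.mins.pop()        # drop the largest to keep k values

        def estimate(self):
            if len(self.mins) < self.k:
                return float(len(self.mins))  # fewer than k distinct: exact
            return (self.k - 1) / self.mins[-1]

    # Example: a stream with 10,000 distinct items, each seen three times.
    sketch = KMVSketch(k=256)
    for _ in range(3):
        for x in range(10_000):
            sketch.add(f"item-{x}")
    print(sketch.estimate())  # close to 10000, using only 256 stored values

The design point is the one the abstract makes: the sketch stores a constant number of hash values no matter how long the stream is, and enlarging k trades space for a smaller approximation ratio and higher success probability.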