Transactional Memory

Author: Tim Harris

Publisher: Morgan & Claypool Publishers

Published: 2010

Total Pages: 247

ISBN-13: 9781608452354

The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010.
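
The commit-or-abort behavior described above can be made concrete with a small sketch of a software transactional memory in Python. This is a toy illustration rather than any design from the book: the TVar/atomically names echo common STM literature, and the coarse single-lock commit with read-set validation is simply one of the easiest schemes to state correctly.

import threading

class TVar:
    """A transactional variable: a value paired with a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serializes commits; coarse but simple

def atomically(tx_fn):
    """Run tx_fn(read, write) atomically: buffer writes, record the
    versions of everything read, and at commit time validate the read
    set under a lock. If any TVar changed underneath the transaction,
    abort and re-execute; otherwise apply the write set in its entirety."""
    while True:
        read_set = {}   # TVar -> version observed
        write_set = {}  # TVar -> buffered new value

        def read(tv):
            if tv in write_set:        # read-your-own-writes
                return write_set[tv]
            read_set[tv] = tv.version
            return tv.value

        def write(tv, value):
            write_set[tv] = value

        result = tx_fn(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in read_set.items()):
                for tv, value in write_set.items():
                    tv.value = value
                    tv.version += 1
                return result
        # validation failed: a concurrent commit intervened, so retry

# Example: a transfer that is never observable in a half-done state.
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 10),
                                write(b, read(b) + 10)))
assert a.value == 90 and b.value == 10

The retry path on failed validation, and the fact that the write set is applied all at once or not at all, are exactly the commits-in-its-entirety-or-aborts semantics the description refers to.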


Fault-Tolerant Message-Passing Distributed Systems

Author: Michel Raynal

Publisher: Springer

Published: 2018-09-08

Total Pages: 468

ISBN-13: 9783319941417

This book presents the most important fault-tolerant distributed programming abstractions and their associated distributed algorithms, in particular in terms of reliable communication and agreement, which lie at the heart of nearly all distributed applications. These programming abstractions (distributed objects or services) allow software designers and programmers to cope with asynchrony and the most important types of failures, such as process crashes, message losses, and malicious behaviors of computing entities, the latter widely known as "Byzantine fault-tolerance". The author introduces these notions in an incremental manner, starting from a clear specification, followed by algorithms which are first described intuitively and then proved correct. The book also presents impossibility results in classic distributed computing models, along with strategies, mainly failure detectors and randomization, that allow us to enrich these models. In this sense, the book constitutes an introduction to the science of distributed computing, with applications in all domains of distributed systems, such as cloud computing and blockchains. Each chapter comes with exercises and bibliographic notes to help the reader approach, understand, and master the fascinating field of fault-tolerant distributed computing.
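
The reliable-broadcast abstraction mentioned above has a classic one-rule implementation for crash failures: the first time a process receives a message, it relays the message to everyone before delivering it, so that even if the original sender crashes mid-broadcast, any process that got the message completes the dissemination. Below is a toy synchronous sketch under those assumptions (crash-stop processes, lossless channels); the class and method names are invented for the illustration.

class Process:
    """Crash-stop process implementing eager reliable broadcast by relaying."""
    def __init__(self, pid, network):
        self.pid = pid
        self.network = network  # shared list of all processes
        self.delivered = set()
        self.crashed = False

    def broadcast(self, msg):
        self.receive(msg)  # broadcasting is receiving your own message

    def receive(self, msg):
        if self.crashed or msg in self.delivered:
            return
        self.delivered.add(msg)      # deliver each message exactly once
        for p in self.network:       # relay rule: forward on first receipt
            if p is not self:
                p.receive(msg)

network = []
procs = [Process(i, network) for i in range(4)]
network.extend(procs)

procs[0].broadcast("m1")
procs[1].crashed = True
procs[0].broadcast("m2")
# Agreement: every correct process delivered the same set of messages.
assert all(p.delivered == {"m1", "m2"} for p in procs if not p.crashed)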


Concurrent Programming: Algorithms, Principles, and Foundations

Author: Michel Raynal

Publisher: Springer Science & Business Media

Published: 2012-12-30

Total Pages: 530

ISBN-13: 9783642320279

This book is devoted to the most difficult part of concurrent programming, namely synchronization concepts, techniques and principles when the cooperating entities are asynchronous, communicate through a shared memory, and may experience failures. Thanks to research results from recent decades, synchronization is no longer a set of tricks but rests on sound scientific foundations, as explained in this book. In this book the author explains synchronization and the implementation of concurrent objects, presenting in a uniform and comprehensive way the major theoretical and practical results of the past 30 years. Among the key features of the book are: a new look at lock-based synchronization (mutual exclusion, semaphores, monitors, path expressions); an introduction to the atomicity consistency criterion and its properties, with a specific chapter on transactional memory; an introduction to mutex-freedom and associated progress conditions such as obstruction-freedom and wait-freedom; a presentation of Lamport's hierarchy of safe, regular and atomic registers and associated wait-free constructions; a description of numerous wait-free constructions of concurrent objects (queues, stacks, weak counters, snapshot objects, renaming objects, etc.); a presentation of the computability power of concurrent objects, including the notions of universal construction, consensus number and the associated Herlihy's hierarchy; and a survey of failure-detector-based constructions of consensus objects. The book is suitable for advanced undergraduate students and graduate students in computer science or computer engineering, graduate students in mathematics interested in the foundations of process synchronization, and practitioners and engineers who need to produce correct concurrent software. The reader should have a basic knowledge of algorithms and operating systems.
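
Of the lock-based topics listed, mutual exclusion is the most classical, and Peterson's two-thread algorithm is the textbook entry point. The sketch below assumes sequentially consistent shared memory; that assumption holds well enough in CPython for a demo, but on real hardware the algorithm needs memory fences, so treat it as a teaching sketch, not production code.

import threading

flag = [False, False]  # flag[i]: thread i wants to enter its critical section
turn = 0               # which thread yields when both want in
counter = 0            # shared state protected by the critical section

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True
        turn = other                        # politely let the other go first
        while flag[other] and turn == other:
            pass                            # busy-wait (spin)
        counter += 1                        # critical section
        flag[i] = False                     # exit protocol

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 when mutual exclusion holds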


Database Internals

Author: Alex Petrov

Publisher: O'Reilly Media

Published: 2019-09-13

Total Pages: 373

ISBN-13: 9781492040316

When it comes to choosing, using, and maintaining a database, understanding its internals is essential. But with so many distributed databases and tools available today, it’s often difficult to understand what each one offers and how they differ. With this practical guide, Alex Petrov guides developers through the concepts behind modern database and storage engine internals. Throughout the book, you’ll explore relevant material gleaned from numerous books, papers, blog posts, and the source code of several open source databases. These resources are listed at the end of parts one and two. You’ll discover that the most significant distinctions among many modern databases reside in subsystems that determine how storage is organized and how data is distributed. This book examines:
Storage engines: Explore storage classification and taxonomy, and dive into B-Tree-based and immutable Log-Structured storage engines, with differences and use cases for each
Storage building blocks: Learn how database files are organized to build efficient storage, using auxiliary data structures such as the Page Cache, Buffer Pool, and Write-Ahead Log
Distributed systems: Learn step by step how nodes and processes connect and build complex communication patterns
Database clusters: See which consistency models are commonly used by modern databases and how distributed storage systems achieve consistency
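
Of the storage building blocks above, the Write-Ahead Log is the one with the shortest possible illustration: make the change durable in the log before applying it, and rebuild state by replaying the log after a crash. A minimal sketch; the record format, file name, and function names are invented for this example.

import json
import os

LOG_PATH = "wal.log"  # hypothetical log file for this sketch

def put(store, key, value):
    """Write-ahead rule: append and fsync the log record *before*
    mutating the data structure it describes."""
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"op": "put", "key": key, "value": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())  # durable before the store is touched
    store[key] = value

def recover():
    """Rebuild the store by replaying every logged operation in order."""
    store = {}
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as log:
            for line in log:
                record = json.loads(line)
                if record["op"] == "put":
                    store[record["key"]] = record["value"]
    return store

store = recover()
put(store, "user:1", "alice")
assert recover() == store  # a crash right after put() would lose nothing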


Distributed Computing

Author: Nancy A. Lynch

Publisher: Springer Science & Business Media

Published: 2010-09

Total Pages: 547

ISBN-13: 9783642157622

This book constitutes the refereed proceedings of the 24th International Symposium on Distributed Computing, DISC 2010, held in Cambridge, MA, USA, in September 2010. The 32 revised full papers, selected from 135 submissions, are presented together with 14 brief announcements of ongoing work; all of them were carefully reviewed and selected for inclusion in the book. The papers address all aspects of distributed computing and are organized in topical sections on transactions, shared memory services and concurrency, wireless networks, best student paper, consensus and leader election, mobile agents, computing in wireless and mobile networks, modeling issues and adversity, and self-stabilizing and graph algorithms.


Transactional Memory, Second Edition

Author: Tim Harris

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 247

ISBN-13: 9783031017285

The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010. Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions


Be sparse! Be dense! Be robust!

Author: Manuel Sorge

Publisher: Universitätsverlag der TU Berlin

Published: 2017-05-31

Total Pages: 272

ISBN-13: 9783798328853

In this thesis we study the computational complexity of five NP-hard graph problems. It is widely accepted that, in general, NP-hard problems cannot be solved efficiently, that is, in polynomial time, due to many unsuccessful attempts to prove the contrary. Hence, we aim to identify properties of the inputs, other than their length, that make the problem tractable or intractable. We measure these properties via parameters, mappings that assign to each input a nonnegative integer. For a given parameter k, we then attempt to design fixed-parameter algorithms, algorithms that on input q have running time upper bounded by f(k(q)) * |q|^c, where f is a preferably slowly growing function, |q| is the length of q, and c is a constant, preferably small. In each of the graph problems treated in this thesis, our input represents the setting in which we shall find a solution graph. In addition, the solution graphs shall have a certain property specific to our five graph problems. This property comes in three flavors. First, we look for a graph that shall be sparse! That is, it shall contain few edges. Second, we look for a graph that shall be dense! That is, it shall contain many edges. Third, we look for a graph that shall be robust! That is, it shall remain a good solution, even when it suffers several small modifications.

Be sparse! In this part of the thesis, we analyze two similar problems. The input for both of them is a hypergraph H, which consists of a vertex set V and a family E of subsets of V, called hyperedges. The task is to find a support for H, a graph G such that for each hyperedge W in E we have that G[W] is connected. Motivated by applications in network design, we study SUBSET INTERCONNECTION DESIGN, where we additionally get an integer f, and the support shall contain at most |V| - f + 1 edges. We show that SUBSET INTERCONNECTION DESIGN admits a fixed-parameter algorithm with respect to the number of hyperedges in the input hypergraph, and a fixed-parameter algorithm with respect to f + d, where d is the size of a largest hyperedge. Motivated by an application in hypergraph visualization, we study r-OUTERPLANAR SUPPORT, where the support for H shall be r-outerplanar, that is, admit an edge-crossing-free embedding in the plane with at most r layers. We show that r-OUTERPLANAR SUPPORT admits a fixed-parameter algorithm with respect to m + r, where m is the number of hyperedges in the input hypergraph H.
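
The support property just defined is easy to state in code, even though computing a smallest support is NP-hard: a graph G is a support for a hypergraph H exactly when every hyperedge induces a connected subgraph of G. A small checker follows; the function name and encodings are chosen for this sketch.

from collections import defaultdict

def is_support(edges, hyperedges):
    """Return True iff the graph given by the edge list is a support:
    for every hyperedge W, the induced subgraph G[W] is connected."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for W in hyperedges:
        W = set(W)
        if len(W) <= 1:
            continue
        start = next(iter(W))
        seen, stack = {start}, [start]
        while stack:               # graph search restricted to G[W]
            x = stack.pop()
            for y in adj[x] & W:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        if seen != W:              # some vertex of W is unreachable in G[W]
            return False
    return True

# A path a-b-c supports {{a,b}, {b,c}, {a,b,c}} using only |V| - 1 edges.
assert is_support([("a", "b"), ("b", "c")],
                  [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}])
assert not is_support([("a", "b")], [{"a", "c"}])
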
Be dense! In this part of the thesis, we study two problems motivated by community detection in social networks. Herein, the input is a graph G and an integer k. We look for a subgraph G' of G containing (exactly) k vertices which adheres to one of two mathematically precise definitions of being dense. In mu-CLIQUE, 0 < mu <= 1, the sought k-vertex subgraph G' should contain at least mu * (k choose 2) edges. We study the complexity of mu-CLIQUE with respect to three parameters of the input graph G: the maximum vertex degree delta, the h-index h, and the degeneracy d. We have delta >= h >= d in every graph, and both h and d assume small values in graphs derived from social networks. For delta and for h, we obtain fixed-parameter algorithms for mu-CLIQUE, and we show that a fixed-parameter algorithm with respect to d + k is unlikely to exist. We prove the positive algorithmic results by developing a general framework for optimizing objective functions over k-vertex subgraphs. In HIGHLY CONNECTED SUBGRAPH we look for a k-vertex subgraph G' in which each vertex shall have degree at least floor(k/2) + 1. We analyze a part of the so-called parameter ecology for HIGHLY CONNECTED SUBGRAPH, that is, we navigate the space of possible parameters in a quest to find a reasonable trade-off between small parameter values in practice and efficient running time guarantees. The highlights are that no 2^o(n) * n^c-time algorithms are possible for n-vertex input graphs unless the Exponential Time Hypothesis fails; that there is an O(4^g * n^2)-time algorithm, where g is the number of edges outgoing from the solution G'; and that we derive a 2^(O(sqrt(a) * log a)) + O(a^2 * n * m)-time algorithm, where a is the number of edges not in the solution.

Be robust! In this part of the thesis, we study the VECTOR CONNECTIVITY problem, where we are given a graph G, a vertex labeling ell from V(G) to {1, ..., d}, and an integer k. We are to find a vertex subset S of V(G) of size at most k such that each vertex v in V(G) \ S has ell(v) vertex-disjoint paths from v to S in G. Such a set S is useful when placing servers in a network to satisfy robustness-of-service demands. We prove that VECTOR CONNECTIVITY admits a randomized fixed-parameter algorithm with respect to k, that it does not allow a polynomial kernelization with respect to k + d, but that, if d is treated as a constant, it allows a vertex-linear kernelization with respect to k.
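
As with the support property, the solution properties of the dense problems are cheap to verify once a candidate subgraph is in hand; the hardness lies entirely in finding it. Below is a sketch of the check for the HIGHLY CONNECTED SUBGRAPH condition from "Be dense!", with names invented for the example.

def is_highly_connected(vertices, edges):
    """Check the HIGHLY CONNECTED SUBGRAPH solution property: in the
    chosen k-vertex subgraph, every vertex has degree at least
    floor(k/2) + 1, counting only edges inside the subgraph."""
    vs = set(vertices)
    k = len(vs)
    degree = {v: 0 for v in vs}
    for u, v in edges:
        if u in vs and v in vs:
            degree[u] += 1
            degree[v] += 1
    return all(d >= k // 2 + 1 for d in degree.values())

# K4 qualifies: k = 4 and every vertex has degree 3 >= 4 // 2 + 1.
k4_edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
assert is_highly_connected(range(4), k4_edges)
# A 4-cycle does not: every vertex has degree 2 < 3.
assert not is_highly_connected(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])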


Networked Systems

Author: Guevara Noubir

Publisher: Springer

Published: 2014-08-02

Total Pages: 363

ISBN-13: 9783319095813

This book constitutes the revised selected papers of the Second International Conference on Networked Systems, NETYS 2014, held in Marrakech, Morocco, in May 2014. The 20 full papers and the 6 short papers presented together with 2 keynotes were carefully reviewed and selected from 80 submissions. They address major topics such as multi-core architectures; concurrent and distributed algorithms; middleware environments; storage clusters; social networks; peer-to-peer networks; sensor networks; wireless and mobile networks; as well as privacy and security measures to protect such networked systems and data from attack and abuse.


Handbook of Fiber Optic Data Communication

Author: Casimer DeCusatis

Publisher: Elsevier Inc. Chapters

Published: 2013-08-09

Total Pages: 30

ISBN-13: 9780128068281

All modern data centers require some form of data backup or replication to protect the data from natural or man-made disasters and provide business continuity. Companies rely on their information systems to run daily operations. If a system becomes unavailable, company operations may be impaired or stopped completely. If critical data remains inaccessible for an extended period, the company may never recover and be forced to go out of business. It is necessary to provide a reliable infrastructure for IT operations in order to minimize any chance of disruption. In this chapter, we define the requirements for Tier 1 through Tier 4 data centers. We discuss the ACID (atomicity, consistency, isolation, durability) and BASE (basically available, soft state, eventual consistency) taxonomies for data consistency, giving examples from companies such as Yahoo!, Amazon, Google, and IBM. The chapter includes a detailed discussion of the different options for IBM Geographically Dispersed Parallel Sysplex (GDPS), an enterprise-class high-end business continuity and disaster recovery solution, including the Sysplex Timer protocol, InterSystem Channel (ISC), Parallel Sysplex InfiniBand (PSIFB), and more.
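
The BASE side of that taxonomy can be made concrete with a toy last-writer-wins replicated store: each replica accepts writes locally, reads may be stale for a while, and replicas converge once they exchange state. This is a sketch only; the timestamping and merge rule below are textbook simplifications invented for the example, not the actual protocols of the companies named.

class LWWReplica:
    """Toy eventually consistent key-value replica. Writes are stamped
    with a logical timestamp; anti-entropy merges keep the entry with
    the highest stamp (last-writer-wins). Ties and clock skew are
    ignored here, which real systems must handle."""
    def __init__(self):
        self.data = {}   # key -> (timestamp, value)
        self.clock = 0

    def put(self, key, value):
        self.clock += 1
        self.data[key] = (self.clock, value)

    def get(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None

    def merge(self, other):
        """Anti-entropy: adopt any entry the other replica has newer."""
        for key, (ts, value) in other.data.items():
            if key not in self.data or self.data[key][0] < ts:
                self.data[key] = (ts, value)
        self.clock = max(self.clock, other.clock)

r1, r2 = LWWReplica(), LWWReplica()
r1.put("x", "v1")
assert r2.get("x") is None      # soft state: r2 has not converged yet
r2.merge(r1)
r1.merge(r2)
assert r1.get("x") == r2.get("x") == "v1"  # eventual consistency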