Scalable Shared-Memory Multiprocessing

Author: Daniel E. Lenoski

Publisher: Elsevier

Published: 2014-06-28

Total Pages: 364

ISBN-10: 1483296016

Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.


Evolution of an Operating System for Large-scale Shared-memory Multiprocessors

Author: Michael Lee Scott

Publisher:

Published: 1989

Total Pages: 23

ISBN-13:

The Psyche project is characterized by (1) a design that permits the implementation of multiple models of parallelism, both within and among applications, (2) the ability to trade protection for performance, with information sharing as the default, rather than the exception, (3) explicit, user-level control of process structure and scheduling, and (4) a kernel implementation that uses shared memory itself, and that provides users with the illusion of uniform memory access times.
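
Point (3), explicit user-level control of process structure and scheduling, is perhaps the easiest of these design decisions to see in miniature. The sketch below is a hypothetical illustration in Python, not Psyche's actual interface: generators stand in for user-level threads, the application itself decides which thread runs next, and all threads operate on the same shared state.

```python
# Hypothetical sketch (not Psyche's API): user-level scheduling in the spirit
# of point (3), with the application, not the kernel, choosing which thread
# of control runs next. Python generators stand in for user-level threads;
# each yield marks a voluntary scheduling point.
from collections import deque

def worker(name, shared, n):
    for i in range(n):
        shared[name] = i          # state visible to every thread (shared memory)
        yield                     # hand control back to the user-level scheduler

def round_robin(threads):
    """Run generator-based threads in round-robin order, entirely in user space."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)               # resume the thread until its next yield
            ready.append(t)       # still runnable: requeue it
        except StopIteration:
            pass                  # thread finished

shared_state = {}
round_robin([worker("a", shared_state, 3), worker("b", shared_state, 2)])
print(shared_state)               # {'a': 2, 'b': 1}
```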


Scalable Shared Memory Multiprocessors

Author: Michel Dubois

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 326

ISBN-10: 1461536049

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and participants certainly did not refrain from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question; we were even unable to agree on a definition of "scalability". Authors had more than six months to prepare their manuscripts, so the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions can be partitioned into four categories: (1) Access Order and Synchronization, (2) Performance, (3) Cache Protocols and Architectures, and (4) Distributed Shared Memory. Particular topics on which new ideas and results are presented include efficient schemes for combining networks, formal specification of shared-memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.


An Efficient and General Implementation of Futures on Large Scale Shared-memory Multiprocessors [microform]

Author: Marc Feeley

Publisher: Ann Arbor, Mich. : University Microfilms International

Published: 1993

Total Pages: 213

ISBN-13:

The difference in performance is as high as a factor of two when a cache is available and a factor of 1.2 when a cache is not available. In addition, the thesis shows that the semantics of the Multilisp language does not have to be impoverished to attain good performance. The laziness of LTC can be exploited to support, at virtually no cost, several programming features, including the Katz-Weise continuation semantics with legitimacy, dynamic scoping, and fairness.
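
For readers who have not seen the construct, a future is a placeholder for a value that another task may still be computing; touching the future waits for the value if necessary. The sketch below shows the idea using Python's standard library; it is only an illustration of the construct, not Feeley's Multilisp implementation or the lazy task creation (LTC) scheme the thesis evaluates.

```python
# Illustrative only: futures via Python's standard library, not Multilisp or LTC.
# A future is a placeholder for a value that may be computed in parallel;
# calling .result() ("touching" the future) blocks until the value is ready.
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor() as pool:
    f = pool.submit(fib, 25)      # create a future: fib(25) may run concurrently
    other = fib(20)               # the parent keeps working meanwhile
    print(other + f.result())     # touch the future, waiting if necessary
```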


Shared Memory Multiprocessing

Author: Norihisa Suzuki

Publisher: MIT Press

Published: 1992

Total Pages: 534

ISBN-13: 9780262193221

Shared memory multiprocessors are becoming the dominant architecture for small-scale parallel computation. This book is the first to provide a coherent review of current research in shared memory multiprocessing in the United States and Japan. It focuses particularly on scalable architecture that will be able to support hundreds of microprocessors as well as on efficient and economical ways of connecting these fast microprocessors. The 20 contributions are divided into sections covering the experience to date with multiprocessors, cache coherency, software systems, and examples of scalable shared memory multiprocessors.