This final report of the Stanford Lisp Performance Study describes implementation techniques, performance tradeoffs, benchmarking techniques, and performance results for all of the major Lisp dialects in use today.
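For flavor, here is the Takeuchi (TAK) function in Common Lisp, one of the small call-intensive kernels associated with this line of Lisp benchmarking work (a minimal sketch; the test arguments shown are just the conventional case):

    (defun tak (x y z)
      ;; Deeply recursive, call-dominated kernel: a classic stress
      ;; test for function-call and argument-passing performance.
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))

    ;; Conventional test call: (tak 18 12 6) => 7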
Performance evaluation and benchmarking are of concern to all computer-related disciplines. A benchmark is a standard program or set of programs that can be run on different computers to give an accurate measure of their performance. This book covers a variety of aspects of computer performance evaluation, with a focus on Standard Performance Evaluation Corporation (SPEC) benchmarks. SPEC is a nonprofit organization whose members represent industry, academia, and other organizations. The book discusses rationales for creating and updating benchmarks, the use of benchmarks in academic research, benchmarking methodologies, the relation of SPEC benchmarks to other benchmarking activities, shortcomings of current benchmarks, and the need for further benchmarking efforts. Contributors: Brian Armstrong, Frederica Darema, Edward S. Davidson, Sylvia Dieckmann, Jozo J. Dujmovic, Rudolf Eigenmann, J. Kelly Flanagan, Greg Gaertner, Jonathan Geisler, John Gustafson, Urs Hölzle, Shih-Hao Hung, Kathryn S. McKinley, Reinhard Riedl, Faisal Saied, Frank Sorenson, Mark Straka, Valerie Taylor, Olivier Temam, Rajat Todi, Reinhold Weicker.
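As a concrete illustration of the definition above, here is a minimal wall-clock timing harness in Common Lisp (an illustrative sketch only; real benchmark suites such as SPEC control for much more, including warm-up, input sets, and reporting rules):

    (defun wall-seconds (thunk)
      ;; Time a single call to THUNK in wall-clock seconds, using the
      ;; standard Common Lisp real-time clock.
      (let ((start (get-internal-real-time)))
        (funcall thunk)
        (float (/ (- (get-internal-real-time) start)
                  internal-time-units-per-second))))

    ;; Example: (wall-seconds (lambda () (expt 3 100000)))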
Broadcast media, such as satellite, ground radio, and multipoint cable channels, can easily provide full connectivity for communication among geographically distributed users. One of the most important problems in the design of networks that can take practical advantage of broadcast channels (referred to as packet broadcast networks) is how to achieve efficient sharing of a single common channel. Many multiple access protocols, or algorithms, for packet broadcast networks have been proposed, and much work has been done on the performance evaluation of these protocols. A variety of techniques have been used to analyze their performance; however, this is the first book to provide a unified approach to the performance evaluation problem by means of an approximate analytical technique called equilibrium point analysis. Two types of packet broadcast networks, satellite networks and local area networks, are considered; eight multiple access protocols are studied and their performance analyzed in terms of throughput and average message delay. Contents: Part I: Fundamentals - Multiple Access Protocols and Performance - Equilibrium Point Analysis. Part II: Satellite Networks - S-ALOHA - R-ALOHA - ALOHA-Reservation - TDMA-Reservation - SRUC - TDMA - Performance Comparisons of the Protocols for Satellite Networks. Part III: Local Area Networks - Buffered CSMA/CD - BRAM. Performance Analysis of Multiple Access Protocols is included in the Computer Systems Series, Research Reports and Notes, edited by Herb Schwetman.
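For a concrete feel for the throughput metric, the sketch below estimates slotted-ALOHA (S-ALOHA) throughput by simple Monte Carlo simulation in Common Lisp. This is an illustrative toy, not the book's method: equilibrium point analysis is an approximate analytical technique, and the station count, transmission probability, and slot count here are arbitrary assumptions.

    (defun s-aloha-throughput (stations p slots)
      ;; Fraction of slots in which exactly one of STATIONS stations
      ;; transmits, each independently with probability P per slot.
      ;; A slot with zero transmitters is idle; two or more collide.
      (let ((successes 0))
        (dotimes (slot slots)
          (let ((transmitters 0))
            (dotimes (station stations)
              (when (< (random 1.0) p) (incf transmitters)))
            (when (= transmitters 1) (incf successes))))
        (/ successes (float slots))))

    ;; Example: (s-aloha-throughput 50 (/ 1.0 50) 100000)
    ;; tends toward N*p*(1-p)^(N-1), about 1/e (0.37) when p = 1/N.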
This book is an edited selection of the papers presented at the International Workshop on VLSI for Artificial Intelligence and Neural Networks, which was held at the University of Oxford in September 1990. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the IEEE Computer Society, and the IEE for publicizing the event, and to the University of Oxford and SUNY-Binghamton for their active support. We are particularly grateful to Anna Morris, Maureen Doherty and Laura Duffy for coping with the administrative problems. Jose Delgado-Frias, Will Moore, April 1991. PROLOGUE: Artificial intelligence and neural network algorithms/computing have increased in complexity as well as in the number of applications. This in turn has posed a tremendous need for more computational power than can be provided by conventional scalar processors, which are oriented towards numeric and data manipulations. Due to the artificial intelligence requirements (symbolic manipulation, knowledge representation, non-deterministic computations, and dynamic resource allocation) and the neural network computing approach (non-programming and learning), a different set of constraints and demands is imposed on the computer architectures for these applications.
Written by a Lisp expert, this is the most comprehensive tutorial on the advanced features of Lisp for experienced programmers. It shows how to program in the bottom-up style that is ideal for Lisp, and includes a unique, practical collection of Lisp programming techniques that shows how to take advantage of the language's design for efficient programming in a wide variety of applications.
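To give a flavor of the bottom-up style described above, here is a minimal Common Lisp sketch (these particular utilities are illustrative of the genre, not quoted from the book): small, general operators are defined first, and application code is then written in their vocabulary.

    ;; Small, general utilities come first...
    (defun single (lst)
      ;; True when LST is a list of exactly one element.
      (and (consp lst) (null (cdr lst))))

    (defun append1 (lst obj)
      ;; Return a copy of LST with OBJ added at the end.
      (append lst (list obj)))

    ;; ...and higher layers build on them, in effect growing Lisp
    ;; upward toward the application.
    ;; Examples: (single '(a)) => T, (append1 '(a b) 'c) => (A B C)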
One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however: computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated; programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX: highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp; the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that even Common Lisp abandoned dynamic scoping as the default). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.
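To make the dynamic-scoping remark concrete, here is a small illustrative sketch (Common Lisp made lexical scoping the default while keeping dynamic scope available through special variables):

    (defvar *depth* 0) ; DEFVAR proclaims *DEPTH* special, i.e. dynamically scoped

    (defun report ()
      ;; Reads whatever binding of *DEPTH* is current at call time.
      *depth*)

    (defun probe ()
      ;; LET rebinds the special variable for the dynamic extent of
      ;; its body, so REPORT sees the new value even though it is
      ;; defined elsewhere; a lexical variable rebound this way would
      ;; not be visible inside REPORT.
      (let ((*depth* 1))
        (report)))

    ;; (report) => 0   (probe) => 1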
PDSIA '99 was the fourth in a series of international workshops on parallel symbolic computing, a basic yet challenging area with wide applications in high-performance computing. As in the previous meetings, parallel symbolic languages and systems were the major topics. However, reflecting the latest advances in distributed computing systems, the workshop also encompassed wider perspectives in parallel and distributed computing for symbolic and irregular applications.