The Clustered Network File System (CNFS) is a capability based on IBM® General Parallel File System (GPFS™) running on Linux® which, when combined with System x® servers or BladeCenter® servers, IBM TotalStorage® Disk Systems, and Storage Area Network (SAN) components, provides a scalable file-services environment. This capability enables customers to run a GPFS data-serving cluster in which some or all of the nodes actively export the file system using NFS. This IBM Redpaper™ publication shows how Cluster NFS file services are delivered and supported today through the configurable order process of the IBM Intelligent Cluster. The audience for this paper includes executive and consultant decision makers and technical administrators who want to know how to implement this solution.
Fundamentals of Investments focuses on students as investment managers, giving them information to act on by placing theory and research in the proper context. The text offers strong, consistent pedagogy, including a balanced, unified treatment of the four main types of financial investments: stocks, bonds, options, and futures. Topics are organized in a way that makes them easy to apply—whether to a portfolio simulation or to real life—and supported with hands-on activities.
Until now, building and managing Linux clusters has required more intimate and specialized knowledge than most IT organizations possess. This book dramatically lowers the learning curve, bringing together all the hands-on knowledge and step-by-step techniques needed to get the job done.
The Green and Virtual Data Center sets aside the political aspects of what is or is not considered green and instead focuses on the opportunities for organizations that want to sustain environmentally friendly, economical growth. If you are willing to believe that IT infrastructure resources deployed in a highly virtualized manner can be combined with other technologies to achieve simplified and cost-effective delivery of services in a green, profitable manner, this book is for you. Savvy industry veteran Greg Schulz provides real-world insight, addressing best practices as well as server, software, storage, networking, and facilities issues for any current or next-generation virtual data center that relies on underlying physical infrastructure. Coverage includes energy and data footprint reduction; cloud-based storage and computing; intelligent and adaptive power management; server, storage, and networking virtualization; tiered servers, storage, networks, and data centers; and energy avoidance and energy efficiency. Many current and emerging technologies can enable a green and efficient virtual data center that supports and sustains business growth with a reasonable return on investment. The book presents virtually all critical IT technologies and techniques and discusses the interdependencies that must be supported to enable a dynamic, energy-efficient, economical, and environmentally friendly green IT data center. This is a path that every organization must ultimately follow. CRC Press is pleased to announce that The Green and Virtual Data Center has been added to Intel Corporation's Recommended Reading List. Intel's Recommended Reading program provides technical professionals a simple and handy reference list of what to read to stay abreast of new technologies. Dozens of industry technologists, corporate fellows, and engineers have helped by suggesting books and reviewing the list.
This is the most comprehensive reading list available for professional computer developers.
The Definitive Guide to File System Analysis: Key Concepts and Hands-on Techniques. Most digital evidence is stored within the computer's file system, but understanding how file systems work is one of the most technically challenging concepts for a digital investigator because so little documentation exists. Now, security expert Brian Carrier has written the definitive reference for everyone who wants to understand, and be able to testify about, how file system analysis is performed. Carrier begins with an overview of investigation and computer foundations and then gives an authoritative, comprehensive, and illustrated overview of contemporary volume and file systems: crucial information for discovering hidden evidence, recovering deleted data, and validating your tools. Along the way, he describes data structures, analyzes example disk images, provides advanced investigation scenarios, and uses today's most valuable open source file system analysis tools, including tools he personally developed. Coverage includes: preserving the digital crime scene and duplicating hard disks for "dead analysis"; identifying hidden data in a disk's Host Protected Area (HPA); reading source data, including direct versus BIOS access, dead versus live acquisition, and error handling; analyzing DOS, Apple, and GPT partitions, BSD disk labels, and the Sun Volume Table of Contents using key concepts, data structures, and specific techniques; analyzing the contents of multiple-disk volumes, such as RAID and disk spanning; analyzing FAT, NTFS, Ext2, Ext3, UFS1, and UFS2 file systems using key concepts, data structures, and specific techniques; finding evidence in file metadata, recovered deleted files, data-hiding locations, and more; and using The Sleuth Kit (TSK), Autopsy Forensic Browser, and related open source tools. When it comes to file system analysis, no other book offers this much detail or expertise.
Whether you're a digital forensics specialist, incident response team member, law enforcement officer, corporate security specialist, or auditor, this book will become an indispensable resource for forensic investigations, no matter what analysis tools you use.
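As a small illustration of the data-structure work this kind of analysis involves (a sketch of my own, not taken from the book), the snippet below parses a few standard BIOS Parameter Block fields from a FAT boot sector; the field offsets follow the published FAT specification, and the sample sector is synthetic:

```python
import struct

def parse_fat_bpb(sector):
    """Parse a few BIOS Parameter Block fields from a FAT boot sector.
    All multi-byte fields are little-endian, at offsets defined by the
    FAT specification."""
    bytes_per_sector, = struct.unpack_from("<H", sector, 11)
    sectors_per_cluster, = struct.unpack_from("<B", sector, 13)
    reserved_sectors, = struct.unpack_from("<H", sector, 14)
    num_fats, = struct.unpack_from("<B", sector, 16)
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "reserved_sectors": reserved_sectors,
        "num_fats": num_fats,
    }

# Build a synthetic 512-byte boot sector with typical FAT16 values
sector = bytearray(512)
struct.pack_into("<H", sector, 11, 512)   # 512 bytes per sector
struct.pack_into("<B", sector, 13, 4)     # 4 sectors per cluster
struct.pack_into("<H", sector, 14, 1)     # 1 reserved sector
struct.pack_into("<B", sector, 16, 2)     # 2 FAT copies
print(parse_fat_bpb(bytes(sector)))
```

In a real investigation the same parse would be run against the first sector of an acquired disk image rather than a synthetic buffer.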
One of the original developers of NFS and WebNFS offers unique insight into these key technologies, both for programmers creating and debugging NFS-based applications and for network engineers creating new implementations. Readers gain a deeper understanding of how network file protocols are designed and learn how NFS is implemented on UNIX, Windows NT, Java, and web browsers.
Thousands of IT organizations have adopted clustering to improve the availability of mission-critical software services. Today, with the rapid growth of cloud computing environments, clustering is even more crucial. Now, there's a comprehensive, authoritative guide to the industry's most stable, robust clustering platform: the Oracle Solaris Cluster. Oracle® Solaris Cluster Essentials thoroughly covers both Oracle Solaris Cluster 3.2 and Oracle Solaris Cluster Geographic Edition, offering start-to-finish lifecycle guidance for planning, implementation, management, and troubleshooting. Authored by Oracle Solaris Cluster expert Tim Read, this book covers both high availability and disaster recovery features, and offers detailed guidance for both Oracle and non-Oracle database environments. It also presents several example implementations that can be used to quickly construct effective proofs-of-concept. Whether you're new to clustering or upgrading from older solutions, this book brings together all the information you'll need to maximize the value, reliability, and performance of any Oracle Solaris Cluster environment. You'll learn how to: understand Oracle Solaris Cluster's product features and architecture, and their implications for design and performance; establish requirements and design clustered systems that reflect them; master best practices for integrating clustering with virtualization; implement proven disaster recovery planning techniques; and efficiently maintain Oracle Solaris Cluster environments. Part of the Solaris System Administration Series, Oracle® Solaris Cluster Essentials combines a complete technology introduction and hands-on guide for every architect, administrator, and IT manager responsible for high availability and business continuity.
Organizations today depend heavily on their data. Even short data outages can be expensive, causing lost productivity as well as financial consequences, while permanent data loss can be catastrophic. Reliable, efficient means of storing and accessing such data are therefore an important component of most large organizations' IT infrastructure. Much of this data is still stored in the most versatile format, the 'flat file'. This eBook provides both an academic and a historic perspective on the development of distributed file systems and details some of the core algorithms, such as the quorum protocols used in distributed storage systems. It can be used as a short, stand-alone introduction to the field or as a resource for an academic course on the topic.
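The quorum protocols mentioned here rest on a simple counting argument: with N replicas, a write quorum W and read quorum R are safe when W + R > N (every read overlaps the latest write) and 2W > N (two writes cannot both complete on disjoint replica sets). A minimal sketch of that check, assuming the classic read/write quorum formulation:

```python
def quorums_consistent(n, w, r):
    """Check the classic quorum conditions for n replicas:
    w + r > n  -> every read quorum overlaps every write quorum,
    2 * w > n  -> no two write quorums can be disjoint."""
    return w + r > n and 2 * w > n

# 5 replicas with write and read quorums of 3: overlapping, hence safe
print(quorums_consistent(5, 3, 3))  # True
# Quorums of 2 out of 5 can be disjoint, so stale reads are possible
print(quorums_consistent(5, 2, 2))  # False
```

Tuning W down (and R up, or vice versa) within these constraints is how such systems trade write latency against read latency.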
Pro Linux High Availability Clustering teaches you how to implement this fundamental Linux add-on in your business. Linux high availability clustering is needed to ensure the availability of mission-critical resources, and the technique is applied more and more in corporate datacenters around the world. While plenty of documentation about the subject is available on the internet, it isn't easy to build a real solution from that scattered information, which is often oriented toward specific tasks only. Pro Linux High Availability Clustering explains the essential high-availability clustering components on all Linux platforms, giving you the insight to build solutions for any specific case. Four common cases are explained: configuring Apache for high availability; creating an open source SAN based on DRBD, iSCSI, and HA clustering; setting up a load-balanced web server cluster with a highly available back-end database; and setting up a KVM virtualization platform with high-availability protection for a virtual machine. With the knowledge you gain from these real-world applications, you'll be able to apply Linux HA to your own work situation with confidence. Author Sander Van Vugt teaches Linux high-availability clustering in training courses, uses it in his everyday work, and now brings this knowledge to you in one place, with clear examples and cases. Make the best start with HA clustering with Pro Linux High Availability Clustering at your side.
System administrators and technical professionals will be able to understand and master the most critical part of Tru64 UNIX by using this easy-to-understand guide written by a file systems expert. This book also explains how to deploy Compaq's TruCluster clustering technology.