As data management and integration continue to evolve rapidly, storing all your data in one place, such as a data warehouse, is no longer scalable. In the very near future, data will need to be distributed and available to a variety of technology solutions. With this practical book, you’ll learn how to migrate your enterprise from a complex and tightly coupled data landscape to a more flexible architecture ready for the modern world of data consumption. Executives, data architects, analytics teams, and compliance and governance staff will learn how to build a modern, scalable data landscape using the Scaled Architecture, which you can introduce incrementally without a large upfront investment. Author Piethein Strengholt provides blueprints, principles, observations, best practices, and patterns to get you up to speed. - Examine data management trends, including technological developments, regulatory requirements, and privacy concerns - Go deep into the Scaled Architecture and learn how the pieces fit together - Explore data governance and data security, master data management, self-service data marketplaces, and the importance of metadata
The Only Complete Technical Primer for MDM Planners, Architects, and Implementers. Companies moving toward flexible SOA architectures often face difficult information management and integration challenges. The master data they rely on is often stored and managed in ways that are redundant, inconsistent, inaccessible, non-standardized, and poorly governed. Using Master Data Management (MDM), organizations can regain control of their master data, improve corresponding business processes, and maximize its value in SOA environments. Enterprise Master Data Management provides an authoritative, vendor-independent MDM technical reference for practitioners: architects, technical analysts, consultants, solution designers, and senior IT decision-makers. Written by the IBM® data management innovators who are pioneering MDM, this book systematically introduces MDM’s key concepts and technical themes, explains its business case, and illuminates how it interrelates with and enables SOA. Drawing on their experience with cutting-edge projects, the authors introduce MDM patterns, blueprints, solutions, and best practices published nowhere else—everything you need to establish a consistent, manageable set of master data, and use it for competitive advantage. Coverage includes: - How MDM and SOA complement each other - Using the MDM Reference Architecture to position and design MDM solutions within an enterprise - Assessing the value of and risks to master data and applying the right security controls - Using PIM-MDM and CDI-MDM Solution Blueprints to address industry-specific information management challenges - Explaining MDM patterns as enablers to accelerate consistent MDM deployments - Incorporating MDM solutions into existing IT landscapes via MDM Integration Blueprints - Leveraging master data as an enterprise asset—bringing people, processes, and technology together with MDM and data governance - Best practices in MDM deployment, including data warehouse and SAP integration
Data stewards in business and IT are the backbone of a successful data governance implementation because they do the work to make a company's data trusted, dependable, and high quality. Data Stewardship explains everything you need to know to successfully implement the stewardship portion of data governance, including how to organize, train, and work with data stewards, get high-quality business definitions and other metadata, and perform the day-to-day tasks using a minimum of the steward's time and effort. David Plotkin has loaded this book with practical advice on stewardship so you can get right to work, have early successes, and measure and communicate those successes, gaining more support for this critical effort. - Provides clear and concise practical advice on implementing and running data stewardship, including guidelines on how to organize based on company structure, business functions, and data ownership - Shows how to gain support for your stewardship effort, maintain that support over the long term, and measure the success of the data stewardship effort and report back to management - Includes detailed lists of responsibilities for each type of data steward and strategies to help the Data Governance Program Office work effectively with the data stewards
Executing Data Quality Projects, Second Edition presents a structured yet flexible approach for creating, improving, sustaining, and managing the quality of data and information within any organization. Studies show that data quality problems are costing businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. Help is here! This book describes a proven Ten Steps approach that combines a conceptual framework for understanding information quality with techniques, tools, and instructions for practically putting the approach to work – with the end result of high-quality trusted data and information, so critical to today's data-dependent organizations. The Ten Steps approach applies to all types of data and all types of organizations – for-profit in any industry, non-profit, government, education, healthcare, science, research, and medicine. This book includes numerous templates, detailed examples, and practical advice for executing every step. At the same time, readers are advised on how to select relevant steps and apply them in different ways to best address the many situations they will face. The layout allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, best practices, and warnings. The experience of actual clients and users of the Ten Steps provides real examples of outputs for the steps, plus highlighted sidebar case studies called Ten Steps in Action. This book uses projects as the vehicle for data quality work and uses the word "project" broadly to include: 1) focused data quality improvement projects, such as improving data used in supply chain management, 2) data quality activities in other projects such as building new applications and migrating data from legacy systems, integrating data because of mergers and acquisitions, or untangling data due to organizational breakups, and 3) ad hoc use of data quality steps, techniques, or activities in the course of daily work. The Ten Steps approach can also be used to enrich an organization's standard SDLC (whether sequential or Agile), and it complements general improvement methodologies such as Six Sigma or Lean. No two data quality projects are the same, but the flexible nature of the Ten Steps means the methodology can be applied to all. The new Second Edition highlights topics such as artificial intelligence and machine learning, Internet of Things, security and privacy, analytics, legal and regulatory requirements, data science, big data, data lakes, and cloud computing, among others, to show their dependence on data and information and why data quality is more relevant and critical now than ever before. - Includes concrete instructions, numerous templates, and practical advice for executing every step of the Ten Steps approach - Contains real examples from around the world, gleaned from the author's consulting practice and from those who implemented the approach based on her training courses and the earlier edition of the book - Allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices - A companion Web site includes links to numerous data quality resources, including many of the templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information that are available online
What is metadata and what do I need to know about it? These are two key questions for the information professional operating in the digital age as more and more information resources are available in electronic format. This is a thought-provoking introduction to metadata written by one of its leading advocates. It assesses the current theory and practice of metadata and examines key developments - including global initiatives and multilingual issues - in terms of both policy and technology. Subjects discussed include: - What is metadata? Definitions and concepts - Retrieval environments: web; library catalogues; documents and records management; GIS; e-Learning - Using metadata to enhance retrieval: pointing to content; subject retrieval; language control and indexing - Information management issues: interoperability; information security; authority control; authentication and legal admissibility of evidence; records management and document lifecycle; preservation issues - Application of metadata to information management: document and records management; content management systems for the internet - Managing metadata: how to develop a schema - Standards development: Dublin Core; UK Government metadata standards (eGIF); IFLA FRBR Model for cataloguing resources - Looking forward: the semantic web; the Web Ontology Working Group. Readership: This book will be essential reading for network-oriented librarians and information workers in all sectors and for LIS students. In addition, it will provide useful background reading for computer staff supporting information services. Publishers, policy makers and practitioners in other curatorial traditions such as museums work or archiving will also find much of relevance.
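To give a flavor of the schema and standards work surveyed above, the sketch below builds a small Dublin Core-style record in Python. It is a minimal illustration only, not taken from the book: the element names come from the Dublin Core Metadata Element Set, while the sample values and the helper function are invented.

```python
# Illustrative sketch only: a small Dublin Core-style record built in Python.
# The element names (title, creator, subject, ...) are Dublin Core elements;
# the sample values and the to_simple_dc_xml helper are invented.
from xml.etree.ElementTree import Element, SubElement, tostring

DC_NS = "http://purl.org/dc/elements/1.1/"

record = {
    "title": "Monthly rainfall observations, Site A",  # invented sample values
    "creator": "Doe, Jane",
    "subject": "rainfall; hydrology",
    "publisher": "Example University",
    "date": "2024-03-01",
    "type": "Dataset",
    "format": "text/csv",
    "language": "en",
    "identifier": "https://example.org/datasets/rainfall-site-a",
}

def to_simple_dc_xml(dc_record: dict) -> str:
    """Serialize the record as simple XML with dc:-prefixed elements."""
    root = Element("metadata", {"xmlns:dc": DC_NS})
    for name, value in dc_record.items():
        SubElement(root, f"dc:{name}").text = value
    return tostring(root, encoding="unicode")

print(to_simple_dc_xml(record))
```

A record like this could just as easily be serialized to JSON or embedded in a catalogue entry; the point is that a small, agreed element set makes resources describable and retrievable in a consistent way.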
The key to a successful MDM initiative isn't technology or methods; it's people: the stakeholders in the organization and their complex ownership of the data that the initiative will affect. Master Data Management equips you with a deeply practical, business-focused way of thinking about MDM—an understanding that will greatly enhance your ability to communicate with stakeholders and win their support. Moreover, it will help you deserve their support: you'll master all the details involved in planning and executing an MDM project that leads to measurable improvements in business productivity and effectiveness. - Presents a comprehensive roadmap that you can adapt to any MDM project - Emphasizes the critical goal of maintaining and improving data quality - Provides guidelines for determining which data to "master" - Examines special issues relating to master data metadata - Considers a range of MDM architectural styles - Covers the synchronization of master data across the application infrastructure
A comprehensive guide to everything scientists need to know about data management, this book is essential for researchers who need to learn how to organize, document, and take care of their own data. Researchers in all disciplines are faced with the challenge of managing the growing amounts of digital data that are the foundation of their research. Kristin Briney offers practical advice and clearly explains policies and principles, in an accessible and in-depth text that will allow researchers to understand and achieve the goal of better research data management. Data Management for Researchers includes sections on: * The data problem – an introduction to the growing importance and challenges of using digital data in research. Covers both the inherent problems of managing digital information and how the research landscape is changing to give more value to research datasets and code. * The data lifecycle – a framework for data’s place within the research process and how data’s role is changing. Greater emphasis on data sharing and data reuse will change not only the way we conduct research but also the way we manage research data. * Planning for data management – covers the many aspects of data management and how to put them together in a data management plan. This section also includes sample data management plans. * Documenting your data – an often-overlooked part of the data management process, but one that is critical to good management; data without documentation are frequently unusable. * Organizing your data – explains how to keep your data in order using organizational systems and file naming conventions. This section also covers using a database to organize and analyze content. * Improving data analysis – covers managing information through the analysis process. This section starts by comparing the management of raw and analyzed data and then describes ways to make analysis easier, such as spreadsheet best practices. It also examines practices for research code, including version control systems. * Managing secure and private data – many researchers are dealing with data that require extra security. This section outlines what data falls into this category and some of the policies that apply, before addressing the best practices for keeping data secure. * Short-term storage – deals with the practical matters of storage and backup and covers the many options available. This section also goes through the best practices to ensure that data are not lost. * Preserving and archiving your data – digital data can have a long life if properly cared for. This section covers managing data in the long term, including choosing good file formats and media, as well as determining who will manage the data after the end of the project. * Sharing/publishing your data – addresses how to make data sharing across research groups easier, as well as how and why to publicly share data. This section covers intellectual property and licenses for datasets, before ending with the altmetrics that measure the impact of publicly shared data. * Reusing data – as more data are shared, it becomes possible to use outside data in your research. This section discusses strategies for finding datasets and lays out how to cite data once you have found it. This book is designed for active scientific researchers, but it is useful for anyone who wants to get more from their data: academics, educators, professionals, or anyone who teaches data management, sharing, and preservation.
"An excellent practical treatise on the art and practice of data management, this book is essential to any researcher, regardless of subject or discipline." —Robert Buntrock, Chemical Information Bulletin
DAMA-DMBOK2 pursues three goals: defining a set of guiding principles for data management and describing how these principles can be applied within data management functional areas; providing a functional framework for the implementation of enterprise data management practices, including widely adopted practices, methods and techniques, functions, roles, deliverables, and metrics; and establishing a common vocabulary for data management concepts that serves as the basis for best practices for data management professionals. DAMA-DMBOK2 provides data management and IT professionals, executives, knowledge workers, educators, and researchers with a framework to manage their data and mature their information infrastructure, based on these principles: Data is an asset with unique properties; The value of data can be and should be expressed in economic terms; Managing data means managing the quality of data; It takes metadata to manage data; It takes planning to manage data; Data management is cross-functional and requires a range of skills and expertise; Data management requires an enterprise perspective; Data management must account for a range of perspectives; Data management is data lifecycle management; Different types of data have different lifecycle requirements; Managing data includes managing risks associated with data; Data management requirements must drive information technology decisions; Effective data management requires leadership commitment.
As your company moves data to the cloud, you need to consider a comprehensive approach to data governance, along with well-defined and agreed-upon policies to ensure compliance. Data governance incorporates the ways that people, processes, and technology work together to support business efficiency. With this practical guide, chief information, data, and security officers will learn how to effectively implement and scale data governance throughout their organizations. You'll explore how to create a strategy and tooling to support the democratization of data and governance principles. Through good data governance, you can inspire customer trust, enable your organization to extract more value from data, and generate more competitive offerings and improvements in customer experience. This book shows you how to: - Enable auditable legal and regulatory compliance with defined and agreed-upon data policies - Employ better risk management - Establish control and maintain visibility into your company's data assets, providing a competitive advantage - Drive top-line revenue and cost savings when developing new products and services - Implement your organization's people, processes, and tools to operationalize data trustworthiness.
This book constitutes the proceedings of the 7th International Conference on Future Data and Security Engineering, FDSE 2020, which was planned to take place in Quy Nhon, Vietnam, in November 2020 but was held virtually due to the COVID-19 pandemic. The 24 full papers presented here (of 53 accepted full papers), together with 2 invited keynotes, were carefully reviewed and selected from 161 submissions; the other 29 accepted full papers and 8 short papers are included in CCIS 1306. The selected papers are organized under the following topical headings: security issues in big data; big data analytics and distributed systems; advances in big data query processing and optimization; blockchain and applications; industry 4.0 and smart city: data analytics and security; advanced studies in machine learning for security; and emerging data management systems and applications.