When you combine the sheer scale and range of digital information now available with a journalist’s "nose for news" and her ability to tell a compelling story, a new world of possibility opens up. With The Data Journalism Handbook, you’ll explore the potential, limits, and applied uses of this new and fascinating field. This valuable handbook has attracted scores of contributors since the European Journalism Centre and the Open Knowledge Foundation launched the project at MozFest 2011. Through a collection of tips and techniques from leading journalists, professors, software developers, and data analysts, you’ll learn how data can be either the source of data journalism or a tool with which the story is told—or both.
• Examine the use of data journalism at the BBC, the Chicago Tribune, the Guardian, and other news organizations
• Explore in-depth case studies on elections, riots, school performance, and corruption
• Learn how to find data from the Web, through freedom of information laws, and by "crowd sourcing"
• Extract information from raw data with tips for working with numbers and statistics and using data visualization
• Deliver data through infographics, news apps, open data platforms, and download links
What is bad data? Some people consider it a technical phenomenon, like missing values or malformed records, but bad data includes a lot more. In this handbook, data expert Q. Ethan McCallum has gathered 19 colleagues from every corner of the data arena to reveal how they’ve recovered from nasty data problems. From cranky storage to poor representation to misguided policy, there are many paths to bad data. Bottom line? Bad data is data that gets in the way. This book explains effective ways to get around it. Among the many topics covered, you’ll discover how to:
• Test drive your data to see if it’s ready for analysis
• Work spreadsheet data into a usable form
• Handle encoding problems that lurk in text data (a minimal example follows this list)
• Develop a successful web-scraping effort
• Use NLP tools to reveal the real sentiment of online reviews
• Address cloud computing issues that can impact your analysis effort
• Avoid policies that create data analysis roadblocks
• Take a systematic approach to data quality analysis
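To make the encoding-problems topic concrete, here is a minimal Python sketch, not taken from the book, of one common recovery tactic: try UTF-8 first and fall back to Latin-1, which maps every byte to a character and so never fails to decode. The function name is illustrative.

# Minimal sketch (not from the book): best-effort decoding of text whose
# encoding is unknown. UTF-8 is tried first; Latin-1 is the fallback because
# it accepts any byte sequence and therefore never raises.

def decode_messy_text(raw: bytes) -> str:
    """Return a best-effort str from bytes of unknown encoding."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # Accented characters may come out wrong, but nothing is lost outright.
        return raw.decode("latin-1")

if __name__ == "__main__":
    sample = "café".encode("latin-1")   # bytes that are not valid UTF-8
    print(decode_messy_text(sample))    # prints: café

Real cleanup pipelines often add an encoding-detection step on top of this two-step fallback, but the pattern above is the core idea.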
"What our teachers don't tell us in school is that we will spend most of our scientific or engineering career in front of computers, trying to beat them into submission." This extract from the Preface sets the style for this highly readable book. It is packed with information covering data representations, the pitfalls of computer arithmetic, and a variety of widely-used representations and standards. Each chapter begins with a detailed contents list and finishes with a brief summary of the topics presented and the whole is rounded off with a glossary and index. Novices will enjoy an occasionally lighthearted read from start to finish, while even the most experienced computer users who use the book as a reference will discover useful nuggets of information. A structured array of data sets are available online via the TELOS Web site, www.telospub.com, which will provide users with direct digital access to information they might need in working through the book.
An insider’s guide to data librarianship packed full of practical examples and advice for any library and information professional learning to deal with data. Interest in data has been growing in recent years. Support for this peculiar class of digital information – its use, preservation and curation, and how to support researchers’ production and consumption of it in ever greater volumes to create new knowledge – is needed more than ever. Many librarians and information professionals are finding their working life is pulling them toward data support or research data management but lack the skills required. The Data Librarian’s Handbook, written by two data librarians with over 30 years’ combined experience, unpicks the everyday role of the data librarian and offers practical guidance on how to collect, curate and crunch data for economic, social and scientific purposes. With contemporary case studies from a range of institutions and disciplines, tips for best practice, study aids and links to key resources, this book is a must-read for all new entrants to the field, library and information students and working professionals. Key topics covered include:
• the evolution of data libraries and data archives
• handling data compared to other forms of information
• managing and curating data to ensure effective use and longevity
• how to incorporate data literacy into mainstream library instruction and information literacy training
• how to develop an effective institutional research data management (RDM) policy and infrastructure
• how to support and review a data management plan (DMP) for a project, a key requirement for most research funders
• approaches for developing, managing and promoting data repositories
• handling and sharing confidential or sensitive data
• supporting open scholarship and open science, ensuring data are discoverable, accessible, intelligible and assessable.
This title is for the practising data librarian, possibly new in their post with little experience of providing data support. It is also for managers and policy-makers, public service librarians, research data management coordinators and data support staff. It will also appeal to students and lecturers in iSchools and other library and information degree programmes where academic research support is taught.
This practical, field-tested reference doesn't just explain the characteristics of finished, high-quality data models--it shows readers exactly how to build one. It presents rules and best practices in several notations, including IDEF1X, Martin, Chen, and Finkelstein. The book offers dozens of real-world examples and goes beyond basic theory to provide users with practical guidance.
For many researchers, Python is a first-class tool mainly because of its libraries for storing, manipulating, and gaining insight from data. Several resources exist for individual pieces of this data science stack, but only with the Python Data Science Handbook do you get them all—IPython, NumPy, Pandas, Matplotlib, Scikit-Learn, and other related tools (a brief sketch of how they fit together follows below). Working scientists and data crunchers familiar with reading and writing Python code will find this comprehensive desk reference ideal for tackling day-to-day issues: manipulating, transforming, and cleaning data; visualizing different types of data; and using data to build statistical or machine learning models. Quite simply, this is the must-have reference for scientific computing in Python. With this handbook, you’ll learn how to use:
• IPython and Jupyter: provide computational environments for data scientists using Python
• NumPy: includes the ndarray for efficient storage and manipulation of dense data arrays in Python
• Pandas: features the DataFrame for efficient storage and manipulation of labeled/columnar data in Python
• Matplotlib: includes capabilities for a flexible range of data visualizations in Python
• Scikit-Learn: for efficient and clean Python implementations of the most important and established machine learning algorithms
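As a rough illustration of how these libraries interlock, here is a short sketch, not an excerpt from the handbook, assuming NumPy, Pandas, Matplotlib, and Scikit-Learn are installed; it would typically be run inside a Jupyter notebook.

# Minimal sketch (not from the handbook): the core stack in a few lines.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# NumPy: an ndarray of evenly spaced x values
x = np.linspace(0, 10, 50)

# Pandas: a labeled DataFrame built around that array
df = pd.DataFrame({"x": x, "y": 2.0 * x + np.random.normal(0.0, 1.0, size=x.size)})

# Scikit-Learn: fit a simple linear model to the two columns
model = LinearRegression().fit(df[["x"]], df["y"])
print("estimated slope:", model.coef_[0])

# Matplotlib: plot the raw points and the fitted line
plt.scatter(df["x"], df["y"], label="data")
plt.plot(df["x"], model.predict(df[["x"]]), color="red", label="fit")
plt.legend()
plt.show()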
This book offers an interdisciplinary introduction to data journalism, combining critical reflection with practical insight into the field, including how data journalism is done around the world and the broader consequences of datafication in the news.
If you're a developer looking to supplement your own data tools and services, this concise ebook covers the most useful sources of public data available today. You'll find useful information on APIs that offer broad coverage, tie their data to the outside world, and are either accessible online or feature downloadable bulk data. You'll also find code and helpful links. This guide organizes APIs by the subjects they cover—such as websites, people, or places—so you can quickly locate the best resources for augmenting the data you handle in your own service. Categories include:
• Website tools such as WHOIS, bit.ly, and Compete
• Services that use email addresses as search terms, including GitHub
• Finding information from just a name, with APIs such as WhitePages
• Services, such as Klout, for locating people with Facebook and Twitter accounts
• Search APIs, including BOSS and Wikipedia (see the sketch after this list)
• Geographical data sources, including SimpleGeo and U.S. Census
• Company information APIs, such as CrunchBase and ZoomInfo
• APIs that list IP addresses, such as MaxMind
• Services that list books, films, music, and products
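To give a flavor of what querying one of these public sources looks like, here is a small Python sketch, not taken from the ebook, that calls Wikipedia's public search endpoint (the MediaWiki API) using only the standard library. The endpoint and parameters belong to the MediaWiki API; the function name and User-Agent string are illustrative.

# Minimal sketch (not from the ebook): searching Wikipedia via the MediaWiki
# API. The descriptive User-Agent follows Wikimedia's request that API clients
# identify themselves.
import json
import urllib.parse
import urllib.request

def wikipedia_search(term: str, limit: int = 5) -> list[str]:
    """Return the titles of the top Wikipedia search results for `term`."""
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "search",
        "srsearch": term,
        "srlimit": limit,
        "format": "json",
    })
    request = urllib.request.Request(
        "https://en.wikipedia.org/w/api.php?" + params,
        headers={"User-Agent": "public-data-example/0.1 (illustrative)"},
    )
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return [hit["title"] for hit in data["query"]["search"]]

if __name__ == "__main__":
    print(wikipedia_search("open data"))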
Component failure rate data are a vital part of any reliability or safety study and highly relevant to the engineering community across many disciplines. This book gives a comprehensive account of the subject.
Development Research in Practice leads the reader through a complete empirical research project, providing links to continuously updated resources on the DIME Wiki as well as illustrative examples from the Demand for Safe Spaces study. The handbook is intended to train users of development data how to handle data effectively, efficiently, and ethically.
“In the DIME Analytics Data Handbook, the DIME team has produced an extraordinary public good: a detailed, comprehensive, yet easy-to-read manual for how to manage a data-oriented research project from beginning to end. It offers everything from big-picture guidance on the determinants of high-quality empirical research, to specific practical guidance on how to implement specific workflows—and includes computer code! I think it will prove durably useful to a broad range of researchers in international development and beyond, and I learned new practices that I plan on adopting in my own research group.” —Marshall Burke, Associate Professor, Department of Earth System Science, and Deputy Director, Center on Food Security and the Environment, Stanford University
“Data are the essential ingredient in any research or evaluation project, yet there has been too little attention to standardized practices to ensure high-quality data collection, handling, documentation, and exchange. Development Research in Practice: The DIME Analytics Data Handbook seeks to fill that gap with practical guidance and tools, grounded in ethics and efficiency, for data management at every stage in a research project. This excellent resource sets a new standard for the field and is an essential reference for all empirical researchers.” —Ruth E. Levine, PhD, CEO, IDinsight
“Development Research in Practice: The DIME Analytics Data Handbook is an important resource and a must-read for all development economists, empirical social scientists, and public policy analysts. Based on decades of pioneering work at the World Bank on data collection, measurement, and analysis, the handbook provides valuable tools to allow research teams to more efficiently and transparently manage their work flows—yielding more credible analytical conclusions as a result.” —Edward Miguel, Oxfam Professor in Environmental and Resource Economics and Faculty Director of the Center for Effective Global Action, University of California, Berkeley
“The DIME Analytics Data Handbook is a must-read for any data-driven researcher looking to create credible research outcomes and policy advice. By meticulously describing detailed steps, from project planning via ethical and responsible code and data practices to the publication of research papers and associated replication packages, the DIME handbook makes the complexities of transparent and credible research easier.” —Lars Vilhuber, Data Editor, American Economic Association, and Executive Director, Labor Dynamics Institute, Cornell University