This handbook brings together a variety of approaches to the uses of big data in multiple fields, primarily science, medicine, and business. This single resource features contributions from researchers around the world in a variety of fields, who share their findings and experience. This book is intended to help spur further innovation in big data. The research is presented in a way that allows readers, regardless of their field of study, to learn how applications have proven successful and how similar applications could be used in their own field. Contributions stem from researchers in fields such as physics, biology, energy, healthcare, and business. The contributors also discuss important topics such as fraud detection, privacy implications, legal perspectives, and ethical handling of big data.
Big Data, gathered together and re-analysed, can be used to form endless variations of our persons - so-called 'data doubles'. Whilst never a precise portrayal of who we are, they unarguably contain glimpses of details about us that, when deployed into various routines (such as management, policing and advertising), can affect us in many ways. How are we to deal with Big Data? When is it beneficial to us? When is it harmful? How might we regulate it? Offering careful and critical analyses, this timely volume aims to broaden well-informed, unprejudiced discourse, focusing on: the tenets of Big Data; the politics of governance and regulation; and Big Data practices, performance and resistance. An interdisciplinary volume, The Politics of Big Data will appeal to undergraduate and postgraduate students, as well as postdoctoral and senior researchers, interested in fields such as Technology, Politics and Surveillance.
When data from all aspects of our lives can be relevant to our health - from our habits at the grocery store and our Google searches to our FitBit data and our medical records - can we really differentiate between big data and health big data? Will health big data be used for good, such as to improve drug safety, or ill, as in insurance discrimination? Will it disrupt health care (and the health care system) as we know it? Will it be possible to protect our health privacy? What barriers will there be to collecting and utilizing health big data? What role should law play, and what ethical concerns may arise? This timely, groundbreaking volume explores these questions and more from a variety of perspectives, examining how law promotes or discourages the use of big data in the health care sphere, and also what we can learn from other sectors.
Data is emerging as a key component of military operations, both on and off the battlefield. Large troves of data generated by new information technologies - often termed "big data" - are growing ever more important to a range of military functions. Military forces and other actors will increasingly need to acquire, evaluate, and utilize such data in many combat contexts. At the same time, those forces can gain advantages by targeting adversaries' data and data systems. And a multitude of actors within armed conflict, including humanitarian and human rights organizations, can also use big data to deliver aid or identify atrocities. Such myriad uses of big data raise challenging interpretive questions under international humanitarian law (IHL), the jus ad bellum, and international human rights law. This book is the first of its kind to examine how these bodies of international law might apply to the uses of big data specifically. Focusing on IHL, the book also assesses how jus ad bellum categories might translate to operations involving big data below the armed conflict threshold. And because big data is profoundly transforming modern life off the battlefield as well, the book moves beyond the role of big data within weapons systems and other military capabilities to questions about the nature of civilian harm and the scope of individual rights. This book offers a range of approaches and ideas on this timely issue, and an initial roadmap for scholars, policymakers, and advocates to follow as they address the challenges still to come.
Businesses are rushing to collect personal data to fuel surging demand. Data enthusiasts claim personal information that's obtained from the commercial internet, including mobile platforms, social networks, cloud computing, and connected devices, will unlock path-breaking innovation, including advanced data security. By contrast, regulators and activists contend that corporate data practices too often disempower consumers by creating privacy harms and related problems. As the Internet of Things matures and facial recognition, predictive analytics, big data, and wearable tracking grow in power, scale, and scope, a controversial ecosystem will exacerbate the acrimony over commercial data capture and analysis. The only productive way forward is to get a grip on the key problems right now and change the conversation. That's exactly what Jules Polonetsky, Omer Tene, and Evan Selinger do. They bring together diverse views from leading academics, business leaders, and policymakers to discuss the opportunities and challenges of the new data economy.
We are in the era of big data. With a smartphone now in nearly every pocket, a computer in nearly every household, and an ever-increasing number of Internet-connected devices in the marketplace, the amount of consumer data flowing throughout the economy continues to increase rapidly. The analysis of this data is often valuable to companies and to consumers, as it can guide the development of new products and services, predict the preferences of individuals, help tailor services and opportunities, and guide individualized marketing. At the same time, advocates, academics, and others have raised concerns about whether certain uses of big data analytics may harm consumers, particularly low-income and underserved populations. To explore these issues, the Federal Trade Commission ("FTC" or "the Commission") held a public workshop, Big Data: A Tool for Inclusion or Exclusion?, on September 15, 2014. The workshop brought together stakeholders to discuss both the potential of big data to create opportunities for consumers and its potential to exclude them from such opportunities. The Commission has synthesized the information from the workshop, a prior FTC seminar on alternative scoring products, and recent research to create this report. Though "big data" encompasses a wide range of analytics, this report addresses only the commercial use of big data consisting of consumer information and focuses on the impact of big data on low-income and underserved populations. Of course, big data also raises a host of other important policy issues, such as notice, choice, and security, among others. Those, however, are not the primary focus of this report. As "little" data becomes "big" data, it goes through several phases. The life cycle of big data can be divided into four phases: (1) collection; (2) compilation and consolidation; (3) analysis; and (4) use.
This report focuses on the fourth phase and discusses the benefits and risks created by the use of big data analytics; the consumer protection and equal opportunity laws that currently apply to big data; research in the field of big data; and lessons that companies should take from the research. Ultimately, this report is intended to educate businesses on important laws and research that are relevant to big data analytics and to provide suggestions aimed at maximizing its benefits and minimizing its risks.
The second of two volumes filling a gap in the literature on understanding and responding to this grand challenge, this edited collection focuses particularly on the impact and complex consequences of migration, youth experiences and the functioning of digital spaces, and the shaping of youth identity through exposure to both.
This book analyzes the business model of enterprises in the digital economy from an economic and comparative perspective. Its aim is to conduct an in-depth analysis of the anti-competitive behavior of companies that monopolize data, and to argue for the necessity of regulating data monopoly by exploring the causes and characteristics of such behavior. It studies four aspects in which data monopoly differs from traditional monopolistic behavior: defining the relevant market for data monopolies, the entry barrier, the problem of determining the dominant position of a data monopoly, and the influence on consumer welfare. It points out the limitations of traditional regulatory tools and discusses how new regulatory methods could be developed within the competition legal framework to restrict data monopolies. It shows how the economic analytical tools used in traditional anti-monopoly law are facing challenges, and proposes how competition enforcement agencies could adjust regulatory methods to deal with new anti-competitive behavior by data monopolies.
Algorithms permeate our lives in numerous ways, performing tasks that until recently could only be carried out by humans. Artificial Intelligence (AI) technologies, based on machine learning algorithms and big-data-powered systems, can perform sophisticated tasks such as driving cars, analyzing medical data, and evaluating and executing complex financial transactions - often without active human control or supervision. Algorithms also play an important role in determining retail pricing, online advertising, loan qualification, and airport security. In this work, Martin Ebers and Susana Navas bring together a group of scholars and practitioners from across Europe and the US to analyze how this shift from human actors to computers presents both practical and conceptual challenges for legal and regulatory systems. This book should be read by anyone interested in the intersection between computer science and law, how the law can better regulate algorithmic design, and the legal ramifications for citizens whose behavior is increasingly dictated by algorithms.
The concept of utilizing big data to enable scientific discovery has generated tremendous excitement and investment from both private and public sectors over the past decade, and expectations continue to grow. Using big data analytics to identify complex patterns hidden inside volumes of data that have never been combined could accelerate the rate of scientific discovery and lead to the development of beneficial technologies and products. However, producing actionable scientific knowledge from such large, complex data sets requires statistical models that produce reliable inferences (NRC, 2013). Without careful consideration of the suitability of both the available data and the statistical models applied, analysis of big data may result in misleading correlations and false discoveries, which can potentially undermine confidence in scientific research if the results are not reproducible. In June 2016 the National Academies of Sciences, Engineering, and Medicine convened a workshop to examine critical challenges and opportunities in performing scientific inference reliably when working with big data. Participants explored new methodological developments that hold significant promise and potential research program areas for the future. This publication summarizes the presentations and discussions from the workshop.