A provocative story about the relationship between the humans on a British airbase and the AI security system that guards that base. When a group of humans are killed, the question is who is responsible and why. Find out in AI and the Trolley Problem, Pat Cadigan's Tor.com Original story. At the Publisher's request, this title is being sold without Digital Rights Management Software (DRM) applied.
This handbook is one of the first comprehensive research and teaching tools for the developing area of global media ethics. The advent of new media that is global in reach and impact has created the need for a journalism ethics that is global in principles and aims. For many scholars, teachers and journalists, existing journalism ethics, such as national codes of ethics, is too parochial. It fails to provide adequate normative guidance for media that is digital, global and practiced by professionals and citizens alike. A global media ethics is being constructed to define what responsible public journalism means for a new global media era. Currently, scholars write texts and codes for global media, teach global media ethics, analyse how global issues should be covered, and gather together at conferences, round tables and meetings. However, the field lacks an authoritative handbook that presents the views of leading thinkers on the most important issues for global media ethics. This handbook is a milestone in the field, and a major contribution to media ethics.
This open access book proposes a novel approach to Artificial Intelligence (AI) ethics. AI offers many advantages: better and faster medical diagnoses, improved business processes and efficiency, and the automation of boring work. But undesirable and ethically problematic consequences are possible too: biases and discrimination, breaches of privacy and security, and societal distortions such as unemployment, economic exploitation and weakened democratic processes. There is even a prospect, ultimately, of super-intelligent machines replacing humans. The key question, then, is: how can we benefit from AI while addressing its ethical problems? This book presents an innovative answer to the question by presenting a different perspective on AI and its ethical consequences. Instead of looking at individual AI techniques, applications or ethical issues, we can understand AI as a system of ecosystems, consisting of numerous interdependent technologies, applications and stakeholders. Developing this idea, the book explores how AI ecosystems can be shaped to foster human flourishing. Drawing on rich empirical insights and detailed conceptual analysis, it suggests practical measures to ensure that AI is used to make the world a better place.
This book explains why AI is unique, what legal and ethical problems it could cause, and how we can address them. It argues that AI is unlike any previous technology, owing to its ability to take decisions independently and unpredictably. This gives rise to three issues: responsibility--who is liable if AI causes harm; rights--the disputed moral and pragmatic grounds for granting AI legal personality; and the ethics surrounding the decision-making of AI. The book suggests that in order to address these questions we need to develop new institutions and regulations on a cross-industry and international level. Incorporating clear explanations of complex topics, Robot Rules will appeal to a multi-disciplinary audience, from those with an interest in law, politics and philosophy to those in computer programming, engineering and neuroscience.
Framing the discussion as a crime tried in the court of public opinion, this book presents a lighthearted examination of the trolley problem--one of the most famous thought experiments in modern philosophy.
Machines and computers are becoming increasingly sophisticated and self-sustaining. As we integrate such technologies into our daily lives, questions concerning moral integrity and best practices arise. A changing world requires renegotiating our current set of standards. Without best practices to guide our interactions with these complex machines, the results could be disastrous. Machine Law, Ethics, and Morality in the Age of Artificial Intelligence is a collection of innovative research that presents holistic and transdisciplinary approaches to the field of machine ethics and morality and offers up-to-date and state-of-the-art perspectives on the advancement of definitions, terms, policies, philosophies, and relevant determinants related to human-machine ethics. The book encompasses theory and practice sections for each topical component of important areas of human-machine ethics, both those in existence today and those prospective for the future. While highlighting a broad range of topics including facial recognition, health and medicine, and privacy and security, this book is ideally designed for ethicists, philosophers, scientists, lawyers, politicians, government lawmakers, researchers, academicians, and students. It is of special interest to decision- and policy-makers concerned with the identification and adoption of human-machine ethics initiatives, leading to needed policy adoption and reform for human-machine entities, their technologies, and their societal and legal obligations.
The robot population is rising on Earth and other planets. (Mars is inhabited entirely by robots.) As robots slip into more domains of human life--from the operating room to the bedroom--they take on our morally important tasks and decisions, as well as create new risks, from psychological to physical. This makes it all the more urgent to study their ethical, legal, and policy impacts. To help the robotics industry and broader society, we need not only to press ahead on a wide range of issues, but also to identify new ones emerging as quickly as the field is evolving. For instance, where military robots had received much attention in the past (and are still controversial today), this volume looks toward autonomous cars as an important case study that cuts across diverse issues, from liability to psychology to trust and more. And because robotics feeds into and is fed by AI, the Internet of Things, and other cognate fields, robot ethics must also reach into those domains. Expanding these discussions also means listening to new voices; robot ethics is no longer the concern of a handful of scholars. Experts from different academic disciplines and geographical areas are now playing vital roles in shaping ethical, legal, and policy discussions worldwide. So, for a more complete study, the editors of this volume look beyond the usual suspects for the latest thinking. Many of the views represented in this cutting-edge volume are provocative--but they are what we need to push forward into unfamiliar territory.
Algorithms have made our lives more efficient and entertaining--but not without a significant cost. Can we design a better future, one in which societal gains brought about by technology are balanced with the rights of citizens? The Ethical Algorithm offers a set of principled solutions based on the emerging and exciting science of socially aware algorithm design.
Should a self-driving car prioritize the lives of the passengers over the lives of pedestrians? Should we as a society develop autonomous weapon systems that are capable of identifying and attacking a target without human intervention? What happens when AIs become smarter and more capable than us? Could they have greater than human moral status? Can we prevent superintelligent AIs from harming us or causing our extinction? At a critical time in this fast-moving debate, thirty leading academics and researchers at the forefront of AI technology development come together to explore these existential questions, including Aaron James (UC Irvine), Allan Dafoe (Oxford), Andrea Loreggia (Padova), Andrew Critch (UC Berkeley), and Azim Shariff (Univ. …).
This volume tackles a quickly evolving field of inquiry, mapping the existing discourse to place current developments in historical context, while at the same time breaking new ground by taking on novel subjects and pursuing fresh approaches. The term "AI" is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and are capable of tasks that require learning and 'intelligence', presents difficult ethical questions and has drawn concerns from many quarters about individual and societal welfare, democratic decision-making, moral agency, and the prevention of harm. This work ranges from explorations of normative constraints on specific applications of machine learning algorithms today (in everyday medical practice, for instance) to reflections on the (potential) status of AI as a form of consciousness with attendant rights and duties and, more generally still, on the conceptual terms and frameworks necessary to understand tasks requiring intelligence, whether human or artificial.