A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence (AI). Many philosophers, futurists, and AI researchers have conjectured that human-level AI will be developed within the next 20 to 200 years. If these predictions are correct, they raise new and sinister issues related to our future in the age of artificial superintelligence.
This book concerns those who believe in a fundamental (causal) theory of everything. Combining the topmost categories of reality, the idea of pure intelligence, and the major constructs and facts of science and mathematics, the author proposes a theory of powerfully intelligent machines superior to human minds.
Attention in the AI safety community has increasingly expanded to include strategic considerations of coordination between relevant actors in the fields of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. There are several reasons for this shift: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is also important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
New York Times Best Seller How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial. How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle? What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.
The history of robotics and artificial intelligence is in many ways also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, their inventors. Numerous recent advancements in all aspects of research, development, and deployment of intelligent systems are well publicized, but safety and security issues related to AI are rarely addressed. This book aims to mitigate this fundamental problem. It comprises chapters from leading AI safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It is the first edited volume dedicated to addressing the challenges of constructing safe and secure advanced machine intelligence. The chapters vary in length and technical content, from broad-interest opinion essays to highly formalized algorithmic approaches to specific problems. All chapters are self-contained and can be read in any order or skipped without loss of comprehension.
A leading artificial intelligence researcher lays out a new approach to AI that will enable people to coexist successfully with increasingly intelligent machines.
“Artificial intelligence has always inspired outlandish visions—that AI is going to destroy us, save us, or at the very least radically transform us. Erik Larson exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it. This is a timely, important, and even essential book.” —John Horgan, author of The End of Science Many futurists insist that AI will soon achieve human levels of intelligence. From there, it will quickly eclipse the most gifted human mind. The Myth of Artificial Intelligence argues that such claims are just that: myths. We are not on the path to developing truly intelligent machines. We don’t even know where that path might be. Erik Larson charts a journey through the landscape of AI, from Alan Turing’s early work to today’s dominant models of machine learning. Since the beginning, AI researchers and enthusiasts have equated the reasoning approaches of AI with those of human intelligence. But this is a profound mistake. Even cutting-edge AI looks nothing like human intelligence. Modern AI is based on inductive reasoning: computers make statistical correlations to determine which answer is likely to be right, allowing software to, say, detect a particular face in an image. But human reasoning is entirely different. Humans do not correlate data sets; we make conjectures sensitive to context—the best guess, given our observations and what we already know about the world. We haven’t a clue how to program this kind of reasoning, known as abduction. Yet it is the heart of common sense. Larson argues that all this AI hype is bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we are to make real progress, we must abandon futuristic talk and learn to better appreciate the only true intelligence we know—our own.
This book explores the psychological impact of advanced forms of artificial intelligence. What will it be like to live with a superior intelligence? How will exposure to highly developed artificial intelligence (AI) systems change human well-being? With a review of recent advancements in brain–computer interfaces, military AI, Explainable AI (XAI), and digital clones as a foundation, the experience of living with a hyperintelligence is discussed from the viewpoint of a clinical psychologist. The theory of universal solicitation is introduced, i.e., the demand character of a technology that wants to be used in all aspects of life. With a focus on human experience, and to a lesser extent on technology, the book is written for a general readership with an interest in psychology, technology, and the future of our human condition. With its unique focus on psychological topics, the book offers contributions to a discussion of the future of human life beyond purely technological considerations.
Elon Musk named Our Final Invention one of five books everyone should read about the future—a Huffington Post Definitive Tech Book of 2013. Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the “smart” in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence. In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine. Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to? “If you read just one book that makes you confront scary high-tech realities that we’ll soon have no choice but to address, make it this one.” —The Washington Post “Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality.” —Science News “A dark new book . . . lays out a strong case for why we should be at least a little worried.” —The New Yorker