A concise but informative overview of AI ethics and policy. Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring home-owners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? This book offers a concise overview of the moral, political, legal and economic implications of AI. It covers the basics of AI's latest permutation, machine learning, and considers issues including transparency, bias, liability, privacy, and regulation.
In recent years a vast literature has been produced on the feasibility of Artificial Intelligence (AI). The topic most frequently discussed is the concept of intelligence, with efforts to demonstrate that it is or is not transferable to the computer. Only rarely has attention been focused on the concept of the artificial per se in order to clarify what kind, depth and scope of performance (including intelligence) it could support. Apart from the classic book by H.A. Simon, The Sciences of the Artificial, published in 1969, no serious attempt has been made to define a conceptual frame for understanding the intimate nature of intelligent machines independently of their claimed or denied human-like features. The general aim of this book is to discuss, from different points of view, what we are losing and what we are gaining from the artificial, particularly from AI, when we abandon the original anthropomorphic pretension. This necessarily calls for an analysis of the history of AI and of the limits of its plausibility in reproducing the human mind. In addition, the papers presented here aim at redefining the epistemology and the possible targets of the AI discipline, raising problems and proposing solutions that should be understood as typical of the artificial rather than of an information-based conception of man.
A Sunday Times Business Book of the Year. Scary Smart will teach you how to navigate the scary and inevitable intrusion of Artificial Intelligence, with an accessible blueprint for creating a harmonious future alongside AI. From Mo Gawdat, the former Chief Business Officer at Google [X] and bestselling author of Solve for Happy. Technology is putting our humanity at risk to an unprecedented degree. This book is not for engineers who write the code or the policy makers who claim they can regulate it. This is a book for you. Because, believe it or not, you are the only one that can fix it. - Mo Gawdat Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predict outcomes and even use sensors to see around physical and virtual corners. So why does AI frequently get it so wrong and cause harm? The answer is us: the human beings who write the code and teach AI to mimic our behaviour. Scary Smart explains how to fix the current trajectory now, to make sure that the AI of the future can preserve our species. This book offers a blueprint, pointing the way to what we can do to safeguard ourselves, those we love, and the planet itself. 'No one ever regrets reading anything Mo Gawdat has written.' - Emma Gannon, author of The Multi-Hyphen Method and host of the podcast Ctrl Alt Delete
Explores universal questions about humanity's capacity to live and thrive in the coming age of sentient machines and AI, examining these debates from opposing perspectives and discussing emerging intellectual diversity and its potential role in enabling a positive life.
“Artificial intelligence has always inspired outlandish visions—that AI is going to destroy us, save us, or at the very least radically transform us. Erik Larson exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it. This is a timely, important, and even essential book.” —John Horgan, author of The End of Science Many futurists insist that AI will soon achieve human levels of intelligence. From there, it will quickly eclipse the most gifted human mind. The Myth of Artificial Intelligence argues that such claims are just that: myths. We are not on the path to developing truly intelligent machines. We don’t even know where that path might be. Erik Larson charts a journey through the landscape of AI, from Alan Turing’s early work to today’s dominant models of machine learning. Since the beginning, AI researchers and enthusiasts have equated the reasoning approaches of AI with those of human intelligence. But this is a profound mistake. Even cutting-edge AI looks nothing like human intelligence. Modern AI is based on inductive reasoning: computers make statistical correlations to determine which answer is likely to be right, allowing software to, say, detect a particular face in an image. But human reasoning is entirely different. Humans do not correlate data sets; we make conjectures sensitive to context—the best guess, given our observations and what we already know about the world. We haven’t a clue how to program this kind of reasoning, known as abduction. Yet it is the heart of common sense. Larson argues that all this AI hype is bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we are to make real progress, we must abandon futuristic talk and learn to better appreciate the only true intelligence we know—our own.
Why the United States lags behind other industrialized countries in sharing the benefits of innovation with workers and how we can remedy the problem. The United States has too many low-quality, low-wage jobs. Every country has its share, but those in the United States are especially poorly paid and often without benefits. Meanwhile, overall productivity increases steadily and new technology has transformed large parts of the economy, enhancing the skills and paychecks of higher paid knowledge workers. What’s wrong with this picture? Why have so many workers benefited so little from decades of growth? The Work of the Future shows that technology is neither the problem nor the solution. We can build better jobs if we create institutions that leverage technological innovation and also support workers through long cycles of technological transformation. Building on findings from the multiyear MIT Task Force on the Work of the Future, the book argues that we must foster institutional innovations that complement technological change. Skills programs that emphasize work-based and hybrid learning (in person and online), for example, empower workers to become and remain productive in a continuously evolving workplace. Industries fueled by new technology that augments workers can supply good jobs, and federal investment in R&D can help make these industries worker-friendly. We must act to ensure that the labor market of the future offers benefits, opportunity, and a measure of economic security to all.
The remarkable progress in algorithms for machine and deep learning has opened the door to new opportunities, and some dark possibilities. However, a bright future awaits those who build on these working methods by including human-centered AI (HCAI) strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people, but to empower them by making design choices that give humans control over technology. In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist's guide to how artificial intelligence can be used to augment and enhance humans' lives. This project bridges the gap between ethical considerations and practical realities to offer a road map for successful, reliable systems. Digital cameras, communications services, and navigation apps are just the beginning. Shneiderman shows how future applications will support health and wellness, improve education, accelerate business, and connect people in reliable, safe, and trustworthy ways that respect human values, rights, justice, and dignity.
Late in 2017, the global significance of the conversation about artificial intelligence (AI) changed forever. China put the world on alert when it released a plan to dominate all aspects of AI across the planet. Only weeks later, Vladimir Putin raised a Russian red flag in response by declaring AI the future for all humankind, and proclaiming that, "Whoever becomes the leader in this sphere will become the ruler of the world." The race was on. Consistent with their unique national agendas, countries throughout the world began plotting their paths and hurrying their pace. Now, not long after, the race has become a sprint. Despite everything at stake, to most of us AI remains shrouded by a cloud of mystery and misunderstanding. Hidden behind complicated and technical jargon and confused by fantastical depictions of science fiction, the modern realities of AI and its profound implications are hard to decipher, but crucial to recognize. In T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power, author Michael Kanaan explains AI from a human-oriented perspective we can all finally understand. A recognized national expert and the U.S. Air Force's first Chairperson for Artificial Intelligence, Kanaan weaves a compelling new view on our history of innovation and technology to masterfully explain what each of us should know about modern computing, AI, and machine learning. Kanaan also dives into the global implications of AI by illuminating the cultural and national vulnerabilities already exposed and the pressing issues now squarely on the table. AI has already become China's all-purpose tool to impose its authoritarian influence around the world. Russia, playing catch up, is weaponizing AI through its military systems and now infamous, aggressive efforts to disrupt democracy by whatever disinformation means possible. America and like-minded nations are awakening to these new realities—and the paths they're electing to follow echo loudly the political foundations and, in most cases, the moral imperatives upon which they were formed. As we march toward a future far different than ever imagined, T-Minus AI is fascinating and crucially well-timed. It leaves the fiction behind, paints the alarming implications of AI for what they actually are, and calls for unified action to protect fundamental human rights and dignities for all.
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust—in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.