Values information from AI is a collection of values-related information and images generated with an AI tool as part of The Values We Share Project. All the information in this book may be used to promote values and as material in values formation programs, and it will also be used in future The Values We Share Project videos, materials and courses. Visit The Values We Share Project at http://thevaluesweshare.info.
One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us. At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care. As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.
A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them. Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands. The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software. In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they—and we—succeed or fail in solving the alignment problem will be a defining human story. The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture—and finds a story by turns harrowing and hopeful.
Using our moral and technical imaginations to create responsible innovations: theory, method, and applications for value sensitive design. Implantable medical devices and human dignity. Private and secure access to information. Engineering projects that transform the Earth. Multigenerational information systems for international justice. How should designers, engineers, architects, policy makers, and others design such technology? Who should be involved and what values are implicated? In Value Sensitive Design, Batya Friedman and David Hendry describe how both moral and technical imagination can be brought to bear on the design of technology. With value sensitive design, under development for more than two decades, Friedman and Hendry bring together theory, methods, and applications for a design process that engages human values at every stage. After presenting the theoretical foundations of value sensitive design, which lead to a deep rethinking of technical design, Friedman and Hendry explain seventeen methods, including stakeholder analysis, value scenarios, and multilifespan timelines. Following this, experts from ten application domains report on value sensitive design practice. Finally, Friedman and Hendry explore such open questions as the need for deeper investigation of indirect stakeholders and further method development. This definitive account of the state of the art in value sensitive design is an essential resource for designers and researchers working in academia and industry, students in design and computer science, and anyone working at the intersection of technology and society.
Artificial intelligence (AI) in its various forms (machine learning, chatbots, robots, agents, etc.) is increasingly being seen as a core component of enterprise business workflow and information management systems. The current promise and hype around AI are being driven by software vendors, academic research projects, and startups. However, we posit that the greatest promise and potential for AI lies in the enterprise, with its applications touching all organizational facets. With increasing business process and workflow maturity, coupled with recent trends in cloud computing, datafication, IoT, cybersecurity, and advanced analytics, there is an understanding that the challenges of tomorrow cannot be solely addressed by today’s people, processes, and products. There is still considerable mystery, hype, and fear about AI in today’s world. Much of the current discourse focuses on a dystopian future that could adversely affect humanity. Such opinions, with understandable fear of the unknown, don’t consider the history of human innovation, the current state of business and technology, or the primarily augmentative nature of tomorrow’s AI. This book demystifies AI for the enterprise. It takes readers from the basics (definitions, state of the art, etc.) to a multi-industry journey, and concludes with expert advice on everything an organization must do to succeed. Along the way, we debunk myths, provide practical pointers, and include best practices with applicable vignettes. AI brings to the enterprise capabilities that promise new ways by which professionals can address both mundane and interesting challenges more efficiently, effectively, and collaboratively (with humans). The opportunity for tomorrow’s enterprise is to augment existing teams and resources with the power of AI in order to gain competitive advantage, discover new business models, establish or optimize new revenues, and achieve better customer and user satisfaction.
Artificial intelligence (AI) has captured our imaginations—and become a distraction. Too many leaders embrace the oversized narratives of artificial minds outpacing human intelligence and lose sight of the original problems they were meant to solve. When businesses try to “do AI,” they place an abstract solution before problems and customers without fully considering whether it is wise, whether the hype is true, or how AI will impact their organization in the long term. Often absent is sound reasoning for why they should go down this path in the first place. Doing AI explores AI for what it actually is—and what it is not—and the problems it can truly solve. In these pages, author Richard Heimann unravels the tricky relationship between problems and high-tech solutions, exploring the pitfalls in solution-centric thinking and explaining how businesses should rethink AI in a way that aligns with their cultures, goals, and values. As the Chief AI Officer at Cybraics Inc., Richard Heimann knows from experience that AI-specific strategies are often bad for business. Doing AI is his comprehensive guide that will help readers understand AI, avoid common pitfalls, and identify beneficial applications for their companies. This book is a must-read for anyone looking for clarity and practical guidance for identifying problems and effectively solving them, rather than getting sidetracked by a shiny new “solution” that doesn’t solve anything.
We already observe the positive effects of AI in almost every field, and foresee its potential to help address our sustainable development goals and the urgent challenges for the preservation of the environment. We also perceive that the risks related to the safety, security, confidentiality, and fairness of AI systems, the threats to free will of possibly manipulative systems, as well as the impact of AI on the economy, employment, human rights, equality, diversity, inclusion, and social cohesion need to be better assessed. The development and use of AI must be guided by principles of social cohesion, environmental sustainability, resource sharing, and inclusion. It has to integrate human rights, and social, cultural, and ethical values of democracy. It requires continued education and training as well as continual assessment of its effects through social deliberation. The “Reflections on AI for Humanity” proposed in this book develop the following issues and sketch approaches for addressing them:
How can we ensure the security requirements of critical applications and the safety and confidentiality of data communication and processing?
What techniques and regulations for the validation, certification, and audit of AI tools are needed to develop confidence in AI?
How can we identify and overcome biases in algorithms?
How do we design systems that respect essential human values, ensuring moral equality and inclusion?
What kinds of governance mechanisms are needed for personal data, metadata, and aggregated data at various levels?
What are the effects of AI and automation on the transformation and social division of labor? What are the impacts on economic structures? What proactive and accommodation measures will be required?
How will people benefit from decision support systems and personal digital assistants without the risk of manipulation?
How do we design transparent and intelligible procedures and ensure that their functions reflect our values and criteria?
How can we anticipate failure and restore human control over an AI system when it operates outside its intended scope?
How can we devote a substantial part of our research and development resources to the major challenges of our time such as climate, environment, health, and education?
The remarkable progress in algorithms for machine and deep learning has opened the doors to new opportunities, and some dark possibilities. However, a bright future awaits those who build on their working methods by including human-centered AI (HCAI) strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people, but to empower them by making design choices that give humans control over technology. In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist's guide to how artificial intelligence can be used to augment and enhance human lives. The book bridges the gap between ethical considerations and practical realities to offer a road map for successful, reliable systems. Digital cameras, communications services, and navigation apps are just the beginning. Shneiderman shows how future applications will support health and wellness, improve education, accelerate business, and connect people in reliable, safe, and trustworthy ways that respect human values, rights, justice, and dignity.
Companies that don't use AI to their advantage will soon be left behind. Artificial intelligence and machine learning will drive a massive reshaping of the economy and society. What should you and your company be doing right now to ensure that your business is poised for success? These articles by AI experts and consultants will help you understand today's essential thinking on what AI is capable of now, how to adopt it in your organization, and how the technology is likely to evolve in the near future. Artificial Intelligence: The Insights You Need from Harvard Business Review will help you spearhead important conversations, get going on the right AI initiatives for your company, and capitalize on the opportunity of the machine intelligence revolution. Catch up on current topics and deepen your understanding of them with the Insights You Need series from Harvard Business Review. Featuring some of HBR's best and most recent thinking, Insights You Need titles are both a primer on today's most pressing issues and an extension of the conversation, with interesting research, interviews, case studies, and practical ideas to help you explore how a particular issue will impact your company and what it will mean for you and your business.
Unlock unprecedented levels of value at your firm by implementing artificial intelligence. In The Secrets of AI Value Creation: Practical Guide to Business Value Creation with Artificial Intelligence from Strategy to Execution, a team of renowned artificial intelligence leaders and experts delivers an insightful blueprint for unlocking the value of AI in your company. This book presents a comprehensive framework that can be applied to your organisation, exploring the value drivers and challenges you might face throughout your AI journey. You will uncover effective strategies and tactics utilised by successful artificial intelligence (AI) achievers to propel business growth. In the book, you’ll explore critical value drivers and key capabilities that will determine the success or failure of your company’s AI initiatives. The authors examine the subject from multiple perspectives, including business, technology, data, algorithmics, and psychology. Organized into four parts and fourteen insightful chapters, the book includes:
Concrete examples and real-world case studies illustrating the practical impact of the ideas discussed within
Best practices used and common challenges encountered when first incorporating AI into your company’s operations
A comprehensive framework you can use to navigate the complexities of AI implementation and value creation

An indispensable blueprint for artificial intelligence implementation at your organisation, The Secrets of AI Value Creation is a can’t-miss resource for managers, executives, directors, entrepreneurs, founders, data analysts, and business- and tech-side professionals looking for ways to unlock new forms of value in their company. The authors, who are industry leaders, assemble the puzzle pieces into a comprehensive framework for AI value creation: Michael Proksch is an expert on the subject of AI strategy and value creation. He has worked with various Fortune 2000 organisations and focuses on optimising business operations, building customised AI solutions, and driving organisational adoption of AI through the creation of value and trust. Nisha Paliwal is a senior technology executive. She is known for her expertise in various technology services, focusing on the importance of bringing AI technology, computing resources, data, and talent together in a synchronous and organic way. Wilhelm Bielert is a seasoned senior executive with extensive experience in digital transformation, program and project management, and corporate restructuring. With a proven track record, he has successfully led transformative initiatives in multinational corporations, specialising in harnessing the power of AI and other cutting-edge technologies to drive substantial value creation.