This book constitutes the refereed proceedings of the First Annual International Frontiers of Algorithmics Workshop, FAW 2007, held in Lanzhou, China in August 2007. Topics covered in the papers include bioinformatics, discrete structures, geometric information processing and communication, games and incentive analysis, graph algorithms, internet algorithms and protocols, and algorithms in medical applications.
This book constitutes the proceedings of the 11th International Workshop on Frontiers in Algorithmics, FAW 2017, held in Chengdu, China, in June 2017. The 24 papers presented in this volume were carefully reviewed and selected from 61 submissions. They deal with all aspects of theoretical computer science and algorithms.
The Third International Frontiers of Algorithmics Workshop (FAW 2009), held during June 20–23, 2009 at Hefei University of Technology, Hefei, Anhui, China, continued to provide a focused forum on current trends in research on algorithmics, including discrete structures, and their applications. We aim at stimulating the various fields for which algorithmics can become a crucial enabler, and to strengthen the ties between the Eastern and Western algorithmics research communities as well as theory and practice of algorithmics. We had three distinguished invited speakers: Guoliang Chen, Andrew Chi-Chih Yao and Frances Foong Yao, speaking on parallel computing, communication complexity and applications, and computer and network power management. The final program also included 33 peer-reviewed papers selected out of 87 contributed submissions, covering topics including approximation and online algorithms; computational geometry; graph theory and graph algorithms; games and applications; heuristics; large-scale data mining; machine learning; pattern recognition algorithms; and parameterized algorithms. April 2009: Xiaotie Deng, John Hopcroft, Jinyun Xue. FAW 2009 was organized by Hefei University of Technology, China.
Democratic Frontiers: Algorithms and Society focuses on digital platforms’ effects in societies with respect to key areas such as subjectivity and self-reflection, data and measurement for the common good, public health and accessible datasets, activism in social media and the import/export of AI technologies relative to regime type. Digital technologies develop at a much faster pace relative to our systems of governance, which are supposed to embody democratic principles that are comparatively timeless, whether rooted in ancient Greek or Enlightenment ideas of freedom, autonomy and citizenship. Algorithms, computing millions of calculations per second, do not pause to reflect on their operations. Developments in the accumulation of vast private datasets that are used to train automated machine learning algorithms pose new challenges for upholding these values. Social media platforms, while the key driver of today’s information disorder, also afford new opportunities for organized social activism. The US and China, presumably at opposite ends of an ideological spectrum, are the main exporters of AI technology to both free and totalitarian societies. These are some of the important topics covered by this volume, which examines the democratic stakes for societies amid the rapid expansion of these technologies. Scholars and students from many backgrounds, as well as policy makers, journalists and the general reading public, will find a multidisciplinary approach to issues of democratic values and governance encompassing research from Sociology, Digital Humanities, New Media, Psychology, Communication, International Relations and Economics. Chapter 3 of this book is available for free in PDF format as Open Access from the individual product page at www.routledge.com. It has been made available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license.
Algorithms are a dominant force in modern culture, and every indication is that they will become more pervasive, not less. The best algorithms are undergirded by beautiful mathematics. This text cuts across discipline boundaries to highlight some of the most famous and successful algorithms. Readers are exposed to the principles behind these examples and guided in assembling complex algorithms from simpler building blocks. Written in clear, instructive language within the constraints of mathematical rigor, Algorithms from THE BOOK includes a large number of classroom-tested exercises at the end of each chapter. The appendices cover background material often omitted from undergraduate courses. Most of the algorithm descriptions are accompanied by Julia code, an ideal language for scientific computing. This code is immediately available for experimentation. Algorithms from THE BOOK is aimed at first-year graduate and advanced undergraduate students. It will also serve as a convenient reference for professionals throughout the mathematical sciences, physical sciences, engineering, and the quantitative sectors of the biological and social sciences.
Boolean circuit complexity is the combinatorics of computer science and involves many intriguing problems that are easy to state and explain, even for the layman. This book is a comprehensive description of basic lower bound arguments, covering many of the gems of this “complexity Waterloo” that have been discovered over the past several decades, right up to results from the last year or two. Many open problems, marked as Research Problems, are mentioned along the way. The problems are mainly of combinatorial flavor but their solutions could have great consequences in circuit complexity and computer science. The book will be of interest to graduate students and researchers in the fields of computer science and discrete mathematics.
A groundbreaking narrative on the urgency of ethically designed AI and a guidebook to reimagining life in the era of intelligent technology. The Age of Intelligent Machines is upon us, and we are at an inflection point. The proliferation of fast-moving technologies, including forms of artificial intelligence akin to a new species, will cause us to confront profound questions about ourselves. The era of human intellectual superiority is ending, and we need to plan for this monumental shift. A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are examines the immense impact intelligent technology will have on humanity. These machines, while challenging our personal beliefs and our socioeconomic world order, also have the potential to transform our health and well-being, alleviate poverty and suffering, and reveal the mysteries of intelligence and consciousness. International human rights attorney Flynn Coleman deftly argues that it is critical that we instill values, ethics, and morals into our robots, algorithms, and other forms of AI. Equally important, we need to develop and implement laws, policies, and oversight mechanisms to protect us from tech’s insidious threats. To realize AI’s transcendent potential, Coleman advocates for inviting a diverse group of voices to participate in designing our intelligent machines and using our moral imagination to ensure that human rights, empathy, and equity are core principles of emerging technologies. Ultimately, A Human Algorithm is a clarion call for building a more humane future and moving conscientiously into a new frontier of our own design. “[Coleman] argues that the algorithms of machine learning, if they are instilled with human ethics and values, could bring about a new era of enlightenment.” —San Francisco Chronicle
This book starts by presenting the basics of reinforcement learning using highly intuitive and easy-to-understand examples and applications, and then introduces the cutting-edge research advances that make reinforcement learning capable of outperforming most state-of-the-art systems, and even humans, in a number of applications. The book not only equips readers with an understanding of multiple advanced and innovative algorithms, but also prepares them to implement systems such as those created by Google DeepMind in actual code. This book is intended for readers who want to both understand and apply advanced concepts in a field that combines the best of two worlds – deep learning and reinforcement learning – to tap the potential of ‘advanced artificial intelligence’ for creating real-world applications and game-winning algorithms.
Data mining of massive data sets is transforming the way we think about crisis response, marketing, entertainment, cybersecurity and national intelligence. Collections of documents, images, videos, and networks are being thought of not merely as bit strings to be stored, indexed, and retrieved, but as potential sources of discovery and knowledge, requiring sophisticated analysis techniques that go far beyond classical indexing and keyword counting, aiming to find relational and semantic interpretations of the phenomena underlying the data. Frontiers in Massive Data Analysis examines the frontier of analyzing massive amounts of data, whether in a static database or streaming through a system. Data at that scale (terabytes and petabytes) is increasingly common in science (e.g., particle physics, remote sensing, genomics), Internet commerce, business analytics, national security, communications, and elsewhere. The tools that work to infer knowledge from data at smaller scales do not necessarily work, or work well, at such massive scale. New tools, skills, and approaches are necessary, and this report identifies many of them, plus promising research directions to explore. Frontiers in Massive Data Analysis discusses pitfalls in trying to infer knowledge from massive data, and it characterizes seven major classes of computation that are common in the analysis of massive data. Overall, this report illustrates the cross-disciplinary knowledge (from computer science, statistics, machine learning, and application disciplines) that must be brought to bear to make useful inferences from massive data.