The most important book on antitrust ever written. It shows how antitrust suits adversely affect the consumer by encouraging a costly form of protection for inefficient and uncompetitive small businesses.
Economists have begun to make much greater use of experimental methods in their research. This collection surveys these methods and shows how they can help us to understand firm behaviour in relation to various forms of competition policy.
This book uses game theory to analyse anti-competitive behaviour among firms and to consider its implications for competition policy. Part I focuses on 'explicit collusion': the author proves that 'four are few and six are many' and shows how cartels can be enforced under imperfect and incomplete information. Part II, on 'tacit collusion', discusses the informational requirements of detecting collusion in noncooperative repeated games. Part III, on 'semicollusion', shows how excess capacity reinforces collusion. Part IV is devoted to the detection of predatory pricing. Throughout, Louis Phlips applies the latest economic theory to several European antitrust decisions and empirical studies. The presentation of case studies, combined with a clear exposition of the theory, will make this book invaluable to teachers and students of competition policy.
"The report assesses the initiatives for information exchange among firms and their consequences for welfare with a view towards the design of competition policy in this domain. To this end the report surveys critically the academic literature on static and dynamic models of competition in their relation to information exchange and examines the main antitrust legislation and cases in Europe and the US."--Page i.
These contributions discuss important developments over the past decade in a newly established field of economics, developments that have led to notable changes in views on governmental competition policies. They focus on the nature and role of competition and other determinants of market structures, such as numbers of firms and barriers to entry; other factors which determine the effective degree of competition in the market; the influence of major firms (especially when these pursue objectives other than profit maximization); and decentralization and coordination under control relationships other than markets and hierarchies.

Contributors: Joseph E. Stiglitz, G. C. Archibald, B. C. Eaton, R. G. Lipsey, David Encaoua, Paul Geroski, Alexis Jacquemin, Richard J. Gilbert, Reinhard Selten, Oliver E. Williamson, Jerry R. Green, G. Frank Mathewson, R. A. Winter, C. d'Aspremont, J. Jaskold Gabszewicz, Steven Salop, Branko Horvat, Z. Roman, W. J. Baumol, J. C. Panzar, R. D. Willig, Richard Schmalensee, Richard Nelson, Michael Spence, and Partha Dasgupta
Since the first censuses were taken and crop yields recorded in ancient times, data collection and analysis have been essential to improving the functioning of society. Foundational work in calculus, probability theory, and statistics in the 17th and 18th centuries provided an array of new tools used by scientists to more precisely predict the movements of the sun and stars and to determine population-wide rates of crime, marriage, and suicide. These tools often led to stunning advances. In the 1800s, Dr. John Snow used early modern data science to map cholera “clusters” in London. By tracing a disease that was widely thought to be caused by “miasmatic” air to a contaminated public well, Snow helped lay the foundation for the germ theory of disease.

Gleaning insights from data to boost economic activity also took hold in American industry. Frederick Winslow Taylor's use of a stopwatch and a clipboard to analyze productivity at Midvale Steel Works in Pennsylvania increased output on the shop floor and fueled his belief that data science could revolutionize every aspect of life. In 1911, Taylor wrote The Principles of Scientific Management to answer President Theodore Roosevelt's call for increasing “national efficiency”.

Today, data is more deeply woven into the fabric of our lives than ever before. We aspire to use data to solve problems, improve well-being, and generate economic prosperity. The collection, storage, and analysis of data are on an upward and seemingly unbounded trajectory, fueled by increases in processing power, the cratering costs of computation and storage, and the growing number of sensor technologies embedded in devices of all kinds. In 2011, some estimated that the amount of information created and replicated would surpass 1.8 zettabytes. By 2013, estimates reached 4 zettabytes of data generated worldwide.
While the field of economics makes sharp distinctions and produces precise theory, the work of experimental economics sometimes appears blurred and may produce uncertain results. The contributors to this volume have provided brief notes describing specific experimental results.