We propose a framework to link empirical models of systemic risk to theoretical network/general equilibrium models used to understand the channels of transmission of systemic risk. The theoretical model allows for systemic risk due to interbank counterparty risk, common asset exposures/fire sales, and a “Minsky” cycle of optimism. The empirical model uses stock market and CDS spread data to estimate a multivariate density of equity returns and to compute the expected equity return for each bank, conditional on a bad macro outcome. These “cross-sectional” moments are used to re-calibrate the theoretical model and estimate the importance of the Minsky cycle of optimism in driving systemic risk.
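A minimal sketch of the "cross-sectional" moment described above, the expected equity return of each bank conditional on a bad macro outcome, computed here from an illustrative multivariate normal density. The paper's actual estimation from stock market and CDS spread data is more involved, and all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint density: market return plus three bank equity returns.
mu = np.array([0.05, 0.06, 0.04, 0.07])
corr = np.array([[1.0, 0.6, 0.5, 0.7],
                 [0.6, 1.0, 0.4, 0.5],
                 [0.5, 0.4, 1.0, 0.4],
                 [0.7, 0.5, 0.4, 1.0]])
vol = np.array([0.15, 0.30, 0.25, 0.35])
cov = np.outer(vol, vol) * corr

draws = rng.multivariate_normal(mu, cov, size=100_000)
market, banks = draws[:, 0], draws[:, 1:]

# "Bad macro outcome" proxied here by the worst 5% of market returns.
bad = market <= np.quantile(market, 0.05)
conditional_mean = banks[bad].mean(axis=0)
print("E[bank equity return | bad macro outcome]:", conditional_mean.round(3))
```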
This book is about how extreme and systemic risk can be analyzed in an integrated way. Risk analysis is understood to include measurement, assessment and management aspects. Integration is understood as being able to perform risk analysis for extreme and systemic events simultaneously. The presented approach is based on Sklar's theorem, which states that a multivariate distribution can be separated into two parts: one describing the marginal distributions and the other describing the dependence between them through a so-called copula. The book suggests reinterpreting Sklar's theorem from a system or network perspective, treating the copula as a network property and individual risks, including extreme risks, as elements within the network. In that way, extreme and systemic risk can be analyzed independently as well as jointly across several scales. The book is intended for a broad audience, and all techniques presented are illustrated with examples and applications, with a special focus on natural disaster events. Furthermore, an extensive literature review and discussion are given in each chapter for the interested reader.
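A minimal sketch of Sklar's decomposition, assuming a Gaussian copula purely for illustration: dependence between risks is specified by the copula, while each marginal, for example a heavy-tailed loss distribution for an individual risk, is specified separately. The marginals and dependence parameter below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

rho = 0.7                                   # dependence parameter of the copula
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

u = stats.norm.cdf(z)                       # copula sample on the unit square

# Plug in arbitrary marginals: a lognormal loss and a heavy-tailed Pareto loss.
loss_1 = stats.lognorm(s=1.0).ppf(u[:, 0])
loss_2 = stats.pareto(b=2.5).ppf(u[:, 1])

# Joint (systemic) tail behaviour follows from the copula, individual tails from the marginals.
p_joint_extreme = np.mean((loss_1 > np.quantile(loss_1, 0.99)) &
                          (loss_2 > np.quantile(loss_2, 0.99)))
print("P(both losses in their top 1%):", p_joint_extreme)
```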
In the aftermath of the recent financial crisis, the federal government has pursued significant regulatory reforms, including proposals to measure and monitor systemic risk. However, there is much debate about how this might be accomplished quantitatively and objectively—or whether this is even possible. A key issue is determining the appropriate trade-offs between risk and reward from a policy and social welfare perspective given the potential negative impact of crises. One of the first books to address the challenges of measuring statistical risk from a system-wide perspective, Quantifying Systemic Risk looks at the means of measuring systemic risk and explores alternative approaches. Among the topics discussed are the challenges of tying regulations to specific quantitative measures, the effects of learning and adaptation on the evolution of the market, and the distinction between the shocks that start a crisis and the mechanisms that enable it to grow.
The recent financial crisis and the difficulty of using mainstream macroeconomic models to accurately monitor and assess systemic risk have stimulated new analyses of how we measure economic activity and the development of more sophisticated models in which the financial sector plays a greater role. Markus Brunnermeier and Arvind Krishnamurthy have assembled contributions from leading academic researchers, central bankers, and other financial-market experts to explore the possibilities for advancing macroeconomic modeling in order to achieve more accurate economic measurement. Essays in this volume focus on the development of models capable of highlighting the vulnerabilities that leave the economy susceptible to adverse feedback loops and liquidity spirals. While these types of vulnerabilities have often been identified, they have not been consistently measured. In a financial world of increasing complexity and uncertainty, this volume is an invaluable resource for policymakers working to improve current measurement systems and for academics concerned with conceptualizing effective measurement.
The recent global financial crisis has forced a re-examination of risk transmission in the financial sector and how it affects financial stability. Current macroprudential policy and surveillance (MPS) efforts are aimed at establishing a regulatory framework that helps mitigate the risk from systemic linkages with a view towards enhancing the resilience of the financial sector. This paper presents a forward-looking framework ("Systemic CCA") to measure systemic solvency risk based on market-implied expected losses of financial institutions, with practical applications for financial sector risk management and system-wide capital assessment in top-down stress testing. The suggested approach uses advanced contingent claims analysis (CCA) to generate aggregate estimates of the joint default risk of multiple institutions as a conditional tail expectation using multivariate extreme value theory (EVT). In addition, the framework helps quantify the individual contributions to systemic risk and the contingent liabilities of the financial sector during times of stress.
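A minimal sketch of the contingent claims analysis building block underlying such a framework: the market-implied expected loss of a single institution, measured as the value of the implicit put option on its assets in a Merton-style model. The aggregation across institutions with a multivariate EVT dependence structure, as in the paper, is a further step not shown here; the balance-sheet figures below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def implied_expected_loss(assets, barrier, sigma_a, r, horizon):
    """Value of the implicit put option on firm assets (expected loss to creditors)."""
    d1 = (np.log(assets / barrier) + (r + 0.5 * sigma_a**2) * horizon) / (sigma_a * np.sqrt(horizon))
    d2 = d1 - sigma_a * np.sqrt(horizon)
    return barrier * np.exp(-r * horizon) * norm.cdf(-d2) - assets * norm.cdf(-d1)

# Hypothetical bank: assets 100, default barrier 80, 25% asset volatility, 1-year horizon.
print(implied_expected_loss(assets=100.0, barrier=80.0, sigma_a=0.25, r=0.02, horizon=1.0))
```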
This volume presents a unified mathematical framework for the transmission channels of damaging shocks that can lead to instability in financial systems. As the title suggests, financial contagion is analogous to the spread of disease, and damaging financial crises may be better understood by bringing to bear ideas from the study of other complex systems in our world. After considering how people have viewed financial crises and systemic risk in the past, it delves into the mechanics of the interactions between banking counterparties. It identifies a common mathematical structure shared by several types of crises: cascade mappings that are iterated until they reach a cascade equilibrium. Later chapters follow this theme, starting from the underlying random skeleton graph, developing the theory of bootstrap percolation, and ultimately arriving at techniques that can determine the large-scale nature of contagious financial cascades.
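A minimal sketch of a cascade mapping on a random skeleton graph, in the spirit of the bootstrap-percolation models the volume develops: a bank defaults once the number of defaulted counterparties it is linked to reaches a threshold, and the mapping is iterated to a cascade equilibrium. The graph, initial shock, and threshold below are illustrative assumptions, not the book's calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 200, 0.03                      # banks and link probability (Erdos-Renyi skeleton)
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1)
adj = adj | adj.T                     # undirected counterparty links

threshold = 2                         # defaulted counterparties a bank can tolerate before failing
defaulted = np.zeros(n, dtype=bool)
defaulted[rng.choice(n, size=5, replace=False)] = True   # initial shock

while True:                           # iterate the cascade mapping to its fixed point
    exposure = (adj & defaulted).sum(axis=1)   # defaulted counterparties per bank
    new = (exposure >= threshold) & ~defaulted
    if not new.any():
        break
    defaulted |= new

print(f"Cascade equilibrium: {defaulted.sum()} of {n} banks in default")
```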
Financial Risk Forecasting is a complete introduction to practical quantitative risk management, with a focus on market risk. Derived from the author's teaching notes and years spent training practitioners in risk management techniques, it brings together the three key disciplines of finance, statistics and modeling (programming) to provide a thorough grounding in risk management techniques. Written by renowned risk expert Jon Danielsson, the book begins with an introduction to financial markets and market prices, volatility clusters, fat tails and nonlinear dependence. It then goes on to present volatility forecasting with both univariate and multivariate methods, discussing the various methods used by industry, with a special focus on the GARCH family of models. The evaluation of the quality of forecasts is discussed in detail. Next, the main concepts in risk and models to forecast risk are discussed, especially volatility, value-at-risk and expected shortfall. The focus is on risk both in basic assets such as stocks and foreign exchange and in bonds and options, with analytical methods such as delta-normal VaR and duration-normal VaR as well as Monte Carlo simulation. The book then moves on to the evaluation of risk models with methods like backtesting, followed by a discussion of stress testing. The book concludes by focusing on the forecasting of risk in very large and uncommon events with extreme value theory, and by considering the assumption underlying almost every risk model in practical use – that risk is exogenous – and what happens when that assumption is violated. Every method presented brings together a theoretical discussion, the derivation of key equations and a discussion of issues in practical implementation. Each method is implemented in both MATLAB and R, two of the most commonly used mathematical programming languages for risk forecasting, with which the reader can implement the models illustrated in the book. The book includes four appendices. The first introduces basic concepts in statistics and financial time series referred to throughout the book. The second and third introduce R and MATLAB, providing a discussion of the basic implementation of the software packages. The final appendix looks at the concept of maximum likelihood, especially issues in implementation and testing. The book is accompanied by a website – www.financialriskforecasting.com – which features downloadable code as used in the book.
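The book's own examples are written in MATLAB and R; purely as a language-neutral illustration of two of the risk measures it covers, the sketch below computes historical-simulation value-at-risk and expected shortfall for a simulated fat-tailed return series. The return process and probability level are assumptions made for this example, not the book's data.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=2_500) * 0.01   # fat-tailed daily returns

p = 0.01                                            # 1% probability level
var = -np.quantile(returns, p)                      # value-at-risk, reported as a positive loss
es = -returns[returns <= -var].mean()               # expected shortfall beyond the VaR

print(f"1% VaR: {var:.4f}, 1% ES: {es:.4f}")
```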
This global handbook provides an up-to-date and comprehensive overview of shadow banking, or market-based finance as it has recently been termed. Engaging in financial intermediation outside of normal regulatory parameters, the shadow banking sector was arguably a critical factor in causing the 2007-2009 financial crisis. This volume focuses specifically on shadow banking activities, risk, policy and regulatory issues. It evaluates the nexus between policy design and regulatory output around the world, paying attention to the concept of risk in all its dimensions—the legal, financial, market, economic and monetary perspectives. Particular attention is given to spillover risk, contagion risk and systemic risk and their positioning and relevance in shadow banking activities. Newly introduced and incoming policies are evaluated in detail, as is how risk is managed, observed and assessed, and how new regulation can potentially create new sources of risk. Volume I concludes with an analysis of what will happen, and what still needs to happen, in the event of another crisis. Proposing innovative suggestions for improvement, including a novel Pigovian tax to tame financial and systemic risks, this handbook is a must-read for professionals and policy-makers within the banking sector, as well as those researching economics and finance.
The objective of Risk Analysis in Theory and Practice is to present an analytical framework for the economics of risk and to illustrate how it can be used in the investigation of economic decisions under risk. In a sense, the economics of risk is a difficult subject: it involves understanding human decisions in the absence of perfect information. How do we make decisions when we do not know some of the events affecting us? The complexities of our uncertain world and of how humans obtain and process information make this difficult. In spite of these difficulties, much progress has been made. First, probability theory is the cornerstone of risk assessment, allowing us to measure risk in a fashion that can be communicated among decision makers or researchers. Second, risk preferences are now better understood, providing useful insights into the economic rationality of decision making under uncertainty. Third, over recent decades, good insights have been developed about the value of information, helping us better understand the role of information in human decision making. The book provides a systematic treatment of these issues in the context of both private and public decisions under uncertainty.
- Balanced treatment of conceptual models and applied analysis
- Considers both private and public decisions under uncertainty
- Website presents application exercises in Excel
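A minimal illustration of the risk-preference concepts the book treats, under an assumed CRRA utility function: the certainty equivalent and risk premium of a simple two-outcome gamble. The gamble and the risk-aversion coefficient are hypothetical.

```python
import numpy as np

def crra_utility(w, gamma):
    return np.log(w) if gamma == 1 else (w**(1 - gamma) - 1) / (1 - gamma)

def crra_inverse(u, gamma):
    return np.exp(u) if gamma == 1 else ((1 - gamma) * u + 1)**(1 / (1 - gamma))

outcomes = np.array([50.0, 150.0])   # equally likely wealth outcomes
probs = np.array([0.5, 0.5])
gamma = 2.0                          # coefficient of relative risk aversion

expected_wealth = probs @ outcomes
certainty_equivalent = crra_inverse(probs @ crra_utility(outcomes, gamma), gamma)
risk_premium = expected_wealth - certainty_equivalent
print(f"CE = {certainty_equivalent:.2f}, risk premium = {risk_premium:.2f}")
```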