This book constitutes the refereed proceedings of the 26th International Conference on the Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2006, held in Kolkata, India, in December 2006. It contains 38 papers covering a broad variety of current topics in the theory of computing, including formal methods, discrete mathematics, complexity theory, and automata theory.
This book constitutes the refereed proceedings of the 27th International Conference on the Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2007, held in New Delhi, India, in December 2007. The 40 revised full papers presented together with five invited papers were carefully reviewed. They provide original research results in fundamental aspects of computer science and reports from the frontline of software technology and theoretical computer science.
This book studies exponential time algorithms for NP-hard problems. In this modern area, the aim is to design algorithms for combinatorially hard problems that run provably faster than a brute-force enumeration of all candidate solutions. After an introduction and a survey of the field, the text focuses first on the design and especially the analysis of branching algorithms. The analysis of these algorithms relies heavily on measures of the instances that aim to capture their structure, not merely their size; such measures are better suited to quantifying the progress an algorithm makes while solving a problem. New techniques that expand the methodology for designing exponential time algorithms are then presented. Two of them combine treewidth-based algorithms with branching or enumeration algorithms. Another is the iterative compression technique, prominent in the design of parameterized algorithms and adapted here to the design of exponential time algorithms. The book assumes basic knowledge of algorithms and should serve anyone interested in exactly solving hard problems.
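To make the branching paradigm concrete, here is a minimal sketch of a branching algorithm for Maximum Independent Set. It is not code from the book: the adjacency-set representation and the function names are illustrative choices, and the comments refer only to the simplest possible measure, the number of vertices.

```python
# Minimal sketch of a branching algorithm for Maximum Independent Set
# (illustrative only; the representation and names are not from the book).
# The graph is an adjacency-set dictionary {vertex: set of neighbours}.

def remove_vertices(graph, vertices):
    """Return a copy of the graph with the given vertices deleted."""
    vertices = set(vertices)
    return {v: nbrs - vertices for v, nbrs in graph.items() if v not in vertices}

def mis_size(graph):
    """Size of a maximum independent set, computed by branching."""
    if not graph:
        return 0
    # Simplification rule: a vertex of degree at most 1 can always be taken.
    for v, nbrs in graph.items():
        if len(nbrs) <= 1:
            return 1 + mis_size(remove_vertices(graph, {v} | nbrs))
    # Branching rule: pick a vertex of maximum degree (here at least 2).
    v = max(graph, key=lambda u: len(graph[u]))
    # Branch 1: v is in the solution -> delete v and all its neighbours.
    take_v = 1 + mis_size(remove_vertices(graph, {v} | graph[v]))
    # Branch 2: v is not in the solution -> delete v only.
    skip_v = mis_size(remove_vertices(graph, {v}))
    return max(take_v, skip_v)

if __name__ == "__main__":
    # A 5-cycle: its maximum independent sets have size 2.
    cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
    print(mis_size(cycle5))  # -> 2
```

With the number of vertices as the measure, the take-v branch deletes at least three vertices (v has degree at least 2 when the branching rule fires) and the skip-v branch deletes one, so the recurrence T(n) <= T(n-1) + T(n-3) already bounds the running time by O*(c^n) for a constant c < 2; the measure-based analyses in the book refine precisely this kind of argument, for instance by weighting vertices according to their degree.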
Stochastic games provide a versatile model for reactive systems that are affected by random events. This dissertation advances the algorithmic theory of stochastic games to incorporate multiple players, whose objectives are not necessarily conflicting. The basis of this work is a comprehensive complexity-theoretic analysis of the standard game-theoretic solution concepts in the context of stochastic games over a finite state space. One main result is that the constrained existence of a Nash equilibrium becomes undecidable in this setting. This impossibility result is accompanied by several positive results, including efficient algorithms for natural special cases.
Focuses on various issues related to engineering trustworthy cyber-physical systems; contributes to the improved understanding of system concepts and standardization, and presents a research roadmap; emphasizes tool-supported methods and focuses on practical issues faced by practitioners; covers the experience of deploying advanced system engineering methods in industry; includes contributions from leading international experts; and offers supplementary material on the book website: http://research.nii.ac.jp/tcps/
This book constitutes the proceedings of the 19th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2016, which took place in Eindhoven, The Netherlands, in April 2016, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016. The 31 full papers presented in this volume were carefully reviewed and selected from 85 submissions. They were organized in topical sections named: types; recursion and fixed-points; verification and program analysis; automata, logic, games; probabilistic and timed systems; proof theory and lambda calculus; algorithms for infinite systems; and monads.
This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.
Security protocols employed in practice are part of our everyday life, and we depend heavily on their security. The complexity of these protocols still poses a major challenge to their comprehensive analysis. To cope with this complexity, a promising approach is modular security analysis based on universal composability frameworks, such as Canetti's UC model. This appealing approach has, however, only rarely been applied to the analysis of (existing) real-world protocols: either the analysis was not fully modular, or it could only be applied to idealized variants of the protocols. The main goal of this thesis is therefore to push modular protocol analysis as far as possible without giving up on accurate modeling. Our main contributions in a nutshell: (1) an ideal functionality for symmetric key cryptography that provides a solid foundation for faithful, composable cryptographic analysis of real-world security protocols; (2) a computational soundness result for the formal analysis of key exchange protocols that use symmetric encryption; (3) novel universal and joint state composition theorems that are applicable to the analysis of real-world security protocols; (4) case studies on several security protocols: SSL/TLS, IEEE 802.11i (WPA2), SSH, IPsec, and EAP-PSK. We show that our new composition theorems can be used for a faithful, modular analysis of these protocols. In addition, we prove composable security properties for two central protocols of the IEEE standard 802.11i, namely the 4-Way Handshake Protocol and the CCM Protocol; this constitutes the first rigorous cryptographic analysis of these protocols. While our applications focus on real-world security protocols, our theorems, models, and techniques should be useful beyond this domain.
The concept of 'shape' is at the heart of image processing and computer vision, yet researchers still have some way to go to replicate the human brain's ability to extrapolate meaning from the most basic of outlines. This volume reflects the advances of the last decade, which have also opened up tough new challenges in image processing. Today's applications require flexible models as well as efficient, mathematically justified algorithms that allow data processing within an acceptable timeframe. Examining important topics in continuous-scale and discrete modeling, as well as in modern algorithms, the book is the product of a key seminar focused on innovations in the field. It is a thorough introduction to the latest technology, especially given the tutorial style of a number of chapters. It also succeeds in identifying promising avenues for future research. The topics covered include mathematical morphology, skeletonization, statistical shape modeling, continuous-scale shape models such as partial differential equations and the theory of discrete shape descriptors. Some authors highlight new areas of enquiry such as partite skeletons, multi-component shapes, deformable shape models, and the use of distance fields. Combining the latest theoretical analysis with cutting-edge applications, this book will attract both academics and engineers.
For a long time computer scientists have distinguished between fast and slow algorithms. Fast (or good) algorithms are the algorithms that run in polynomial time, which means that the number of steps required for the algorithm to solve a problem is bounded by some polynomial in the length of the input. All other algorithms are slow (or bad). The running time of slow algorithms is usually exponential. This book is about bad algorithms. There are several reasons why we are interested in exponential time algorithms. Most of us believe that there are many natural problems which cannot be solved by polynomial time algorithms. The most famous and oldest family of hard problems is the family of NP-complete problems. Most likely there are no polynomial time algorithms solving these hard problems, and in the worst case scenario the exponential running time is unavoidable. Every combinatorial problem is solvable in finite time by enumerating all possible solutions, i.e. by brute force search. But is brute force search always unavoidable? Definitely not. Already in the nineteen sixties and seventies it was known that some NP-complete problems can be solved significantly faster than by brute force search. Three classic examples are the following algorithms for the TRAVELLING SALESMAN problem, MAXIMUM INDEPENDENT SET, and COLORING.
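The classic dynamic programming algorithm of Held and Karp for the TRAVELLING SALESMAN problem is a good illustration: it computes an optimal tour in O(2^n n^2) time instead of the factorially many tours inspected by brute-force enumeration. The sketch below is a minimal Python rendering of that idea; the function name and the distance-matrix interface are my own choices, not code from the book.

```python
# Minimal sketch of the classic Held-Karp dynamic programming algorithm
# for the Travelling Salesman problem: O(2^n * n^2) time, O(2^n * n) space,
# versus the factorially many tours of brute-force enumeration.

from itertools import combinations

def held_karp(dist):
    """Length of a shortest tour that visits every city once, starting and ending at city 0.

    dist is an n x n matrix of pairwise distances (dist[i][j] from i to j).
    """
    n = len(dist)
    # dp[(S, j)] = length of a shortest path that starts at city 0, visits
    # exactly the cities in frozenset S (0 excluded), and ends at j in S.
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

if __name__ == "__main__":
    # A small instance: the optimal tour 0-1-3-2-0 has length 80.
    dist = [[0, 10, 15, 20],
            [10, 0, 35, 25],
            [15, 35, 0, 30],
            [20, 25, 30, 0]]
    print(held_karp(dist))  # -> 80
```

The table indexed by (subset of cities, last city) is what brings the running time down from factorial to single exponential, and it is exactly this kind of improvement over brute force that the book is concerned with.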