"This book presents the most innovative systematic and practical facets of fuzzy computing technologies to students, scholars, and academicians, as well as practitioners, engineers, and professionals"--
Among the various multi-level formulations of mathematical models in decision making, this book focuses on the bi-level model. As the most frequently used formulation, the bi-level model addresses the conflicts that arise in multi-level decision making processes. From the perspective of bi-level structure and uncertainty, the book takes real-life problems as its background, focuses on so-called random-like uncertainty, and develops a general framework for random-like bi-level decision making problems. The random-like uncertainty considered here includes the random phenomenon, the random-overlapped random (Ra-Ra) phenomenon and the fuzzy-overlapped random (Ra-Fu) phenomenon. Basic theory, models, algorithms and practical applications for the different types of random-like bi-level decision making problems are also presented.
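For orientation, the bi-level structure referred to above is commonly written as a nested optimization problem. The following generic formulation is standard in the bi-level programming literature, not a statement of this book's specific models:

\[
\min_{x}\ F\bigl(x, y^{*}(x)\bigr) \quad \text{s.t.}\ G(x) \le 0, \qquad
y^{*}(x) \in \operatorname*{arg\,min}_{y}\ \bigl\{\, f(x, y) : g(x, y) \le 0 \,\bigr\},
\]

where the upper-level decision maker (leader) chooses \(x\) while anticipating the optimal response \(y^{*}(x)\) of the lower-level decision maker (follower). The random-like variants of the problem arise when the coefficients of \(F\), \(f\), \(G\) and \(g\) are random, Ra-Ra or Ra-Fu parameters rather than crisp numbers.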
A comprehensive, coherent and in-depth presentation of the state of the art in fuzzy clustering. Fuzzy clustering is now a mature and vibrant area of research with highly innovative advanced applications. Encapsulating this through a careful selection of research contributions, this book addresses timely and relevant concepts and methods, whilst identifying major challenges and recent developments in the area. Split into five clear sections (Fundamentals; Visualization; Algorithms and Computational Aspects; Real-Time and Dynamic Clustering; and Applications and Case Studies), the book covers a wealth of novel, original and fully updated material, and in particular offers:
- a focus on the algorithmic and computational augmentations of fuzzy clustering and its effectiveness in handling high-dimensional problems, distributed problem solving and uncertainty management;
- presentations of the important and relevant phases of cluster design, including the role of information granules and fuzzy sets in realizing the human-centricity facet of data analysis, as well as system modelling;
- demonstrations of how the results facilitate further detailed development of models and enhance their interpretability;
- a carefully organized, illustrative series of applications and case studies in which fuzzy clustering plays a pivotal role.
This book will be of key interest to engineers working in fuzzy control, bioinformatics, data mining, image processing and pattern recognition, while computer engineers, students and researchers in most engineering disciplines will find it an invaluable resource and research tool.
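To make the subject concrete, here is a minimal sketch of fuzzy c-means, the prototypical fuzzy clustering algorithm underlying work of this kind. It is not code from the book; the function name and parameters are illustrative only.

```python
# Minimal fuzzy c-means (FCM) sketch: alternate between centroid and
# membership updates until the soft partition stabilizes.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """X: (n, d) data matrix; c: number of clusters; m > 1: fuzzifier."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random membership matrix U (n, c), each row normalized to sum to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Centroids: means of the data weighted by memberships u_ik^m.
        W = U ** m                                 # (n, c)
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # (c, d)
        # Squared Euclidean distances point-to-centroid (eps avoids /0).
        D = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps
        # Membership update: u_ik proportional to D_ik^(-1/(m-1)).
        inv = D ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V
```

Unlike hard k-means, a call such as `U, V = fuzzy_c_means(X, c=3)` assigns every point a graded membership in every cluster, which is exactly the "soft" partition structure that fuzzy clustering methods build on.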
In recent years there has been growing interest in extending classical methods for data analysis. The aim is to allow more flexible modeling of phenomena such as uncertainty, imprecision or ignorance. Such extensions of classical probability theory and statistics are useful in many real-life situations, since uncertainties in data are not present only in the form of randomness: various types of incomplete or subjective information also have to be handled. About twelve years ago, the idea of strengthening the dialogue between the various research communities in the field of data analysis was born and resulted in the International Conference Series on Soft Methods in Probability and Statistics (SMPS). This book gathers contributions presented at SMPS 2012, held in Konstanz, Germany. Its aim is to present recent results illustrating new trends in intelligent data analysis. It gives a comprehensive overview of current research into the fusion of soft computing methods with probability and statistics. Synergies between the two fields may improve intelligent data analysis methods in terms of robustness to noise and applicability to larger datasets, while remaining able to obtain understandable solutions to real-world problems efficiently.
Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis. Soft computing focuses on obtaining working solutions quickly, accepting approximations and unconventional approaches. Its strength lies in its flexibility to create models that suit the needs arising in applications. In addition, it emphasizes the need for intuitive and interpretable models that are tolerant of imprecision and uncertainty. Statistics is more rigorous and focuses on establishing objective conclusions based on experimental data by analyzing the possible situations and their (relative) likelihood. It emphasizes the need for mathematical methods and tools to assess solutions and guarantee performance. Combining the two fields enhances the robustness and generalizability of data analysis methods, while preserving the flexibility to solve real-world problems efficiently and intuitively.
This monograph, now in a thoroughly revised second edition, offers the latest research on random sets. It has been extended to include substantial developments achieved since 2005, some of them motivated by applications of random sets to econometrics and finance. The present volume builds on the foundations laid by Matheron and others, including the vast advances in stochastic geometry, probability theory, set-valued analysis, and statistical inference. It shows the various interdisciplinary relationships of random set theory with other parts of mathematics, and at the same time fixes terminology and notation that often vary in the literature, establishing the theory as a natural part of modern probability theory and providing a platform for future development. It is completely self-contained, systematic and exhaustive, with the full proofs that are necessary to gain insight. Aimed at the research level, Theory of Random Sets will be an invaluable reference for probabilists; mathematicians working in convex and integral geometry, set-valued analysis, capacity and potential theory; mathematical statisticians in spatial statistics and uncertainty quantification; specialists in mathematical economics, econometrics, decision theory, and mathematical finance; and electronic and electrical engineers interested in image analysis.
For a long time, probability theory was the only well-founded theory of uncertainty. It was viewed either as a powerful tool for modelling random phenomena, or as a rational approach to the notion of degree of belief. During the last thirty years, in areas centered around decision theory, artificial intelligence and information processing, numerous approaches extending, or orthogonal to, the existing theory of probability and mathematical statistics have come to the fore. The common feature of these attempts is to allow for softer or wider frameworks for taking into account the incompleteness or imprecision of information. Many of these approaches come down to blending interval or fuzzy interval analysis with probabilistic methods. This book gathers contributions to the 4th International Conference on Soft Methods in Probability and Statistics. Its aim is to present recent results illustrating these new trends, which enlarge the statistical and uncertainty modelling traditions towards the handling of incomplete or subjective information. It covers a broad scope, ranging from the philosophical and mathematical underpinnings of new uncertainty theories, with a stress on their impact in the area of statistics and data analysis, to numerical methods and applications in environmental risk analysis and mechanical engineering. A unique feature of this collection is that it establishes a dialogue between fuzzy random variables and imprecise probability theories.
Although the label is a relatively recent one, the notions and principles of Granular Computing (GrC) have appeared in a different guise in many related fields, including granularity in Artificial Intelligence, interval computing, cluster analysis, quotient space theory and many others. Recent years have witnessed a renewed and expanding interest in the topic, as it begins to play a key role in bioinformatics, e-commerce, machine learning, security, data mining and wireless mobile computing when it comes to the issues of effectiveness, robustness and uncertainty. The Handbook of Granular Computing offers a comprehensive reference source for the granular computing community, edited by and with contributions from leading experts in the field. The handbook:
- includes chapters covering the foundations of granular computing, interval analysis and fuzzy set theory; hybrid methods and models of granular computing; and applications and case studies;
- is divided into five sections: Preliminaries, Fundamentals, Methodology and Algorithms, Development of Hybrid Models, and Applications and Case Studies;
- presents the flow of ideas in a systematic, well-organized manner, starting with the concepts and motivation and proceeding to detailed designs that materialize in specific algorithms, applications and case studies;
- provides the reader with a self-contained reference that includes all prerequisite knowledge, augmented with step-by-step explanations of more advanced concepts.
The Handbook of Granular Computing represents a significant and valuable contribution to the literature and will appeal to a broad audience, including researchers, students and practitioners in the fields of Computational Intelligence, pattern recognition, fuzzy sets and neural networks, system modelling, operations research and bioinformatics.
The idea of soft computing emerged in the early 1990s from the fuzzy systems community, and refers to an understanding that the uncertainty, imprecision and ignorance present in a problem should be explicitly represented, and possibly even exploited, rather than either eliminated or ignored in computations. For instance, Zadeh defined 'Soft Computing' as follows: "Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty and partial truth. In effect, the role model for soft computing is the human mind." Recently, soft computing has, to some extent, become synonymous with a hybrid approach combining AI techniques including fuzzy systems, neural networks, and biologically inspired methods such as genetic algorithms. Here, however, we adopt a more straightforward definition consistent with the original concept. Hence, soft methods are understood as those uncertainty formalisms, not part of mainstream statistics and probability theory, which have typically been developed within the AI and decision analysis community. These are mathematically sound uncertainty modelling methodologies which are complementary to conventional statistics and probability theory.