"This book presents the most commonly used techniques for the most statistical inferences based on fuzzy data. For this purpose, it provides common, simple techniques for statistical inference for fuzzy data. For simplicity in theoretical applications and in calculations, all fuzzy statistical procedures were conducted on fuzzy numbers"--
This book presents the most commonly used techniques for statistical inference based on fuzzy data. It brings together in one place many of the main ideas used in statistical inference from fuzzy information, including fuzzy data. The book covers a much wider range of topics than a typical introductory text on fuzzy statistics, including elementary probability, descriptive statistics, hypothesis tests, one-way ANOVA, control charts, reliability systems and regression models. The reader is assumed to know calculus and a little fuzzy set theory; conventional knowledge of probability and statistics is also required. Key Features: Includes examples in Mathematica and MATLAB. Contains theoretical and applied exercises for each section. Presents various popular methods for analyzing fuzzy data. The book is suitable for students and researchers in statistics, social science, engineering, and economics, and it can be used at the graduate and PhD levels.
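The annotation above notes that all fuzzy statistical procedures in the book are carried out on fuzzy numbers. As a hedged illustration of what such an object looks like (the book itself uses Mathematica and MATLAB; the function name and parameters below are illustrative assumptions, not taken from the book), the most common case, a triangular fuzzy number, can be sketched in Python:

```python
def triangular_membership(x, a, m, b):
    """Membership degree of x in the triangular fuzzy number (a, m, b):
    0 outside [a, b], rising linearly to 1 at the peak m, then falling.
    Illustrative sketch only; names and signature are assumptions."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)   # rising left branch
    return (b - x) / (b - m)       # falling right branch
```

For example, for the fuzzy number "about 2" encoded as (1, 2, 4), the value 2 has membership 1 and the value 3 has membership 0.5.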
Initially conceived as a methodology for the representation and manipulation of imprecise and vague information, fuzzy computation has found wide use in problems that fall well beyond its originally intended scope of application. Many scientists and engineers now use the paradigms of fuzzy computation to tackle problems that are either intractable or poorly suited to conventional methods.
Probability theory was for a long time the only well-founded theory of uncertainty. It was viewed either as a powerful tool for modelling random phenomena, or as a rational approach to the notion of degree of belief. During the last thirty years, in areas centered around decision theory, artificial intelligence and information processing, numerous approaches extending or orthogonal to the existing theory of probability and mathematical statistics have come to the fore. The common feature of these attempts is to allow for softer or wider frameworks that take into account the incompleteness or imprecision of information. Many of these approaches come down to blending interval or fuzzy interval analysis with probabilistic methods. This book gathers contributions to the 4th International Conference on Soft Methods in Probability and Statistics. Its aim is to present recent results illustrating such new trends, which enlarge the statistical and uncertainty-modelling traditions towards the handling of incomplete or subjective information. It covers a broad scope, ranging from the philosophical and mathematical underpinnings of new uncertainty theories, with a stress on their impact in the area of statistics and data analysis, to numerical methods and applications in environmental risk analysis and mechanical engineering. A unique feature of this collection is that it establishes a dialogue between fuzzy random variables and imprecise probability theories.
The contributions in this book demonstrate the complementary, rather than competitive, relationship between Probability and Fuzzy Set Theory, and show how suitable combinations of both theories can solve real-life problems.
This volume is a collection of papers presented at the International Conference on Nonlinear Mathematics for Uncertainty and Its Applications (NLMUA2011), held at Beijing University of Technology during the week of September 7–9, 2011. The conference brought together leading researchers and practitioners involved with all aspects of nonlinear mathematics for uncertainty and its applications. Over the last fifty years there have been many attempts at extending classical probability theory and statistical models to generalized models that can cope with problems of inference and decision making when the model-related information is scarce, vague, ambiguous, or incomplete. Such attempts include the study of nonadditive measures and their integrals, imprecise probabilities and random sets, and their applications in information sciences, economics, finance, insurance, engineering, and the social sciences. The book presents topics including nonadditive measures and nonlinear integrals; Choquet, Sugeno and other types of integrals; possibility theory; Dempster-Shafer theory; random sets; fuzzy random sets and related statistics; set-valued and fuzzy stochastic processes; imprecise probability theory and related statistical models; fuzzy mathematics; nonlinear functional analysis; information theory; mathematical finance and risk management; and decision making under various types of uncertainty, among others.
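Among the topics listed above, the Choquet integral with respect to a nonadditive measure is straightforward to illustrate concretely. The Python sketch below is an illustrative assumption, not code from the volume: it computes the discrete Choquet integral by sorting the scores in ascending order and weighting each increment by the capacity of the set of elements scoring at least that much. When the capacity happens to be additive (an ordinary probability measure), the result reduces to the usual expectation.

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of `values` (dict: element -> score)
    with respect to `capacity` (dict: frozenset of elements -> weight).
    Illustrative sketch only; names and signature are assumptions."""
    # Sort elements by ascending score: f(x_(1)) <= ... <= f(x_(n)).
    elems = sorted(values, key=values.get)
    total, prev = 0.0, 0.0
    for i, x in enumerate(elems):
        # Upper set A_(i) = {x_(i), ..., x_(n)}: elements with the
        # i-th smallest score or higher.
        upper = frozenset(elems[i:])
        total += (values[x] - prev) * capacity[upper]
        prev = values[x]
    return total
```

As a sanity check, with scores {a: 1, b: 2, c: 3} and the additive capacity induced by weights 0.2, 0.3, 0.5, the Choquet integral equals the expectation 0.2·1 + 0.3·2 + 0.5·3 = 2.3.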
Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis. Soft computing focuses on obtaining working solutions quickly, accepting approximations and unconventional approaches. Its strength lies in its flexibility to create models that suit the needs arising in applications. In addition, it emphasizes the need for intuitive and interpretable models, which are tolerant to imprecision and uncertainty. Statistics is more rigorous and focuses on establishing objective conclusions based on experimental data by analyzing the possible situations and their (relative) likelihood. It emphasizes the need for mathematical methods and tools to assess solutions and guarantee performance. Combining the two fields enhances the robustness and generalizability of data analysis methods, while preserving the flexibility to solve real-world problems efficiently and intuitively.
The analysis of experimental data resulting from some underlying random process is a fundamental part of most scientific research. Probability Theory and Statistics have been developed as flexible tools for this analysis, and have been applied successfully in various fields such as Biology, Economics, Engineering, Medicine and Psychology. However, traditional techniques in Probability and Statistics were devised to model only a single source of uncertainty, namely randomness. In many real-life problems randomness arises in conjunction with other sources, making the development of additional "softening" approaches essential. This book is a collection of papers presented at the 2nd International Conference on Soft Methods in Probability and Statistics (SMPS'2004) held in Oviedo, providing a comprehensive overview of the innovative research taking place within this emerging field.
This book discusses the problems of complexity in industrial data, including the problems of data sources, causes and types of data uncertainty, and methods of data preparation for further reasoning in engineering practice. Each data source has its own specificity, and a characteristic property of industrial data is its high degree of uncertainty. The book also explores a wide spectrum of soft modeling methods with illustrations pertaining to specific cases from diverse industrial processes. In soft modeling the physical nature of phenomena may not be known and may not be taken into consideration. Soft models usually employ simplified mathematical equations derived directly from the data obtained as observations or measurements of the given system. Although soft models may not explain the nature of the phenomenon or system under study, they usually point to its significant features or properties.
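A soft model of the kind described above, a simplified equation derived directly from observations rather than from the physics of the process, can be as plain as a least-squares line fitted to measured data. The sketch below is an illustrative assumption, not a method taken from the book:

```python
def fit_line(xs, ys):
    """Least-squares fit of y ~ slope * x + intercept, derived directly
    from observed data without modelling the underlying phenomenon.
    Illustrative sketch only; names and signature are assumptions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n          # sample means
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))  # covariance / variance
    return slope, my - slope * mx
```

Such a model will not explain why the system behaves as it does, but, as the annotation notes, it can still point to significant features, here the trend and its rate.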