This first textbook on formal concept analysis gives a systematic presentation of the mathematical foundations and their relations to applications in computer science, especially in data analysis and knowledge processing. Above all, it presents graphical methods for representing conceptual systems that have proved effective in communicating knowledge. The mathematical foundations are treated thoroughly and illuminated by numerous examples, making the basic theory readily accessible in compact form.
Formal concept analysis has been developed as a field of applied mathematics based on the mathematization of concepts and concept hierarchies. It thereby allows us to mathematically represent, analyze, and construct conceptual structures. The approach has proven successful in a wide range of application fields. This book constitutes a comprehensive and systematic presentation of the state of the art of formal concept analysis and its applications. The first part of the book is devoted to foundational and methodological topics. The contributions in the second part demonstrate how formal concept analysis is successfully used outside of mathematics, in linguistics, text retrieval, association rule mining, data analysis, and economics. The third part presents applications in software engineering.
FCA is an important formalism associated with a variety of research areas such as lattice theory, knowledge representation, data mining, machine learning, and the Semantic Web. It is successfully exploited in an increasing number of application domains such as software engineering, information retrieval, social network analysis, and bioinformatics. Its mathematical power comes from its concept lattice formalization, in which each element of the lattice captures a formal concept while the whole structure represents a conceptual hierarchy that supports browsing, clustering, and association rule mining. Complex data analytics refers to advanced methods and tools for mining and analyzing data with complex structures such as XML/JSON data, text and image data, multidimensional data, graphs, sequences, and streaming data. It also covers visualization mechanisms used to highlight the discovered knowledge. This edited book examines a set of important and relevant research directions in complex data management and updates the contribution of the FCA community to the analysis of complex and large data such as knowledge graphs and interlinked contexts. For example, formal concept analysis and some of its extensions are exploited, revisited, and coupled with recent parallel and distributed processing paradigms to maximize the benefits of analyzing large data.
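To make the concept lattice formalization mentioned above concrete, here is a minimal Python sketch, built on a made-up toy context, of the two FCA derivation operators and a naive enumeration of all formal concepts, i.e., pairs (A, B) with A' = B and B' = A; the object and attribute names are purely illustrative:

```python
from itertools import combinations

# Formal context: each object maps to the set of attributes it has.
context = {
    "duck":    {"flies", "swims", "lays_eggs"},
    "eagle":   {"flies", "lays_eggs"},
    "penguin": {"swims", "lays_eggs"},
    "dog":     set(),
}
attributes = set().union(*context.values())

def intent(objects):
    """Derivation A -> A': attributes shared by every object in A."""
    return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

def extent(attrs):
    """Derivation B -> B': objects having every attribute in B."""
    return {o for o, a in context.items() if attrs <= a}

# Naive enumeration: close every attribute subset B via B -> B' -> B''.
# Each resulting pair (B', B'') is a formal concept.
concepts = set()
for r in range(len(attributes) + 1):
    for b in combinations(sorted(attributes), r):
        e = extent(set(b))
        concepts.add((frozenset(e), frozenset(intent(e))))

# Print the lattice elements ordered by extent size (bottom to top).
for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), "<->", sorted(i))
```

The printed pairs are exactly the elements of the concept lattice; ordering them by extent inclusion yields the conceptual hierarchy that browsing, clustering, and association rule mining operate on.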
The book studies the existing and potential connections between Social Network Analysis (SNA) and Formal Concept Analysis (FCA) by showing how standard SNA techniques, usually based on graph theory, can be supplemented by FCA methods, which rely on lattice theory. The book presents contributions to the following areas: acquisition of terminological knowledge from social networks, knowledge communities, individuality computation, other types of FCA-based analysis of bipartite graphs (two-mode networks), multimodal clustering, community detection and description in one-mode and multi-mode networks, adaptation of the dual-projection approach to weighted bipartite graphs, extensions of Kleinberg's HITS algorithm, and attributed graph analysis.
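The bridge between the two fields is direct, because a two-mode network is formally the same object as a binary context. The following Python sketch, with hypothetical actors and events, encodes a small affiliation network and computes the classical dual projection onto its two one-mode networks; it is a simplified illustration, not the book's method:

```python
from itertools import combinations
from collections import Counter

# Two-mode network: actors x events (e.g., people attending meetings).
# This doubles as a formal context with actors as objects.
affiliation = {
    "alice": {"e1", "e2"},
    "bob":   {"e1", "e3"},
    "carol": {"e2", "e3"},
}

# Projection onto actors: edge weight = number of shared events.
actor_net = Counter()
for u, v in combinations(sorted(affiliation), 2):
    shared = affiliation[u] & affiliation[v]
    if shared:
        actor_net[(u, v)] = len(shared)

# Dual projection onto events: edge weight = number of shared actors.
events = set().union(*affiliation.values())
event_net = Counter()
for e, f in combinations(sorted(events), 2):
    shared = {a for a, es in affiliation.items() if {e, f} <= es}
    if shared:
        event_net[(e, f)] = len(shared)

print(dict(actor_net))  # each pair of actors shares exactly one event here
print(dict(event_net))  # each pair of events shares exactly one actor here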
This book constitutes the refereed proceedings of the Second International Conference on Formal Concept Analysis, ICFCA 2004, held in Sydney, Australia, in February 2004. The 27 revised full papers presented together with 7 invited papers were carefully reviewed and selected for inclusion in the book. Formal concept analysis emerged out of efforts to restructure lattice theory and has been extended into attribute exploration, Boolean judgment, and contextual logics in order to create a powerful general framework for knowledge representation and formal reasoning. Among the application areas of formal concept analysis are data and knowledge processing, data visualization, information retrieval, machine learning, data analysis, and knowledge management. The papers in this book address all current issues in formal concept analysis, ranging from foundational and methodological issues to applications in various fields.
This book constitutes the thoroughly refereed conference proceedings of the 9th International Conference on Rough Sets and Knowledge Technology, RSKT 2014, held in Shanghai, China, in October 2014. The 70 papers presented were carefully reviewed and selected from 162 submissions. The papers in this volume cover topics such as foundations and generalizations of rough sets; attribute reduction and feature selection; applications of rough sets; intelligent systems and applications; knowledge technology; domain-oriented data-driven data mining; uncertainty in granular computing; advances in granular computing; big data to wise decisions; rough set theory; and three-way decisions, uncertainty, and granular computing.
This volume contains lecture notes of the 15th Reasoning Web Summer School (RW 2019), held in Bolzano, Italy, in September 2019. The research areas of the Semantic Web, Linked Data, and Knowledge Graphs have recently received a lot of attention in academia and industry. Since its inception in 2001, the Semantic Web has aimed at enriching the existing Web with meta-data and processing methods, so as to provide Web-based systems with intelligent capabilities such as context awareness and decision support. The Semantic Web vision has been driving many community efforts that have invested considerable resources in developing vocabularies and ontologies for annotating their resources semantically. Besides ontologies, rules have long been a central part of the Semantic Web framework and are available as one of its fundamental representation tools, with logic serving as a unifying foundation. Linked Data is a related research area that studies how RDF data can be made available on the Web and interconnected with other data, with the aim of increasing its value for everybody. Knowledge Graphs have been shown to be useful not only for Web search (as demonstrated by Google, Bing, etc.) but also in many application domains.
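As a rough illustration of the Linked Data idea of publishing RDF data and interconnecting it with other datasets, here is a small Python sketch using the rdflib library; the example.org resources and the chosen interlink are hypothetical:

```python
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/")

g = Graph()
# Describe a local resource with meta-data ...
g.add((EX.FormalConceptAnalysis, RDF.type, EX.ResearchField))
g.add((EX.FormalConceptAnalysis, RDFS.label, Literal("Formal Concept Analysis")))
# ... and interlink it with an external dataset (here: DBpedia),
# which is what gives Linked Data its added value.
g.add((EX.FormalConceptAnalysis, OWL.sameAs,
       URIRef("http://dbpedia.org/resource/Formal_concept_analysis")))

# Serialize in Turtle, a common format for publishing RDF on the Web.
print(g.serialize(format="turtle"))
```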
In the beginning of 1983, I came across A. Kaufmann's book "Introduction to the Theory of Fuzzy Sets" (Academic Press, New York, 1975). This was my first acquaintance with fuzzy set theory. I then tried to introduce a new component (which determines the degree of non-membership) into the definition of these sets and to study the properties of the new objects so defined. I defined the ordinary operations "∩", "∪", "+" and "·" over the new sets, but I began to look at them more seriously in April 1983, when I defined operators analogous to the modal operators of "necessity" and "possibility". The late George Gargov (7 April 1947 - 9 November 1996) is the "godfather" of the sets I introduced - in fact, he invented the name "intuitionistic fuzzy", motivated by the fact that the law of the excluded middle does not hold for them. Presently, intuitionistic fuzzy sets are an object of intensive research by scholars and scientists from over ten countries. This book is the first attempt at a more comprehensive and complete report on intuitionistic fuzzy set theory and its most relevant applications in a variety of diverse fields. In this sense, it also has a referential character.
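For readers unfamiliar with the definitions alluded to above, the following minimal Python sketch (with made-up membership values) shows the standard set operations and the two modal operators on Atanassov's intuitionistic fuzzy sets, where each element carries a membership degree mu and a non-membership degree nu with mu + nu <= 1:

```python
# An intuitionistic fuzzy set over a finite universe: {element: (mu, nu)}.
# The values below are purely illustrative.
A = {"x": (0.5, 0.25), "y": (0.25, 0.5)}
B = {"x": (0.75, 0.125), "y": (0.5, 0.25)}

def ifs_intersection(A, B):
    """A ∩ B: min of memberships, max of non-memberships."""
    return {e: (min(A[e][0], B[e][0]), max(A[e][1], B[e][1])) for e in A}

def ifs_union(A, B):
    """A ∪ B: max of memberships, min of non-memberships."""
    return {e: (max(A[e][0], B[e][0]), min(A[e][1], B[e][1])) for e in A}

def necessity(A):
    """Modal 'necessity': all hesitation becomes non-membership."""
    return {e: (mu, 1 - mu) for e, (mu, nu) in A.items()}

def possibility(A):
    """Modal 'possibility': all hesitation becomes membership."""
    return {e: (1 - nu, nu) for e, (mu, nu) in A.items()}

print(ifs_intersection(A, B))  # {'x': (0.5, 0.25), 'y': (0.25, 0.5)}
print(necessity(A))            # {'x': (0.5, 0.5), 'y': (0.25, 0.75)}
```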
In recent years rough set theory has attracted the attention of many researchers and practitioners all over the world, who have contributed essentially to its development and applications. We are observing a growing research interest in the foundations of rough sets, including the various logical, mathematical and philosophical aspects of rough sets. Some relationships have already been established between rough sets and other approaches, and also with a wide range of hybrid systems. As a result, rough sets are linked with decision system modeling and analysis of complex systems, fuzzy sets, neural networks, evolutionary computing, data mining and knowledge discovery, pattern recognition, machine learning, and approximate reasoning. In particular, rough sets are used in probabilistic reasoning, granular computing (including information granule calculi based on rough mereology), intelligent control, intelligent agent modeling, identification of autonomous systems, and process specification. Methods based on rough set theory, alone or in combination with other approaches, have found a wide range of applications in such areas as: acoustics, bioinformatics, business and finance, chemistry, computer engineering (e.g., data compression, digital image processing, digital signal processing, parallel and distributed computer systems, sensor fusion, fractal engineering), decision analysis and systems, economics, electrical engineering (e.g., control, signal analysis, power systems), environmental studies, informatics, medicine, molecular biology, musicology, neurology, robotics, social science, software engineering, spatial visualization, Web engineering, and Web mining.
The concept of a data lake is less than ten years old, yet data lakes are already widely implemented in large companies. Their goal is to efficiently deal with ever-growing volumes of heterogeneous data, while also facing various sophisticated user needs. However, defining and building a data lake is still a challenge, as no consensus has been reached so far. Data Lakes presents recent outcomes and trends in the field of data repositories. The main topics discussed are the data-driven architecture of a data lake; the management of metadata supplying key information about the stored data, master data, and reference data; the roles of linked data and fog computing in a data lake ecosystem; and how gravity principles apply in the context of data lakes. A variety of case studies are also presented, providing the reader with practical examples of data lake management.