"This book offers suggestions, solutions, and recommendations for new and emerging research in Semantic Web technology, focusing broadly on methods and techniques for making the Web more useful and meaningful"--Provided by publisher.
Linked Data is a method of publishing structured data to facilitate sharing, linking, searching and re-use. Many such datasets have already been published, but although their number and size continue to increase, the main objectives of linking and integration have not yet been fully realized, and even seemingly simple tasks, like finding all the available information for an entity, are still challenging. This book, Services for Connecting and Integrating Big Numbers of Linked Datasets, is the 50th volume in the series ‘Studies on the Semantic Web’. The book analyzes the research work done in the area of linked data integration, focusing on methods that can be used at large scale. It then proposes indexes and algorithms for tackling some of the challenges, such as methods for performing cross-dataset identity reasoning, finding all the available information for an entity, and content-based dataset discovery, among others. The author demonstrates how content-based dataset discovery can be reduced to solving optimization problems, and proposes techniques for solving these efficiently while taking the contents of the datasets into consideration. To offer these services in real time, the proposed indexes and algorithms have been implemented in a suite of services called LODsyndesis, which in turn enables the implementation of other high-level services, such as techniques for creating knowledge graph embeddings and services for data enrichment, which can be exploited for machine-learning tasks and improve prediction in machine-learning problems.
Fuzzy systems and data mining are indispensable aspects of the computer systems and algorithms on which the world has come to depend. This book presents papers from FSDM 2021, the 7th International Conference on Fuzzy Systems and Data Mining. The conference, originally due to take place in Seoul, South Korea, was held online from 26 to 29 October 2021 due to ongoing restrictions connected with the COVID-19 pandemic. The annual FSDM conference provides a platform for knowledge exchange between international experts, researchers, academics and delegates from industry. This year, the committee received 266 submissions, and this book contains 52 papers, including keynote and invited presentations as well as oral and poster contributions. The papers cover four main areas: 1) fuzzy theory, algorithms and systems – including topics like stability; 2) fuzzy applications – which are widely used and cover various types of processing as well as hardware and architecture for big data and time series; 3) the interdisciplinary field of fuzzy logic and data mining; and 4) data mining itself. The topic most frequently addressed this year is fuzzy systems. The book offers an overview of research and developments in fuzzy logic and data mining, and will be of interest to all those working in the field of data science.
This book constitutes the thoroughly refereed proceedings of the 5th Joint International Semantic Technology Conference, JIST 2015, held in Yichang, China, in November 2015. The theme of the JIST 2015 conference was "Big Data and Social Media". The conference consisted of main technical tracks including 2 keynotes, 2 invited talks, a regular technical paper track (full and short papers), an in-use track, a poster and demo session, a workshop, and a tutorial. The 14 full and 8 short papers in this volume were carefully reviewed and selected from 43 submissions. The papers cover the following topics: ontology and reasoning, linked data, learning and discovery, RDF and query, knowledge graph, knowledge integration, query and recommendation, and applications of semantic technologies.
In recent years, our world has experienced a profound shift and progression in available computing and knowledge sharing innovations. These emerging advancements have developed at a rapid pace, disseminating into and affecting numerous aspects of contemporary society. This has created a pivotal need for an innovative compendium encompassing the latest trends, concepts, and issues surrounding this discipline. During the past 15 years, the Encyclopedia of Information Science and Technology has become recognized as one of the landmark sources of the latest knowledge and discoveries in this discipline. The Encyclopedia of Information Science and Technology, Fourth Edition is a 10-volume set which includes 705 original and previously unpublished research articles covering a full range of perspectives, applications, and techniques contributed by thousands of experts and researchers from around the globe. This authoritative encyclopedia is an all-encompassing, well-established reference source that is ideally designed to disseminate the most forward-thinking and diverse research findings. With critical perspectives on the impact of information science management and new technologies in modern settings, including but not limited to computer science, education, healthcare, government, engineering, business, and natural and physical sciences, it is a pivotal and relevant source of knowledge that will benefit every professional within the field of information science and technology and is an invaluable addition to every academic and corporate library.
Recent combinations of semantic technology and artificial intelligence (AI) present new techniques for building intelligent systems that identify more precise results. Semantic AI in Knowledge Graphs positions itself at the forefront of this novel development, uncovering the role of machine learning in extending knowledge graphs by graph mapping or corpus-based ontology learning. Securing efficient results via the combination of symbolic AI and statistical AI – such as entity extraction based on machine learning, text mining methods, semantic knowledge graphs, and the related reasoning power – this book is the first of its kind to explore semantic AI and knowledge graphs. A range of topics is covered, from neuro-symbolic AI, explainable AI and deep learning to knowledge discovery and mining, and knowledge representation and reasoning. A trailblazing exploration of semantic AI in knowledge graphs, this book is a significant contribution to researchers in the fields of AI and data mining, as well as to beginner academics.
This book presents 13 high-quality research articles that provide long-sought answers to questions concerning various aspects of reuse and integration. Its contents lead to the inescapable conclusion that software, hardware, and design productivity – including quality attributes – is not bounded. It combines the best of theory and practice and contains recipes for increasing the output of our productivity sectors. The idea of improving software quality through reuse is not new. After all, if software works and is needed, why not simply reuse it? What is new and evolving, however, is the idea of relative validation through testing and reuse, and the abstraction of code into frameworks for instantiation and reuse. Literal code can be abstracted. These abstractions can in turn yield similar codes, which serve to verify their patterns. There is a taxonomy of representations from the lowest-level literal codes to their highest-level natural language descriptions. As a result, product quality is improved in proportion to the degree of reuse at all levels of abstraction. Any software that is, in theory, complex enough to allow for self-reference cannot be certified as being absolutely valid. The best that can be attained is a relative validity, which is based on testing. Axiomatic, denotational, and other program semantics are more difficult to verify than the code they represent! But are there any limits to testing? And how can we maximize the reliability of software or hardware products through testing? These are essential questions that need to be addressed, and they will be addressed herein.
This book constitutes the thoroughly refereed proceedings of the 11th International Conference on Metadata and Semantic Research, MTSR 2017, held in Tallinn, Estonia, from November 28 to December 1, 2017. The 18 full and 13 short papers presented were carefully reviewed and selected from 58 submissions. They focus on the Internet of Things (IoT) and the practical implementation of ontologies and linked data. Further topics are theoretical and foundational principles of metadata; ontologies and information organization; applications of linked data, open data, big data and user-generated metadata; digital interconnectedness; metadata standardization; authority control and interoperability in digital libraries and research data repositories; emerging issues in RDF, OWL, SKOS, schema.org, BIBFRAME, metadata and ontology design; linked data applications for e-books; digital publishing and Content Management Systems (CMSs); and content discovery services, search, information retrieval and data visualization applications.
This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.