The book includes selected high-quality research papers presented at the Third International Congress on Information and Communication Technology, held at Brunel University, London, on February 27–28, 2018. It discusses emerging topics pertaining to information and communication technology (ICT) for managerial applications, e-governance, e-agriculture, e-education and computing technologies, the Internet of Things (IoT), and e-mining. Written by experts and researchers working on ICT, the book is suitable for new researchers undertaking advanced studies in the field.
This book provides readers with the “big picture” and a comprehensive survey of the domain of big data processing systems. For the past decade, the Hadoop framework has dominated the world of big data processing, yet recently academia and industry have started to recognize its limitations in several application domains; it is now gradually being replaced by a collection of engines dedicated to specific verticals (e.g., structured data, graph data, and streaming data). The book explores this new wave of systems, which it refers to as Big Data 2.0 processing systems. After Chapter 1 presents the general background of the big data phenomenon, Chapter 2 provides an overview of various general-purpose big data processing systems that allow their users to develop big data processing jobs for different application domains. In turn, Chapter 3 examines various systems that have been introduced to support SQL on top of the Hadoop infrastructure and to provide competitive, scalable performance in the processing of large-scale structured data. Chapter 4 discusses several systems designed to tackle the problem of large-scale graph processing, while Chapter 5 focuses on systems designed to provide scalable solutions for processing big data streams, as well as on systems introduced to support the development of data pipelines between various types of big data processing jobs and systems. Next, Chapter 6 covers emerging frameworks and systems in the domain of scalable machine learning and deep learning. Lastly, Chapter 7 shares conclusions and an outlook on future research challenges. This considerably enlarged second edition not only contains the completely new Chapter 6, but also offers refreshed coverage of the state of the art across all domains of big data processing in recent years. Overall, the book offers a valuable reference guide for professionals, students, and researchers in the domain of big data processing systems. Further, its comprehensive content will hopefully encourage readers to pursue further research on the subject.
Crowdfunding is becoming an increasingly popular method to finance projects of every kind and scale. This contributed volume is one of the earliest books presenting scientific and research-based perspectives on crowdfunding, its development, and its future. The European Crowdfunding Network (ECN) and its scientific work group, together with FGF e.V., invited both researchers and practitioners to contribute to this first state-of-the-art edited volume on crowdfunding in Europe. The book promotes a better understanding of crowdfunding, encourages further fundamental research, and contributes to the systematization of this new field of research. It also features expert contributions by practitioners to enhance and complement the scientific perspective. As such, the book can serve as a guide and help advance classification in an emerging research field.
This book constitutes the thoroughly refereed proceedings of the 9th Russian Summer School on Information Retrieval, RuSSIR 2015, held in Saint Petersburg, Russia, in August 2015. The volume includes 5 tutorial papers, summarizing lectures given at the event, and 6 revised papers from the school participants. The papers focus on various aspects of information retrieval.
Government “of the people, by the people, for the people” expresses an ideal that resonates in all democracies. Yet poll after poll reveals deep distrust of institutions that seem to have left “the people” out of the governing equation. Government bureaucracies that are supposed to solve critical problems on their own are a troublesome outgrowth of the professionalization of public life in the industrial age. They are especially ill-suited to confronting today’s complex challenges. Offering a far-reaching program for innovation, Smart Citizens, Smarter State suggests that public decision-making could be more effective and legitimate if government were smarter—if our institutions knew how to use technology to leverage citizens’ expertise. Just as individuals use only part of their brainpower to solve most problems, governing institutions make far too little use of the skills and experience of those inside and outside of government with scientific credentials, practical skills, and ground-level street smarts. New tools—what Beth Simone Noveck calls technologies of expertise—are making it possible to match the supply of citizen expertise to the demand for it in government. Drawing on a wide range of academic disciplines and practical examples from her work as an adviser to governments on institutional innovation, Noveck explores how to create more open and collaborative institutions. In so doing, she puts forward a profound new vision for participatory democracy rooted not in the paltry act of occasional voting or the serendipity of crowdsourcing but in people’s knowledge and know-how.
"Focused on the latest research on text and document management, this guide addresses the information management needs of organizations by providing the most recent findings. How the need for effective databases to house information is impacting organizations worldwide and how some organizations that possess a vast amount of data are not able to use the data in an economic and efficient manner is demonstrated. A taxonomy for object-oriented databases, metrics for controlling database complexity, and a guide to accommodating hierarchies in relational databases are provided. Also covered is how to apply Java-triggers for X-Link management and how to build signatures."
Geared toward designers and professionals interested in the conceptual aspects of integrity problems in different paradigms, Database Integrity: Challenges and Solutions successfully addresses integrity problems across these paradigms along with a variety of related issues.
This book explores the possibility of using social media data for detecting socio-economic recovery activities. In the last decade, there have been intensive research activities focusing on social media during and after disasters. This approach, which views people’s communication on social media as a sensor for real-time situations, has been widely adopted as the “people as sensors” approach. Furthermore, to improve recovery efforts after large-scale disasters, detecting communities’ real-time recovery situations is essential, since conventional socio-economic recovery indicators, such as governmental statistics, are not published in real time. Thanks to its timeliness, social media data can fill this gap. Motivated by this possibility, the book focuses on the relationships between people’s communication on Twitter and Facebook pages and socio-economic recovery activities as reflected in used-car market data and housing market data, in the case of two major disasters: the Great East Japan Earthquake and Tsunami of 2011 and Hurricane Sandy in 2012. The book pursues an interdisciplinary approach, combining e.g. disaster recovery studies, crisis informatics, and economics. In terms of its contributions, firstly, the book sheds light on the “people as sensors” approach for detecting socio-economic recovery activities, which has not been thoroughly studied to date but has the potential to improve situation awareness during the recovery phase. Secondly, the book proposes new socio-economic recovery indicators: used-car market data and housing market data. Thirdly, in the context of using social media during the recovery phase, the results demonstrate the importance of distinguishing between social media data posted by people at or near disaster-stricken areas and data posted by those farther away.
This work presents link prediction similarity measures for social networks that exploit the degree distribution of the networks. In the context of link prediction in dense networks, the text proposes similarity measures based on Markov inequality degree thresholding (MIDTs), which only consider nodes whose degree is above a threshold as candidates for a possible link. Also presented are similarity measures based on cliques (CNC, AAC, RAC), which assign extra weight to node pairs sharing a greater number of cliques. Additionally, a locally adaptive (LA) similarity measure is proposed that assigns different weights to common nodes based on the degree distribution of the local neighborhood and the degree distribution of the network. In the context of link prediction in sparse networks, the text introduces a novel two-phase framework that adds edges to the sparse graph to form a boost graph.
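As a rough illustration of the clique-based idea described above, the following sketch scores a candidate node pair by counting common neighbors and adding extra weight when a common neighbor shares maximal cliques with either endpoint. This is a hypothetical reading of the approach, not the book's exact CNC/AAC/RAC definitions; NetworkX, the function name clique_weighted_score, and the weighting scheme are assumptions made here.

```python
# Minimal sketch of a clique-weighted common-neighbor similarity score.
# Hypothetical illustration only -- not the book's CNC/AAC/RAC formulas.
import networkx as nx

def clique_weighted_score(G: nx.Graph, u, v) -> float:
    """Common-neighbor count, with extra weight for each maximal clique
    that a common neighbor shares with either endpoint."""
    cliques = list(nx.find_cliques(G))  # enumerate maximal cliques once
    score = 0.0
    for w in nx.common_neighbors(G, u, v):
        # Extra weight for cliques that w shares with u or v.
        shared = sum(1 for c in cliques if w in c and (u in c or v in c))
        score += 1.0 + shared
    return score

# Example: score a non-adjacent pair in Zachary's karate club graph.
G = nx.karate_club_graph()
print(clique_weighted_score(G, 15, 18))
```

In a link-prediction setting, such a score would be computed for every non-adjacent pair, with the highest-scoring pairs predicted as future links.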