This book provides the first comprehensive look at the emerging field of web document analysis. It sets the scene in this new field by combining state-of-the-art reviews of challenges and opportunities with research papers by leading researchers. Readers will find in-depth discussions on the many diverse and interdisciplinary areas within the field, including web image processing, applications of machine learning and graph theory for content extraction and web mining, adaptive web content delivery, multimedia document modeling, and human interactive proofs for web security.
As became apparent after the tragic events of September 11, 2001, terrorist groups are increasingly using the Internet as a communication and propaganda tool where they can safely communicate with their affiliates, coordinate action plans, raise funds, and introduce new supporters to their networks. This is evident from the large number of web sites run by different terrorist organizations, though the URLs and geographical locations of these web sites are frequently moved around the globe. The wide use of the Internet by terrorists makes some people think that the risk of a major cyber-attack against the communication infrastructure is low. However, this situation may change abruptly once the terrorists decide that the Net does not serve their purposes anymore and, like any other invention of our civilization, deserves destruction.

Fighting Terror in Cyberspace is a unique volume, which provides, for the first time, a comprehensive overview of terrorist threats in cyberspace along with state-of-the-art tools and technologies that can deal with these threats in the present and in the future. The book covers several key topics in cyber warfare such as terrorist use of the Internet, the Cyber Jihad, data mining tools and techniques of terrorist detection on the web, analysis and detection of terror financing, and automated identification of terrorist web sites in multiple languages. The contributors include leading researchers on international terrorism, as well as distinguished experts in information security and cyber intelligence. This book represents a valuable source of information for academic researchers, law enforcement and intelligence experts, and industry consultants who are involved in detection, analysis, and prevention of terrorist activities on the Internet.
Software systems surround us. Software is a critical component in everything from the family car through electrical power systems to military equipment. As software plays an ever-increasing role in our lives and livelihoods, the quality of that software becomes more and more critical. However, our ability to deliver high-quality software has not kept up with those increasing demands. The economic fallout is enormous; the US economy alone is losing over US$50 billion per year due to software failures. This book presents new research into using advanced artificial intelligence techniques to guide software quality improvements. The techniques of chaos theory and data mining are brought to bear to provide new insights into the software development process. Written for researchers and practitioners in software engineering and computational intelligence, this book is a unique and important bridge between these two fields.
This volume contains papers selected for presentation at the 6th IAPR Workshop on Document Analysis Systems (DAS 2004) held during September 8–10, 2004 at the University of Florence, Italy. Several papers represent the state of the art in a broad range of “traditional” topics such as layout analysis, applications to graphics recognition, and handwritten documents. Other contributions address the description of complete working systems, which is one of the strengths of this workshop. Some papers extend the application domains to other media, like the processing of Internet documents. The peculiarity of this 6th workshop was the large number of papers related to digital libraries and to the processing of historical documents, a task which frequently requires the analysis of color documents. A total of 17 papers are associated with these topics, whereas two years ago (in DAS 2002) only a couple of papers dealt with these problems. In our view there are three main reasons for this new wave in the DAS community. From the scientific point of view, several research fields reached a thorough knowledge of techniques and problems that can be effectively solved, and this expertise can now be applied to new domains. Another incentive has been provided by several research projects funded by the EC and the NSF on topics related to digital libraries.
This book constitutes the refereed proceedings of the 4th IAPR International Workshop on Graph-Based Representation in Pattern Recognition, GbRPR 2003, held in York, UK in June/July 2003. The 23 revised full papers presented were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections on data structures and representation, segmentation, graph edit distance, graph matching, matrix methods, and graph clustering.
Artificial intelligence (AI) is a branch of computer science that models the human ability of reasoning, usage of human language and organization of knowledge, solving problems and practically all other human intellectual abilities. Usually it is characterized by the application of heuristic methods because in the majority of cases there is no exact solution to this kind of problem. Soft computing can be viewed as a branch of AI that deals with the problems that explicitly contain incomplete or complex information, or are known to be impossible for direct computation, i.e., these are the same problems as in AI but viewed from the perspective of their computation. The Mexican International Conference on Artificial Intelligence (MICAI), a yearly international conference series organized by the Mexican Society for Artificial Intelligence (SMIA), is a major international AI forum and the main event in the academic life of the country’s growing AI community. In 2010, SMIA celebrated 10 years of activity related to the organization of MICAI as is represented in its slogan “Ten years on the road with AI”. MICAI conferences traditionally publish high-quality papers in all areas of artificial intelligence and its applications. The proceedings of the previous MICAI events were also published by Springer in its Lecture Notes in Artificial Intelligence (LNAI) series, vols. 1793, 2313, 2972, 3789, 4293, 4827, 5317, and 5845. Since its foundation in 2000, the conference has been growing in popularity and improving in quality.
This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to help the reader grasp the underlying theory. This is a valuable reference for scientists and engineers working in mathematics, computer science, control or other fields related to information processing. It can also be used as a textbook for graduate courses in applied mathematics, computer science, automatic control and electrical engineering. Contents: Fuzzy Neural Networks for Storing and Classifying; Fuzzy Associative Memory - Feedback Networks; Regular Fuzzy Neural Networks; Polygonal Fuzzy Neural Networks; Approximation Analysis of Fuzzy Systems; Stochastic Fuzzy Systems and Approximations; Application of FNN to Image Restoration. Readership: Scientists, engineers and graduate students in applied mathematics, computer science, automatic control and information processing.
- Provides a comprehensive review of the literature in range image registration and serves as an effective study guide on this important topic
- Presents a novel robust error measure, the surface interpretation, which is easily computed and offers significant immunity to non-Gaussian errors. The shortcomings of the least squares formalism in this setting are carefully explored
- The first substantive work focusing on precision alignment, and the first capable of attaining such alignments in low-overlap scenarios without human intervention or manual prealignment
- Offers extensive experimental results, highlighting both the impact of robust measures and the relative efficiency of genetic search algorithms versus more traditional approaches. Extensive comparisons with more traditional algorithms and measures are presented
Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.
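To make the string-representation idea concrete, here is a minimal, hedged sketch in the style of symbolic aggregate approximation: a numeric series is z-normalized, averaged over segments, and each segment mean is mapped to a letter. This is an illustrative construction, not necessarily the exact representation studied in the book; the breakpoints and alphabet below are assumptions.

```python
# Minimal SAX-style sketch: convert a numeric time series to a short string.
# Assumes a 4-symbol alphabet with breakpoints chosen for z-normalized data.

def series_to_string(series, segments=4, alphabet="abcd"):
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5 or 1.0
    z = [(x - mean) / std for x in series]          # z-normalize
    seg_means = []                                  # piecewise aggregate approximation
    for i in range(segments):
        chunk = z[i * n // segments:(i + 1) * n // segments]
        seg_means.append(sum(chunk) / len(chunk))
    breakpoints = [-0.67, 0.0, 0.67]                # equiprobable under a normal model
    def symbol(v):
        return alphabet[sum(v > b for b in breakpoints)]
    return "".join(symbol(v) for v in seg_means)

print(series_to_string([1, 2, 3, 4, 5, 6, 7, 8]))   # a steadily rising series -> "abcd"
```

Once series are strings, string-matching and text-mining machinery (edit distance, suffix structures, motif search) becomes available for tasks such as anomaly detection.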
Data Mining is the science and technology of exploring data in order to discover previously unknown patterns. It is a part of the overall process of Knowledge Discovery in Databases (KDD). The accessibility and abundance of information today makes data mining a matter of considerable importance and necessity. This book provides an introduction to the field with an emphasis on advanced decomposition methods in general data mining tasks and for classification tasks in particular. The book presents a complete methodology for decomposing classification problems into smaller and more manageable sub-problems that are solvable by using existing tools. The various elements are then joined together to solve the initial problem. The benefits of decomposition methodology in data mining include: increased performance (classification accuracy); conceptual simplification of the problem; enhanced feasibility for huge databases; clearer and more comprehensible results; reduced runtime by solving smaller problems and by using parallel/distributed computation; and the opportunity of using different techniques for individual sub-problems.
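One common instance of this decompose-then-join pattern is splitting a multi-class classification task into one-vs-rest binary sub-problems, solving each independently, and combining the sub-solutions. The sketch below is a hedged illustration under assumed names; the trivial centroid "learner" stands in for whatever existing tool would solve each sub-problem, and is not a method taken from the book.

```python
# Decomposition sketch: one-vs-rest splitting of a 3-class problem.
# Each binary sub-problem gets its own simple model (here, a pair of
# centroids); predictions are joined by taking the most confident vote.

def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_one_vs_rest(X, y):
    models = {}
    for label in set(y):
        pos = [x for x, lbl in zip(X, y) if lbl == label]   # "class"
        neg = [x for x, lbl in zip(X, y) if lbl != label]   # "rest"
        models[label] = (centroid(pos), centroid(neg))
    return models

def predict(models, x):
    # Join step: margin toward the positive centroid acts as confidence.
    scores = {label: dist(x, cn) - dist(x, cp)
              for label, (cp, cn) in models.items()}
    return max(scores, key=scores.get)

X = [[0, 0], [1, 0], [5, 5], [6, 5], [0, 9], [1, 9]]
y = ["a", "a", "b", "b", "c", "c"]
models = train_one_vs_rest(X, y)
print(predict(models, [0.5, 0.2]))   # lies in the "a" cluster -> "a"
```

Each binary sub-problem is smaller than the original and can be trained independently, which is exactly where the runtime and parallelism benefits listed above come from.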