Proceedings of the 29th Annual International Conference on Very Large Data Bases held in Berlin, Germany on September 9-12, 2003. Organized by the VLDB Endowment, VLDB is the premier international conference on database technology.
Researchers in data management have recently recognized the importance of a new class of data-intensive applications that requires managing data streams, i.e., data composed of continuous, real-time sequences of items. Streaming applications pose new and interesting challenges for data management systems. Such application domains require queries to be evaluated continuously, as opposed to the one-time evaluation of a query in traditional applications. Streaming data sets grow continuously, and queries must be evaluated over such unbounded data sets. These challenges, among others, require a major rethinking of almost all aspects of traditional database management systems to support streaming applications. Stream Data Management comprises eight invited chapters by researchers active in stream data management. The collected chapters provide an exposition of algorithms, languages, and systems proposed and implemented for managing streaming data. Stream Data Management is designed to appeal to researchers and practitioners already involved in stream data management, as well as to those starting out in this area. The book is also suitable for graduate students in computer science interested in learning about stream data management.
Comprehensive coverage of the entire area of classification. Research on the problem of classification tends to be fragmented across such areas as pattern recognition, databases, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying …
With the growth in our reliance on information systems and computer science, information modeling and knowledge bases have become a focus for academic attention and research. The amount and complexity of information, the number of levels of abstraction, and the size of databases and knowledge bases all continue to increase, and new challenges and problems arise every day. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes such as the design and specification of information systems, software engineering, and knowledge and process management.
This monograph is a technical survey of concepts and techniques for describing and analyzing large-scale time-series data streams. Some topics covered are algorithms for query by humming, gamma-ray burst detection, pairs trading, and density detection. Included are self-contained descriptions of wavelets, fast Fourier transforms, and sketches as they apply to time-series analysis. Detailed applications are built on a solid scientific basis.
This volume comprises papers from the following five workshops that were part of the complete program for the International Conference on Extending Database Technology (EDBT) held in Heraklion, Greece, March 2004: • ICDE/EDBT Joint Ph.D. Workshop (PhD) • Database Technologies for Handling XML-information on the Web (DataX) • Pervasive Information Management (PIM) • Peer-to-Peer Computing and Databases (P2P&DB) • Clustering Information Over the Web (ClustWeb) Together, the five workshops featured 61 high-quality papers selected from approximately 180 submissions. It was, therefore, difficult to decide on the papers that were to be accepted for presentation. We believe that the accepted papers substantially contribute to their particular fields of research. The workshops were an excellent basis for intense and highly fruitful discussions. The quality and quantity of papers show that the areas of interest for the workshops are highly active. A large number of excellent researchers are working in the aforementioned fields, producing research output that is of interest not only to other researchers but also to industry. The organizers and participants of the workshops were highly satisfied with the output. The high quality of the presenters and workshop participants contributed to the success of each workshop. The amazing environment of Heraklion and the location of the EDBT conference also contributed to the overall success. Last, but not least, our sincere thanks go to the conference organizers – the organizing team was always willing to help, and if there were things that did not work, assistance was quickly available.
"This book bridges two fields that, although closely related, are often studied in isolation: enterprise modeling and information systems modeling. The principal idea is to use a standard language for modeling information systems, UML, as a catalyst and investigate its potential for modeling enterprises"--Provided by publisher.
This two-volume set LNCS 3760/3761 constitutes the refereed proceedings of the three confederated conferences CoopIS 2005, DOA 2005, and ODBASE 2005, held as OTM 2005 in Agia Napa, Cyprus in October/November 2005. The 89 revised full and 7 short papers presented together with 3 keynote speeches were carefully reviewed and selected from a total of 360 submissions. Corresponding with the three OTM 2005 main conferences CoopIS, DOA, and ODBASE, the papers are organized in topical sections on workflow, workflow and business processes, mining and filtering, petri nets and process management, information access and integrity, heterogeneity, semantics, querying and content delivery, Web services, agents, security, integrity and consistency, chain and collaboration management, Web services and service-oriented architectures, multicast and fault tolerance, communication services, techniques for application hosting, mobility, security and data persistence, component middleware, Java environments, peer-to-peer computing architectures, aspect-oriented middleware, information integration and modeling, query processing, ontology construction, metadata, information retrieval and classification, system verification and evaluation, and active rules and Web services.
"Temporal Information Processing Technology and Its Applications" systematically studies temporal information processing technology and its applications. The book covers following subjects: 1) time model, calculus and logic; 2) temporal data models, semantics of temporal variable ‘now’ temporal database concepts; 3) temporal query language, a typical temporal database management system: TempDB; 4) temporal extension on XML, workflow and knowledge base; and, 5) implementation patterns of temporal applications, a typical example of temporal application. The book is intended for researchers, practitioners and graduate students of databases, data/knowledge management and temporal information processing. Dr. Yong Tang is a professor at the Computer School, South China Normal University, China.
DASFAA is an annual international database conference, located in the Asia-Pacific region, which showcases state-of-the-art R&D activities in database systems and their applications. It provides a forum for technical presentations and discussions among database researchers, developers, and users from academia, business, and industry. DASFAA 2009, the 14th in the series, was held during April 20-23, 2009 in Brisbane, Australia. This year, we carefully selected six workshops, each focusing on specific research issues that contribute to the main themes of the DASFAA conference. This volume contains the final versions of papers accepted for these six workshops that were held in conjunction with DASFAA 2009. They are: – First International Workshop on Benchmarking of XML and Semantic Web Applications (BenchmarX 2009) – Second International Workshop on Managing Data Quality in Collaborative Information Systems (MCIS 2009) – First International Workshop on Data and Process Provenance (WDPP 2009) – First International Workshop on Privacy-Preserving Data Analysis (PPDA 2009) – First International Workshop on Mobile Business Collaboration (MBC 2009) – DASFAA 2009 PhD Workshop All the workshops were selected via a public call-for-proposals process. The workshop organizers put a tremendous amount of effort into soliciting and selecting papers with a balance of high quality, new ideas, and new applications. We asked all workshops to follow a rigid paper selection process, including a procedure to ensure that Program Committee members are excluded from the review process of any paper they are involved with. A requirement of an overall paper acceptance rate of no more than 50% was also imposed on all the workshops.