Based on the 2007 Dagstuhl Research Seminar CoCoME, this book defines a common example for modeling approaches of component-based systems. The book makes it possible to compare different approaches and to validate existing models.
This book constitutes the refereed proceedings of the 49th International Conference on Objects, Models, Components, Patterns, held in Zurich, Switzerland, in June 2011. The 19 revised full papers presented together with the abstracts of 2 invited papers were carefully reviewed and selected from a total of 68 submissions. The papers discuss all aspects of object technology and related fields, in particular model-based development, component-based development, language implementation and patterns, in a holistic way. The conference has a strong practical bias, without losing sight of the importance of correctness and performance.
Capacity management is a core activity when designing and operating distributed software systems. Enterprise application systems in particular are exposed to highly varying workloads. With static capacity management, this leads to an unnecessarily high total cost of ownership due to poor resource usage efficiency. This thesis introduces a model-driven online capacity management approach for distributed component-based software systems, called SLAstic. The core contributions of this approach are a) modeling languages to capture relevant architectural information about a controlled software system, b) an architecture-based online capacity management framework based on the common MAPE-K control loop architecture, c) model-driven techniques supporting the automation of the approach, d) architectural runtime reconfiguration operations for controlling a system’s capacity, as well as e) an integration of the Palladio Component Model. A qualitative and quantitative evaluation of the approach is performed by means of case studies, lab experiments, and simulation.
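As a rough illustration of the MAPE-K control loop named above, the following minimal Java sketch runs the Monitor, Analyze, Plan, and Execute phases over a shared knowledge object to scale a simulated server pool. All names (CapacityManagerSketch, Knowledge, the 0.8/0.3 thresholds, and the utilisation samples) are hypothetical and are not taken from the SLAstic implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CapacityManagerSketch {

    // Shared knowledge: recent utilisation samples and the current allocation.
    static final class Knowledge {
        final Deque<Double> utilisationSamples = new ArrayDeque<>();
        int allocatedServers = 1;
    }

    public static void main(String[] args) {
        Knowledge knowledge = new Knowledge();
        double[] observedUtilisation = {0.45, 0.62, 0.85, 0.91, 0.40}; // assumed monitoring data

        for (double sample : observedUtilisation) {
            // Monitor: collect a new measurement into the shared knowledge.
            knowledge.utilisationSamples.addLast(sample);
            if (knowledge.utilisationSamples.size() > 3) {
                knowledge.utilisationSamples.removeFirst();
            }

            // Analyze: average utilisation over a small sliding window.
            double avg = knowledge.utilisationSamples.stream()
                    .mapToDouble(Double::doubleValue).average().orElse(0.0);

            // Plan: decide on an architectural reconfiguration (scale out/in).
            int delta = 0;
            if (avg > 0.8) {
                delta = 1;                                       // under-provisioned: add a server
            } else if (avg < 0.3 && knowledge.allocatedServers > 1) {
                delta = -1;                                      // over-provisioned: release a server
            }

            // Execute: apply the reconfiguration to the (here: simulated) system.
            knowledge.allocatedServers += delta;
            System.out.printf("avg=%.2f -> servers=%d%n", avg, knowledge.allocatedServers);
        }
    }
}
```

In a real architecture-based setting, the Execute phase would trigger architectural runtime reconfiguration operations rather than mutate a counter; the sketch only shows how the four phases interact through shared knowledge.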
Making Data Integration Work: How to Systematically Reduce Cost, Improve Quality, and Enhance Effectiveness
Today’s enterprises are investing massive resources in data integration. Many possess thousands of point-to-point data integration applications that are costly, undocumented, and difficult to maintain. Data integration now accounts for a major part of the expense and risk of typical data warehousing and business intelligence projects; and as businesses increasingly rely on analytics, the need for a blueprint for data integration is greater than ever. This book presents the solution: a clear, consistent approach to defining, designing, and building data integration components to reduce cost, simplify management, enhance quality, and improve effectiveness. Leading IBM data management expert Tony Giordano brings together best practices for architecture, design, and methodology, and shows how to do the disciplined work of getting data integration right. Mr. Giordano begins with an overview of the “patterns” of data integration, showing how to build blueprints that smoothly handle both operational and analytic data integration. Next, he walks through the entire project lifecycle, explaining each phase, activity, task, and deliverable through a complete case study. Finally, he shows how to integrate data integration with other information management disciplines, from data governance to metadata. The book’s appendices bring together key principles, detailed models, and a complete data integration glossary. Coverage includes:
- Implementing repeatable, efficient, and well-documented processes for integrating data
- Lowering costs and improving quality by eliminating unnecessary or duplicative data integrations
- Managing the high levels of complexity associated with integrating business and technical data
- Using intuitive graphical design techniques for more effective process and data integration modeling
- Building end-to-end data integration applications that bring together many complex data sources
Model-based performance prediction systematically deals with the evaluation of software performance, for example to avoid bottlenecks, to estimate the required size of the execution environment, or to identify scalability limitations for new usage scenarios. Such predictions require up-to-date software performance models. This book describes a new integrated reverse engineering approach for the reconstruction of parameterised software performance models (software component architecture and behaviour).
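To give a sense of what "parameterised" means here, the following minimal Java sketch expresses the resource demand of a component service as a function of an input parameter instead of a fixed constant, so the same model can be evaluated for new usage scenarios. The class, method, and calibration constant are illustrative assumptions, not the book's notation or any tool's API.

```java
public class ParameterisedDemandSketch {

    // CPU demand (in ms) of a hypothetical sort service, expressed as a function
    // of its input size n rather than as a single measured constant.
    static double cpuDemandMs(int n) {
        double c = 0.004; // assumed calibration constant, fitted from measurements
        return n <= 1 ? c : c * n * (Math.log(n) / Math.log(2)); // c * n * log2(n)
    }

    public static void main(String[] args) {
        // The same parameterised model yields predictions for new usage scenarios
        // (here: different input sizes) without re-measuring the system.
        for (int n : new int[] {1_000, 10_000, 100_000}) {
            System.out.printf("n=%d -> predicted CPU demand: %.1f ms%n", n, cpuDemandMs(n));
        }
    }
}
```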
Mashups have emerged as an innovative software trend that re-interprets existing Web building blocks and leverages the composition of individual components in novel, value-adding ways. Their appeal also derives from their potential to turn non-programmers into developers. Daniel and Matera have written the first comprehensive reference work for mashups. They systematically cover the main concepts and techniques underlying mashup design and development, the synergies among the models involved at different levels of abstraction, and the way models materialize into composition paradigms and architectures of corresponding development tools. The book deliberately takes a balanced approach, combining a scientific perspective on the topic with an in-depth view on relevant technologies. To this end, the first part of the book introduces the theoretical and technological foundations for designing and developing mashups, as well as for designing tools that can aid mashup development. The second part then focuses more specifically on various aspects of mashups. It discusses a set of core component technologies, core approaches and architectural patterns, with a particular emphasis on tool-aided mashup development exploiting model-driven architectures. Development processes for mashups are also discussed, and special attention is paid to composition paradigms for the end-user development of mashups and to quality issues. Overall, the book is of interest to a wide range of readers. Students, lecturers, and researchers will find a comprehensive overview of core concepts and technological foundations for mashup implementation and composition. Even without low-level coding details, practitioners like software architects will find guidance on key implementation concepts, architectural patterns, and development tools and approaches. A related website provides additional teaching material which can be used either as part of a course or for self-study.
The 7th ACIS International Conference on Software Engineering Research, Management and Applications (SERA 2009) was held on Hainan Island, China, from December 2–4. SERA ’09 featured excellent theoretical and practical contributions in the areas of formal methods and tools, requirements engineering, software process models, communication systems and networks, software quality and evaluation, software engineering, networks and mobile computing, parallel/distributed computing, software testing, reuse and metrics, database retrieval, computer security, software architectures and modeling. Our conference officers selected the best 17 papers from those accepted for presentation at the conference in order to publish them in this volume. The papers were chosen based on review scores submitted by members of the program committee and underwent further rigorous rounds of review.
This open access book presents the outcomes of the “Design for Future – Managed Software Evolution” priority program 1593, which was launched by the German Research Foundation (“Deutsche Forschungsgemeinschaft (DFG)”) to develop new approaches to software engineering with a specific focus on long-lived software systems. The different lifecycles of software and hardware platforms lead to interoperability problems in such systems. Instead of separating the development, adaptation and evolution of software and its platforms, as well as aspects like operation, monitoring and maintenance, they should all be integrated into one overarching process. Accordingly, the book is split into three major parts, the first of which includes an introduction to the nature of software evolution, followed by an overview of the specific challenges and a general introduction to the case studies used in the project. The second part of the book consists of the main chapters on knowledge-carrying software, covering tacit knowledge in software evolution, continuous design decision support, model-based round-trip engineering for software product lines, performance analysis strategies, maintaining security in software evolution, learning from evolution for evolution, and formal verification of evolutionary changes. In turn, the last part of the book presents key findings and spin-offs. The individual chapters there describe various case studies, along with their benefits, deliverables and the respective lessons learned. An overview of future research topics rounds out the coverage. The book was mainly written for scientific researchers and advanced professionals with an academic background. They will benefit from its comprehensive treatment of various topics related to problems that are now gaining in importance, given the higher costs of maintenance and evolution in comparison to initial development, and the fact that today most software is not developed from scratch, but as part of a continuum of former and future releases.
Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field.
- In-depth surveys and tutorials on new computer technology
- Well-known authors and researchers in the field
- Extensive bibliographies with most chapters
- Many of the volumes are devoted to single themes or subfields of computer science