Collaboration among individuals – from users to developers – is central to modern software engineering. It takes many forms: joint activity to solve common problems, negotiation to resolve conflicts, and the creation of shared definitions, with both social and technical perspectives shaping all software development activity. The difficulties of collaboration are also well documented. The grand challenge is not only to ensure that developers in a team deliver effectively as individuals, but that the whole team delivers more than just the sum of its parts. The editors of this book have assembled an impressive selection of authors, who have contributed to an authoritative body of work tackling a wide range of issues in the field of collaborative software engineering. The resulting volume is divided into four parts, preceded by a general editorial chapter providing a more detailed review of the domain of collaborative software engineering. Part 1 is on "Characterizing Collaborative Software Engineering", Part 2 examines various "Tools and Techniques", Part 3 addresses organizational issues, and finally Part 4 contains four examples of "Emerging Issues in Collaborative Software Engineering". As a result, this book delivers a comprehensive state-of-the-art overview and empirical results for researchers in academia and industry in areas like software process management, empirical software engineering, and global software development. Practitioners working in this area will also appreciate the detailed descriptions and reports, which can often be used as guidelines to improve their daily work.
Software Engineering for Science provides an in-depth collection of peer-reviewed chapters that describe experiences with applying software engineering practices to the development of scientific software. It provides a better understanding of how software engineering is and should be practiced, and which software engineering practices are effective for scientific software. The book starts with a detailed overview of the scientific software lifecycle and a general overview of the scientific software development process. It highlights key issues commonly arising during scientific software development, as well as solutions to these problems. The second part of the book provides examples of the use of testing in scientific software development, including key issues and challenges. The chapters then describe solutions and case studies aimed at applying testing to scientific software development efforts. The final part of the book provides examples of applying software engineering techniques to scientific software, including not only computational modeling, but also software for data management and analysis. The authors describe their experiences and lessons learned from developing complex scientific software in different domains. About the Editors: Jeffrey Carver is an Associate Professor in the Department of Computer Science at the University of Alabama. He is one of the primary organizers of the workshop series on Software Engineering for Science (http://www.SE4Science.org/workshops). Neil P. Chue Hong is Director of the Software Sustainability Institute at the University of Edinburgh. His research interests include barriers and incentives in research software ecosystems and the role of software as a research object. George K. Thiruvathukal is Professor of Computer Science at Loyola University Chicago and Visiting Faculty at Argonne National Laboratory. His current research is focused on software metrics in open source mathematical and scientific software.
This book features high-quality research papers presented at the International Conference on Advanced Computing and Intelligent Engineering (ICACIE 2017). Its sections, organized around the presented articles, describe technical advances in the fields of advanced computing and intelligent engineering. Intended for postgraduate students and researchers working in the discipline of computer science and engineering, the proceedings also appeal to researchers in the domain of electronics, as they cover hardware technologies and future communication technologies.
Software timing behavior measurements, such as response times, often show high statistical variance. This variance can make the analysis difficult or even threaten the applicability of statistical techniques. This thesis introduces a method for improving the analysis of software response time measurements that show high variance. Our approach can find relations between timing behavior variance and both trace shape information and workload intensity information. This relation is used to provide timing behavior measurements with effectively lower variance, which can make timing behavior analysis more robust (e.g., improved confidence and precision) and faster (e.g., fewer simulation runs and shorter monitoring periods). The thesis contributes TracSTA (Trace-Context-Sensitive Timing Behavior Analysis) and WiSTA (Workload-Intensity-Sensitive Timing Behavior Analysis). TracSTA uses trace shape information (i.e., the shape of the control flow corresponding to a software operation execution) and WiSTA uses workload intensity metrics (e.g., the number of concurrent software executions) to create context-specific timing behavior profiles. Both the applicability and the effectiveness of the approach are evaluated in several case studies and field studies. The evaluation shows a strong relation between timing behavior and the metrics considered by TracSTA and WiSTA. Additionally, a fault localization approach for enterprise software systems is presented as an application scenario; it uses the timing behavior data provided by TracSTA and WiSTA for anomaly detection.
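A minimal sketch of the underlying idea of context-sensitive partitioning, assuming hypothetical measurement records and illustrative function names (this is not the TracSTA/WiSTA implementation): grouping response times by trace shape and by workload intensity yields per-context distributions whose variance is lower than that of the pooled distribution.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical measurement records: (operation, trace shape, concurrency, response time in ms)
measurements = [
    ("getOrder", "A->B->C", 1, 12.1),
    ("getOrder", "A->B->C", 1, 11.8),
    ("getOrder", "A->D",    1, 35.2),
    ("getOrder", "A->D",    1, 34.9),
    ("getOrder", "A->B->C", 8, 22.5),
    ("getOrder", "A->B->C", 8, 23.1),
]

def variance_report(records, key):
    """Group response times by a context key and report (mean, std. dev.) per group."""
    groups = defaultdict(list)
    for op, shape, load, rt in records:
        groups[key(op, shape, load)].append(rt)
    return {k: (mean(v), pstdev(v)) for k, v in groups.items()}

# Pooled, context-free statistics: high variance because different contexts are mixed.
print(variance_report(measurements, key=lambda op, shape, load: op))

# Trace-shape-sensitive and additionally workload-sensitive partitions: lower variance per group.
print(variance_report(measurements, key=lambda op, shape, load: (op, shape)))
print(variance_report(measurements, key=lambda op, shape, load: (op, shape, load)))
```

The per-context standard deviations are much smaller than the pooled one; this is the effect exploited for more robust analysis and for anomaly detection.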
The design and implementation of service-oriented architectures raise a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures, manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as the collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management, and it integrates business and IT concerns. The annual Ph.D. Retreat of the Research School "Service-oriented Systems Engineering" gives each member the opportunity to present the current state of his or her research and to outline a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
As modern technologies continue to develop and evolve, the ability of users to interface with new systems becomes a paramount concern. Research into new ways for humans to make use of advanced computers and other such technologies is necessary to fully realize the potential of twenty-first-century tools. Innovative Methods, User-Friendly Tools, Coding, and Design Approaches in People-Oriented Programming is a critical scholarly resource that examines the development and customization of user interfaces for advanced technologies and how these interfaces can facilitate new developments in various fields. Featuring coverage of a broad range of topics such as role-based modeling, end-user composition, and wearable computing, this book is a vital reference source for programmers, developers, students, and educators seeking current research on the enhancement of user-centric information system development.
Information system architecture (ISA) specification, as a part of the software engineering field, has been an information systems research topic since the 1960s. Over recent decades, manifold specification methodologies have been newly developed or adapted to target the domains of software modelling, legacy systems, steel production, and automotive safety. Still, considerable issues remain that create the need for flexible ISA development, e.g. incomplete methodologies for requirements in model-driven architectures and a lack of qualitative methods for the thorough definition and usage of viewpoints. Currently existing methods for information system architecture specification usually devise the target architectures while either addressing only a part of the software life cycle or neglecting less structured information. The method for flexible information system architecture (FISA) specification uses the viewpoint concept to mediate between the domain-expert and technical system levels. The FISA method defines construction and application reference models based on the ANSI/IEEE Standard 1471-2000, viewpoints with model transformations based on the OMG standard Model-Driven Architecture (MDA), and four different approaches for ISA specification, thus providing for flexibility in both construction and refactoring procedures. The development of the FISA method is based on a thorough analysis of the field of ISA specification methods and yields a comprehensive procedure and reference engineering models for flexible ISA specification. The genericity of the conceived construction and application procedure models of FISA allows for their usage not only in research but also in industry settings, as presented in illustrative scenarios from steel manufacturing and automotive safety.
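A minimal, hypothetical sketch of the viewpoint idea (the class names, the dict-based architecture model, and the example transformations are illustrative assumptions and do not reproduce the FISA reference models): a viewpoint fixes the concerns and a model transformation, and applying it to a shared architecture model yields a view for either the domain-expert or the technical level.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Viewpoint:
    """Conventions for one kind of view, loosely following IEEE 1471 terminology."""
    name: str
    concerns: List[str]
    transform: Callable[[dict], dict]  # architecture model -> view model (MDA-style transformation)

@dataclass
class View:
    viewpoint: str
    model: dict

def specify(architecture_model: dict, viewpoints: List[Viewpoint]) -> List[View]:
    """Apply each viewpoint's transformation to the shared architecture model."""
    return [View(vp.name, vp.transform(architecture_model)) for vp in viewpoints]

# Illustrative example: a domain-expert viewpoint keeps only business processes,
# a technical viewpoint keeps only deployed components (names are made up).
model = {"processes": ["order steel coil"], "components": ["rolling-mill controller"]}
domain_vp = Viewpoint("domain", ["business fit"], lambda m: {"processes": m["processes"]})
tech_vp = Viewpoint("technical", ["deployability"], lambda m: {"components": m["components"]})

for view in specify(model, [domain_vp, tech_vp]):
    print(view.viewpoint, view.model)
```

The point of the sketch is only that one shared model can be projected into views tailored to different stakeholders, which is the mediation role the viewpoint concept plays in FISA.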
Software is continuously increasing in complexity. Paradigmatic shifts and new development frameworks make it easier to implement software – but not to test it. Software testing remains a topic with many open questions, regarding both low-level technical aspects and the organizational embedding of testing. However, a desired level of software quality cannot be achieved by merely choosing a technical procedure or by optimizing testing processes; it requires a holistic approach. This Brief summarizes the current knowledge of software testing and introduces three current research approaches. The body of knowledge is presented comprehensively in scope but concisely in length, so the volume can be used as a reference. Research is highlighted from different points of view. Firstly, progress on developing a tool for automated test case generation (TCG) based on a program's structure is introduced. Secondly, results from a project with industry partners on testing best practices are highlighted. Thirdly, embedding testing into the e-assessment of programming exercises is described.
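As an illustration of the general idea behind structure-based test case generation (this is not the tool described in the Brief; the toy program under test and all names are assumptions), a minimal sketch might generate random inputs and keep only those that exercise a branch not yet covered:

```python
import random

def classify(x: int) -> str:
    """Toy program under test with three branches."""
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    else:
        return "positive"

def branch_id(x: int) -> str:
    """Identify which branch of classify() an input exercises."""
    return "lt0" if x < 0 else ("eq0" if x == 0 else "gt0")

def generate_tests(trials: int = 1000, seed: int = 42):
    """Coverage-directed random generation: keep only inputs that cover a new branch."""
    random.seed(seed)
    covered, tests = set(), []
    for _ in range(trials):
        x = random.randint(-100, 100)
        b = branch_id(x)
        if b not in covered:
            covered.add(b)
            tests.append((x, classify(x)))  # input plus observed output as a regression oracle
        if len(covered) == 3:               # all branches covered
            break
    return tests

print(generate_tests())  # three test cases, one per branch of classify()
```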
This monograph discusses software reuse and how it can be applied at different stages of the software development process, on different types of data, and at different levels of granularity. Several challenging hypotheses are analyzed and confronted using novel data-driven methodologies, in order to solve problems in requirements elicitation and specification extraction, software design and implementation, as well as software quality assurance. The book is accompanied by a number of tools, libraries, and working prototypes that practically illustrate how the phases of the software engineering life cycle can benefit from unlocking the potential of data. Software engineering researchers, experts, and practitioners can benefit from the various methodologies presented and can better understand how knowledge extracted from software data residing in various repositories can be combined and used to enable effective decision making and save considerable time and effort through software reuse. Mining Software Engineering Data for Software Reuse can also prove useful for graduate-level students in software engineering.
This work develops an automatic approach for the assessment of software reliability that is both theoretically sound and practical. The approach extends and combines theoretically sound techniques in a novel manner to systematically reduce the overhead of reliability assessment.