New insights into the shifting cultures of today's 'hypervisual' digital universe

With the advent of digital technologies and the Internet, photography can at last fulfill its promise and forgotten potential as both a versatile medium and an adaptable creative practice. This multidisciplinary volume provides new insights into the shifting cultures affecting the production, collection, usage, and circulation of photographic images on interactive World Wide Web platforms.
This book is for intermediate and advanced trumpeters. It features 16 printed duets in multiple styles, with written explanations of techniques and methods, and comes complete with accompanying play-along tracks via download or CD.
This book examines the reception of rhetoric and the rhetoric of reception. By considering salient traits of rhetorical utterances and texts seen in context, and relating these to different kinds of reception and/or audience use and negotiation, the authors explore the connections between rhetoric and reception. In our time, new media and new forms of communication make it harder to distinguish between speaker and audience, and the active involvement of users and audiences is more important than ever before. This project is based on the premise that rhetorical research should reconsider how the rhetorical audience is understood, conceptualized, and examined. Rather than treating audiences mainly as theoretical constructions to be examined textually and speculatively, the contributors give more attention to empirical explorations of actual audiences and users. The book provides readers with new knowledge on the workings of rhetoric, as well as illustrative and guiding examples of new methods in rhetorical studies.
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015 (available at fs.unm.edu/DSmT-book4.pdf or www.onera.fr/sites/default/files/297/2015-DSmT-Book4.pdf), or they are new. The contributions in each part of this volume are ordered chronologically.

The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.

Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume presents selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.

Finally, the third part presents contributions related to belief functions in general that have been published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, a generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.

We want to thank all the contributors to this fifth volume for their research work and their interest in the development of DSmT and belief functions. We are also grateful to other colleagues for encouraging us to edit this fifth volume, for sharing several ideas with us, and for their questions and comments on DSmT through the years. We thank the International Society of Information Fusion (www.isif.org) for disseminating the main research works related to information fusion (including DSmT) in the international fusion conference series over the years.
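To make the PCR idea concrete, here is a minimal sketch of the classical PCR5 rule for two sources over a small frame of discernment, written in Python rather than the Matlab used in the volume; the representation of focal elements as frozensets and the two-source example are my own illustrative choices, and the improved PCR5/PCR6 variants discussed in the book are not reproduced. The conjunctive consensus is computed first, and each partial conflict m1(A)m2(B) with A ∩ B = ∅ is then redistributed back to A and B in proportion to the masses that produced it, so the total mass remains 1.

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments (dicts mapping frozenset
    focal elements to masses) with the classical PCR5 rule: conjunctive
    consensus plus proportional redistribution of each partial conflict
    back to the two focal elements that generated it."""
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # conjunctive consensus part
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        elif wa + wb > 0:  # partial conflict: send it back to a and b
            combined[a] = combined.get(a, 0.0) + wa**2 * wb / (wa + wb)
            combined[b] = combined.get(b, 0.0) + wb**2 * wa / (wa + wb)
    return combined

# Two sources over the frame {A, B}; each source's masses sum to 1.
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.4}
m2 = {A: 0.3, B: 0.7}
print(pcr5(m1, m2))  # fused masses still sum to 1 after redistribution
```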
Florentin Smarandache is grateful to The University of New Mexico, U.S.A., which many times partially sponsored him to attend international conferences, workshops, and seminars on information fusion. Jean Dezert is grateful to the Department of Information Processing and Systems (DTIS) of the French Aerospace Lab (Office National d'Études et de Recherches Aérospatiales), Palaiseau, France, for encouraging him to carry on this research and for its financial support. Albena Tchamova is, first of all, grateful to Dr. Jean Dezert for the opportunity, over more than 20 years, to follow and share his smart and beautiful visions and ideas in the development of the powerful Dezert-Smarandache Theory for data fusion. She is also grateful to the Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, for sponsoring her to attend international conferences on Information Fusion.
This Handbook links the growing body of media and conflict research with the field of security studies. The academic sub-field of media and conflict has developed and expanded greatly over the past two decades. Operating across a diverse range of academic disciplines, academics are studying the impact the media has on governments pursuing war, responses to humanitarian crises and violent political struggles, and the role of the media as a facilitator of, and a threat to, both peace building and conflict prevention. This handbook seeks to consolidate existing knowledge by linking the body of conflict and media studies with work in security studies. The handbook is arranged into five parts:

1. Theory and Principles
2. Media, the State and War
3. Media and Human Security
4. Media and Policymaking within the Security State
5. New Issues in Security and Conflict and Future Directions

For scholars of security studies, this handbook will provide a key point of reference for state-of-the-art scholarship concerning the media-security nexus; for scholars of communication and media studies, the handbook will provide a comprehensive mapping of the media-conflict field.
The 30-volume set, comprising LNCS volumes 12346 to 12375, constitutes the refereed proceedings of the 16th European Conference on Computer Vision, ECCV 2020, which was planned to be held in Glasgow, UK, during August 23-28, 2020. The conference was held virtually due to the COVID-19 pandemic. The 1360 revised papers presented in these proceedings were carefully reviewed and selected from a total of 5025 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.
This paper describes an original method of global machine condition assessment for infrared condition monitoring and diagnostics systems. The method integrates two approaches: the first is processing and analysis of infrared images in the frequency domain by means of the 2D Fourier transform and a set of F-image features; the second is fusion of classification results obtained independently for the F-image features. To find the best condition assessment solution, two different types of classifiers, k-nearest neighbours and support vector machines, as well as a data fusion method based on Dezert–Smarandache theory, have been investigated. The method has been verified using infrared images recorded during experiments performed on a laboratory model of rotating machinery. The results obtained during the research confirm that the method can be successfully used for the identification of operational conditions that are difficult to recognise.
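The pipeline described above can be sketched in a few lines of Python. The spectral features below (mean spatial frequency, dominant peak, spectral entropy) are plausible stand-ins, not the paper's actual F-image feature set, the training data are synthetic, and the final averaging of classifier posteriors is a placeholder for the DSmT-based fusion rule the paper investigates.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def f_image_features(ir_image):
    """Illustrative scalar features from the 2D Fourier transform of an
    infrared image (an "F-image"); the paper's feature set differs."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ir_image)))
    spectrum /= spectrum.sum()                        # normalise total energy
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)              # radial distance from DC
    return np.array([
        (spectrum * r).sum(),                         # mean spatial frequency
        spectrum.max(),                               # dominant spectral peak
        -(spectrum * np.log(spectrum + 1e-12)).sum()  # spectral entropy
    ])

# Hypothetical training set: IR images labelled by machine condition.
rng = np.random.default_rng(0)
X = np.array([f_image_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                       # 0 = normal, 1 = faulty

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
svm = SVC(probability=True).fit(X, y)

# Each classifier's posteriors would become the belief masses to fuse;
# simple averaging stands in here for the DSmT-based combination.
p = (knn.predict_proba(X[:1]) + svm.predict_proba(X[:1])) / 2
print("fused condition probabilities:", p)
```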
"Having undergone profound material, aesthetic, and institutional transformations since the arrival of digital technologies, photography and film frequently intersect in the processes of convergence (the shared technological basis of diverse media in digital code) and remediation (the mutual reshaping of old and new media). However, the foundational relations between film and photography have a long history extending well back into the nineteenth century. This history includes many acclaimed practitioners who have worked in both media, such as Albert Kahn, Helen Levitt, Agnès Varda, Chris Marker, Robert Frank, Wim Wenders, Abbas Kiarostami, and Fiona Tan, but it also involves a range of intermedial forms that combine elements of both media, such as the film still, the film photonovel, and the photofilm. These hybrid forms were long neglected critically because they were considered marginal forms of paratextuality or deviations from medium specificity-the idea that a medium must be deployed according to its own specific capacities compared to other media"--
While it has traditionally been seen as a means of documenting an external reality or expressing an internal feeling, photography is now capable of actualizing pasts that never existed and experiences that were never lived. Thanks to the latest photographic technologies, we can now take photos in computer games, interpolate them into extended reality platforms, or synthesize them via artificial intelligence. To account for the most recent shifts in conceptualizations of photography, this book proposes the term virtual photography as a binding theoretical framework, defined as photography that retains the efficiency and function of real photography (made with or without a camera) while manifesting these in an unfamiliar or noncustomary form.
Several recent papers underline methodological points that limit the validity of published results in imaging studies in the life sciences, and especially the neurosciences (Carp, 2012; Ingre, 2012; Button et al., 2013; Ioannidis, 2014). At least three main points are identified that lead to biased conclusions in research findings: endemically low statistical power, selective outcome reporting, and selective analysis reporting. Because of this, and in view of the lack of replication studies, false discoveries persist. To overcome the poor reliability of research findings, several actions should be promoted, including conducting large cohort studies, data sharing, and data reanalysis. The construction of large-scale online databases should be facilitated, as they may contribute to the definition of a "collective mind" (Fox et al., 2014), facilitating open collaborative work or "crowd science" (Franzoni and Sauermann, 2014). Although technology alone cannot change scientists' practices (Wicherts et al., 2011; Wallis et al., 2013; Poldrack and Gorgolewski, 2014; Roche et al., 2014), technical solutions should be identified that support a more "open science" approach. The analysis of the data also plays an important role: for the analysis of large datasets, image processing pipelines should be constructed from the best algorithms available, and their performance should be objectively compared in order to disseminate the most relevant solutions. The provenance of processed data should also be ensured (MacKenzie-Graham et al., 2008). In population imaging, this means providing effective tools for data sharing and analysis without increasing the burden on researchers. This is the main objective of this research topic (RT), cross-listed between the specialty section "Computer Image Analysis" of Frontiers in ICT and Frontiers in Neuroinformatics.

Firstly, the RT gathers works on innovative solutions for the management of large imaging datasets, possibly distributed across various centers. The paper by Danso et al. describes their experience with the integration of neuroimaging data coming from several stroke imaging research projects. They detail how the initial NeuroGrid core metadata schema was gradually extended to capture all the information required for future meta-analysis while ensuring semantic interoperability for future integration with other biomedical ontologies. With a similar concern for interoperability, Shanoir relies on the OntoNeuroLog ontology (Temal et al., 2008; Gibaud et al., 2011; Batrancourt et al., 2015), a semantic model that formally describes the entities and relations of the medical imaging, neuropsychological, and behavioral assessment domains. Its "Study Card" mechanism makes it possible to seamlessly populate metadata aligned with the ontology, avoiding tedious manual entry, and to automatically check the conformity of imported data with a predefined study protocol. The ambitious objective of the BIOMIST platform is to provide an environment managing the entire life cycle of neuroimaging data, from acquisition to analysis, while ensuring full provenance information for any derived data. Interestingly, it is based on the product lifecycle management approach used in industry for managing products (here, neuroimaging data) from inception to manufacturing. Shanoir and BIOMIST share in part the same OntoNeuroLog ontology, facilitating their interoperability. ArchiMed is a data management system that has been integrated locally in a clinical environment for five years.
Not restricted to neuroimaging, ArchiMed deals with multi-modal, multi-organ imaging data, with specific considerations for long-term data conservation and confidentiality in accordance with French legislation. Shanoir and ArchiMed are integrated into FLI-IAM, the French national IT infrastructure for in vivo imaging.
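As a rough illustration of the "Study Card" idea mentioned above, the sketch below validates imported acquisition metadata against a predefined study protocol before the data are accepted. All field names and the protocol itself are hypothetical and do not reflect Shanoir's actual schema or ontology alignment.

```python
# Hypothetical study protocol: each metadata field has a set of allowed values.
STUDY_PROTOCOL = {
    "modality": {"MR"},
    "field_strength_tesla": {1.5, 3.0},
    "sequence": {"T1-MPRAGE", "T2-FLAIR"},
}

def check_conformity(metadata: dict) -> list[str]:
    """Return the list of protocol violations (empty if conformant)."""
    errors = []
    for field, allowed in STUDY_PROTOCOL.items():
        if field not in metadata:
            errors.append(f"missing field: {field}")
        elif metadata[field] not in allowed:
            errors.append(f"{field}={metadata[field]!r} not in {allowed}")
    return errors

# An imported scan's metadata is checked before entering the archive.
scan = {"modality": "MR", "field_strength_tesla": 3.0, "sequence": "T2-FLAIR"}
print(check_conformity(scan) or "conformant with study protocol")
```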