Probabilistic and Biologically Inspired Feature Representations

Author: Michael Felsberg

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 89

ISBN-13: 3031018222

Under the title "Probabilistic and Biologically Inspired Feature Representations," this text collects a substantial amount of work on the topic of channel representations. Channel representations are a biologically motivated, wavelet-like approach to visual feature descriptors: they are local and compact, they form a computational framework, and the represented information can be reconstructed. The first property is shared with many histogram- and signature-based descriptors, the last with the related concept of population codes. In their unique combination of properties, channel representations become a visual Swiss army knife: they can be used for image enhancement, visual object tracking, as 2D and 3D descriptors, and for pose estimation. The chapters of this text introduce the framework of channel representations, elaborate its attributes, and give further insight into its probabilistic modeling and algorithmic implementation. Channel representations are a useful toolbox for representing visual information for machine learning, as they establish a generic way to compute popular descriptors such as HOG, SIFT, and SHOT. Even in an age of deep learning, they provide a good compromise between hand-designed descriptors and the a priori structureless feature spaces found in the layers of deep networks.
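The encoding step behind a channel representation can be illustrated in a few lines. The sketch below is not code from the book; the cos^2 kernel (one of the kernels commonly associated with channel representations), the eight evenly spaced channels on [0, 1], and the crude three-channel decoding are assumptions chosen for illustration only.

import numpy as np

def channel_encode(x, num_channels=8, x_min=0.0, x_max=1.0):
    """Encode a scalar x into a vector of soft, overlapping channel activations."""
    centers = np.linspace(x_min, x_max, num_channels)  # channel centers
    spacing = centers[1] - centers[0]                   # distance between neighboring channels
    d = (x - centers) / spacing                         # distance to each center, in channel units
    # cos^2 kernel with compact support |d| < 3/2, zero elsewhere
    return np.where(np.abs(d) < 1.5, (2.0 / 3.0) * np.cos(np.pi * d / 3.0) ** 2, 0.0)

def decode_approx(channels, x_min=0.0, x_max=1.0):
    """Crude decoding: weighted mean of the centers around the strongest channel."""
    centers = np.linspace(x_min, x_max, len(channels))
    i = int(np.argmax(channels))
    lo, hi = max(0, i - 1), min(len(channels), i + 2)
    w = channels[lo:hi]
    return float(np.sum(w * centers[lo:hi]) / np.sum(w))

enc = channel_encode(0.37)
print("channel vector:", np.round(enc, 3))              # a few non-zero, overlapping activations
print("decoded value :", round(decode_approx(enc), 3))  # close to the original 0.37

Applying such an encoding to local image measurements (for example, gradient orientation) and pooling the channel vectors spatially is what relates this soft-histogram view to descriptors such as HOG and SIFT mentioned above.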


Probabilistic and Biologically Inspired Feature Representations

Author: Michael Felsberg

Publisher: Morgan & Claypool Publishers

Published: 2018-05-29

Total Pages: 105

ISBN-13: 1681730243

Under the title "Probabilistic and Biologically Inspired Feature Representations," this text collects a substantial amount of work on the topic of channel representations. Channel representations are a biologically motivated, wavelet-like approach to visual feature descriptors: they are local and compact, they form a computational framework, and the represented information can be reconstructed. The first property is shared with many histogram- and signature-based descriptors, the last with the related concept of population codes. In their unique combination of properties, channel representations become a visual Swiss army knife: they can be used for image enhancement, visual object tracking, as 2D and 3D descriptors, and for pose estimation. The chapters of this text introduce the framework of channel representations, elaborate its attributes, and give further insight into its probabilistic modeling and algorithmic implementation. Channel representations are a useful toolbox for representing visual information for machine learning, as they establish a generic way to compute popular descriptors such as HOG, SIFT, and SHOT. Even in an age of deep learning, they provide a good compromise between hand-designed descriptors and the a priori structureless feature spaces found in the layers of deep networks.


Advanced Methods and Deep Learning in Computer Vision

Author: E. R. Davies

Publisher: Academic Press

Published: 2021-11-09

Total Pages: 584

ISBN-13: 0128221496

Advanced Methods and Deep Learning in Computer Vision presents advanced computer vision methods, emphasizing machine and deep learning techniques that have emerged during the past 5–10 years. The book provides clear explanations of principles and algorithms, supported with applications. Topics covered include machine learning, deep learning networks, generative adversarial networks, deep reinforcement learning, self-supervised learning, extraction of robust features, object detection, semantic segmentation, linguistic descriptions of images, visual search, visual tracking, 3D shape retrieval, image inpainting, and novelty and anomaly detection. The book makes these advanced methods accessible to researchers and practitioners, and it is also suitable as a textbook for a second course on computer vision and deep learning for advanced undergraduates and graduate students. It provides an important reference on deep learning and advanced computer vision methods created by leaders in the field, illustrates principles with modern, real-world applications, and is suitable for self-learning or as a text for graduate courses.


Visual Domain Adaptation in the Deep Learning Era

Author: Gabriela Csurka

Publisher: Springer Nature

Published: 2022-06-06

Total Pages: 182

ISBN-13: 3031791754

Solving problems with deep neural networks typically relies on massive amounts of labeled training data to achieve high performance. While in many situations huge volumes of unlabeled data can be and often are generated and available, the cost of acquiring data labels remains high. Transfer learning (TL), and in particular domain adaptation (DA), has emerged as an effective solution to overcome the burden of annotation, exploiting the unlabeled data available from the target domain together with labeled data or pre-trained models from similar, yet different, source domains. The aim of this book is to provide an overview of such DA/TL methods applied to computer vision, a field whose popularity has increased significantly in the last few years. We set the stage by revisiting the theoretical background and some of the historical shallow methods before discussing and comparing different domain adaptation strategies that exploit deep architectures for visual recognition. We introduce the space of self-training-based methods that draw inspiration from the related fields of deep semi-supervised and self-supervised learning to solve the deep domain adaptation problem. Going beyond the classic domain adaptation problem, we then explore the rich space of problem settings that arise when applying domain adaptation in practice, such as partial or open-set DA, where source and target data categories do not fully overlap, continuous DA, where the target data comes as a stream, and so on. We next consider the least restrictive setting of domain generalization (DG), an extreme case where neither labeled nor unlabeled target data are available during training. Finally, we close by considering the emerging area of learning-to-learn and how it can be applied to further improve existing approaches to cross-domain learning problems such as DA and DG.
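The self-training idea referred to above can be sketched in a few lines: train on the labeled source domain, pseudo-label the confident target predictions, and retrain on both. The example below is a generic, minimal illustration under assumed choices (synthetic 2D data, a logistic-regression classifier, a fixed 0.9 confidence threshold); it is not the specific method of any chapter in the book.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labeled source domain and an unlabeled, shifted target domain (covariate shift).
X_src = rng.normal(0.0, 1.0, size=(500, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.0, size=(500, 2))

clf = LogisticRegression().fit(X_src, y_src)         # source-only model

for _ in range(3):                                   # a few self-training rounds
    proba = clf.predict_proba(X_tgt)
    conf = proba.max(axis=1)
    keep = conf > 0.9                                # confident target samples only
    pseudo = clf.classes_[proba.argmax(axis=1)]      # pseudo-labels for the target domain
    X_all = np.vstack([X_src, X_tgt[keep]])
    y_all = np.concatenate([y_src, pseudo[keep]])
    clf = LogisticRegression().fit(X_all, y_all)     # retrain on source + pseudo-labeled target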


Computer Vision in the Infrared Spectrum

Author: Michael Teutsch

Publisher: Springer Nature

Published: 2022-06-01

Total Pages: 128

ISBN-13: 3031018265

Human visual perception is limited to the visual-optical spectrum. Machine vision is not. Cameras sensitive to the different infrared spectra can enhance the abilities of autonomous systems and let them perceive the environment in a more holistic way. Relevant scene content can be made visible especially in situations where sensors of other modalities face issues, such as a visual-optical camera that needs a source of illumination. As a consequence, not only can human mistakes be avoided by increasing the level of automation, but machine-induced errors can also be reduced, errors that could, for example, make a self-driving car crash into a pedestrian under difficult illumination conditions. Furthermore, multi-spectral sensor systems with infrared imagery as one modality are a rich source of information and can provably increase the robustness of many autonomous systems. Applications that can benefit from utilizing infrared imagery range from robotics to automotive and from biometrics to surveillance. In this book, we provide a brief yet concise introduction to the current state of the art of computer vision and machine learning in the infrared spectrum. Based on various popular computer vision tasks such as image enhancement, object detection, or object tracking, we first motivate each task starting from established literature in the visual-optical spectrum. Then, we discuss the differences between processing images and videos in the visual-optical spectrum and in the various infrared spectra. An overview of the current literature is provided together with an outlook for each task. Furthermore, available and annotated public datasets and common evaluation methods and metrics are presented. In a separate chapter, popular applications that can greatly benefit from the use of infrared imagery as a data source are presented and discussed; among them are automatic target recognition, video surveillance, and biometrics including face recognition. Finally, we conclude with recommendations for well-fitting sensor setups and data processing algorithms for certain computer vision tasks. We address this book to prospective researchers and engineers new to the field, but also to anyone who wants an introduction to the challenges and approaches of computer vision using infrared images or videos. Readers will be able to start their work directly after reading the book, supported by a comprehensive collection of recent and relevant literature as well as related infrared datasets, including existing evaluation frameworks. Together with consistently decreasing costs for infrared cameras, new fields of application are appearing, making computer vision in the infrared spectrum a great opportunity for facing today's scientific and engineering challenges.


Computational Texture and Patterns

Author: Kristin J. Dana

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 99

ISBN-13: 3031018230

Visual pattern analysis is a fundamental tool in mining data for knowledge. Computational representations for patterns and texture allow us to summarize, store, compare, and label in order to learn about the physical world. Our ability to capture visual imagery with cameras and sensors has resulted in vast amounts of raw data, but using this information effectively in a task-specific manner requires sophisticated computational representations. We enumerate specific desirable traits for these representations: (1) intraclass invariance to support recognition; (2) illumination and geometric invariance for robustness to imaging conditions; (3) support for prediction and synthesis to use the model to infer continuation of the pattern; (4) support for change detection to detect anomalies and perturbations; and (5) support for physics-based interpretation to infer system properties from appearance. In recent years, computer vision has undergone a metamorphosis, with classic algorithms adapting to new trends in deep learning. This text provides a tour of this algorithm evolution, including pattern recognition, segmentation, and synthesis. We consider the general relevance and prominence of visual pattern analysis and applications that rely on computational models.


Multi-Modal Face Presentation Attack Detection

Author: Jun Wan

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 76

ISBN-13: 3031018249

For the last ten years, face biometrics has been intensively studied by the computer vision community. Face recognition systems have been used in mobile, banking, and surveillance systems. For face recognition systems, face spoofing attack detection is a crucial stage, since spoofing attacks could cause severe security issues in government sectors. Although effective methods for face presentation attack detection have been proposed so far, the problem is still unsolved due to the difficulty of designing features and methods that can work for new spoofing attacks. In addition, existing datasets for studying the problem are relatively small, which hinders progress in this relevant domain. In order to attract researchers to this important field and push the boundaries of the state of the art in face anti-spoofing detection, we organized the Face Spoofing Attack Workshop and Competition at CVPR 2019, an event that was part of the ChaLearn Looking at People Series. As part of this event, we released the largest multi-modal face anti-spoofing dataset so far, the CASIA-SURF benchmark. The workshop brought together many researchers from around the world, and the challenge attracted more than 300 teams. Some of the novel methodologies proposed in the context of the challenge achieved state-of-the-art performance. In this manuscript, we provide a comprehensive review of the face anti-spoofing techniques presented in this joint event and point out directions for future research in the field.


Tenth International Conference on Applications and Techniques in Cyber Intelligence (ICATCI 2022)

Author: Jemal H. Abawajy

Publisher: Springer Nature

Published: 2023-03-29

Total Pages: 775

ISBN-13: 3031288939

This book presents innovative ideas, cutting-edge findings, and novel techniques, methods, and applications in a broad range of cybersecurity and cyberthreat intelligence areas. As our society becomes smarter, there is a corresponding need to secure our cyberfuture. The book describes approaches and findings that are of interest to business professionals and governments seeking to secure our data and underpin infrastructures, as well as to individual users.


Person Re-Identification with Limited Supervision

Author: Rameswar Panda

Publisher: Springer Nature

Published: 2022-06-01

Total Pages: 86

ISBN-13: 3031018257

Person re-identification is the problem of associating observations of targets across different non-overlapping cameras. Most existing learning-based methods have improved performance on standard re-identification benchmarks, but at the cost of time-consuming and tedious data labeling. Motivated by this, learning person re-identification models with limited to no supervision has drawn a great deal of attention in recent years. In this book, we provide an overview of some of the literature on person re-identification, and then focus on specific problems in the context of person re-identification with limited supervision in multi-camera environments. We expect this to lead to interesting problems for researchers to consider in the future, beyond the conventional fully supervised setup that has framed much of the work in person re-identification. Chapter 1 starts with an overview of the problems in person re-identification and the major research directions, and surveys the prior works that align most closely with the limited-supervision theme of this book. Chapter 2 demonstrates how global camera network constraints in the form of consistency can be used to improve the accuracy of camera pair-wise person re-identification models and to select a minimal subset of image pairs for labeling without compromising accuracy. Chapter 3 presents two methods that hold the potential for developing highly scalable systems for video person re-identification with limited supervision. In the one-shot setting, where only one tracklet per identity is labeled, the objective is to utilize this small labeled set along with a larger unlabeled set of tracklets to obtain a re-identification model. Another setting is completely unsupervised, requiring no identity labels; the temporal consistency in the videos allows us to match objects across the cameras with higher confidence, even with limited to no supervision. Chapter 4 investigates person re-identification in dynamic camera networks. Specifically, we consider a novel problem that has received very little attention in the community but is critically important for many applications: a new camera is added to an existing group of cameras observing a set of targets. We propose two possible solutions for onboarding new cameras dynamically into an existing network using transfer learning with limited additional supervision. Finally, Chapter 5 concludes the book by highlighting the major directions for future research.


Biologically Inspired Cognitive Architectures 2018

Author: Alexei V. Samsonovich

Publisher: Springer

Published: 2018-08-23

Total Pages: 377

ISBN-13: 331999316X

The book focuses on original approaches intended to support the development of biologically inspired cognitive architectures. It bridges different disciplines, from classical artificial intelligence to linguistics, and from neuro- and social sciences to design and creativity, among others. The chapters, based on contributions presented at the Ninth Annual Meeting of the BICA Society, held on August 23-24, 2018, in Prague, Czech Republic, discuss emerging methods, theories, and ideas towards the realization of general-purpose humanlike artificial intelligence and towards fostering a better understanding of the ways the human mind works. All in all, the book provides engineers, mathematicians, psychologists, computer scientists, and other experts with a timely snapshot of recent research and a source of inspiration for future developments in the broadly intended areas of artificial intelligence and biological inspiration.