Mathematical morphology has developed a powerful methodology for segmenting images, based on connected filters and watersheds. We have chosen the abstract framework of node- or edge-weighted graphs for an extensive mathematical and algorithmic description of these tools. Volume 1 is devoted to watersheds. The topography of a graph emerges by observing the evolution of a drop of water moving from node to node on a weighted graph, along flowing paths, until it reaches a regional minimum. The upstream nodes of a regional minimum constitute its catchment zone. Catchment zones may be constructed locally and independently of each other, in contrast with the traditional approach, in which all catchment basins have to be constructed at the same time. Catchment zones may overlap; a new segmentation paradigm is thus proposed in which catchment zones cover each other according to a priority order. The resulting partition may then be corrected by local and parallel treatments to achieve the desired precision.
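The drop-of-water principle described above can be sketched in a few lines. The toy example below is our own illustration, not an algorithm from the book: each drop follows a single steepest-descent path on a node-weighted graph, so the resulting catchment zones form a partition rather than the overlapping zones the volume discusses.

```python
# Illustrative sketch of the drop-of-water principle on a node-weighted graph.
# A drop on each node follows a steepest-descent flowing path until it reaches
# a regional minimum; the nodes draining to that minimum form its catchment zone.
# (Hypothetical toy graph and simplified rule; not the book's algorithms.)

def catchment_zones(weights, edges):
    # adjacency list from undirected edges
    adj = {v: set() for v in weights}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def descend(v):
        # move to the lowest strictly lower neighbour until none exists
        while True:
            lower = [u for u in adj[v] if weights[u] < weights[v]]
            if not lower:
                return v  # v belongs to a regional minimum
            v = min(lower, key=lambda u: weights[u])

    zones = {}
    for v in weights:
        zones.setdefault(descend(v), set()).add(v)
    return zones

# A small "valley" graph with two regional minima, at nodes a and e
w = {"a": 1, "b": 3, "c": 4, "d": 2, "e": 0}
g = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
print(catchment_zones(w, g))  # b drains to a; c and d drain to e
```

On this chain, the drop on c climbs down through d to the minimum e, so the two catchment zones are {a, b} and {c, d, e}.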
Mathematical morphology has developed a powerful methodology for segmenting images, based on connected filters and watersheds. We have chosen the abstract framework of node- or edge-weighted graphs for an extensive mathematical and algorithmic description of these tools. Volume 2 proposes two physical models for describing valid flooding on a node- or edge-weighted graph, and establishes how to pass from one to the other. Many new flooding algorithms are derived, allowing parallel and local flooding of graphs. Watersheds and flooding are then combined to solve real problems. Their ability to model a real hydrographic basin, represented by its digital elevation model, constitutes a good validity check of the underlying physical models. The last part of Volume 2 explains why so many different watershed partitions exist for the same graph. Marker-based segmentation is the method of choice for curbing this proliferation. This book proposes new algorithms combining the advantages of previous methods, which treated node- and edge-weighted graphs differently.
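One elementary notion of flooding can be illustrated concretely: the water level reached at a node is the smallest "pass height" over all paths from a flooding source, i.e. the minimum over paths of the maximum node weight met along the way. The Dijkstra-like sketch below is our own simplification on a node-weighted graph, not one of the book's flooding algorithms.

```python
# Illustrative flooding sketch: level(v) = min over paths from a source of the
# highest node weight on the path, computed by lazy Dijkstra-style propagation.
# (Our own simplified model on a node-weighted graph, not the book's algorithms.)
import heapq

def flooding_levels(weights, edges, sources):
    adj = {v: set() for v in weights}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    level = {v: float("inf") for v in weights}
    heap = [(weights[s], s) for s in sources]
    heapq.heapify(heap)
    while heap:
        l, v = heapq.heappop(heap)
        if l >= level[v]:
            continue  # already flooded at a lower or equal level
        level[v] = l
        for u in adj[v]:
            # crossing into u costs at least u's own altitude
            heapq.heappush(heap, (max(l, weights[u]), u))
    return level

# To flood node c from source a, the water must rise over the pass b at height 3
w = {"a": 0, "b": 3, "c": 1}
g = [("a", "b"), ("b", "c")]
print(flooding_levels(w, g, {"a"}))  # {'a': 0, 'b': 3, 'c': 3}
```

The max/min structure here is the ultrametric "pass height" that also underlies watershed hierarchies; the heap makes the propagation run in near-linear time on sparse graphs.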
The second volume will deal with the main matrix and tensor decompositions and their uniqueness properties, as well as tensor networks, which are very useful for the analysis of massive data. Parametric estimation algorithms will be presented for the identification of the main tensor decompositions. After a brief historical review of compressed sampling methods, an overview of the main methods for recovering matrices and tensors with missing data will be given under the low-rank hypothesis. Illustrative examples will be provided.
Nowadays, tensors play a central role in the representation, mining, analysis, and fusion of multidimensional, multimodal, and heterogeneous big data in numerous fields. This set on Matrices and Tensors in Signal Processing aims to give a self-contained and comprehensive presentation of various concepts and methods, from fundamental algebraic structures to advanced tensor-based applications, including recently developed tensor models and efficient algorithms for dimensionality reduction and parameter estimation. Although its title suggests an orientation towards signal processing, the results presented in this set will also be of use to readers interested in other disciplines. This first book provides an introduction to matrices and higher-order tensors based on the structures of vector space and tensor space. Some standard algebraic structures are first described, with a focus on the Hilbertian approach for signal representation, and on function approximation based on Fourier series and orthogonal polynomial series. Matrices and hypermatrices associated with linear, bilinear and multilinear maps are studied in particular. Some basic results are presented for block matrices. The notions of decomposition, rank, eigenvalue, singular value, and unfolding of a tensor are introduced, emphasizing similarities and differences between matrices and higher-order tensors.
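One of the notions mentioned above, the unfolding (matricization) of a tensor, can be illustrated with a short NumPy sketch. The mode ordering used here follows a common convention and is our own assumption, not necessarily the one adopted in the book.

```python
# Minimal illustration of mode-n unfolding: the mode-n fibres of a tensor are
# arranged as columns of a matrix, so a 2 x 3 x 4 tensor has unfoldings of
# shapes (2, 12), (3, 8) and (4, 6). (Convention and example are ours.)
import numpy as np

def unfold(tensor, mode):
    # bring the requested mode to the front, then flatten the remaining modes
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)  # a third-order tensor
print(unfold(T, 0).shape)  # (2, 12)
print(unfold(T, 1).shape)  # (3, 8)
print(unfold(T, 2).shape)  # (4, 6)
```

Unlike the matrix case, the ranks of the different unfoldings of a higher-order tensor need not coincide, which is one of the differences between matrices and tensors that the book emphasizes.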
This book constitutes the proceedings of the First IAPR International Conference on Discrete Geometry and Mathematical Morphology, DGMM 2021, which was held during May 24-27, 2021, in Uppsala, Sweden. The conference was created by joining the International Conference on Discrete Geometry for Computer Imagery, DGCI, with the International Symposium on Mathematical Morphology, ISMM. The 36 papers included in this volume were carefully reviewed and selected from 59 submissions. They were organized in topical sections as follows: applications in image processing, computer vision, and pattern recognition; discrete and combinatorial topology; discrete geometry - models, transforms, visualization; discrete tomography and inverse problems; hierarchical and graph-based models, analysis and segmentation; learning-based approaches to mathematical morphology; multivariate and PDE-based mathematical morphology, morphological filtering. The book also contains 3 invited keynote papers.
"A unique and important resource, full of critical practical knowledge and technical details made readily accessible." - Tiffany Ito, University of Colorado at Boulder

"A comprehensive and engaging guide to EEG methods in social neuroscience; Dickter and Kieffaber offer practical details for conducting EEG research in a social/personality lab, with a broad perspective on how neuroscience can inform psychology. This is a unique and invaluable resource - a must-have for scientists interested in the social brain." - David M. Amodio, New York University

Electroencephalography (EEG) has seen a dramatic increase in application as a research tool in the psychological sciences in recent years. This book provides an introduction to the technology and techniques of EEG in the context of social and cognitive neuroscience research that will appeal to investigators (students or researchers) wishing to broaden their research aims to include EEG, and to those already using EEG but wishing to expand their analytic repertoire. It can also serve as a textbook for a postgraduate course or upper-level undergraduate course in any area of behavioural neuroscience. The book provides an introduction to the theory, technology, and techniques of EEG data analysis, along with the practical skills required to engage with this popular technology. Beginning with a background in the neural origins and physical principles involved in recording EEG, readers will also find discussions of practical considerations regarding the recording of EEG in humans, as well as tips for the configuration of an EEG laboratory. The analytic methods covered include event-related brain potentials (ERPs), spectral asymmetry, and time-frequency analyses. A conceptual background and a review of domain-specific applications are provided for each type of analysis. There's also a comprehensive guided analysis for each analytic method, including tutorial-style instruction and sample datasets.
This book is perfect for advanced students and researchers in the psychological sciences and related disciplines who are using EEG in their research.
Additive manufacturing (AM) is a fast-growing sector with the ability to evoke a revolution in manufacturing, due to its almost unlimited design freedom and its capability to produce personalised parts locally and with efficient material use. AM companies, however, still face technological challenges such as limited precision due to shrinkage, built-in stresses and limited process stability and robustness. Moreover, post-processing is often needed due to high roughness and residual porosity. Qualified, trained personnel are also in short supply. In recent years, there have been dramatic improvements in AM design methods, process control, post-processing, material properties and material range. However, if AM is going to gain a significant market share, it must be developed into a true precision manufacturing method. The production of precision parts relies on three principles: production must be robust (i.e. all sensitive parameters can be controlled); production must be predictable (for example, the shrinkage that occurs is acceptable because it can be predicted and compensated for in the design); and parts must be measurable (without metrology, accuracy, repeatability and quality assurance cannot be known). AM of metals is inherently a high-energy process with many sensitive and inter-related process parameters, making it susceptible to thermal distortion, defects and process drift. The complete modelling of these processes is beyond current computational power, and novel methods are needed to practicably predict performance and inform design. In addition, metal AM produces highly textured surfaces and complex surface features that stretch the limits of contemporary metrology. With so many factors to consider, there is a significant shortage of background material on how to inject precision into AM processes. This shortage is an important barrier to a wider uptake of advanced manufacturing technologies, and a comprehensive book is thus needed.
This book aims to inform the reader how to improve the precision of metal AM processes by tackling the three principles of robustness, predictability and metrology, and by developing computer-aided engineering methods that empower rather than limit AM design. Richard Leach is a professor in metrology at the University of Nottingham and heads up the Manufacturing Metrology Team. Prior to this position, he was at the National Physical Laboratory from 1990 to 2014. His primary love is instrument building, from concept to final installation, and his current interests are the dimensional measurement of precision and additive manufactured structures. His research themes include the measurement of surface topography, the development of methods for measuring 3D structures, the development of methods for controlling large surfaces to high resolution in industrial applications and the traceability of X-ray computed tomography. He is a leader of several professional societies and a visiting professor at Loughborough University and the Harbin Institute of Technology. Simone Carmignato is a professor in manufacturing engineering at the University of Padua. His main research activities are in the areas of precision manufacturing, dimensional metrology and industrial computed tomography. He is the author of books and hundreds of scientific papers, and he is an active member of leading technical and scientific societies. He has been chairman, organiser and keynote speaker for several international conferences, and received national and international awards, including the Taylor Medal from CIRP, the International Academy for Production Engineering.
The book presents selected papers from the International Conference on Data Science and Communication (ICTDsC 2023), organized by the Department of Electronics and Communication Engineering and the Department of Engineering Science and Humanities (DESH), Siliguri Institute of Technology, during 23 – 24 March 2023 in Siliguri, India. The book covers state-of-the-art research insights on artificial intelligence, machine learning, big data, data analytics, cyber security and forensics, network and mobile security, advanced computing, cloud computing, quantum computing, electronics systems, the Internet of Things, robotics and automation, blockchain and software technology, and digital technologies for the future.