Video segmentation has become one of the core areas of visual signal processing research. The objective of Video Segmentation and Its Applications is to present the latest advances in video segmentation and analysis techniques, covering the theoretical approaches, real applications, and methods being developed in the computer vision and video analysis community. The book also provides researchers and practitioners with a comprehensive understanding of the state of the art in video segmentation techniques and a resource for potential applications and successful practice.
The three-volume set LNCS 5101-5103 constitutes the refereed proceedings of the 8th International Conference on Computational Science, ICCS 2008, held in Krakow, Poland, in June 2008. The 167 revised papers of the main conference track, presented together with the abstracts of 7 keynote talks and the 100 revised papers from 14 workshops, were carefully reviewed and selected for inclusion in the three volumes. The main conference track was divided into approximately 20 parallel sessions addressing topics such as e-science applications and systems, scheduling and load balancing, software services and tools, new hardware and its applications, computer networks, simulation of complex systems, image processing and visualization, optimization techniques, numerical linear algebra, and numerical algorithms. The second volume contains workshop papers related to various computational research areas, e.g., computer graphics and geometric modeling, simulation of multiphysics multiscale systems, computational chemistry and its applications, computational finance and business intelligence, physical, biological and social networks, geocomputation, and teaching computational science. The third volume is mostly related to computer science topics such as bioinformatics' challenges to computer science, tools for program development and analysis in computational science, software engineering for large-scale computing, collaborative and cooperative environments, applications of workflows in computational science, as well as intelligent agents and evolvable systems.
During the past few years, we have been witnessing the rapid growth of the applications of Interactive Digital Video, Multimedia Computing, Desktop Video Teleconferencing, Virtual Reality, and High Definition Television (HDTV). Another information revolution, tied to Cyberspace, is almost within reach. Information, data, text, graphics, video, sound, etc., in the form of multimedia, can be requested, accessed, distributed, and transmitted to potentially every household. This is changing, and will continue to change, the way people do business, function in society, and entertain themselves. In the foreseeable future, many personalized, portable information terminals, which can be carried while traveling, will provide the link to a central computer network, allowing information exchange, including video, from node to node, from a center to a node, or among nodes. Facing this opportunity, the question is: what are the major significant technical challenges that must be solved to push the state of the art toward the realization of the technology advancement mentioned above? In our professional judgement, we feel that one of the major technical challenges is Video Data Compression. Video communications in the form of desktop teleconferencing, videophone, network video delivery on demand, and even games are going to be major media traveling the information superhighway, hopping from one node in Cyberspace to another.
One of the most intriguing problems in video processing is the removal of redundancy, that is, the compression of a video signal. A large number of applications depend on video compression, and data compression represents the enabling technology behind the multimedia and digital television revolution. In motion-compensated lossy video compression, the original video sequence is first split into three new sources of information: segmentation, motion, and residual error. These three information sources are then quantized, leading to a reduced rate for their representation but also to a distorted reconstructed video sequence. Once the decomposition of the original source into segmentation, motion, and residual error information is decided, the key remaining problem is the allocation of the available bits among these three sources of information. In this monograph a theory is developed which provides a solution to this fundamental bit allocation problem. It can be applied to all quadtree-based motion-compensated video coders which use a first-order differential pulse code modulation (DPCM) scheme for the encoding of the displacement vector field (DVF) and a block-based transform scheme for the encoding of the displaced frame difference (DFD). An optimal motion estimator which results in the smallest DFD energy for a given bit rate for the encoding of the DVF is also a result of this theory. Such a motion estimator is used to formulate a motion-compensated interpolation scheme which incorporates a global smoothness constraint for the DVF.
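To make the bit allocation problem concrete, here is a minimal sketch, using illustrative rate-distortion operating points rather than anything from the monograph, of classical Lagrangian allocation across the three sources: for a fixed multiplier lam, each source independently minimizes its cost D + lam*R, and lam is swept until the total rate fits the budget.

```python
# Lagrangian bit allocation sketch across segmentation, motion (DVF)
# and residual (DFD) sources. The (rate, distortion) points below are
# illustrative placeholders, not real codec measurements.

rd_points = {
    "segmentation": [(100, 9.0), (200, 5.0), (400, 3.5)],
    "motion":       [(150, 8.0), (300, 4.0), (600, 2.5)],
    "residual":     [(500, 6.0), (1000, 3.0), (2000, 1.2)],
}

def pick(points, lam):
    # Each source independently minimizes its Lagrangian cost D + lam * R.
    return min(points, key=lambda rd: rd[1] + lam * rd[0])

def allocate(budget):
    """Sweep the multiplier lam and keep the lowest-distortion
    allocation whose total rate fits the budget."""
    best = None
    for lam in [x / 1000.0 for x in range(1, 201)]:  # lam in [0.001, 0.2]
        choice = {src: pick(pts, lam) for src, pts in rd_points.items()}
        rate = sum(r for r, _ in choice.values())
        dist = sum(d for _, d in choice.values())
        if rate <= budget and (best is None or dist < best[0]):
            best = (dist, rate, choice)
    return best

print(allocate(budget=1500))
```

Because all sources optimize against the same multiplier, the sweep only reaches operating points on the convex hull of the combined rate-distortion curve, which is the standard trade-off of Lagrangian allocation schemes.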
The 30-volume set, comprising LNCS volumes 12346 to 12375, constitutes the refereed proceedings of the 16th European Conference on Computer Vision, ECCV 2020, which was planned to be held in Glasgow, UK, during August 23-28, 2020. The conference was held virtually due to the COVID-19 pandemic. The 1360 revised papers presented in these proceedings were carefully reviewed and selected from a total of 5025 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; image coding; image reconstruction; and motion estimation.
The increasing demand for sophisticated network applications, together with the growth of Internet traffic, has led to great efforts in the search for improvements in data transmission technologies to satisfy the growing demand for bandwidth. As far as optical networking is concerned, WDM (Wavelength Division Multiplexing) appears as the main advance in the transmission area, because it allows transmission rates close to the theoretical limit of optical fibers, on the order of dozens of terabits per second [1]. An essential issue in optical network design is defining how the network will be controlled, that is, what type of signalling will be responsible for resource reservation, route determination and fault handling, among other functions that constitute the control plane. Label switching, which in IP networks is exemplified by MPLS (Multiprotocol Label Switching) [2], was extended through GMPLS (Generalized Multiprotocol Label Switching) [3] to operate with several different network technologies, where the label can be represented in other ways, for example as time-slots in TDM networks, as physical switch ports, or as wavelengths (lambdas) in WDM networks.
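As an illustration of the generalized-label idea, the following sketch (hypothetical types and table entries, not code from any GMPLS stack) shows how a single switching abstraction can carry an MPLS label, a TDM time-slot, a physical port, or a WDM wavelength.

```python
# Sketch of GMPLS "generalized labels": the same forwarding abstraction
# works whether the label is a packet label, a time-slot, a port or a
# wavelength. All names and values here are hypothetical.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class MplsLabel:
    value: int          # 20-bit label in packet networks

@dataclass(frozen=True)
class TimeSlot:
    slot: int           # time-slot index in a TDM frame

@dataclass(frozen=True)
class SwitchPort:
    port: int           # physical port on a fiber switch

@dataclass(frozen=True)
class Wavelength:
    nm: float           # carrier wavelength (lambda) in a WDM network

GeneralizedLabel = Union[MplsLabel, TimeSlot, SwitchPort, Wavelength]

# A label-switching node maps (incoming interface, incoming label) to
# (outgoing interface, outgoing label), regardless of label technology.
forwarding_table = {
    ("if0", Wavelength(1550.12)): ("if3", Wavelength(1554.13)),
    ("if1", TimeSlot(4)):         ("if2", TimeSlot(9)),
}

def switch(in_if: str, label: GeneralizedLabel):
    return forwarding_table.get((in_if, label))

print(switch("if0", Wavelength(1550.12)))  # -> ('if3', Wavelength(nm=1554.13))
```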
Multimedia stands as one of the most challenging and exciting aspects of the information era. Although there are books available that deal with various facets of multimedia, the field has urgently needed a comprehensive look at recent developments in the systems, processing, and applications of image and video data in a multimedia environment.
Video is the main driver of bandwidth use, accounting for over 80 per cent of consumer Internet traffic. Video compression is a critical component of many of the available multimedia applications, as it is necessary for the storage or transmission of digital video over today's band-limited networks. The majority of this video is coded using international standards developed in collaboration between ITU-T study groups and MPEG. The MPEG family of video coding standards began in the early 1990s with MPEG-1, developed for video and audio storage on CD-ROMs, with support for progressive video. MPEG-2 was standardized in 1995 for applications of video on DVD and standard- and high-definition television, with support for interlaced and progressive video. MPEG-4 Part 2, also known as MPEG-4 Visual, was standardized in 1999 for low-bit-rate multimedia applications on mobile platforms and the Internet, with support for object-based or content-based coding by modeling the scene as background and foreground. Since MPEG-1, the main video coding standards have been based on so-called macroblocks. However, research groups continued to work beyond the traditional video coding architectures and found that macroblocks could limit compression performance for high-resolution video. Therefore, in 2013 High Efficiency Video Coding (HEVC), also known as H.265, was released, with a structure similar to H.264/AVC but using coding units with more flexible partitions than the traditional macroblocks. HEVC has greater flexibility in prediction modes and transform block sizes, as well as more sophisticated interpolation and deblocking filters. Earlier, in 2006, VC-1 was released: a video codec implemented by Microsoft in Windows Media Video (WMV) 9 and standardized by the Society of Motion Picture and Television Engineers (SMPTE). In 2017 the Joint Video Experts Team (JVET) released a call for proposals for a new video coding standard, initially called Beyond HEVC or Future Video Coding (FVC) and now known as Versatile Video Coding (VVC). VVC is being built on top of HEVC for applications in Standard Dynamic Range (SDR), High Dynamic Range (HDR), and 360° video, and is planned to be finalized by 2020. This book presents the new VVC and updates on HEVC. The book discusses the advances in lossless coding and covers the topic of screen content coding. Technical topics discussed include:
- Beyond the High Efficiency Video Coding
- High Efficiency Video Coding encoder
- Screen content
- Lossless and visually lossless coding algorithms
- Fast coding algorithms
- Visual quality assessment
- Other screen content coding algorithms
- Overview of the JPEG series
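To illustrate how coding units generalize fixed macroblocks, here is a minimal sketch, using a toy variance threshold in place of the encoder's real rate-distortion test, of an HEVC-style quadtree split of a 64x64 coding tree unit.

```python
# Toy quadtree coding-unit split: a 64x64 coding tree unit (CTU) is
# recursively split into four quadrants until a block is "flat enough"
# or the 8x8 minimum CU size is reached. The variance criterion is an
# illustrative stand-in for a real rate-distortion decision.

import numpy as np

MIN_CU = 8

def split_ctu(block, x=0, y=0, threshold=100.0):
    """Return a list of (x, y, size) coding units covering the block."""
    size = block.shape[0]
    if size <= MIN_CU or block.var() < threshold:
        return [(x, y, size)]                 # leaf: keep as one CU
    h = size // 2
    cus = []
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        cus += split_ctu(block[dy:dy + h, dx:dx + h],
                         x + dx, y + dy, threshold)
    return cus

rng = np.random.default_rng(0)
ctu = rng.integers(0, 256, size=(64, 64)).astype(float)
ctu[:32, :32] = 128.0                         # one flat quadrant

cus = split_ctu(ctu)
print(len(cus), "coding units; largest:", max(cus, key=lambda c: c[2]))
```

The flat quadrant survives as a single 32x32 coding unit while the noisy regions are split down to 8x8, which is exactly the kind of content-adaptive partitioning that fixed-size macroblocks cannot provide.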