Advances in Photometric 3D-Reconstruction


Author: Jean-Denis Durou

Publisher: Springer Nature

Published: 2020-09-16

Total Pages: 239

ISBN-13: 3030518663


This book presents the latest advances in photometric 3D reconstruction. It provides the reader with an overview of the state of the art in the field and of the latest research into both the theoretical foundations of photometric 3D reconstruction and its practical application in several fields (including security, medicine, cultural heritage and archiving, and engineering). These techniques play a crucial role in emerging technologies such as 3D printing, since they permit the direct conversion of an image into a solid object. The book covers both theoretical analysis and real-world applications, highlighting the importance of deepening interdisciplinary skills, and as such will be of interest to academic researchers and practitioners from the computer vision and mathematical 3D modeling communities, as well as to engineers involved in 3D printing. No prior background is required beyond a general knowledge of classical computer vision models, numerical methods for optimization, and partial differential equations.
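As a concrete illustration of the image formation model that underlies many photometric 3D-reconstruction methods, the following sketch shows classical Lambertian photometric stereo: each pixel intensity is I_k = rho * (n . l_k) for light direction l_k, so three non-coplanar lights determine rho * n through a 3x3 linear system. The light directions, albedo, and normal below are made-up illustrative values, not data from the book.

```python
import math

def solve_3x3(A, b):
    """Solve the 3x3 linear system A x = b by Cramer's rule (A is a list of rows)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]  # replace one column with the right-hand side
        xs.append(det(Ac) / d)
    return xs

# Three known, non-coplanar light directions (axis-aligned here for clarity).
L = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# Intensities observed at one pixel under each light, simulated for a surface
# point with albedo 0.5 and unit normal (0, 0.6, 0.8): I_k = 0.5 * (n . l_k).
I = [0.0, 0.3, 0.4]

g = solve_3x3(L, I)                    # g = rho * n
rho = math.sqrt(sum(x * x for x in g)) # albedo is the magnitude of g
n = [x / rho for x in g]               # unit surface normal is the direction of g
print(rho, n)                          # 0.5 and [0.0, 0.6, 0.8]
```

With the albedo and normal recovered per pixel, a depth map is then obtained by integrating the normal field, which is where the numerical methods and PDEs mentioned in the blurb come in.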


Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds


Author: Vladislav Golyanik

Publisher: Springer Nature

Published: 2020-06-04

Total Pages: 352

ISBN-13: 3658305673


Vladislav Golyanik proposes several new methods for dense non-rigid structure from motion (NRSfM) as well as for the alignment of point clouds. The introduced methods improve on the state of the art in several respects, such as the ability to handle inaccurate point tracks and contaminated 3D data. The primary contributions of this book are NRSfM with shape priors obtained on the fly from several unoccluded frames of the sequence, and a new gravitational class of methods for point set alignment. About the Author: Vladislav Golyanik is a postdoctoral researcher at the Max Planck Institute for Informatics in Saarbrücken, Germany. His research currently focuses on 3D reconstruction and analysis of general deformable scenes, 3D reconstruction of the human body, and matching problems on point sets and graphs. He is interested in machine learning (both supervised and unsupervised), physics-based methods, and new hardware and sensors for computer vision and graphics (e.g., quantum computers and event cameras).


Computer Vision


Author: Roberto Cipolla

Publisher: Springer

Published: 2010-04-06

Total Pages: 362

ISBN-13: 3642128483


Computer vision is the science and technology of making machines that see. It is concerned with the theory, design, and implementation of algorithms that can automatically process visual data to recognize objects, track them, and recover their shape and spatial layout. The International Computer Vision Summer School (ICVSS) was established in 2007 to provide both a clear overview and an in-depth analysis of state-of-the-art research in computer vision. The courses are delivered by world-renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real computer vision problems. The school is organized every year by the University of Cambridge (Computer Vision and Robotics Group) and the University of Catania (Image Processing Lab). Different topics are covered each year. A summary of past Computer Vision Summer Schools can be found at http://www.dmi.unict.it/icvss. This edited volume contains a selection of articles covering some of the talks and tutorials held during the first two editions of the school, on topics such as Recognition, Registration and Reconstruction. The chapters provide an in-depth overview of these challenging areas, with key references to the existing literature.


Mathematical Methods for Objects Reconstruction


Author: Emiliano Cristiani

Publisher: Springer Nature

Published: 2023-07-31

Total Pages: 185

ISBN-13: 9819907764


The volume collects several contributions to the INdAM workshop "Mathematical Methods for Objects Reconstruction: from 3D Vision to 3D Printing", held in Rome in February 2021. The goal of the workshop was to discuss new methods and conceptual structures for managing the challenging problems that arise between 3D vision and 3D printing. The chapters reflect this goal, and the authors are academic researchers and industry experts working in the areas of 3D modeling, computer vision, and 3D printing, and/or developing new mathematical methods for these problems. The contributions present methodologies and challenges raised by the emergence of large-scale 3D reconstruction applications and low-cost 3D printers. The volume brings together complementary knowledge from different areas of mathematics, computer science, and engineering on research topics related to 3D printing that are, so far, largely unexplored. Young researchers and future scientific leaders in the fields of 3D data acquisition, 3D scene reconstruction, and 3D printing software development will find an excellent introduction to these problems and to the mathematical techniques needed to solve them.


Image-based Deformable 3D Reconstruction Using Differential Geometry and Cartan's Connections


Author: Shaifali Parashar

Publisher:

Published: 2017

Total Pages: 0

ISBN-13:


Reconstructing the 3D shape of objects from multiple images is an important goal in computer vision and has been extensively studied for both rigid and non-rigid (or deformable) objects. Structure-from-Motion (SfM) is an algorithm that performs 3D reconstruction of rigid objects using the inter-image visual motion from multiple images obtained with a moving camera; it is a very accurate and stable solution. Deformable 3D reconstruction, however, has mainly been studied for monocular images (obtained from a single camera) and remains an open research problem. Current methods exploit visual cues such as inter-image visual motion and shading in order to formalise a reconstruction algorithm. This thesis focuses on the use of inter-image visual motion for solving this problem. Two types of scenarios exist in the literature: 1) Non-Rigid Structure-from-Motion (NRSfM) and 2) Shape-from-Template (SfT). The goal of NRSfM is to reconstruct multiple shapes of a deformable object as viewed in multiple images, while SfT (also referred to as template-based reconstruction) uses a single image of a deformed object and its 3D template (a textured 3D shape of the object in one configuration) to recover the deformed shape of the object.

We propose an NRSfM method to reconstruct deformable surfaces undergoing isometric deformations (the objects do not stretch or shrink under an isometric deformation) using Riemannian geometry. This allows NRSfM to be expressed in terms of Partial Differential Equations (PDEs) and to be solved algebraically. We show that the problem has linear complexity and that the reconstruction algorithm has a very low computational cost compared to existing NRSfM methods. This work motivated us to use differential geometry and Cartan's theory of connections to model NRSfM, which opened up the possibility of extending the solution to deformations other than isometry. In fact, this led to a unified theoretical framework for modelling and solving both NRSfM and SfT for various types of deformations. It also makes it possible to solve SfT without an explicit model of the deformation. An important point is that most NRSfM and SfT methods reconstruct only the thin-shell surface of the object; the reconstruction of the entire volume (the thin-shell surface and the interior) has not been explored yet. We propose the first SfT method that reconstructs the entire volume of a deformable object.
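The isometry assumption the abstract relies on can be made concrete with a toy numerical check (my illustrative example, not code from the thesis): bending a flat strip into a circular arc preserves geodesic (along-surface) distances, while straight-line Euclidean distances between the same points shrink.

```python
import math

def bend(s, radius=1.0):
    """Map arc-length parameter s on a flat strip to a point on a circular arc.

    Because s is the arc-length parameter, this map is an isometry:
    distances measured along the surface are preserved.
    """
    theta = s / radius
    return (radius * math.sin(theta), radius * (1.0 - math.cos(theta)))

# Sample points by arc length on the flat template, s in [0, 1].
samples = [i * 0.1 for i in range(11)]
deformed = [bend(s) for s in samples]

# Geodesic distance between the endpoints along the bent strip is still
# |s_j - s_i| = 1.0, while the straight-line chord between the deformed
# endpoints is strictly shorter: the surface bent but did not stretch.
geodesic = abs(samples[-1] - samples[0])
chord = math.dist(deformed[0], deformed[-1])
print(f"geodesic: {geodesic:.3f}, chord: {chord:.3f}")
```

NRSfM methods such as the one described above exploit exactly this constraint: whatever the unknown deformation, inter-point surface distances must match those of the (flat or template) shape, which turns reconstruction into a well-constrained system.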


Towards Photo-realistic 3D Reconstruction from Casual Scanning


Author: Jeong Joon Park

Publisher:

Published: 2021

Total Pages: 0

ISBN-13:


In this thesis, I address the problem of obtaining photo-realistic 3D models of small-scale indoor scenes from a stream of images captured with a hand-held camera. Recovering the 3D structure of real-world scenes has been an important topic of research in computer vision, due to its wide applicability in virtual tourism, augmented reality, autonomous driving, and robotics. While numerous reconstruction methods have been proposed, they typically present trade-offs between the practicality of capture and the realism of the reconstructed model. I introduce novel 3D reconstruction techniques that effectively navigate this trade-off curve in order to produce photo-realistic models from user-friendly capture setups. Finally, I suggest new directions for learning generalizable scene priors to enable capture from partial inputs.

Creating a photo-realistic digital replica of a physical scene involves careful modeling of geometry, surface materials, and scene lighting, all of which I address in this thesis. At the same time, a reconstruction system should be easy to use for casual users to truly unlock 3D-related applications. This thesis suggests three criteria for a casual reconstruction system that could greatly reduce the time and resources spent scanning: i) the input should come from a hand-held consumer-grade camera; ii) the system should reconstruct full appearance from a handful of input views of a scene, as opposed to dense view sampling; and iii) it should automatically complete unobserved parts of a scene. The thesis proposes novel techniques to tackle each of these criteria.

I first describe a technique to reconstruct the appearance of shiny objects, leveraging the infrared laser system of an RGB-D sensor as a calibrated point light source to recover surface reflectance. This method takes as input a video from a hand-held camera, together with the scene lighting captured by a 360° camera, and generates a realistic replica of the scene, featuring high-resolution texture and specular-highlight modeling. The output model allows the captured scene to be rendered virtually from any viewing direction. Next, I discuss joint reconstruction of photo-realistic scene appearance and environment lighting of a target scene using a hand-held sensor. I achieve this through joint optimization of a segmentation neural network and a material-specific lighting model to reconstruct the input images, and adopt a neural-network-enhanced rendering technique that achieves exceptional realism. The combination of physics and machine learning achieves both photo-realism and the ability to extrapolate to new views, reducing the range of views required from users.

While the first two approaches allow realistic reconstruction from casual scanning, they can only model surfaces that are captured during scanning, i.e., they do not complete missing surfaces. Completing unobserved regions typically calls for machine learning algorithms that extract and apply scene/object priors from a large database. Traditionally, the lack of efficient 3D representations has limited the development of deep learning approaches in 3D. To facilitate machine learning in 3D, I devise the DeepSDF approach, which describes a 3D surface as the decision boundary of a neural network; this representation is highly memory-efficient and can at the same time model continuous surfaces. The new representation, along with a newly proposed learning algorithm, allows a full, plausible shape to be reconstructed from a partial and noisy object scan. I show through experiments that the new representation is highly effective in learning geometric priors from a dataset of objects. Finally, I extend the DeepSDF representation to model multi-object scenes. Specifically, I introduce a new method for training a generative model of unaligned objects via adversarial training in the feature space. I show that reconstructing a multi-object scene from a noisy, partial scan amounts to simply optimizing the randomly initialized latent vectors of the generative model to fit the observed points.
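The signed-distance representation behind DeepSDF can be illustrated without any learning machinery (this is my minimal analytic sketch, not the thesis code): a shape is the zero level set of a function f(x) that is negative inside, zero on the surface, and positive outside. DeepSDF replaces this hand-written function with a neural network conditioned on a latent shape code.

```python
import math

def sphere_sdf(point, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from `point` to a sphere: <0 inside, 0 on the surface, >0 outside."""
    return math.dist(point, center) - radius

inside = sphere_sdf((0.0, 0.0, 0.0))    # -1.0: at the center, one radius inside
surface = sphere_sdf((1.0, 0.0, 0.0))   #  0.0: on the zero level set (the surface)
outside = sphere_sdf((2.0, 0.0, 0.0))   #  1.0: one radius outside

# A mesh can be extracted from any SDF by locating sign changes on a grid
# (e.g. with marching cubes); the same extraction step is applied to a
# trained DeepSDF network to recover an explicit surface.
print(inside, surface, outside)
```

Because the function is defined at every continuous point rather than on a fixed voxel grid, the representation is compact and resolution-independent, which is what makes it attractive for learning shape priors.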


3D Reconstruction


Author: Jim Ashworth

Publisher:

Published: 2014

Total Pages: 0

ISBN-13: 9781629482651


Three-dimensional (3D) reconstruction is the process of capturing the shape and appearance of real objects using computer vision and computer graphics. In this book, the authors present topical research on the methods, applications, and challenges of 3D reconstruction. Topics include: 3D medical reconstruction and case studies; 3D reconstruction of coronary anatomy using invasive imaging modalities; recent advances in EEL spectroscopic tomography; stereoscopic Schlieren/shadowgraph 3D reconstruction techniques; three-dimensional refractive-index imaging of cells to study the light-scattering properties of cells and tissue; 3D imaging of material properties by combining a scanning probe microscope and an ultramicrotome; 3D reconstruction and its application to maxillofacial surgery training; automated systems for processing fragmented material in archaeological and craniological 3D reconstruction; three-dimensional reconstruction of an acinus for numerical and experimental studies; large-scene reconstruction based on ToF cameras; and the structure and motion factorisation of non-rigid objects.


State of the Art in Dense Monocular Non-rigid 3D Reconstruction


Author:

Publisher:

Published: 2023

Total Pages: 0

ISBN-13:


Abstract: 3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics. It is an ill-posed inverse problem, since, without additional prior assumptions, it permits infinitely many solutions that project accurately onto the input 2D images. Non-rigid reconstruction is a foundational building block for downstream applications such as robotics, AR/VR, and visual content creation. The key advantage of monocular cameras is their omnipresence and availability to end users, as well as their ease of use compared to more sophisticated camera set-ups such as stereo or multi-view systems. This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views. It reviews the fundamentals of 3D reconstruction and deformation modeling from 2D image observations. It then starts from general methods, which handle arbitrary scenes and make only a few prior assumptions, and proceeds towards techniques that make stronger assumptions about the observed objects and types of deformations (e.g. human faces, bodies, hands, and animals). A significant part of this STAR is also devoted to classification and a high-level comparison of the methods, as well as an overview of datasets for training and evaluating the discussed techniques. We conclude by discussing open challenges in the field and the social aspects associated with the use of the reviewed methods.