Towards Photo-realistic 3D Reconstruction from Casual Scanning

Book Detail

  • Author : Jeong Joon Park
  • Release Date : 2021
  • File Size : 59.59 MB

Summary

In this thesis, I address the problem of obtaining photo-realistic 3D models of small-scale indoor scenes from a stream of images captured with a hand-held camera. Recovering the 3D structure of real-world scenes has long been an important topic in computer vision, owing to its wide applicability in virtual tourism, augmented reality, autonomous driving, and robotics. While numerous reconstruction methods have been proposed, they typically trade off practicality of capture against realism of the reconstructed model. I introduce novel 3D reconstruction techniques that effectively navigate this trade-off to produce photo-realistic models from user-friendly capture setups. Finally, I suggest new directions for learning generalizable scene priors to enable capture from partial inputs.

Creating a photo-realistic digital replica of a physical scene requires careful modeling of geometry, surface materials, and scene lighting, all of which I address in this thesis. At the same time, a reconstruction system should be easy for casual users to operate if it is to truly unlock 3D-related applications. This thesis proposes three criteria for a casual reconstruction system that greatly reduce the time and resources spent on scanning: i) the input should come from a hand-held, consumer-grade camera; ii) the system should reconstruct full appearance from a handful of input views of a scene rather than a dense view sampling; and iii) it should automatically complete unobserved parts of the scene. The thesis proposes novel techniques to tackle each of these criteria.

I first describe a technique to reconstruct the appearance of shiny objects, leveraging the infrared laser system of an RGB-D sensor as a calibrated point light source to recover surface reflectance.
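The core idea of point-light reflectance recovery can be illustrated with a minimal Lambertian sketch (a toy stand-in, not the thesis pipeline): when the light position is calibrated, each observed pixel intensity is a known geometric shading term scaled by an unknown albedo, so the albedo follows from least squares. All names and values below are hypothetical.

```python
import numpy as np

# Toy point-light albedo fit: under the Lambertian model each observation
# is I = rho * max(n.l, 0) / d^2, where the light direction l and distance
# d are known once the point light is calibrated against the depth sensor.
rng = np.random.default_rng(0)

rho_true = 0.7                        # unknown diffuse albedo to recover
n = np.array([0.0, 0.0, 1.0])         # surface normal (from the depth map)
P = rng.uniform(-0.2, 0.2, (50, 3))   # observed surface points
P[:, 2] = 0.0                         # a flat patch in the z=0 plane
light = np.array([0.1, 0.0, 0.5])     # calibrated point-light position

to_light = light - P
d = np.linalg.norm(to_light, axis=1)
l = to_light / d[:, None]
shading = np.clip(l @ n, 0.0, None) / d**2    # known geometric term

obs = rho_true * shading + rng.normal(0.0, 1e-3, shading.shape)  # noisy pixels

# Closed-form least squares: rho = <obs, shading> / <shading, shading>
rho_est = float(obs @ shading / (shading @ shading))
print(round(rho_est, 2))
```

Real captures of shiny objects must additionally model specular reflectance, which is where the thesis method goes well beyond this diffuse toy fit.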
This method takes as input a video from a hand-held camera, together with the scene lighting captured with a 360° camera, and generates a realistic replica of the scene featuring high-resolution texture and specular highlight modeling. The output model can be virtually rendered from any viewing direction.

Next, I discuss joint reconstruction of photo-realistic scene appearance and environment lighting of a target scene using a hand-held sensor. I achieve this through joint optimization of a segmentation neural network and a material-specific lighting model to reconstruct the input images, and I adopt a neural-network-enhanced rendering technique that achieves exceptional realism. This combination of physics and machine learning achieves both photo-realism and the ability to extrapolate to new views, reducing the number of views users must capture.

While the first two approaches allow realistic reconstruction from casual scanning, they can only model surfaces that are observed during scanning; they do not complete missing surfaces. Completing unobserved regions typically calls for machine learning algorithms that extract and apply scene and object priors learned from a large database. Traditionally, the lack of efficient 3D representations has limited the development of deep learning approaches in 3D. To facilitate machine learning in 3D, I devise DeepSDF, a representation that describes a 3D surface as the decision boundary of a neural network, which is highly memory-efficient and can model continuous surfaces. This new representation, along with a newly proposed learning algorithm, allows reconstructing a full, plausible shape from a partial and noisy object scan. I show through experiments that the representation is highly effective at learning geometric priors from a dataset of objects. Finally, I extend the DeepSDF representation to model multi-object scenes.
Specifically, I introduce a new method for training a generative model of unaligned objects via adversarial training in feature space. I show that reconstructing a multi-object scene from a noisy, partial scan amounts to simply optimizing the randomly initialized latent vectors of the generative model to fit the observed points.
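The DeepSDF and latent-optimization ideas above can be sketched end to end with a toy analytic decoder standing in for a trained network: a shape is the zero level set of f(z, x), and reconstruction from a partial scan is gradient descent on the latent code z so that the decoded SDF vanishes at the observed points. Everything below (the sphere decoder, step sizes, iteration counts) is a hypothetical illustration, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def decoder(z, x):
    """Toy 'decoder': signed distance of points x (N,3) under latent z (4,).
    z packs a sphere center and radius; in DeepSDF this would be an MLP."""
    center, radius = z[:3], z[3]
    return np.linalg.norm(x - center, axis=1) - radius

# The shape is the decision boundary {x : decoder(z, x) = 0}:
# negative inside, positive outside.
z_true = np.array([0.3, 0.0, 0.0, 0.8])
inside, outside = np.array([[0.3, 0.0, 0.0]]), np.array([[3.0, 0.0, 0.0]])
assert decoder(z_true, inside)[0] < 0 < decoder(z_true, outside)[0]

# Partial, noisy scan: points on only one hemisphere of the true surface,
# as if half the object were never observed during scanning.
theta = rng.uniform(0.0, np.pi / 2, 200)   # polar angle: upper half only
phi = rng.uniform(0.0, 2 * np.pi, 200)
pts = z_true[3] * np.stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)], axis=1)
pts += z_true[:3] + rng.normal(0.0, 0.005, pts.shape)

# Reconstruction = optimize a randomly initialized latent code so that the
# decoded SDF is zero at every observed point (mean squared SDF loss).
z = np.array([0.0, 0.0, 0.0, 0.5]) + rng.normal(0.0, 0.1, 4)
for _ in range(500):
    f = decoder(z, pts)
    diff = pts - z[:3]
    dirs = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    grad_center = (-2.0 * f[:, None] * dirs).mean(axis=0)  # d loss / d center
    grad_radius = (-2.0 * f).mean()                        # d loss / d radius
    z -= 0.1 * np.concatenate([grad_center, [grad_radius]])

print(np.round(z, 2))  # close to the true center and radius
```

Because the decoder carries a strong shape prior (here, "everything is a sphere"), fitting the latent code to the observed hemisphere also completes the unobserved half; a learned DeepSDF decoder plays the same role with priors learned from a shape database.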
