Google Patent | Enhancing performance capture with real-time neural rendering
Patent: Enhancing performance capture with real-time neural rendering
Publication Number: 20220014723
Publication Date: 20220113
Applicant: Google
Abstract
Three-dimensional (3D) performance capture and machine learning can be used to re-render high quality novel viewpoints of a captured scene. A textured 3D reconstruction is first rendered to a novel viewpoint. Due to imperfections in geometry and low-resolution texture, the 2D rendered image contains artifacts and is low quality. Accordingly, a deep learning technique is disclosed that takes these images as input and generates visually enhanced re-renderings. The system is specifically designed for VR and AR headsets, and accounts for consistency between two stereo views.
Claims
-
A method for re-rendering an image rendered using a volumetric reconstruction to improve its quality, comprising: receiving the image rendered using the volumetric reconstruction, the image having imperfections; defining a synthesizing function and a segmentation mask to generate an enhanced image from the image, the enhanced image having fewer imperfections than the image; and computing the synthesizing function and the segmentation mask using a neural network trained based on minimizing a loss function between a predicted image generated by the neural network and a ground truth image captured by a ground truth camera during training.
-
The method according to claim 1, wherein the method further includes prior to receiving the image rendered using the volumetric reconstruction: capturing a 3D model using a volumetric capture system; and rendering the image using the volumetric reconstruction.
-
The method according to claim 2, wherein the ground truth camera and the volumetric capture system are both directed to a view during training, the ground truth camera producing higher quality images than the volumetric capture system.
-
The method according to claim 1, wherein the loss function includes a reconstruction loss based on a reconstruction difference between a segmented ground truth image mapped to activations of layers in a neural network and a segmented predicted image mapped to activations of layers in a neural network, the segmented ground truth image segmented by a ground truth segmentation mask to remove background pixels and the segmented predicted image segmented by a predicted segmentation mask to remove background pixels.
-
The method according to claim 1, wherein the loss function includes a head reconstruction loss based on a reconstruction difference between a cropped ground truth image mapped to activations of layers in a neural network and a cropped predicted image mapped to activations of layers in a neural network, the cropped ground truth image cropped to a head of a person identified in a ground truth segmentation mask and the cropped predicted image cropped to the head of the person identified in a predicted segmentation mask.
-
The method according to claim 4, wherein the reconstruction difference is saliency re-weighted to down-weight reconstruction differences for pixels above a maximum error or below a minimum error.
-
The method according to claim 1, wherein the loss function includes a mask loss based on a mask difference between a ground truth segmentation mask and a predicted segmentation mask.
-
The method according to claim 7, wherein the mask difference is saliency re-weighted to down-weight reconstruction differences for pixels above a maximum error or below a minimum error.
-
The method according to claim 1, wherein: the predicted image is one of a series of consecutive frames of a predicted sequence and the ground truth image is one of a series of consecutive frames of a ground truth sequence; and wherein: the loss function includes a temporal loss based on a gradient difference between a temporal gradient of the predicted sequence and a temporal gradient of the ground truth sequence.
-
The method according to claim 1, wherein the predicted image is one of a predicted stereo pair of images and the loss function includes a stereo loss based on a stereo difference between the predicted stereo pair of images.
-
The method according to claim 1, wherein the neural network is based on a fully convolutional model.
-
The method according to claim 1, wherein the computing the synthesizing function and segmentation mask using a neural network comprises: computing the synthesizing function and segmentation mask for a left eye viewpoint; and computing the synthesizing function and segmentation mask for a right eye viewpoint.
-
The method according to claim 1, wherein the computing the synthesizing function and segmentation mask using a neural network is performed in real time.
-
A performance capture system comprising: a volumetric capture system configured to render at least one image reconstructed from at least one viewpoint of a captured 3D model, the at least one image including imperfections; a rendering system configured to receive the at least one image from the volumetric capture system and to generate, in real time, at least one enhanced image in which the imperfections of the at least one image are reduced, the rendering system including a neural network configured to generate the at least one enhanced image by training prior to use, the training including minimizing a loss function between predicted images generated by the neural network during training and corresponding ground truth images captured by at least one ground truth camera coordinated with the volumetric capture system during training.
-
The performance capture system according to claim 14, wherein the at least one ground truth camera is included in the performance capture system during training and otherwise not included in the performance capture system.
-
The performance capture system according to claim 14, wherein the volumetric capture system includes a single active stereo camera directed to a single view and, during training, includes a single ground truth camera directed to the single view.
-
The performance capture system according to claim 14, wherein the volumetric capture system includes a plurality of active stereo cameras directed to multiple views and, during training, includes a plurality of ground truth cameras directed to the multiple views.
-
The performance capture system according to claim 14, wherein the performance capture system includes a stereo display configured to display one of the at least one enhanced image as a left eye view and one of the at least one enhanced image as a right eye view.
-
The performance capture system according to claim 18, wherein the performance capture system is a virtual reality (VR) headset.
-
The performance capture system according to claim 18, wherein the stereo display is included in an augmented reality (AR) headset.
-
The performance capture system according to claim 18, wherein the stereo display is a head-tracked auto-stereo display.
-
A non-transitory computer readable storage medium containing program code that when executed by a processor of a computing device causes the computing device to perform a method for re-rendering an image rendered using a volumetric reconstruction to improve its quality, the method including: receiving the image rendered using the volumetric reconstruction, the image having imperfections; defining a synthesizing function and a segmentation mask to generate an enhanced image from the image, the enhanced image having fewer imperfections than the image; and computing the synthesizing function and the segmentation mask using a neural network trained based on minimizing a loss function between a predicted image generated by the neural network and a ground truth image captured by a ground truth camera during training.
-
The non-transitory computer readable storage medium containing program code that when executed by a processor of a computing device causes the computing device to perform a method for re-rendering an image rendered using a volumetric reconstruction to improve its quality according to claim 22, wherein the loss function includes a reconstruction loss, a mask loss, a head loss, a temporal loss, and a stereo loss.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application Ser. No. 62/774,662, filed on Dec. 3, 2018, entitled “ENHANCING PERFORMANCE CAPTURE WITH REAL-TIME NEURAL RENDERING”, the disclosure of which is incorporated by reference herein in its entirety.
FIELD
[0002] Embodiments relate to capturing and rendering three-dimensional (3D) video. Embodiments further relate to training a neural network model for use in re-rendering an image for display.
BACKGROUND
[0003] The rise of augmented reality (AR) and virtual reality (VR) has created a demand for high quality display of 3D content (e.g., humans, characters, actors, animals, and/or the like) using performance capture rigs (e.g., camera and video rigs). Recently, real-time performance capture systems have enabled new use cases for telepresence, augmented videos and live performance broadcasting (in addition to offline multi-view performance capture systems). Existing performance capture systems can suffer from one or more technical problems, including some combination of distorted geometry, poor texturing, and inaccurate lighting, making it difficult to reach the level of quality required in AR and VR applications. These technical problems can result in a less than desirable final user experience.
SUMMARY
[0004] In at least one aspect, the present disclosure generally describes a method for re-rendering an image rendered using a volumetric reconstruction to improve its quality. The method includes receiving the image rendered using the volumetric reconstruction, the image having imperfections. The method further includes defining a synthesizing function and a segmentation mask to generate an enhanced image from the image, the enhanced image having fewer imperfections than the image. The method further includes computing the synthesizing function and the segmentation mask using a neural network trained based on minimizing a loss function between a predicted image generated by the neural network and a ground truth image captured by a ground truth camera during training. In this context, rendering means generating a photorealistic or non-photorealistic image from a 3D model.
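By way of non-limiting illustration, the relationship between the synthesizing function, the segmentation mask, and the enhanced image can be sketched as follows (a minimal NumPy example; the placeholder functions stand in for the trained neural network and are not the disclosed implementation):

```python
import numpy as np

def synthesize(rendered):
    # Placeholder for the trained synthesizing function; the disclosed
    # system would use a neural network that removes artifacts.
    return np.clip(rendered, 0.0, 1.0)

def predict_mask(rendered):
    # Placeholder segmentation mask: any non-black pixel is foreground.
    return (rendered.sum(axis=-1, keepdims=True) > 0).astype(np.float32)

def enhance(rendered):
    # Enhanced image: synthesized colors with background pixels removed
    # by the predicted segmentation mask.
    return synthesize(rendered) * predict_mask(rendered)

rendered = np.zeros((4, 4, 3), dtype=np.float32)
rendered[1:3, 1:3] = 1.5  # over-bright foreground region (an "imperfection")
enhanced = enhance(rendered)
```

In the full method, both `synthesize` and `predict_mask` are outputs of the same trained network rather than the hand-written placeholders above.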
[0005] In one possible implementation, the method may be performed by a computing device based on the execution of program code by a processor, the program code contained on a non-transitory computer readable storage medium.
[0006] In another possible implementation of the method, the loss function includes one or more of a reconstruction loss, a mask loss, a head loss, a temporal loss, and a stereo loss.
[0007] In another possible implementation of the method, the imperfections include artifacts in the image such as holes, noise, poor lighting, color artifacts, and/or low resolution.
[0008] In another possible implementation of the method, the method further includes capturing a 3D model using a volumetric capture system and rendering the image using the volumetric reconstruction prior to receiving the image.
[0009] In another possible implementation of the method, the ground truth camera and the volumetric capture system are both directed to a view during training, the ground truth camera producing higher quality images than the volumetric capture system.
[0010] In another possible implementation of the method, the loss function includes a reconstruction loss based on a reconstruction difference between a segmented ground truth image mapped to activations of layers in a neural network and a segmented predicted image mapped to activations of layers in a neural network, the segmented ground truth image segmented by a ground truth segmentation mask to remove background pixels and the segmented predicted image segmented by a predicted segmentation mask to remove background pixels. Further, the reconstruction difference may be saliency re-weighted to down-weight reconstruction differences for pixels above a maximum error or below a minimum error.
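A minimal sketch of such a saliency re-weighted reconstruction loss follows (NumPy; raw pixel differences stand in for the layer activations of a neural network, and the thresholds and down-weighting factor are illustrative assumptions rather than disclosed values):

```python
import numpy as np

def saliency_reweight(err, min_err=0.01, max_err=0.5):
    # Down-weight per-pixel errors above the maximum error or below the
    # minimum error; thresholds and the 0.1 weight are illustrative.
    weight = np.ones_like(err)
    weight[(err < min_err) | (err > max_err)] = 0.1
    return weight * err

def reconstruction_loss(pred, gt, pred_mask, gt_mask):
    # Segment both images to remove background pixels, then compare.
    # A full implementation would compare activations of layers in a
    # neural network; raw pixels are used here as a stand-in.
    diff = np.abs(pred * pred_mask - gt * gt_mask)
    return saliency_reweight(diff).mean()
```

The head reconstruction loss described below takes the same form, applied to images cropped to the head region identified in the segmentation masks.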
[0011] In another possible implementation of the method, the loss function includes a head reconstruction loss based on a reconstruction difference between a cropped ground truth image mapped to activations of layers in a neural network and a cropped predicted image mapped to activations of layers in a neural network, the cropped ground truth image cropped to a head of a person identified in a ground truth segmentation mask and the cropped predicted image cropped to the head of the person identified in a predicted segmentation mask. Further, the reconstruction difference may be saliency re-weighted to down-weight reconstruction differences for pixels above a maximum error or below a minimum error.
[0012] In another possible implementation of the method, the loss function includes a mask loss based on a mask difference between a ground truth segmentation mask and a predicted segmentation mask. Further, the mask difference may be saliency re-weighted to down-weight reconstruction differences for pixels above a maximum error or below a minimum error.
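A corresponding sketch of the mask loss (NumPy; an L1 difference between masks with the same style of saliency re-weighting, where the thresholds and weight are illustrative assumptions):

```python
import numpy as np

def mask_loss(pred_mask, gt_mask, min_err=0.01, max_err=0.9):
    # L1 difference between the predicted and ground truth segmentation
    # masks, saliency re-weighted with illustrative thresholds.
    err = np.abs(pred_mask - gt_mask)
    weight = np.ones_like(err)
    weight[(err < min_err) | (err > max_err)] = 0.1
    return (weight * err).mean()
```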
[0013] In another possible implementation of the method, the predicted image is one of a series of consecutive frames of a predicted sequence and the ground truth image is one of a series of consecutive frames of a ground truth sequence. Further, the loss function includes a temporal loss based on a gradient difference between a temporal gradient of the predicted sequence and a temporal gradient of the ground truth sequence.
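The temporal loss of this implementation can be sketched as follows (NumPy; approximating the temporal gradient by frame-to-frame differences is an assumption of this sketch):

```python
import numpy as np

def temporal_loss(pred_seq, gt_seq):
    # Temporal gradients: differences between consecutive frames
    # along the time axis (axis 0).
    pred_grad = np.diff(pred_seq, axis=0)
    gt_grad = np.diff(gt_seq, axis=0)
    # Matching the two gradients discourages flicker in the predicted
    # sequence without forcing the frames themselves to match.
    return np.abs(pred_grad - gt_grad).mean()
```

Note that a constant brightness offset between the sequences yields zero temporal loss; only frame-to-frame changes are penalized.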
[0014] In another possible implementation of the method, the predicted image is one of a predicted stereo pair of images and the loss function includes a stereo loss based on a stereo difference between the predicted stereo pair of images.
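One way such a stereo loss could be computed is sketched below (NumPy). The right view is warped toward the left view using an integer per-pixel horizontal disparity; the warping scheme is an illustrative assumption, since the implementation above only requires a difference between the predicted stereo pair:

```python
import numpy as np

def stereo_loss(pred_left, pred_right, disparity):
    # Warp the right view toward the left view using an integer
    # per-pixel horizontal disparity, then penalize the remaining
    # difference between the two predicted views.
    h, w = pred_left.shape[:2]
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    src_cols = np.clip(cols + disparity.astype(int), 0, w - 1)
    warped = pred_right[rows, src_cols]
    return np.abs(pred_left - warped).mean()
```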
[0015] In another possible implementation of the method, the neural network is based on a fully convolutional model.
[0016] In another possible implementation of the method, computing the synthesizing function and segmentation mask using a neural network includes computing the synthesizing function and segmentation mask for a left eye viewpoint, and computing the synthesizing function and segmentation mask for a right eye viewpoint.
[0017] In another possible implementation of the method, computing the synthesizing function and segmentation mask using a neural network is performed in real time.
[0018] In at least one other aspect, the present disclosure generally describes a performance capture system. The performance capture system includes a volumetric capture system that is configured to render at least one image reconstructed from at least one viewpoint of a captured 3D model, the at least one image including imperfections. The performance capture system further includes a rendering system that is configured to receive the at least one image from the volumetric capture system and to generate, e.g., in real time, at least one enhanced image in which the imperfections of the at least one image are reduced. The rendering system includes a neural network that is configured to generate the at least one enhanced image by training prior to use. The training includes minimizing a loss function between predicted images generated by the neural network during training and corresponding ground truth images captured by at least one ground truth camera coordinated with the volumetric capture system during training.
[0019] In one possible implementation of the performance capture system, the at least one ground truth camera is included in the performance capture system during training and otherwise not included in the performance capture system.
[0020] In another possible implementation of the performance capture system, the volumetric capture system includes a plurality of active stereo cameras directed to multiple views and, during training, includes a plurality of ground truth cameras directed to the multiple views.
[0021] In another possible implementation of the performance capture system, a stereo display is included and configured to display one of the at least one enhanced image as a left eye view and one of the at least one enhanced image as a right eye view. For example, the performance capture system may be a virtual reality (VR) headset.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] Example embodiments will become more fully understood from the detailed description given herein below and the accompanying drawings, wherein like elements are represented by like reference numerals, which are given by way of illustration only and thus are not limiting of the example embodiments.
[0023] FIG. 1 illustrates a block diagram of a performance capture system according to at least one example embodiment.
[0024] FIG. 2 illustrates a block diagram of a rendering system according to at least one example embodiment.
[0025] FIGS. 3A and 3B illustrate a method for rendering a frame of 3D video according to at least one example embodiment.
[0026] FIG. 4 illustrates a block diagram of a learning module system according to at least one example embodiment.
[0027] FIG. 5 illustrates a block diagram of a neural re-rendering module according to at least one example embodiment.
[0028] FIG. 6A illustrates layers in a convolutional neural network with no sparsity constraints.
[0029] FIG. 6B illustrates layers in a convolutional neural network with sparsity constraints.
[0030] FIGS. 7A and 7B pictorially illustrate a deep learning technique that generates visually enhanced re-rendered images from low quality images according to at least one example embodiment.
[0031] FIG. 8 pictorially illustrates examples of low-quality images.
[0032] FIG. 9 pictorially illustrates example training data for a convolutional neural network model according to at least one example embodiment.
[0033] FIG. 10A pictorially illustrates reconstruction loss according to at least one example embodiment.
[0034] FIG. 10B pictorially illustrates mask loss according to at least one example embodiment.
[0035] FIG. 10C pictorially illustrates head loss according to at least one example embodiment.
[0036] FIG. 10D pictorially illustrates stereo loss according to at least one example embodiment.
[0037] FIG. 10E pictorially illustrates temporal loss according to at least one example embodiment.
[0038] FIG. 10F pictorially illustrates saliency loss according to at least one example embodiment.
[0039] FIG. 11 pictorially illustrates a full body capture system according to at least one example embodiment.
[0040] FIG. 12 pictorially illustrates images enhanced using the disclosed technique on an un-trained sequence of images of a known (or previously trained) participant according to at least one example embodiment.
[0041] FIG. 13 pictorially illustrates viewpoint robustness of images enhanced using the disclosed technique according to at least one example embodiment.
[0042] FIG. 14 pictorially illustrates using the disclosed technique together with a super-resolution technique according to at least one example embodiment.
[0043] FIG. 15 pictorially illustrates images enhanced using the disclosed technique on an un-trained, unknown participant according to at least one example embodiment.
[0044] FIG. 16 pictorially illustrates images enhanced using the disclosed technique where the participant varies a characteristic according to at least one example embodiment.
[0045] FIG. 17 pictorially illustrates an effect of using a predicted foreground mask with the disclosed technique according to at least one example embodiment.
[0046] FIG. 18 pictorially illustrates using head loss in the disclosed technique according to at least one example embodiment.
[0047] FIG. 19 pictorially illustrates using temporal loss and stereo loss in the disclosed technique according to at least one example embodiment.
[0048] FIG. 20 pictorially illustrates using a saliency re-weighting scheme in the disclosed technique according to at least one example embodiment.
[0049] FIG. 21 pictorially illustrates using various model complexities according to at least one example embodiment.
[0050] FIG. 22 pictorially illustrates a demonstration showing neural re-rendering according to at least one example embodiment.
[0051] FIG. 23 pictorially illustrates a running time breakdown of a system according to at least one example embodiment.
[0052] FIG. 24 shows an example of a computer device and a mobile computer device according to at least one example embodiment.
[0053] FIG. 25 illustrates a block diagram of an example output image providing content in a stereoscopic display, according to at least one example embodiment.
[0054] FIG. 26 illustrates a block diagram of an example of a 3D content system according to at least one example embodiment.
[0055] It should be noted that these Figures are intended to illustrate the general characteristics of methods, structure and/or materials utilized in certain example embodiments and to supplement the written description provided below. These drawings are not, however, to scale and may not precisely reflect the structural or performance characteristics of any given embodiment and should not be interpreted as defining or limiting the range of values or properties encompassed by example embodiments. For example, the relative thicknesses and positioning of layers, regions and/or structural elements may be reduced or exaggerated for clarity. The use of similar or identical reference numbers in the various drawings is intended to indicate the presence of a similar or identical element or feature.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0056] While example embodiments may include various modifications and alternative forms, embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit example embodiments to the particular forms disclosed, but on the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the claims. Like numbers refer to like elements throughout the description of the figures.
[0057] A performance capture rig (i.e., performance capture system) may be used to capture a subject (e.g., person) and their movements in three dimensions (3D). The performance capture rig can include a volumetric capture system configured to capture data necessary to generate a 3D model and (in some cases) to render a 3D volumetric reconstruction (i.e., an image) using volumetric reconstruction of a view. A variety of volumetric capture systems can be implemented, including (but not limited to) active stereo cameras, time of flight (TOF) systems, lidar systems, passive stereo cameras and the like. Further, in some implementations a single volumetric capture system is utilized, while in others a plurality of volumetric capture systems may be used (e.g., in a coordinated capture).
[0058] The volumetric reconstruction may render a video stream of images (e.g., in real time) and may render separate images corresponding to a left-eye viewpoint and a right-eye viewpoint. The left-eye and right-eye viewpoint 2D images may be displayed on a stereo display. The stereo display may be a fixed viewpoint stereo display (e.g., 3D movie) or a head-tracked stereo display. A variety of stereo displays may be implemented, including (but not limited to) augmented reality (AR) glasses displays, virtual reality (VR) headset displays, and auto-stereo displays (e.g., head-tracked auto-stereo displays).
[0059] Imperfections (i.e., artifacts) may exist in the rendered 2D image(s) and/or in their presentation on the stereo display. The artifacts may include graphic artifacts such as intensity noise, low resolution textures, and off colors. The artifacts may also include time artifacts such as flicker in a video stream. The artifacts may further include stereo artifacts such as inconsistent left/right views. The artifacts may be due to limitations or problems associated with the performance capture rig. For example, due to complexity or cost constraints the performance capture rig may be limited in the data collected. Additionally, the artifacts may be due to limitations associated with transferring data over a network (e.g., bandwidth). The disclosure describes systems and methods to reduce or eliminate the artifacts regardless of their source. Accordingly, the disclosed systems and methods are not limited to any particular performance capture system or stereo display.
[0060] In one possible implementation, technical problems associated with existing performance capture systems can result in the 3D volumetric reconstructed images containing holes, noise, low resolution textures, and color artifacts. These technical problems can result in a less than desirable user experience in VR and AR applications.
[0061] Technical solutions to the above-mentioned technical problems implement machine learning to enhance volumetric videos in real time. Geometric non-rigid reconstruction pipelines can be combined with deep learning to produce higher quality images. The disclosed system can focus on visually salient regions (e.g., human faces), discarding non-relevant information, such as the background. The described solution can produce temporally stable renderings for implementation in VR and AR applications, where left and right views should be consistent for an optimal user experience.
[0062] The technical solutions can include real-time performance capture (i.e., image and/or video capture) to obtain approximate geometry and texture in real time. The final 2D rendered output of such systems can be low quality due to geometric artifacts, poor texturing, and inaccurate lighting. Therefore, example implementations can use deep learning to enhance the final rendering to achieve higher quality results in real-time. For example, a deep learning architecture that takes, as input, a deferred shading deep buffer and/or the final 2D rendered image from a single or multiview performance capture system, and learns to enhance such imagery in real-time, producing a final high-quality re-rendering (see FIGS. 7A and 7B) can be used. This approach can be referred to as neural re-rendering.
[0063] Described herein is a neural re-rendering technique. Technical advantages of using the neural re-rendering technique include learning to enhance low-quality output from performance capture systems in real-time, where images contain holes, noise, low resolution textures, and color artifacts. Some examples of low-quality images are shown in FIG. 8. In addition, a binary segmentation mask can be predicted that isolates the user from the rest of the background. Technical advantages of using the neural re-rendering technique also include a method for reducing the overall bandwidth and computation required of such a deep architecture, by forcing the network to learn the mapping from low-resolution input images to high-resolution output renderings in a learning phase and then using low-resolution images (e.g., enhanced) from the live performance capture system.
[0064] Technical advantages of using the neural re-rendering technique also include a specialized loss function that can use semantic information to produce high quality results on faces. To reduce the effect of outliers, a saliency re-weighting scheme that focuses the loss on the most relevant regions can be used. The loss function is designed for VR and AR headsets, where the goal is to predict two consistent views of the same object. Technical advantages of using the neural re-rendering technique also include temporally stable re-rendering by enforcing consistency between consecutive reconstructed frames.
[0065] FIG. 1 illustrates a block diagram of a performance capture system (i.e., capture system) according to at least one example embodiment. As shown in FIG. 1, the capture system 100 includes a 3D camera rig with witness cameras 110, an encoder 120, a decoder 130, a rendering module 140 and a learning module 150. The camera rig with witness cameras 110 includes a first set of cameras used to capture 3D video, as video data 5, and at least one witness camera used to capture high quality (e.g., as compared to the first set of cameras) images, as ground truth image data 30, from at least one viewpoint. A ground truth image can be an image including more detail (e.g., higher definition, higher resolution, higher number of pixels, addition of more/better depth information, and/or the like) and/or an image including post-capture processing to improve image quality as compared to a frame or image associated with the 3D video. Ground truth image data can include (a set of) the ground truth image, a label for the image, image segmentation information, image and/or segment classification information, location information and/or the like. The ground truth image data 30 is used by the learning module 150 to train a neural network model(s). Each image of the ground truth image data 30 can have a corresponding frame of the video data 5.
[0066] The encoder 120 can be configured to compress the 3D video captured by the first set of cameras. The encoder 120 can be configured to receive video data 5 and generate compressed video data 10 using a standard compression technique. The decoder 130 can be configured to receive compressed video data 10 and generate reconstructed video data 15 using the inverse of the standard compression technique. The dashed/dotted line shown in FIG. 1 indicates that, in an alternate implementation, the encoder 120 and the decoder 130 can be bypassed and the video data 5 can be input directly into the rendering module 140. This can reduce the processing resources used by the capture system 100. However, the learning module 150 may not include errors introduced by compression and decompression in a training process.
[0067] The rendering module 140 is configured to generate a left eye view 20 and a right eye view 25 based on the reconstructed video data 15 (or the video data 5). The left eye view 20 can be an image for display on a left eye display of a head-mounted display (HMD). The right eye view 25 can be an image for display on a right eye display of a HMD. Rendering can include processing a scene (e.g., a 3D model) associated with the reconstructed video data 15 (or the video data 5) to generate a digital image. The 3D model can include, for example, shading information, lighting information, texture information, geometric information and the like. Rendering can include implementing a rendering algorithm by a graphical processing unit (GPU). Therefore, rendering can include passing the 3D model to the GPU.
[0068] The learning module 150 can be configured to train a neural network or model to generate a high-quality image based on a low-quality image. In an example implementation, an image is iteratively predicted based on the left eye view 20 (or the right eye view 25) using the neural network or model. Then each iteration of the predicted image is compared to a corresponding image selected from the ground truth image data 30 using a loss function until the loss function is minimized (or below a threshold value). The learning module 150 is described in more detail below.
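A minimal sketch of the training loop described above follows (NumPy; a single bias parameter stands in for the neural network's weights, and the learning rate, stopping threshold, and toy data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.random((8, 8))      # toy ground truth image
rendered = ground_truth + 0.15         # rendered view with a uniform brightness artifact

w = np.zeros((1,))                     # toy "model": predicted = rendered + w
lr, threshold = 0.5, 1e-4
for step in range(1000):
    predicted = rendered + w           # iteratively predict an image
    loss = ((predicted - ground_truth) ** 2).mean()
    if loss < threshold:               # stop once the loss is minimized
        break
    grad = 2.0 * (predicted - ground_truth).mean()
    w -= lr * grad                     # gradient step on the model parameter
```

In the disclosed system the loss compared against the ground truth image would combine the reconstruction, mask, head, temporal, and stereo terms rather than the simple squared error used here.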
[0069] FIG. 2 illustrates a block diagram of a rendering system according to at least one example embodiment. As shown in FIG. 2, the rendering system 200 includes the decoder 130, the rendering module 140 and a neural re-rendering module 210. As shown in FIG. 2, compressed video data 10 is decompressed by the decoder 130 to generate the reconstructed video data 15. The rendering module 140 then generates the left eye view 20 and the right eye view 25 based on the reconstructed video data 15.
[0070] The neural re-rendering module 210 is configured to generate a re-rendered left eye view 35 based on the left eye view 20 and to generate a re-rendered right eye view 40 based on the right eye view 25. The neural re-rendering module 210 is configured to use the neural network or model trained by the learning module 150 to generate the re-rendered left eye view 35 as a higher quality representation of the left eye view 20. The neural re-rendering module 210 is configured to use the neural network or model trained by the learning module 150 to generate the re-rendered right eye view 40 as a higher quality representation of the right eye view 25. The neural re-rendering module 210 is described in more detail below.
[0071] The capture system 100 shown in FIG. 1 can be a first phase (or phase 1) and the rendering system 200 shown in FIG. 2 can be a second phase (or phase 2) of an enhanced video rendering technique. FIGS. 3A (phase 1) and 3B (phase 2) illustrate a method for rendering a frame of 3D video according to at least one example embodiment. The steps described with regard to FIGS. 3A and 3B may be performed due to the execution of software code stored in a memory associated with an apparatus and/or service (e.g., a cloud computing service) and executed by at least one processor associated with the apparatus and/or service. However, alternative embodiments are contemplated such as a system embodied as a special purpose processor. Although the steps described below are described as being executed by a processor, the steps are not necessarily executed by a same processor. In other words, at least one processor may execute the steps described below with regard to FIGS. 3A and 3B.
[0072] As shown in FIG. 3A, in step S305 a plurality of frames of a first three-dimensional (3D) video are captured using a camera rig including at least one witness camera. For example, the camera rig (e.g., 3D camera rig with witness cameras 110) can include a first set of cameras used to capture 3D video (e.g., as video data 5) and at least one witness camera used to capture high quality (e.g., as compared to the first set of cameras) images (e.g., ground truth image data 30). The plurality of frames of the first 3D video can be video data captured by the first set of cameras.
[0073] In step S310 at least one two-dimensional (2D) ground truth image is captured for each of the plurality of frames of the first 3D video using the at least one witness camera. For example, the at least one 2D ground truth image can be a high-quality image captured by the at least one witness camera. The at least one 2D ground truth image can be captured at substantially the same moment in time as a corresponding one of the plurality of frames of the first 3D video.
[0074] In step S315 at least one of the plurality of frames of the first 3D video is compressed. For example, the at least one of the plurality of frames of the first 3D video is compressed using a standard compression technique. In step S320 the at least one frame of the plurality of frames of the first 3D video is decompressed. For example, the at least one of the plurality of frames of the first 3D video is decompressed using a standard decompression technique corresponding to the standard compression technique.
[0075] In step S325 at least one first 2D left eye view image is rendered based on the decompressed frame and at least one first 2D right eye view image is rendered based on the decompressed frame. For example, a 3D model of a scene corresponding to a frame of the decompressed first 3D video (e.g., reconstructed video data 15) is communicated to a GPU. The GPU can generate digital images (e.g., left eye view 20 and right eye view 25) based on the 3D model of a scene and return the digital images as the first 2D left eye view and the first 2D right eye view.
[0076] In step S330 a model for a left eye view of a head-mounted display (HMD) is trained based on the rendered first 2D left eye view image and the corresponding 2D ground truth image, and a model for a right eye view of the HMD is trained based on the rendered first 2D right eye view image and the corresponding 2D ground truth image. For example, an image is iteratively predicted based on the first 2D left eye view using a neural network or model, and each iteration of the predicted image is compared to the corresponding 2D ground truth image using a loss function until the loss function is minimized (or below a threshold value). Likewise, an image is iteratively predicted based on the first 2D right eye view using a neural network or model, and each iteration of the predicted image is compared to the corresponding 2D ground truth image using a loss function until the loss function is minimized (or below a threshold value).
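The iterative predict-compare-update loop described above can be sketched as follows. This is a toy illustration, not the patent's implementation: a single gain/bias pair stands in for the neural network or model, the loss is a plain MSE, and the function name, learning rate, and threshold are illustrative assumptions.

```python
import numpy as np

def train_eye_view_model(rendered, ground_truth, lr=0.1,
                         loss_threshold=1e-4, max_iters=1000):
    """Toy sketch of the per-eye training loop: iteratively predict an
    image from the rendered view and compare it to the ground truth with
    a loss function until the loss falls below a threshold value."""
    gain, bias = 1.0, 0.0                      # stand-in model parameters
    loss = np.inf
    for _ in range(max_iters):
        pred = gain * rendered + bias          # predicted image
        residual = pred - ground_truth
        loss = np.mean(residual ** 2)          # MSE loss function
        if loss < loss_threshold:              # "below a threshold value"
            break
        # gradient descent step on the two stand-in parameters
        gain -= lr * np.mean(2 * residual * rendered)
        bias -= lr * np.mean(2 * residual)
    return gain, bias, loss
```

The same loop would be run once for the left eye view and once for the right eye view.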
[0077] As shown in FIG. 3B, in step S335 compressed video data corresponding to a second 3D video is received. For example, video data corresponding to the second 3D video is captured using a standard 3D camera rig, compressed, and communicated by a remote device (e.g., a computing device at a remote location). This compressed second 3D video is received by a local device. The second 3D video can be different than the first 3D video.
[0078] In step S340 the video data corresponding to the second 3D video is decompressed. For example, the second 3D video (e.g., compressed video data 10) is decompressed using a standard decompression technique corresponding to the standard compression technique used by the remote device.
[0079] In step S345 a frame of the second 3D video is selected. For example, a next frame of the decompressed second 3D video can be selected for display on a HMD playing back the second 3D video. Alternatively, or in addition to, playing back the second 3D video can utilize a buffer or queue of video frames. Therefore, selecting a frame of the second 3D video can include selecting a frame from the queue based on a buffering or queueing technique (e.g., FIFO, LIFO, and the like).
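The queue-based frame selection can be illustrated with a minimal FIFO sketch (the frame identifiers and queue contents are purely illustrative):

```python
from collections import deque

# Minimal sketch of frame buffering for playback: decoded frames of the
# second 3D video enter a queue and are selected in arrival order (FIFO).
frame_queue = deque()
for frame_id in ["f0", "f1", "f2"]:   # decoded frames arriving
    frame_queue.append(frame_id)       # enqueue

selected = frame_queue.popleft()       # FIFO: oldest frame selected first
```

A LIFO policy would instead use `frame_queue.pop()` to take the most recently buffered frame.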
[0080] In step S350 a second 2D left eye view image is rendered based on the selected frame and a second 2D right eye view image is rendered based on the selected frame. For example, a 3D model of a scene corresponding to a frame of the decompressed second 3D video (e.g., reconstructed video data 15) is communicated to a GPU. The GPU can generate digital images (e.g., left eye view 20 and right eye view 25) based on the 3D model of a scene and return the digital images as the second 2D left eye view and the second 2D right eye view.
[0081] In step S355 the second 2D left eye view image is re-rendered using a convolutional neural network architecture and the trained model for the left eye view of the HMD, and the second 2D right eye view image is re-rendered using the convolutional neural network architecture and the trained model for the right eye view of the HMD. For example, the neural network or model trained in phase 1 can be used to generate the re-rendered second 2D left eye view (e.g., re-rendered left eye view 35) as a higher quality representation of the second 2D left eye view (e.g., left eye view 20). The neural network or model trained in phase 1 can be used to generate the re-rendered second 2D right eye view (e.g., re-rendered right eye view 40) as a higher quality representation of the second 2D right eye view (e.g., right eye view 25). Then, in step S360, the re-rendered second 2D left eye view image and the re-rendered second 2D right eye view image are displayed on at least one display of the HMD.
[0082] FIG. 4 illustrates a block diagram of a learning module system according to at least one example embodiment. The learning module 150 may be, or include, at least one computing device and can represent virtually any computing device configured to perform the methods described herein. As such, the learning module 150 can include various components which may be utilized to implement the techniques described herein, or different or future versions thereof. By way of example, the learning module 150 is illustrated as including at least one processor 405, as well as at least one memory 410 (e.g., a non-transitory computer readable medium).
[0083] As shown in FIG. 4, the learning module 150 includes the at least one processor 405 and the at least one memory 410. The at least one processor 405 and the at least one memory 410 are communicatively coupled via bus 415. The at least one processor 405 may be utilized to execute instructions stored on the at least one memory 410, so as to thereby implement the various features and functions described herein, or additional or alternative features and functions. The at least one processor 405 and the at least one memory 410 may be utilized for various other purposes. In particular, the at least one memory 410 can represent an example of various types of memory and related hardware and software which might be used to implement any one of the modules described herein.
[0084] The at least one memory 410 may be configured to store data and/or information associated with the learning module 150. For example, the at least one memory 410 may be configured to store model(s) 420, a plurality of coefficients 425 and a plurality of loss functions 430. The at least one memory 410 further includes a metrics module 435 and an enumeration module 450. The metrics module 435 includes a plurality of error definitions 440 and an error calculator 445.
[0085] In an example implementation, the at least one memory 410 may be configured to store code segments that when executed by the at least one processor 405 cause the at least one processor 405 to select and communicate one or more of the plurality of coefficients 425. Further, the at least one memory 410 may be configured to store code segments that when executed by the at least one processor 405 cause the at least one processor 405 to receive information used by the learning module 150 to generate new coefficients 425 and/or update existing coefficients 425. The at least one memory 410 may be configured to store code segments that when executed by the at least one processor 405 cause the at least one processor 405 to receive information used by the learning module 150 to generate a new model 420 and/or update an existing model 420.
[0086] The model(s) 420 represent at least one neural network model. A neural network model can define the operations of a neural network, the flow of the operations and/or the interconnections between the operations. For example, the operations can include normalization, padding, convolutions, rounding and/or the like. The model can also define an operation. For example, a convolution can be defined by a number of filters C, a spatial extent (or filter size) K.times.K, and a stride S. A convolution does not have to be square. For example, the spatial extent can be K.times.L. In a convolutional neural network context (see FIGS. 6A and 6B) each neuron in the convolutional neural network can represent a filter. Therefore, a convolutional neural network with 8 neurons per layer can have 8 filters using one (1) layer, 16 filters using two (2) layers, 24 filters using three (3) layers … 64 filters using 8 layers … 128 filters using 16 layers and so forth. A layer can have any number of neurons in the convolutional neural network.
[0087] A convolutional neural network can have layers with differing numbers of neurons. The K.times.K spatial extent (or filter size) can include K columns and K (or L) rows. The spatial extent can be 2.times.2, 3.times.3, 4.times.4, 5.times.5, 2.times.4 (i.e., K.times.L) and so forth. Convolution includes centering the K.times.K spatial extent on a pixel, convolving all of the pixels in the spatial extent, and generating a new value for the pixel based on (e.g., the sum of) the convolution of all of the pixels in the spatial extent. The spatial extent is then moved to a new pixel based on the stride and the convolution is repeated for the new pixel. The stride can be, for example, one (1) or two (2), where a stride of one moves to the next pixel and a stride of two skips a pixel.
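The strided convolution described above can be sketched for a single-channel toy image, assuming a square K.times.K spatial extent and no padding (the function name is illustrative):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Sketch of a single strided convolution: a K x K spatial extent
    slides over the image by `stride` pixels, and each placement produces
    one output value as the sum over the spatial extent."""
    K = kernel.shape[0]
    H, W = image.shape
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + K,
                          j * stride:j * stride + K]
            out[i, j] = np.sum(patch * kernel)   # sum over the spatial extent
    return out
```

With a 4.times.4 input, a 2.times.2 kernel and stride 2, the output is 2.times.2: the stride of two halves the spatial size, as exploited by the downsampling blocks described later.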
[0088] The coefficients 425 represent variable values that can be used in one or more of the model(s) 420 and/or the loss function(s) 430 for using and/or training a neural network. A unique combination of a model of the model(s) 420, coefficients 425 and loss function(s) 430 can define a neural network and how to train that unique neural network. For example, a model of the model(s) 420 can be defined to include two convolution operations and an interconnection between the two. The coefficients 425 can include a corresponding entry defining the spatial extent (e.g., 2.times.4, 2.times.2, and/or the like) and a stride (e.g., 1, 2, and/or the like) for each convolution. In addition, the loss function(s) 430 can include a corresponding entry defining a loss function to train the model and a threshold value (e.g., min, max, min change, max change, and/or the like) for the loss.
[0089] The metrics module 435 includes the plurality of error definitions 440 and the error calculator 445. Error definitions can include, for example, functions or algorithms used to calculate an error and a threshold value (e.g., min, max, min change, max change, and/or the like) for an error. The error calculator 445 can be configured to calculate an error between two images based on a pixel-by-pixel difference between the two images using one of the error definitions 440. Types of errors can include photometric error, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), multiscale SSIM (MS-SSIM), mean squared error, perceptual error, and/or the like. The enumeration module 450 can be configured to iterate one or more of the coefficients 425.
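Two of the listed pixel-by-pixel error types, mean squared error and PSNR, can be sketched as follows (the function names and the 255 peak value are illustrative assumptions):

```python
import numpy as np

def mse(a, b):
    """Mean squared error from the pixel-by-pixel difference."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio, derived from the MSE and the peak
    pixel value of the color system (255 assumed here for 8-bit RGB)."""
    err = mse(a, b)
    if err == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

An error definition would pair one such function with a threshold value (e.g., a minimum acceptable PSNR).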
[0090] In an example implementation, one of the coefficients is changed for a model of the model(s) 420 by the enumeration module 450 while holding the remainder of the coefficients constant. During each iteration (e.g., an iteration to train the left eye view), the processor 405 predicts an image using the model with the view (e.g., left eye view 20) as input and calculates the loss (possibly using the ground truth image data 30) until the loss function is minimized and/or a change in loss is minimized. Then the error calculator 445 calculates an error between the predicted image and the corresponding image of the ground truth image data 30. If the error is unacceptable (e.g., greater than a threshold value or greater than a threshold change compared to a previous iteration) another of the coefficients is changed by the enumeration module 450. In an example implementation, two or more loss functions can be optimized. In this implementation, the enumeration module 450 can be configured to select between the two or more loss functions.
[0091] According to an example implementation, given an image I (e.g., left eye view 20 and right eye view 25) rendered from a volumetric reconstruction (e.g., reconstructed video data 15), an enhanced version of I, denoted as I_e, can be generated or computed. The transformation function between I and I_e should target VR and AR applications. Therefore, the following principles should be considered: a) the user typically focuses more on salient features, like faces, and artifacts in those areas should be highly penalized, b) when viewed in stereo, the outputs of the network have to be consistent between left and right pairs to prevent user discomfort, and c) in VR applications, the renderings are composited into the virtual world, requiring accurate segmentation masks. Further, enhanced images should be temporally consistent. A synthesis function F(I) can be defined that generates a predicted image I_pred and a segmentation mask M_pred indicating foreground pixels, such that I_e = I_pred ⊙ M_pred, where ⊙ is the element-wise product, so that background pixels in I_e are set to zero.
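The element-wise product I_e = I_pred ⊙ M_pred can be illustrated with a toy 2.times.2 example (values are arbitrary):

```python
import numpy as np

# Sketch of I_e = I_pred ⊙ M_pred: the element-wise product zeroes out
# background pixels so the subject can be composited into a virtual world.
I_pred = np.array([[10.0, 20.0],
                   [30.0, 40.0]])      # predicted image (toy 2x2)
M_pred = np.array([[1.0, 0.0],
                   [1.0, 1.0]])        # 1 = foreground, 0 = background
I_e = I_pred * M_pred                  # background pixels set to zero
```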
[0092] At training time, a body part semantic segmentation algorithm can be used to generate I_seg, the semantic segmentation of the ground-truth image I_gt captured by the witness camera, as illustrated in FIG. 9 (Segmentation). To obtain improved segmentation boundaries for the subject, the predictions of this algorithm can be refined using a pairwise CRF. This semantic segmentation can be useful for AR/VR rendering.
[0093] Training a neural network that computes F(I) can include optimizing the loss function:
L = w_1 L_rec + w_2 L_mask + w_3 L_head + w_4 L_temporal + w_5 L_stereo (1)
where the weights w_i are empirically chosen such that all the losses provide a similar contribution.
[0094] Instead of using standard ℓ_2 or ℓ_1 losses in the image domain, the loss can be computed in the feature space of a 16-layer network (e.g., VGG16) trained on an image database (e.g., ImageNet). The loss can be computed as the ℓ_1 distance of the activations of the conv1 through conv5 layers. This gives results very comparable to using a generative adversarial network (GAN) loss, without the overhead of employing a GAN architecture during training. The reconstruction loss L_rec can be computed as:
L_rec = Σ_{i=1}^{5} || VGG_i(M_gt ⊙ I_gt) - VGG_i(M_pred ⊙ I_pred) ||_* (2)
where M_gt = (I_seg ≠ background) is a binary segmentation mask that turns off background pixels (see FIG. 9), M_pred is the predicted binary segmentation mask, VGG_i() maps an image to the activations of the conv-i layer of VGG, and || ||_* is a "saliency re-weighted" ℓ_1 norm defined later in this section. To speed up color convergence, a second term can optionally be added to L_rec, defined as the ℓ_1 norm between I_gt and I_pred and weighted to contribute 1/10 of the main reconstruction loss. An example of the reconstruction loss is shown in FIG. 10A.
[0095] The mask loss L_mask can cause the model to predict an accurate foreground mask M_pred. This can be seen as a binary classification task. For foreground pixels the value y+ = 1 is assigned, whereas for background pixels y- = 0 is used. The final loss can be defined as:
L_mask = || M_gt - M_pred ||_* (3)
where || ||_* is the saliency re-weighted ℓ_1 loss. Other classification losses, such as a logistic loss, can be considered; however, they produce very similar results. An example of the mask loss is shown in FIG. 10B.
[0096] The head loss L_head can focus the neural network on the head to improve the overall sharpness of the face. As with the full-body reconstruction loss, a 16-layer network (e.g., VGG16) can be used to compute the loss in the feature space. In particular, the crop I^C can be defined for an image I as a patch cropped around the head pixels, as given by the segmentation labels of I_seg, and resized to 512×512 pixels. The loss can be computed as:
L_head = Σ_{i=1}^{5} || VGG_i(M_gt^C ⊙ I_gt^C) - VGG_i(M_pred^C ⊙ I_pred^C) ||_* (4)
An example of the head loss is shown in FIG. 10C.
[0097] The temporal loss L_temporal can be used to minimize the amount of flickering between two consecutive frames, I^t and I^(t-1). Simply minimizing the difference between I^t and I^(t-1) would produce temporally blurred results. Therefore, a loss that tries to match the temporal gradient of the predicted sequence, i.e., I_pred^t - I_pred^(t-1), with the temporal gradient of the ground truth sequence, i.e., I_gt^t - I_gt^(t-1), can be used. The loss can be computed as:
L_temporal = || (I_pred^t - I_pred^(t-1)) - (I_gt^t - I_gt^(t-1)) ||_1 (5)
An example of the computed temporal loss is shown in FIG. 10E.
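The temporal-gradient matching of Equation (5) can be sketched directly (the function name is illustrative; the inputs are toy single-row images):

```python
import numpy as np

def temporal_loss(pred_t, pred_tm1, gt_t, gt_tm1):
    """Sketch of Eq. (5): match the temporal gradient of the predicted
    sequence to the temporal gradient of the ground-truth sequence,
    measured with an L1 norm."""
    pred_grad = pred_t - pred_tm1   # temporal gradient of the prediction
    gt_grad = gt_t - gt_tm1         # temporal gradient of the ground truth
    return np.sum(np.abs(pred_grad - gt_grad))
```

Note that the loss is zero whenever the prediction changes over time exactly as the ground truth does, even if the two differ by a constant offset, which is what avoids the temporal blurring of a direct frame difference.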
[0098] The stereo loss L_stereo can be designed for VR and AR applications, where the neural network is applied to the left and right eye views. In this case, inconsistencies between the two eyes may limit depth perception and result in discomfort for the user. Therefore, a loss that ensures self-supervised consistency in the output stereo images can be used. A stereo pair of the volumetric reconstruction can be rendered and each eye's image used as input to the neural network, where the left image I^L matches the ground-truth camera viewpoint and the right image I^R is rendered at an offset distance (e.g., 65 mm) along the x-coordinate. The right prediction I_pred^R is then warped to the left viewpoint using the (known) geometry of the mesh and compared to the left prediction I_pred^L. A warp operator I_warp can be defined using a Spatial Transformer Network (STN), which uses a bi-linear interpolation of 4 pixels and fixed warp coordinates. The loss can be computed as:
L_stereo = || I_pred^L - I_warp(I_pred^R) ||_1 (6)
An example of the stereo loss is shown in FIG. 10D.
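A deliberately simplified sketch of the stereo consistency idea behind Equation (6): here the right prediction is warped with a fixed integer disparity shift, whereas the system described above uses a mesh-based Spatial Transformer warp with bilinear interpolation. The function name and the disparity handling are illustrative assumptions.

```python
import numpy as np

def stereo_loss(pred_left, pred_right, disparity=1):
    """Simplified sketch of Eq. (6): warp the right prediction toward
    the left viewpoint (here a crude constant horizontal shift) and
    penalize the L1 difference to the left prediction."""
    warped_right = np.roll(pred_right, disparity, axis=1)  # crude warp
    # ignore the wrap-around columns that np.roll introduces
    diff = np.abs(pred_left[:, disparity:] - warped_right[:, disparity:])
    return np.sum(diff)
```

With a real scene the per-pixel disparity varies with depth, which is why the actual warp uses the known mesh geometry rather than a constant shift.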
[0099] The above losses receive a contribution from every pixel in the image (with the exception of the masked pixels). However, imperfections in the segmentation mask may bias the network towards unimportant areas. Pixels with the highest loss can be outliers (e.g., next to the boundary of the segmentation mask), and these outlier pixels can dominate the overall loss (see FIG. 10F). Therefore, it can be desirable to down-weight these outlier pixels to discard them from the loss, while also down-weighting pixels that are easily reconstructed (e.g., smooth and texture-less areas). To do so, given a residual image x of size W.times.H.times.C, y can be set as the per-pixel ℓ_1 norm along the channels of x, and minimum and maximum percentiles p_min and p_max can be defined over the values of y. The component for a pixel p of a saliency re-weighing matrix of the residual y can be defined as:
γ_p(y) = 1 if y_p ∈ [Γ(p_min, y), Γ(p_max, y)], and 0 otherwise (7)
where Γ(i, y) extracts the i-th percentile across the set of values in y, and p_min, p_max and α_i are empirically chosen and depend on the task at hand.
[0100] This saliency, applied as a weight on each pixel of the residual y computed for L_rec and L_head, can be defined as:
||y||_* = || γ(y) ⊙ y ||_1 (8)
where ⊙ is the element-wise product.
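Equations (7) and (8) together can be sketched as follows, assuming the binary (non-continuous) form of the saliency mask and illustrative default percentiles:

```python
import numpy as np

def saliency_reweighted_l1(residual, p_min=50, p_max=98):
    """Sketch of Eqs. (7)-(8): pixels whose per-pixel L1 residual falls
    outside the [p_min, p_max] percentile band get weight 0 (outliers
    near the mask boundary and trivially easy pixels), the rest weight 1."""
    y = np.sum(np.abs(residual), axis=-1)          # per-pixel L1 along channels
    lo = np.percentile(y, p_min)                   # Γ(p_min, y)
    hi = np.percentile(y, p_max)                   # Γ(p_max, y)
    gamma = ((y >= lo) & (y <= hi)).astype(float)  # Eq. (7): binary mask γ(y)
    return np.sum(gamma * y)                       # Eq. (8): re-weighted L1 norm
```

Because the weights are not differentiated (no gradients flow through γ), this binary formulation is sufficient for SGD, as noted below.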
[0101] A continuous formulation of γ_p(y), defined by the product of a sigmoid and an inverted sigmoid, can also be used. Gradients with respect to the re-weighing function are not computed; therefore, the re-weighing function does not need to be continuous for SGD to work. The effect of saliency re-weighing is shown in FIG. 10F. The reconstruction error concentrates along the boundary of the subject when no saliency re-weighing is used. Conversely, the application of the proposed outlier removal technique forces the network to focus on reconstructing the actual subject. Finally, as a byproduct of the saliency re-weighing, a cleaner foreground mask can be predicted compared to the one obtained with a semantic segmentation algorithm. The saliency re-weighing scheme may only be applied to the reconstruction, mask, and head losses.
[0102] FIG. 5 illustrates a block diagram of a neural re-rendering module according to at least one example embodiment. The neural re-rendering module 210 may be, or include, at least one computing device and can represent virtually any computing device configured to perform the methods described herein. As such, the neural re-rendering module 210 can include various components which may be utilized to implement the techniques described herein, or different or future versions thereof. By way of example, the neural re-rendering module 210 is illustrated as including at least one processor 505, as well as at least one memory 510 (e.g., a non-transitory computer readable medium).
[0103] As shown in FIG. 5, the neural re-rendering module 210 includes the at least one processor 505 and the at least one memory 510. The at least one processor 505 and the at least one memory 510 are communicatively coupled via bus 515. The at least one processor 505 may be utilized to execute instructions stored on the at least one memory 510, so as to thereby implement the various features and functions described herein, or additional or alternative features and functions. The at least one processor 505 and the at least one memory 510 may be utilized for various other purposes. In particular, the at least one memory 510 can represent an example of various types of memory and related hardware and software which might be used to implement any one of the modules described herein.
[0104] The at least one memory 510 may be configured to store data and/or information associated with the neural re-rendering module 210. For example, the at least one memory 510 may be configured to store model(s) 420, a plurality of coefficients 425, and a neural network 520. In an example implementation, the at least one memory 510 may be configured to store code segments that when executed by the at least one processor 505 cause the at least one processor 505 to select one of the models 420 and/or one or more of the plurality of coefficients 425.
[0105] The neural network 520 can include a plurality of operations (e.g., convolution 530-1 to 530-9). The plurality of operations, interconnections and the data flow between the plurality of operations can be a model selected from the model(s) 420. The model (as operations, interconnects and data flow) illustrated in the neural network is an example implementation. Therefore, other models can be used to enhance images as described herein.
[0106] In the example implementation shown in FIG. 5, the neural network 520 operations include convolutions 530-1, 530-2, 530-3, 530-4, 530-5, 530-6, 530-7, 530-8 and 530-9, convolution 535 and convolutions 540-1, 540-2, 540-3, 540-4, 540-5, 540-6, 540-7, 540-8 and 540-9. Optionally (as illustrated with dashed lines), the neural network 520 operations can include a pad 525, a clip 545 and a super-resolution 550. The pad 525 can be configured to pad or add pixels to the input image at the boundary of the image if the input image needs to be made larger. Padding can include using pixels adjacent to the boundary of the image (e.g., mirror-padding). Padding can include adding a number of pixels with a value of R=0, G=0, B=0 (e.g., zero padding). The clip 545 can be configured to clip any value for R, G, B above 255 to 255 and any value below 0 to 0. The clip 545 can be configured to clip for other color systems (e.g., YUV) based on the max/min for the color system.
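The optional pad and clip stages can be sketched with NumPy (the toy values are arbitrary; mirror-padding is shown via NumPy's "reflect" mode as one plausible interpretation of padding with pixels adjacent to the boundary):

```python
import numpy as np

# Sketch of the optional pad 525 and clip 545 stages around the network.
image = np.array([[100.0, 200.0],
                  [300.0, -20.0]])

# Mirror-padding: grow the image by one pixel on each side using pixels
# adjacent to the boundary of the image.
padded = np.pad(image, 1, mode="reflect")

# Zero-padding is the alternative: added pixels get R=0, G=0, B=0.
zero_padded = np.pad(image, 1, mode="constant", constant_values=0)

# Clip: any value above 255 becomes 255 and any value below 0 becomes 0.
clipped = np.clip(image, 0, 255)
```

For other color systems (e.g., YUV), the clip bounds would be the max/min of that color system instead of 0 and 255.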
[0107] The super-resolution 550 can include upscaling the resultant image (e.g., x2, x4, x6, and the like) and applying a neural network as a filter to the upscaled image to generate a high-quality image from the relatively lower quality upscaled image. In an example implementation, the filter is selectively applied to each pixel from a plurality of trained filters.
[0108] In the example implementation shown in FIG. 5, the neural network 520 uses a U-NET like architecture. This model can implement viewpoint synthesis from 2D images in real-time on GPU architectures. The example implementation uses a fully convolutional model (e.g., without max pooling operators). Further, the implementation can use bilinear upsampling and convolutions to minimize or eliminate checkerboard artifacts.
[0109] As is shown, the neural network 520 architecture includes 18 layers. Nine (9) layers are used for encoding/compressing/contracting/downsampling and nine (9) layers are used for decoding/decompressing/expanding/upsampling. For example, convolutions 530-1, 530-2, 530-3, 530-4, 530-5, 530-6, 530-7, 530-8 and 530-9 are used for encoding and convolutions 540-1, 540-2, 540-3, 540-4, 540-5, 540-6, 540-7, 540-8 and 540-9 are used for decoding. Convolution 535 can be used as a bottleneck. A bottleneck can be a 1.times.1 convolution layer configured to decrease the number of input channels for K.times.K filters. The neural network 520 architecture can include skip connections between the encoder and decoder blocks. For example, skip connections are shown between convolution 530-1 and convolution 540-9, convolution 530-3 and convolution 540-7, convolution 530-5 and convolution 540-5, and convolution 530-7 and convolution 540-3.
[0110] In the example implementation, the encoder begins with convolution 530-1, configured as a 3.times.3 convolution with N_init filters, followed by a sequence of downsampling blocks including convolutions 530-2 through 530-7. Each downsampling block i ∈ {1, 2, 3, 4} can include two convolutional layers, each with N_i filters. The first layer (530-2, 530-4, and 530-6) can have a filter size of 4.times.4, stride 2 and padding 1, whereas the second layer (530-3, 530-5, and 530-7) can have a filter size of 3.times.3 and stride 1. Thus, each downsampling block can reduce the size of the input by a factor of 2 due to the strided convolution. Finally, two dimensionality-preserving convolutions, 530-8 and 530-9, are performed. The outputs of the convolutions can pass through a ReLU activation function. In an example implementation, N_init = 32 and N_i = G^i N_init, where G is the filter growth factor after each downsampling block.
[0111] The decoder includes upsampling blocks 540-3, 540-4, 540-5, 540-6, 540-7, 540-8 and 540-9 that mirror the downsampling blocks in reverse. Each such block i ∈ {4, 3, 2, 1} consists of two convolutional layers. The first layer (540-3, 540-5, and 540-7) bilinearly upsamples its input, performs a convolution with N_i filters, and leverages a skip connection to concatenate the output with that of its mirrored encoding layer. The second layer (540-4, 540-6 and 540-8) performs a convolution using 2N_i filters of size 3.times.3. The final network output is produced by a final convolution 540-9 with 4 filters, whose output is passed through a ReLU activation function to produce the reconstructed image and a single-channel binary mask of the foreground subject. To produce stereo images for VR and AR headsets, both left and right views are enhanced using the same neural network (with shared weights). The final output is an improved stereo output pair. Data (e.g., filter size, stride, weights, N_init, N_i, G and/or the like) associated with neural network 520 can be stored in model(s) 420 and coefficients 425.
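The encoder dimensioning N_i = G^i N_init, with each downsampling block halving the spatial size via its stride-2 convolution, can be sketched as follows (the function name, the 512-pixel input size and the four-block depth are illustrative assumptions):

```python
def encoder_plan(n_init=32, growth=2, input_size=512, num_blocks=4):
    """Sketch of the encoder dimensioning: after downsampling block i the
    spatial size has been halved i times and the filter count is
    N_i = growth**i * n_init."""
    plan = []
    size = input_size
    for i in range(1, num_blocks + 1):
        size //= 2                                  # stride-2 conv halves size
        plan.append((size, growth ** i * n_init))   # (spatial size, N_i)
    return plan
```

With N_init = 32 and G = 2 this yields filter counts of 64, 128, 256 and 512 as the spatial resolution drops from 512 to 32.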
[0112] Returning to FIG. 4, the model associated with the neural network 520 architecture can be trained as described above. The neural network can be trained using the Adam optimizer with weight decay until convergence (e.g., until the point where losses no longer consistently drop). In a test environment, typically around 3 million iterations resulted in convergence. Training in the test environment, using TensorFlow on 16 NVIDIA V100 GPUs with a batch size of 1 per GPU, took about 55 hours.
[0113] Random crops of images, ranging from 512.times.512 to 960.times.896, were used for training. These images can be crops from the original resolution of the input and output pairs. In particular, the random crop can contain the head pixels in 75% of the samples, for which the head loss is computed. Otherwise, the head loss may be disabled, as the network might not see the head completely in the input patch. This can result in high quality results for the face, while not ignoring other parts of the body. Using random crops along with standard ℓ_2 regularization on the weights of the network may be sufficient to prevent over-fitting. When high resolution witness cameras are employed, the output can be twice the input size.
[0114] The percentile ranges for the saliency re-weighing can be empirically set to remove the contribution of the imperfect mask boundary and other outliers without affecting the result otherwise. When p_max=98, p_min values in the range [25, 75] can be acceptable. In particular, p_min=50 may be set for the reconstruction loss, p_min=25 for the head loss, and α1=α2=1.1.
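One plausible reading of the percentile-based re-weighing is sketched below: per-pixel losses above the p_max percentile (e.g., mask-boundary outliers) are zeroed, pixels below the p_min percentile are likewise dropped, and the retained band is scaled by α. The exact role of α and the treatment of the low tail are assumptions, not taken from the description above.

```python
import numpy as np

def saliency_reweigh(per_pixel_loss, p_min=50, p_max=98, alpha=1.1):
    """Zero out per-pixel losses outside the [p_min, p_max] percentile
    band and scale the retained pixels by alpha; return the mean loss.
    A sketch of one plausible reading of the scheme, not the exact rule."""
    lo, hi = np.percentile(per_pixel_loss, [p_min, p_max])
    keep = (per_pixel_loss >= lo) & (per_pixel_loss <= hi)
    return float(np.mean(np.where(keep, alpha * per_pixel_loss, 0.0)))
```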
Evaluation
[0115] The system was evaluated on two different datasets: one for single camera (upper body reconstruction) and one for multiview, full body capture. The single camera dataset includes 42 participants, of which 32 are used for training. For each participant, four 10-second sequences were captured, in which the participant a) dictates a short text, with and without eyeglasses, b) looks in all directions, and c) gesticulates extremely.
[0116] For the full body capture data, a diverse set of 20 participants were recorded. Each performer was free to perform any arbitrary movement in the capture space (e.g. walking, jogging, dancing, etc.) while simultaneously performing facial movements and expressions.
[0117] For each subject, 10 sequences of 500 frames were recorded. Five (5) subjects were left out of the training datasets to assess the performance of the algorithm on unseen people. Moreover, for some participants in the training set, 1 sequence (i.e., 500 or 600 frames) was left out for testing purposes.
[0118] A core component of the framework is a volumetric capture system that can generate approximate textured geometry and render the result from any arbitrary viewpoint in real-time. For upper bodies, a high-quality implementation of a standard rigid-fusion pipeline was used. For full bodies, a non-rigid fusion setup was used in which multiple cameras provide full 360° coverage of the performer. Upper Body Capture (Single View). The upper body capture setting uses a single 1500×1100 active stereo camera paired with a 1600×1200 RGB view. To generate high quality geometry, a method that extends PatchMatch Stereo to spacetime matching and produces depth images at 60 Hz was used. Meshes were computed by applying volumetric fusion, and the mesh was texture-mapped with the color image as shown in FIG. 7A.
[0119] In the upper body capture scenario, a single camera of the same resolution as the capture camera was mounted at a 25° angle to the side of where the subject is looking. See FIG. 9, top row, for an example input/output pair. Full Body Capture (Multi View). A system was implemented with 16 IR cameras and 8 low resolution (1280×1024) RGB cameras located so as to surround the user to be captured. The 16 IR cameras are built as 8 stereo pairs, each with an active illuminator so as to simplify the stereo matching problem (see FIG. 11, top right image, for a breakdown of the hardware). A fast, state-of-the-art disparity estimation algorithm was used to estimate accurate depth. The stages of the non-rigid tracking pipeline are performed in real-time. The output of the system consists of temporally consistent meshes and per-frame texture maps. In FIG. 11, the overall capture system and some results obtained are shown.
[0120] In the full body capture rig, 8 high resolution (4096×2048) witness cameras were mounted (see FIG. 11, top left image). Training examples are shown in FIG. 9, bottom row. Both studied capture setups can span a large number of use cases. The single-view capture rig may not allow for large viewpoint changes but might be more practical, as it requires less processing and only needs to transmit a single RGBD stream; the multiview capture rig may be limited to studio-type captures but allows for complete free-viewpoint video experiences.
[0121] The performance of the system was tested, analyzing the importance of each component. A first analysis can be qualitative, seeking to assess viewpoint robustness and generalization to different people, sequences and clothing. A second analysis can be a quantitative evaluation of the architectures. Multiple perceptual measurements were used, such as PSNR, Multi-Scale SSIM, Photometric Error (e.g., l1-loss), and Perceptual Loss. The experimental evaluation supports each design choice of the system and also shows the trade-offs between quality and model complexity.
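Two of the reported measurements can be computed directly; a minimal sketch for images normalized to [0, 1] (MS-SSIM and the VGG16-based perceptual loss require pretrained components and are omitted):

```python
import numpy as np

def photometric_error(pred, gt):
    """Photometric error: mean absolute (l1) difference."""
    return float(np.mean(np.abs(pred - gt)))

def psnr(pred, gt, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = float(np.mean((pred - gt) ** 2))
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform intensity error of 0.1 on a [0, 1] image, for example, corresponds to a PSNR of 20 dB.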
[0122] Qualitative results were determined for different test sequences and under different conditions. Upper Body Results (Single View). In the single camera case, the network has to learn mostly to in-paint missing areas and fix missing fine geometry details such as eyeglasses frames. Some results are shown in FIG. 12, top two rows. The method appears to preserve the high quality details that are already in the input image and is able to in-paint plausible texture for those unseen regions. Further, thin structures such as the eyeglass frames get reconstructed in the network output.
[0123] Full Body Results (Multi View). The multi view case carries the additional complexity of blending together different images that may have different lighting conditions or small calibration imprecisions. This affects the final rendering results, as shown in FIG. 12, bottom two rows. The input images appear to have distorted geometry and color artifacts. The system learns how to generate high quality renderings with reduced artifacts, while at the same time adjusting the color balance to that of the witness cameras.
[0124] Although the ground truth viewpoints are limited to a sparse set of cameras, the system can be shown to be robust to unseen camera poses. Viewpoint robustness can be demonstrated by simulating a camera trajectory around the subject. Results are shown in FIG. 13. The super-resolution model is able to produce more details compared to the input images. Results can be appreciated in FIG. 14, where the predicted output at the same input resolution contains more subtle details like facial hair. Increasing the output resolution by a factor of 2 can lead to slightly sharper results and better up-sampling, especially around the edges.
[0125] Generalization across different subjects (e.g., people, clothing) is shown in FIG. 15. For the single view case, substantial degradation was not observed in the results. For the full body case, although there is still a substantial improvement from the input image, the final results look less sharp possibly indicating that more diverse training data is needed to achieve better generalization performance on unseen participants.
[0126] The behavior of the system was assessed with different clothes or accessories. Examples shown in FIG. 16 include a subject wearing different clothes, and another with and without eyeglasses. The system correctly recovers most of the eyeglasses frame structure even though they are barely reconstructed by the traditional geometrical approach due to their fine structures.
[0127] The main quantitative results are summarized in Table 1, where multiple statistics were calculated for the proposed model and all its variants. Table 1 shows quantitative evaluations on test sequences of subjects seen in training and subjects unseen in training. Photometric error is measured as the l1-norm, and the perceptual metric is the same VGG16-based loss used for training. The architecture was fixed and the proposed loss function was compared with the same loss minus the specific loss term indicated in each column. On seen subjects all the models perform similarly, whereas on new subjects the proposed loss has better generalization performance. Notice how the output of the volumetric reconstruction, i.e. the input to the network, is outperformed by all variants of the neural network.
TABLE 1
                           Proposed  -L_head  -L_mask  -Saliency  -L_stereo  -L_temp  Rendered Input
Seen      Photometric Err  0.0363    0.0357   0.0371   0.0369     0.0355     0.0355   0.0700
subjects  PSNR             29.2      29.2     28.2     28.5       29.0       29.2     25.0
          MS-SSIM          0.956     0.958    0.954    0.954      0.957      0.957    0.93
          Perceptual       0.0658    0.121    0.121    0.103      0.0963     0.110    0.1748
Unseen    Photometric Err  0.0464    0.0498   0.0506   0.0510     0.0465     0.0504   0.0783
subjects  PSNR             26.2      25.9     25.5     25.5       26.0       25.8     24.05
          MS-SSIM          0.94      0.938    0.929    0.932      0.937      0.936    0.9107
          Perceptual       0.0795    0.168    0.167    0.136      0.133      0.157    0.1996
[0128] The following summarizes the findings. The segmentation mask plays an important role in in-painting missing parts, discarding the background and preserving input regions. As shown in FIG. 17, the model without the foreground mask hallucinates parts of the background and does not correctly follow the silhouette of the subject. This behavior is also confirmed by the quantitative results in Table 1, where the model without the mask loss performs worse than the proposed model. The head loss on the cropped head regions encourages sharper results on faces. Artifacts in the face region are more likely to disturb the viewer than artifacts in other regions, and the described loss can be used to improve this region. Although the numbers in Table 1 are comparable, there is a large visual gap between the two losses, as shown in FIG. 18. Without the head loss, the results are oversmoothed and facial details are lost, whereas the described loss not only upgrades the quality of the input but also recovers unseen features.
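The role of the foreground mask in the synthesis can be illustrated with a minimal compositing sketch; the clipping and the constant background value are assumptions:

```python
import numpy as np

def composite(net_rgb, net_mask, background=0.0):
    """Keep the network's RGB output where the predicted single-channel
    mask is on, and replace the rest with the background, so the result
    follows the subject's silhouette instead of hallucinating background."""
    m = np.clip(net_mask, 0.0, 1.0)[..., None]   # (H, W) -> (H, W, 1)
    return m * net_rgb + (1.0 - m) * background
```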
[0129] Stable results across multiple viewpoints have already been shown in FIG. 13. The metrics in Table 1 show that models trained without the temporal and stereo consistency terms may outperform the model trained with the full loss function. However, this may be expected because the metrics used do not take into account factors such as temporal and spatial flickering. The effects of the temporal and stereo loss are visualized in FIG. 19. The saliency reweighing can reduce the effect of outliers, as shown in FIG. 10F. This can also be appreciated in all the metrics in Table 1, where the models trained without the saliency reweighing perform consistently worse. FIG. 20 shows how the model trained with the saliency reweighing is more robust to outliers in the ground truth mask.
[0130] The importance of the model size was assessed. Three different network models were trained, starting with N_init=16, 32, and 64 filters respectively. In FIG. 21, qualitative examples of the three different models are shown. As expected, the biggest network achieves the best and sharpest results on this task, showing that the capacity of the other two architectures is limited for this problem.
Real-Time Free Viewpoint Neural Re-Rendering
[0131] A real-time demonstration of the system was implemented as shown in FIG. 22. The scenario consists of a user wearing a VR headset watching volumetric reconstructions. Left and right views were rendered with the head pose given by the headset and fed as input to the network. The network generates the enhanced re-renderings, which are then shown on the headset display. Latency is an important factor when dealing with real-time experiences. Instead of running the neural re-rendering sequentially with the actual display update, a late stage reprojection phase was implemented. In particular, the computational stream of the network was decoupled from the actual rendering, and the current head pose was used to warp the final images accordingly.
[0132] The run-time of the system was assessed using a single NVIDIA Titan V. The model with N_init=32 filters was implemented, where input and output are generated at the same resolution (512×1024). Using the standard TensorFlow graph export tool, the average running time to produce a stereo pair with neural re-rendering is around 92 ms, which may not be sufficient for real-time applications. Therefore, NVIDIA TensorRT, which performs inference optimization for a given deep architecture, was used. A standard export with 32-bit floating-point weights brought the computational time down to 47 ms. Finally, using the optimizations implemented on the NVIDIA Titan V, the network weights were quantized to 16-bit floating point. This resulted in a final run-time of 29 ms per stereo pair, with no loss in accuracy, hitting the real-time requirements.
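Latency figures like those above can be measured for any candidate export with a simple warm-up-then-average harness; the warm-up and iteration counts below are arbitrary choices, not values from the evaluation:

```python
import time

def latency_ms(fn, warmup=10, iters=100):
    """Average wall-clock latency of fn() in milliseconds, discarding
    warm-up runs (which would include one-time graph/kernel setup)."""
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) * 1000.0 / iters
```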
[0133] Each block of the network was profiled to determine potential bottlenecks. The analysis is shown in FIG. 23. The encoder phase needs less than 40% of the total computational resources. As expected, most of the time is spent in the decoder layers, where the skip connections (e.g., the concatenation of encoder features with the matched decoder features) lead to large convolution kernels.
[0134] A small qualitative user study was performed on the results of the output system. Ten (10) subjects were recruited and 12 short video sequences were prepared showing the renderings of the capture system, the predicted results, and the target witness views masked with the semantic segmentation as described above. The order of the videos was randomized, and the selected sequences included both seen and unseen subjects.
[0135] The participants were asked whether they preferred the renders of the performance capture system (e.g., the input to the enhancement algorithm), the re-rendered versions using neural re-rendering, or the masked ground truth image (e.g., M_gt ⊙ I_gt). A vast majority (most if not all) of the users agreed that the output of the neural re-rendering was better than the renderings from the volumetric capture systems. Also, the users did not seem to notice substantial differences between seen and unseen subjects. Unexpectedly, most (greater than 50%) of the subjects preferred the output of the system even over the ground truth. The participants found the predicted masks produced by the network to be more stable than the ground truth masks used for training, which suffer from more inconsistent predictions between consecutive frames. However, a vast majority (most if not all) of the subjects agreed that the ground truth is still sharper, indicating a higher resolution than the neural re-rendering output, and that more must be done in this direction to improve the overall quality.
[0136] FIG. 6A illustrates layers in a convolutional neural network with no sparsity constraints. FIG. 6B illustrates layers in a convolutional neural network with sparsity constraints. An example implementation of a layered neural network is shown in FIG. 6A as having three layers 605, 610, 615. Each layer 605, 610, 615 can be formed of a plurality of neurons 620. No sparsity constraints have been applied to the implementation illustrated in FIG. 6A; therefore, all neurons 620 in each layer 605, 610, 615 are networked to all neurons 620 in any neighboring layers 605, 610, 615. The neural network shown in FIG. 6A is not computationally complex because of the small number of neurons 620 and layers 605, 610, 615. However, the arrangement shown in FIG. 6A may not scale easily to a larger network size, because the computational complexity grows in a non-linear fashion with the size of the network owing to the density of connections between neurons and layers.
[0137] Where neural networks are to be scaled up to work on inputs with a relatively high number of dimensions, it can therefore become computationally complex for all neurons 620 in each layer 605, 610, 615 to be networked to all neurons 620 in the one or more neighboring layers 605, 610, 615. An initial sparsity condition can be used to lower the computational complexity of the neural network, for example when the neural network is functioning as an optimization process, by limiting the number of connections between neurons and/or layers, thus enabling a neural network approach to work with high dimensional data such as images.
[0138] An example of a neural network with sparsity constraints is shown in FIG. 6B, according to at least one embodiment. The neural network shown in FIG. 6B is arranged so that each neuron 620 is connected only to a small number of neurons 620 in the neighboring layers 625, 630, 635, thus creating a neural network that is not fully connected and which can scale to function with higher dimensional data, for example, as an enhancement process for images. The smaller number of connections in comparison with a fully networked neural network allows the number of connections between neurons to scale in a substantially linear fashion.
[0139] Alternatively, in some embodiments, neural networks can be used that are fully connected, or that are not fully connected but in different specific configurations from that described in relation to FIG. 6B.
[0140] Further, in some embodiments, convolutional neural networks are used, which are neural networks that are not fully connected and therefore have less complexity than fully connected neural networks. Convolutional neural networks can also make use of pooling or max-pooling to reduce the dimensionality (and hence complexity) of the data that flows through the neural network and thus this can reduce the level of computation required.
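The scaling argument above can be made concrete by counting weights. The sketch below compares a fully connected layer on a flattened 64×64×3 image with a single 3×3 convolution; the layer and channel sizes are illustrative choices, not values from the description:

```python
def dense_params(layer_sizes):
    """Weights + biases in a fully connected stack: every neuron is
    wired to every neuron in the next layer."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

def conv_params(channel_sizes, k=3):
    """Weights + biases in a stack of k x k convolutions; note the
    count is independent of the spatial resolution."""
    return sum(k * k * cin * cout + cout
               for cin, cout in zip(channel_sizes, channel_sizes[1:]))
```

Mapping a flattened 64×64×3 image to 1024 units costs dense_params([64*64*3, 1024]) = 12,583,936 parameters, while a 3×3 convolution from 3 to 32 channels costs conv_params([3, 32]) = 896, regardless of image size, which is one reason convolutional networks scale to image data.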
[0141] FIG. 24 shows an example of a computer device 2400 and a mobile computer device 2450, which may be used with the techniques described here. Computing device 2400 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2450 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
[0142] Computing device 2400 includes a processor 2402, memory 2404, a storage device 2406, a high-speed interface 2408 connecting to memory 2404 and high-speed expansion ports 2410, and a low speed interface 2412 connecting to low speed bus 2414 and storage device 2406. Each of the components 2402, 2404, 2406, 2408, 2410, and 2412, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2402 can process instructions for execution within the computing device 2400, including instructions stored in the memory 2404 or on the storage device 2406 to display graphical information for a GUI on an external input/output device, such as display 2416 coupled to high speed interface 2408. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2400 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0143] The memory 2404 stores information within the computing device 2400. In one implementation, the memory 2404 is a volatile memory unit or units. In another implementation, the memory 2404 is a non-volatile memory unit or units. The memory 2404 may also be another form of computer-readable medium, such as a magnetic or optical disk.
[0144] The storage device 2406 is capable of providing mass storage for the computing device 2400. In one implementation, the storage device 2406 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2404, the storage device 2406, or memory on processor 2402.
[0145] The high-speed controller 2408 manages bandwidth-intensive operations for the computing device 2400, while the low speed controller 2412 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 2408 is coupled to memory 2404, display 2416 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2410, which may accept various expansion cards (not shown). In the implementation, low-speed controller 2412 is coupled to storage device 2406 and low-speed expansion port 2414. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0146] The computing device 2400 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2420, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2424. In addition, it may be implemented in a personal computer such as a laptop computer 2422. Alternatively, components from computing device 2400 may be combined with other components in a mobile device (not shown), such as device 2450. Each of such devices may contain one or more of computing device 2400, 2450, and an entire system may be made up of multiple computing devices 2400, 2450 communicating with each other.
[0147] Computing device 2450 includes a processor 2452, memory 2464, an input/output device such as a display 2454, a communication interface 2466, and a transceiver 2468, among other components. The device 2450 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 2450, 2452, 2464, 2454, 2466, and 2468, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
[0148] The processor 2452 can execute instructions within the computing device 2450, including instructions stored in the memory 2464. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 2450, such as control of user interfaces, applications run by device 2450, and wireless communication by device 2450.
[0149] Processor 2452 may communicate with a user through control interface 2458 and display interface 2456 coupled to a display 2454. The display 2454 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 2456 may comprise appropriate circuitry for driving the display 2454 to present graphical and other information to a user. The control interface 2458 may receive commands from a user and convert them for submission to the processor 2452. In addition, an external interface 2462 may be provided in communication with processor 2452, to enable near area communication of device 2450 with other devices. External interface 2462 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
[0150] The memory 2464 stores information within the computing device 2450. The memory 2464 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 2474 may also be provided and connected to device 2450 through expansion interface 2472, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 2474 may provide extra storage space for device 2450 or may also store applications or other information for device 2450. Specifically, expansion memory 2474 may include instructions to carry out or supplement the processes described above and may include secure information also. Thus, for example, expansion memory 2474 may be provided as a security module for device 2450 and may be programmed with instructions that permit secure use of device 2450. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0151] The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2464, expansion memory 2474, or memory on processor 2452, that may be received, for example, over transceiver 2468 or external interface 2462.
[0152] Device 2450 may communicate wirelessly through communication interface 2466, which may include digital signal processing circuitry where necessary. Communication interface 2466 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 2468. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 2470 may provide additional navigation- and location-related wireless data to device 2450, which may be used as appropriate by applications running on device 2450.
[0153] Device 2450 may also communicate audibly using audio codec 2460, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2460 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 2450. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 2450.
[0154] The computing device 2450 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2480. It may also be implemented as part of a smart phone 2482, personal digital assistant, or other similar mobile device.
[0155] Although the above description describes experiencing traditional three-dimensional (3D) content by accessing a head-mounted display (HMD) device to properly view and interact with such content, the described techniques can also be used for rendering to 2D displays (e.g., a left view and/or right view displayed on one or more 2D displays), mobile AR, and 3D TVs. Further, HMD devices can be cumbersome for a user to continually wear. Accordingly, the user may utilize autostereoscopic displays to access user experiences with 3D perception without requiring the use of an HMD device (e.g., eyewear or headgear). Autostereoscopic displays employ optical components to achieve a 3D effect for a variety of different images on the same plane, providing such images from a number of points of view to produce the illusion of 3D space.
[0156] Autostereoscopic displays can provide imagery that approximates the three-dimensional (3D) optical characteristics of physical objects in the real world without requiring the use of a head-mounted display (HMD) device. In general, autostereoscopic displays include flat panel displays, lenticular lenses (e.g., microlens arrays), and/or parallax barriers to redirect images to a number of different viewing regions associated with the display.
[0157] In some example autostereoscopic displays, there may be a single location that provides a 3D view of image content provided by such displays. A user may be seated in the single location to experience proper parallax, little distortion, and realistic 3D images. If the user moves to a different physical location (or changes a head position or eye gaze position), the image content may begin to appear less realistic, 2D, and/or distorted. The systems and methods described herein may reconfigure the image content projected from the display to ensure that the user can move around, but still experience proper parallax, low rates of distortion, and realistic 3D images in real time. Thus, the systems and methods described herein provide the advantage of maintaining and providing 3D image content to a user regardless of user movement that occurs while the user is viewing the display.
[0158] FIG. 25 illustrates a block diagram of an example output image providing content in a stereoscopic display, according to at least one example embodiment. In an example implementation, the content may be displayed by interleaving a left image 2504A with a right image 2504B to obtain an output image 2505. The autostereoscopic display assembly 2502 shown in FIG. 25 represents an assembled display that includes at least a high-resolution display panel 2507 coupled to (e.g., bonded to) a lenticular array of lenses 2506. In addition, the assembly 2502 may include one or more glass spacers 2508 seated between the lenticular array of lenses and the high-resolution display panel 2507. In operation of display assembly 2502, the array of lenses 2506 (e.g., microlens array) and glass spacers 2508 may be designed such that, at a particular viewing condition, the left eye of the user views a first subset of pixels associated with an image, as shown by viewing rays 2510, while the right eye of the user views a mutually exclusive second subset of pixels, as shown by viewing rays 2512.
[0159] A mask may be calculated and generated for each of a left and right eye. The masks 2500 may be different for each eye. For example, a mask 2500A may be calculated for the left eye while a mask 2500B may be calculated for the right eye. In some implementations, the mask 2500A may be a shifted version of the mask 2500B. Consistent with implementations described herein, the autostereoscopic display assembly 2502 may be a glasses-free, lenticular, three-dimensional display that includes a plurality of microlenses. In some implementations, an array 2506 may include microlenses in a microlens array. In some implementations, 3D imagery can be produced by projecting a portion (e.g., a first set of pixels) of a first image in a first direction through the at least one microlens (e.g., to a left eye of a user) and projecting a portion (e.g., a second set of pixels) of a second image in a second direction through the at least one other microlens (e.g., to a right eye of the user). The second image may be similar to the first image, but shifted from the first image to simulate parallax, thereby simulating a 3D stereoscopic image for the user viewing the autostereoscopic display assembly 2502.
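The interleaving of left and right views with complementary per-eye masks can be sketched as follows; the one-column shift between the two masks is a simplified stand-in for the calibrated mask pair 2500A/2500B, not the actual lenticular calibration:

```python
import numpy as np

def interleave(left, right):
    """Combine (H, W, C) left/right views into one output image using
    complementary column masks; the right-eye mask is the left-eye mask
    shifted by one column, as a simplified stand-in for masks 2500A/B."""
    h, w, _ = left.shape
    mask_l = (np.arange(w) % 2 == 0).astype(left.dtype)  # even columns
    mask_r = 1 - mask_l                                  # odd columns
    return left * mask_l[None, :, None] + right * mask_r[None, :, None]
```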
[0160] FIG. 26 illustrates a block diagram of an example of a 3D content system according to at least one example embodiment. The 3D content system 2600 can be used by multiple people. Here, the 3D content system 2600 is being used by a person 2602 and a person 2604. For example, the persons 2602 and 2604 are using the 3D content system 2600 to engage in a 3D telepresence session. In such an example, the 3D content system 2600 can allow each of the persons 2602 and 2604 to see a highly realistic and visually congruent representation of the other, thereby allowing them to interact with each other much as if they were in each other's physical presence.
[0161] Each of the persons 2602 and 2604 can have a corresponding 3D pod. Here, the person 2602 has a pod 2606 and the person 2604 has a pod 2608. The pods 2606 and 2608 can provide functionality relating to 3D content, including, but not limited to: capturing images for 3D display, processing and presenting image information, and processing and presenting audio information. The pod 2606 and/or 2608 can constitute a processor and a collection of sensing devices integrated as one unit.
[0162] The 3D content system 2600 can include one or more 3D displays. Here, a 3D display 2610 is provided for the pod 2606, and a 3D display 2612 is provided for the pod 2608. The 3D display 2610 and/or 2612 can use any of multiple types of 3D display technology to provide a stereoscopic view for the respective viewer (here, the person 2602 or 2604, for example). In some implementations, the 3D display 2610 and/or 2612 can include a standalone unit (e.g., self-supported or suspended on a wall). In some implementations, the 3D display 2610 and/or 2612 can include wearable technology (e.g., a head-mounted display). In some implementations, the 3D display 2610 and/or 2612 can include an autostereoscopic display assembly such as autostereoscopic display assembly 2502 described above.
[0163] The 3D content system 2600 can be connected to one or more networks. Here, a network 2614 is connected to the pod 2606 and to the pod 2608. The network 2614 can be a publicly available network (e.g., the internet), or a private network, to name just two examples.
[0164] The network 2614 can be wired, or wireless, or a combination of the two. The network 2614 can include, or make use of, one or more other devices or systems, including, but not limited to, one or more servers (not shown).
[0165] The pod 2606 and/or 2608 can include multiple components relating to the capture, processing, transmission or reception of 3D information, and/or to the presentation of 3D content. The pods 2606 and 2608 can include one or more cameras for capturing image content for images to be included in a 3D presentation. Here, the pod 2606 includes cameras 2616 and 2618. For example, the camera 2616 and/or 2618 can be disposed essentially within a housing of the pod 2606, so that an objective or lens of the respective camera 2616 and/or 2618 captures image content by way of one or more openings in the housing. In some implementations, the camera 2616 and/or 2618 can be separate from the housing, such as in the form of a standalone device (e.g., with a wired and/or wireless connection to the pod 2606). The cameras 2616 and 2618 can be positioned and/or oriented so as to capture a sufficiently representative view of (here) the person 2602. While the cameras 2616 and 2618 should preferably not obscure the view of the 3D display 2610 for the person 2602, the placement of the cameras 2616 and 2618 can generally be arbitrarily selected. For example, one of the cameras 2616 and 2618 can be positioned somewhere above the face of the person 2602 and the other can be positioned somewhere below the face. For example, one of the cameras 2616 and 2618 can be positioned somewhere to the right of the face of the person 2602 and the other can be positioned somewhere to the left of the face. The pod 2608 can in an analogous way include cameras 2620 and 2622, for example.
[0166] The pod 2606 and/or 2608 can include one or more depth sensors to capture depth data to be used in a 3D presentation. Such depth sensors can be considered part of a depth capturing component in the 3D content system 2600 to be used for characterizing the scenes captured by the pods 2606 and/or 2608 in order to correctly represent them on a 3D display. Also, the system can track the position and orientation of the viewer's head, so that the 3D presentation can be rendered with the appearance corresponding to the viewer's current point of view. Here, the pod 2606 includes a depth sensor 2624. In an analogous way, the pod 2608 can include a depth sensor 2626. Any of multiple types of depth sensing or depth capture can be used for generating depth data. In some implementations, an assisted-stereo depth capture is performed. The scene can be illuminated using dots of lights, and stereomatching can be performed between two respective cameras. This illumination can be done using waves of a selected wavelength or range of wavelengths. For example, infrared (IR) light can be used. Here, the depth sensor 2624 operates, by way of illustration, using beams 2628A and 2628B. The beams 2628A and 2628B can travel from the pod 2606 toward structures or other objects (e.g., the person 2602) in the scene that is being 3D captured, and/or from such structures/objects to the corresponding detector in the pod 2606, as the case may be. The detected signal(s) can be processed to generate depth data corresponding to some or all of the scene. As such, the beams 2628A-B can be considered as relating to the signals on which the 3D content system 2600 relies in order to characterize the scene(s) for purposes of 3D representation. For example, the beams 2628A-B can include IR signals. Analogously, the pod 2608 can operate, by way of illustration, using beams 2630A-B.
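The geometric relation underlying the assisted-stereo capture above can be illustrated with the standard depth-from-disparity equation. The IR dot pattern gives the stereomatcher dense correspondences; once a pixel's disparity d between two rectified cameras with focal length f (in pixels) and baseline b (in meters) is known, depth follows from similar triangles. This is a textbook sketch, not an implementation of the disclosure, and the function name is invented for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point for a rectified stereo pair: Z = f * b / d.

    disparity_px : pixel offset of the match between the two cameras
    focal_px     : camera focal length expressed in pixels
    baseline_m   : distance between the two camera centers in meters
    Returns depth in meters along the optical axis.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px
```

For example, with a 500-pixel focal length and a 10 cm baseline, a 50-pixel disparity corresponds to a point 1 meter away; larger disparities mean closer points.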
[0167] Depth data can include or be based on any information regarding a scene that reflects the distance between a depth sensor (e.g., the depth sensor 2624) and an object in the scene. The depth data reflects, for content in an image corresponding to an object in the scene, the distance (or depth) to the object. For example, the spatial relationship between the camera(s) and the depth sensor can be known, and can be used for correlating the images from the camera(s) with signals from the depth sensor to generate depth data for the images.
[0168] In some implementations, depth capturing can include an approach that is based on structured light or coded light. A striped pattern of light can be distributed onto the scene at a relatively high frame rate. For example, the frame rate can be considered high when the light signals are temporally sufficiently close to each other that the scene is not expected to change in a significant way in between consecutive signals, even if people or objects are in motion. The resulting pattern(s) can be used for determining what row of the projector is implicated by the respective structures. The camera(s) can then pick up the resulting pattern and triangulation can be performed to determine the geometry of the scene in one or more regards.
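The row-identification step above is commonly implemented by encoding each projector row as a Gray-code bit sequence projected across consecutive frames; the bits observed at a camera pixel then decode back to the row index, after which triangulation proceeds. The following is a hedged sketch of that decoding, assuming a plain binary Gray code; the disclosure does not specify a particular coding scheme, and the function names are illustrative.

```python
def row_to_gray(row, n_bits):
    """Bit pattern (MSB first) projected for a given projector row
    across n_bits structured-light frames, using a reflected Gray code
    so that adjacent rows differ in exactly one frame."""
    g = row ^ (row >> 1)
    return [(g >> i) & 1 for i in range(n_bits - 1, -1, -1)]

def gray_to_row(bits):
    """Decode the Gray-code bits observed at a camera pixel over the
    frame sequence back to the projector row index."""
    value = 0
    for bit in bits:
        # each decoded bit is the observed bit XOR the previous decoded bit
        value = (value << 1) | (bit ^ (value & 1))
    return value
```

Gray coding is the usual choice here because a one-pixel decoding error at a stripe boundary changes the recovered row index by at most one, keeping triangulation errors small.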
[0169] The images captured by the 3D content system 2600 can be processed and thereafter displayed as a 3D presentation. Here, 3D image 2604’ is presented on the 3D display 2610. As such, the person 2602 can perceive the 3D image 2604’ as a 3D representation of the person 2604, who may be remotely located from the person 2602. 3D image 2602’ is presented on the 3D display 2612. As such, the person 2604 can perceive the 3D image 2602’ as a 3D representation of the person 2602. Examples of 3D information processing are described below.
[0170] The 3D content system 2600 can allow participants (e.g., the persons 2602 and 2604) to engage in audio communication with each other and/or others. In some implementations, the pod 2606 includes a speaker and microphone (not shown). For example, the pod 2608 can similarly include a speaker and a microphone. As such, the 3D content system 2600 can allow the persons 2602 and 2604 to engage in a 3D telepresence session with each other and/or others.
Additional Work
[0171] Generating high quality output from textured 3D models is the ultimate goal of many performance capture systems. Below, we briefly review related methods, including image-based approaches, full 3D reconstruction systems, and finally learning-based solutions.
[0172] Image-based Rendering (IBR). IBR techniques warp a series of input color images to novel viewpoints of a scene using geometry as a proxy. These methods can be expanded to video inputs, where a performance is captured with multiple RGB cameras and proxy depth maps are estimated for every frame in the sequence. This work is limited to a small 30.degree. coverage, and its quality strongly degrades when the interpolated view is far from the original cameras.
[0173] Recent works introduced optical flow methods to IBR; however, their accuracy is usually limited by the optical flow quality. Moreover, these algorithms are restricted to off-line applications. Another limitation of IBR techniques is their use of all input images in the rendering stage, making them ill-suited for real-time VR or AR applications as they require transferring all camera streams, together with the proxy geometry. However, IBR techniques have been successfully applied to constrained applications like 360.degree. stereo video, which produce two separate video panoramas, one for each eye, but are constrained to a single viewpoint.
[0174] Volumetric capture systems can use more than 100 cameras to generate high quality offline volumetric performance capture. A controlled environment with green screen and carefully adjusted lighting conditions can be used to produce high quality renderings. Methods can produce a rough point cloud via multi-view stereo, which is then converted into a mesh using Poisson Surface Reconstruction. Based on the current topology of the mesh, a keyframe is selected which is tracked over time to mitigate inconsistencies between frames. The overall processing time is .about.28 minutes per frame. Some examples can be extended to support texture tracking. These frameworks then deliver high quality volumetric captures at the cost of sacrificing real-time capability.
[0175] Methods can use single RGB-D sensors to either track a template mesh or reference volume. However, these systems require careful motions and none support high quality texture reconstruction. The systems can use fast correspondence tracking to extend the single view non-rigid tracking pipeline to handle topology changes robustly. This method, however, can suffer from both geometric and texture inconsistency.
[0176] Even the latest state-of-the-art reconstructions can suffer from geometric holes, noise, and low quality textures. A realtime texturing method that can be applied on top of the volumetric reconstruction may improve quality. This is based on a simple Poisson blending scheme, as opposed to offline systems that use a Conditional Random Field (CRF) model. The final results are still coarse in terms of texture. Moreover, these algorithms require streaming all of the raw input images, which means they do not scale with high resolution input images.
[0177] Learning-based solutions to generate high quality renderings have shown promising results. However, such approaches model only a few explicit object classes, and the final results do not necessarily resemble high-quality real objects. Follow-up work can use end-to-end encoder-decoder networks to generate novel views of an image starting from a single viewpoint. However, due to the large variability, the results are usually low resolution. Some systems employ some notion of 3D geometry in the end-to-end process to deal with the 2D-3D object mapping. For instance, an explicit flow that maps pixels from the input image to the output novel view can be used. In Deep View Morphing, two input images and an explicit rectification stage, which roughly aligns the inputs, are used to generate intermediate views. Another trend explicitly employs multiview stereo in an end-to-end fashion to generate intermediate views of city landscapes.
[0178] 3D shape completion methods can use 3D filters to volumetrically complete 3D shapes. But given the cost of such filters both at training and at test time, these have shown low resolution reconstructions and performance far from real-time. PointProNets show results for denoising point clouds but again are computationally demanding, and do not consider the problem of texture reconstruction.
[0179] The problem considered herein can be related to the image-to-image translation task where the goal is to start from input images from a certain domain and “translate” them into another domain, e.g. from semantic segmentation labels to realistic images. The scenario described herein is similar, as we transform low quality 3D renderings into higher quality images. Despite the huge amount of work on the topic, it is still challenging to generate high quality renderings of people in real-time for performance capture. Contrary to previous work, we leverage recent advances in real-time volumetric capture and use these systems as input for our learning based framework to generate high quality, real-time renderings of people performing arbitrary actions.
[0180] In one aspect, the disclosure describes a system comprising a camera rig including at least one first camera configured to capture three dimensional (3D) video at a first quality, and at least one second camera configured to capture a two dimensional (2D) image at a second quality, the second quality being a higher quality than the first quality; and a processor configured to perform steps including: rendering a first digital image based on the captured 3D video, rendering a second digital image based on the captured 3D video, training a neural network to generate a third digital image based on the first digital image and the 2D image, the third digital image having a third quality, the third quality being a higher quality than the first quality, and training the neural network to generate a fourth digital image based on the second digital image and the 2D image, the fourth digital image having the third quality.
[0181] In another aspect, the disclosure describes a non-transitory computer-readable storage medium having stored thereon computer executable program code which, when executed on a computer system, causes the computer system to perform steps comprising: receiving a file including compressed three dimensional (3D) video data, the 3D video data including a plurality of frames of a 3D video; selecting a frame from the plurality of frames of the 3D video; decompressing the frame; rendering a first digital image based on the decompressed frame, the first digital image having a first quality; rendering a second digital image based on the decompressed frame, the second digital image having the first quality; generating a third digital image by re-rendering the first digital image using a trained neural network, the third digital image having a second quality, the second quality being a higher quality than the first quality; and generating a fourth digital image by re-rendering the second digital image using the trained neural network, the fourth digital image having the second quality.
[0182] In another aspect, the disclosure describes a method comprising a first phase and a second phase. In the first phase: capturing a three dimensional (3D) video at a first quality; capturing a two dimensional (2D) image at a second quality, the second quality being a higher quality than the first quality, a frame of the 3D video and the 2D image being captured at substantially the same moment in time; rendering a first digital image based on the captured 3D video; rendering a second digital image based on the captured 3D video; training a neural network to generate a third digital image based on the first digital image and the 2D image, the third digital image having a third quality, the third quality being a higher quality than the first quality; and training the neural network to generate a fourth digital image based on the second digital image and the 2D image, the fourth digital image having the third quality. In the second phase: receiving a file including compressed three dimensional (3D) video data, the 3D video data including a plurality of frames of a received 3D video; selecting a frame from the plurality of frames of the received 3D video; decompressing the frame; rendering a fifth digital image based on the decompressed frame, the fifth digital image having the first quality; rendering a sixth digital image based on the decompressed frame, the sixth digital image having the first quality; generating a seventh digital image by re-rendering the fifth digital image using the trained neural network, the seventh digital image having the third quality; and generating an eighth digital image by re-rendering the sixth digital image using the trained neural network, the eighth digital image having the third quality.
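The second (inference) phase above can be summarized structurally as follows. This is a minimal, runnable sketch under loudly stated assumptions: every name (render_view, EnhancerNet, inference_phase) is invented for illustration, the disclosure prescribes no API, and the "network" below is a trivial stand-in rather than the trained model.

```python
def render_view(frame, eye):
    """Stand-in renderer: produce a low-quality stereo view of a
    decompressed frame, tagged with the eye it is rendered for."""
    return {"frame": frame, "eye": eye, "quality": "low"}

class EnhancerNet:
    """Stand-in for the trained re-rendering neural network; a real
    implementation would map a low-quality render to an enhanced image."""
    def __call__(self, image):
        enhanced = dict(image)
        enhanced["quality"] = "high"   # re-rendered at the higher quality
        return enhanced

def inference_phase(model, decompressed_frames):
    """For each frame: render two stereo views (the fifth and sixth
    digital images) and re-render each through the network (yielding
    the seventh and eighth digital images)."""
    for frame in decompressed_frames:
        left = render_view(frame, "left")
        right = render_view(frame, "right")
        yield model(left), model(right)
```

The per-frame pairing of left and right views mirrors the claim structure: both stereo renders of a frame pass through the same trained network, which is what allows the system to account for consistency between the two views.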
[0183] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. Various implementations of the systems and techniques described here can be realized as and/or generally be referred to herein as a circuit, a module, a block, or a system that can combine software and hardware aspects. For example, a module may include the functions/acts/computer program instructions executing on a processor (e.g., a processor formed on a silicon substrate, a GaAs substrate, and the like) or some other programmable data processing apparatus.
[0184] Some of the above example embodiments are described as processes or methods depicted as flowcharts. Although the flowcharts describe the operations as sequential processes, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, subprograms, etc.
[0185] Methods discussed above, some of which are illustrated by the flow charts, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. A processor(s) may perform the necessary tasks.
[0186] Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. Example embodiments may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
[0187] It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term and/or includes any and all combinations of one or more of the associated listed items.
[0188] It will be understood that when an element is referred to as being connected or coupled to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being directly connected or directly coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., between versus directly between, adjacent versus directly adjacent, etc.).
[0189] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms a, an and the are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms comprises, comprising, includes and/or including, when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
[0190] It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0191] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0192] Portions of the above example embodiments and corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operation on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[0193] In the above illustrative embodiments, reference to acts and symbolic representations of operations (e.g., in the form of flowcharts) that may be implemented as program modules or functional processes include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and may be described and/or implemented using existing hardware at existing structural elements. Such existing hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs), application-specific-integrated-circuits, field programmable gate arrays (FPGAs) computers or the like.
[0194] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as processing or computing or calculating or determining or displaying or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0195] Note also that the software implemented aspects of the example embodiments are typically encoded on some form of non-transitory program storage medium or implemented over some type of transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or CD ROM), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The example embodiments are not limited by these aspects of any given implementation.
[0196] Lastly, it should also be noted that whilst the accompanying claims set out particular combinations of features described herein, the scope of the present disclosure is not limited to the particular combinations hereafter claimed, but instead extends to encompass any combination of features or embodiments herein disclosed irrespective of whether or not that particular combination has been specifically enumerated in the accompanying claims at this time.