Intel Patent | Camera Feature Removal From Stereoscopic Content

Patent: Camera Feature Removal From Stereoscopic Content

Publication Number: 20200294209

Publication Date: 2020-09-17

Applicants: Intel

Abstract

In stereoscopic cameras with wide fields of view, portions of the camera’s lenses can appear in the stereoscopic content the camera captures. These lens artifacts can be distracting to a viewer of the captured stereoscopic content. Revised stereoscopic content can be generated in which the lens artifacts in the left and right images of the stereoscopic content are replaced with image content that blends in with the remainder of the images. With the lens artifacts removed, the revised stereoscopic content provides a more immersive viewing experience. A lens artifact in one image of a stereoscopic image can be replaced by image data generated by an inpainting model, interpolated from a portion of the image region around the lens artifact, or based on the corresponding region of the other image in the stereoscopic image. Lens masks can define the portion of an image to be replaced.

BACKGROUND

[0001] Stereoscopic images create the perception of depth by presenting slightly different versions of an image to the left and right eyes of a user. Typically, the differences between the images are in the horizontal locations of objects in the images. When processed by the brain, these location differences create the perception of depth. Stereoscopic images and videos are often used in augmented reality and virtual reality applications.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] FIG. 1 is a block diagram of a first exemplary computing device.

[0003] FIGS. 2A-2C illustrate top, front, and side views of an exemplary stereoscopic camera.

[0004] FIG. 3A illustrates an exemplary stereoscopic image.

[0005] FIG. 3B shows a magnified version of the inset of FIG. 3A.

[0006] FIG. 4 is a portion of a left image of an exemplary stereoscopic image.

[0007] FIGS. 5A & 5B illustrate exemplary lens masks.

[0008] FIG. 6 is a first exemplary stereoscopic content generation method.

[0009] FIG. 7 is a second exemplary stereoscopic content generation method.

[0010] FIG. 8 is a stereoscopic content display method.

[0011] FIG. 9 is a block diagram of a second exemplary computing device.

[0012] FIG. 10 is a block diagram of a third exemplary computing device.

[0013] FIG. 11 is a block diagram of an exemplary processor core.

DETAILED DESCRIPTION

[0014] Depth cues can make images or videos appear more realistic. For example, a rendered virtual reality (VR) environment in which objects located further from the viewing location appear further away from the viewer can result in a more immersive VR experience. Generation of 360-degree content for VR applications is of interest given its wide field of view, but some existing mechanisms for 360-degree VR content capture provide limited depth cues. VR content generated by such mechanisms can thus result in a less immersive experience. Such approaches typically have the additional drawback that individual images need to be stitched together before the VR content can be viewed.

[0015] Devices exist that can generate stereoscopic content with a wide field of view. For example, devices compliant with the VR180 format can generate stereoscopic content having a horizontal field of view of substantially 180 degrees. This horizontal field of view is wider than that of typical existing head-mounted devices (HMDs). As such, while viewing such stereoscopic content on a typical HMD, a user can move their head to the left or right to reveal new content, similar to how new content comes into view in the real world.

[0016] One consequence of using a wide-angle camera to generate stereoscopic content is that a portion of one stereoscopic lens can be captured in images taken using the other lens. That is, the left image of a stereoscopic image can contain a portion of the right lens of the camera, and the right image can contain a portion of the left lens. The lens portions captured in the left and right images are referred to herein as lens artifacts. The presence of lens artifacts in stereoscopic images and videos can result in a less immersive experience for a viewer of the stereoscopic content. Lens artifacts can remind a viewer that they are viewing recorded content, and because a lens artifact is captured in only one of the left and right images, it can leave a viewer feeling disoriented. Narrowing the field of view of a stereoscopic camera may reduce or eliminate lens artifacts, but only at the cost of capturing less of the scene. The technologies described herein remove lens artifacts from stereoscopic images and replace them with image data that blends in with the remainder of the image while retaining the camera’s full field of view. The technologies described herein can remove lens artifacts from stereoscopic videos as well as stereoscopic images.

[0017] In the following description, specific details are set forth, but embodiments of the technologies described herein may be practiced without these specific details. Well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring an understanding of this description. “An embodiment,” “various embodiments,” “some embodiments,” and the like may include features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics.

[0018] Some embodiments may have some, all, or none of the features described for other embodiments. “First,” “second,” “third,” and the like describe a common object and indicate different instances of like objects being referred to. Such adjectives do not imply objects so described must be in a given sequence, either temporally or spatially, in ranking, or any other manner. “Connected” may indicate elements are in direct physical or electrical contact with each other and “coupled” may indicate elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact. Terms modified by the word “substantially” include arrangements, orientations, spacings, or positions that vary slightly from the meaning of the unmodified term. For example, a stereoscopic camera with a field of view of substantially 180 degrees includes cameras that have a field of view within a few degrees of 180 degrees.

[0019] The description may use the phrases “in an embodiment,” “in embodiments,” “in some embodiments,” and “in various embodiments,” each of which may refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.

[0020] Reference is now made to the drawings wherein similar or same numbers may be used to designate the same or similar parts in different figures. The use of similar or same numbers in different figures does not mean all figures including similar or same numbers constitute a single or same embodiment. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims.

[0021] FIG. 1 is a block diagram of a first exemplary computing device in which the technologies described herein may be implemented. The computing device 100 comprises a stereoscopic camera 110, one or more processors 120, and computer-readable storage media 130. The computing device 100 can be any of a wide variety of devices, such as a smartphone, VR headset, HMD, optical head-mounted display (smart glasses), mobile laptop computer, desktop computer, tablet, smart display, security camera, drone, vehicle, or any other device comprising a stereoscopic camera. Although computing device 100 is shown as having the stereoscopic camera 110 as part of the computing device 100, the computing device 100 can be a standalone stereoscopic camera, such as the stereoscopic camera illustrated in FIGS. 2A-2C. In such embodiments, the stereoscopic camera 110 refers to stereoscopic camera components (e.g., image sensors, lenses).

[0022] The computing device 100 can generate stereoscopic images and videos through the use of the stereoscopic camera 110. As used herein, the term “content” can refer to an image, a portion of an image, multiple images, a video, a portion of a video, or multiple videos. The stereoscopic camera 110 comprises a left image sensor 140, a left lens 145, a right image sensor 150, and a right lens 155. The left and right image sensors 140 and 150 produce left and right image sensor data, respectively. Left and right image sensor data are used to generate images from which stereoscopic images or videos can be generated. A stereoscopic image can comprise a left image and a right image. A stereoscopic video comprises a series of stereoscopic images (or frames) and an image in a stereoscopic video comprises a left image and a right image. The series of left images in a stereoscopic video can be referred to as a left video and the series of right images in a stereoscopic video can be referred to as a right video.
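
To make this terminology concrete, a minimal Python sketch of these structures follows; the class and field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class StereoImage:
    """One stereoscopic image (or one frame of a stereoscopic video):
    a left image paired with a right image."""
    left: np.ndarray   # HxWx3 image derived from the left image sensor
    right: np.ndarray  # HxWx3 image derived from the right image sensor

@dataclass
class StereoVideo:
    """A stereoscopic video as a series of stereoscopic frames; the left
    images form the left video and the right images the right video."""
    frames: List[StereoImage]
```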

[0023] In some embodiments, stereoscopic images and videos are generated by the stereoscopic camera 110. In other embodiments, stereoscopic content is generated by the one or more processors 120 based on left and right image sensor data or left and right images provided to the one or more processors 120 by the stereoscopic camera 110. Reference to stereoscopic content that is generated by a stereoscopic camera, generated using a stereoscopic camera, or captured by a stereoscopic camera refers to stereoscopic content that is provided by a stereoscopic camera or stereoscopic content generated by components (e.g., one or more processors 120 of FIG. 1) other than the stereoscopic camera from information provided by a stereoscopic camera, such as image sensor data generated by the stereoscopic camera.

[0024] In some embodiments, stereoscopic content comprises digital images and videos and can conform to any image or video format, such as JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), GIF (Graphics Interchange Format), and PNG (Portable Network Graphics) image formats; and AVI (Audio Video Interleave), WMV (Windows Media Video), any of the various MPEG (Moving Picture Experts Group) formats (e.g., MPEG-1, MPEG-2, MPEG-4), QuickTime, and 3GPP (3rd Generation Partnership Project) video formats.

[0025] The stereoscopic camera 110 can be incorporated into the computing device 100 (such as in a head-mounted device (HMD), or smartphone) or communicatively coupled to the computing device 100 through a wired or wireless connection. In some embodiments the one or more processors 120 can comprise one or more artificial intelligence (AI) accelerators that implement inpainting models that can generate revised stereoscopic content.

[0026] In some embodiments, stereoscopic content can capture portions of the camera 110 that protrude from a surface of the camera 110 or the device 100. For example, in embodiments where the camera 110 is a wide-angle stereoscopic camera with left and right lenses 145 and 155 that protrude from a front surface of the camera, a portion of the left and right lenses 145 and 155 can be captured in stereoscopic images or videos captured by the camera 110. As will be discussed in greater detail below, the one or more processors 120 can take a stereoscopic image or video containing lens artifacts and generate a revised stereoscopic image or video in which the lens artifacts have been removed and replaced with content that blends in with the remainder of the stereoscopic image or video.

[0027] Stereoscopic content and revised stereoscopic content can be stored in the computer-readable storage media 130, which can be a removable memory card (e.g., Secure Digital (SD) memory card), module, stick, or any other type of removable or non-removable computer-readable storage media described herein. The computing device 100 can further optionally comprise a display 160, a battery 170, a network interface 180, and an antenna 185. In some embodiments, the device 100 is an HMD comprising a screen upon which stereoscopic content can be displayed. In other embodiments, the device 100 is a smartphone and left and right images or left and right videos are shown on left and right portions of the smartphone display, respectively. In such embodiments, the smartphone can be positioned within a virtual reality viewer or other device that, when looked into by a viewer, limits the left eye to seeing only the left portion of the smartphone display and limits the right eye to seeing only the right portion. The network interface 180 allows the computing device 100 to communicate in wired or wireless fashion with other computing devices, such as the remote system 190 or a remote display device 199, using any communication interface, protocol, technology, or combinations thereof. The antenna 185 enables wireless communications between the device 100 and other computing devices.

[0028] The computing device 100 can send stereoscopic content to the remote system 190 for the generation of revised stereoscopic content. The remote system 190 can be a smartphone 192, laptop 194, personal computer 196, server 197, or any other computing device. The generation of revised stereoscopic images and videos by the remote system 190 can be performed during the post-processing of stereoscopic content captured by the device 100. That is, the remote system 190 can store the received stereoscopic content and generate revised stereoscopic content at any later time. Revised stereoscopic images and videos generated by the remote system 190 can be sent back to the capturing device (i.e., device 100) or the remote display device 199 for display or storage. Revised stereoscopic content can be sent to a remote storage 198 for later retrieval by any of the devices shown in FIG. 1. The remote display device 199 can be any type of device described or referenced herein that has a display. In some embodiments, the remote display device 199 comprises an accelerometer 188 and a gyroscope 189, which the remote display device 199 can use to determine its orientation.

[0029] FIG. 1 thus illustrates various combinations of devices that can be used to capture stereoscopic content, remove lens artifacts from the stereoscopic content to create revised stereoscopic content, and display the revised stereoscopic content. In some embodiments, the computing device 100 performs all three tasks. In other embodiments, the computing device 100 captures stereoscopic images and videos, generates revised stereoscopic images and videos, and sends the revised stereoscopic images and videos to the remote display device 199 for display. In yet other embodiments, the computing device 100 captures stereoscopic content and sends the captured content to the remote system 190. The remote system 190 then generates the revised stereoscopic images and videos and sends the revised stereoscopic content to the remote display device 199 for display. The remote display device 199 can access revised stereoscopic content stored at the remote storage 198 to view the stored stereoscopic content on demand. In some embodiments, revised stereoscopic content received at the remote display device 199 can be stored at the remote display device 199 for later viewing. In some embodiments, revised stereoscopic content can be generated from stereoscopic content stored at a playback device, such as device 100 or remote display device 199, upon playback.

[0030] In an embodiment involving all of the devices illustrated in FIG. 1, the computing device 100 can be a stereoscopic camera, the remote system 190 and the remote storage 198 can be part of a cloud-based storage and image processing service, and the remote display device 199 can be an HMD. The stereoscopic camera captures stereoscopic content and delivers the captured content to the cloud-based service, which removes the lens artifacts and stores the revised stereoscopic content. The HMD then retrieves the revised stereoscopic content from the remote storage service and displays it for viewing.

[0031] The display of revised stereoscopic content can be done in real-time or at a later time after capture. As used herein, the term “real-time” in the context of displaying revised stereoscopic images refers to displaying revised stereoscopic content quickly enough after capture that a user is unlikely to notice the delay between capture and display or to suffer motion sickness or other physical effects due to the delay. For example, in a “see-through” augmented reality (AR) embodiment, an HMD with an integrated stereoscopic camera can generate and display revised stereoscopic content quickly enough after capture that the user feels they are seeing a live view of what the camera is viewing. Additional content can be added to the revised stereoscopic content before it is shown on the display to enable augmented reality use cases.

[0032] FIGS. 2A-2C illustrate top, front, and side views of an exemplary stereoscopic camera. The stereoscopic camera 200 is a wide-field camera with a field of view of substantially 180 degrees and can generate stereoscopic images and videos. The camera 200 can capture images and videos in a variety of formats, including HDR (high dynamic range). The camera 200 comprises a right portion 210 and a left portion 220. The right portion 210 comprises a right face 230 from which a right lens 240 protrudes and the left portion 220 comprises a left face 250 from which a left lens 260 protrudes. The left lens 260 and the right lens 240 are spaced apart approximately the same distance as a pair of human eyes and the left face 250 and the right face 230 lie in the same plane. As such, the camera 200 captures stereoscopic content in a manner mimicking the way humans view the world. With the camera 200 having a field of view of substantially 180 degrees, a portion of the left lens 260 is included in images or videos captured using the right lens 240 and a portion of the right lens 240 is included in images or videos captured using the left lens 260. In some embodiments, the camera 200 can generate stereoscopic content without having to stitch together individual images or videos before the captured content can be viewed. In some embodiments, the camera 200 can produce stereoscopic content according to the VR180 format.

[0033] The camera 200 further comprises a left image sensor associated with the left lens 260 and a right image sensor associated with the right lens 240 (image sensors not shown). The camera 200 further accommodates removable storage media, such as Secure Digital (SD) (e.g., SD, SD High Capacity (SDHC), SD Extended Capacity (SDXC)) or Compact Flash (CF) memory cards for storing stereoscopic content, revised stereoscopic content, or any other content captured or generated by the camera 200. The camera 200 further comprises one or more processors that can process stereoscopic images and videos generated by the camera to produce revised stereoscopic images and videos using the technologies described herein. The camera 200 further comprises a network interface that allows the camera 200 to communicate with other devices via one or more wired or wireless interfaces. The camera 200 further comprises an antenna to enable wireless communication and a rechargeable battery.

[0034] FIG. 3A illustrates an exemplary stereoscopic image. The stereoscopic image 300 comprises a left image 310 and a right image 320. A left lens artifact 330 can be seen at the left edge of the right image 320 and a right lens artifact 340 can be seen at the right edge of the left image 310. Inset 350 shows the lens artifacts 330 and 340. FIG. 3B shows a magnified version of the inset 350.

[0035] FIG. 4 is a portion of a left image of an exemplary stereoscopic image. The left image portion 400 comprises a right lens artifact 410. FIG. 4 illustrates that the size of a lens artifact can be perceived to be on the scale of adjacent features in the image. For example, the right lens artifact 410 is adjacent to a window of an office building located in the background. The window appears to span most of the height of the third story of the building, and the right lens artifact 410 can thus be perceived to be a meter-sized object. The presence of an artifact with such a large perceived size can be distracting to a viewer and remind them that they are viewing a reproduction of the real world. A viewer of stereoscopic content containing lens artifacts can be further distracted by the fact that artifacts are not captured stereoscopically. That is, the left lens artifact is only captured in the right image and the right lens artifact is only captured in the left image. Having an item in a stereoscopic image presented to one eye and not the other can be disorienting to a user.

[0036] The technologies described herein remove lens artifacts captured in stereoscopic images and videos and replace them with content that blends in with the remainder of the image to create revised stereoscopic images and videos. As used herein, the term “blends in,” with reference to content that replaces a lens artifact in an image, means that the replacement content more closely matches or better fits the image than the lens artifact. In some embodiments, the content replacing a lens artifact can be content that a user would have expected to see had the lens responsible for the artifact not been in the way. For example, with reference to FIGS. 3A & 3B, the lens artifacts 330 and 340 can be replaced with content that resembles the exterior of a white office building with floor-to-ceiling windows. The term “blends in” does not mean that the content replacing a lens artifact needs to be a perfect, exact, or even very good match for the remainder of the stereoscopic image. Content replacing a lens artifact blends in with a stereoscopic image or video if it is a closer match or better fit for the stereoscopic image than the replaced lens artifact. As used herein, the term “left image content” refers to content replacing a right lens artifact in a left image and the term “right image content” refers to content replacing a left lens artifact in a right image.

[0037] In some embodiments, a lens artifact is replaced with content that blends in with a stereoscopic image or video via inpainting. As used herein, the term “inpainting” refers to the process of filling in one or more missing portions or replacing one or more portions of an image or video with content generated based on the remainder of the image or video (or a portion thereof). In some embodiments, inpainting is performed using artificial intelligence approaches. For example, an image or image portion with one or more missing portions or portions marked for replacement can be provided as input to a model that can perform inpainting on images and the model can output a revised image with the missing portions filled in or the marked portions replaced with content that blends in with the remainder of the image. Such models can be referred to herein as inpainting models. In this way, revised stereoscopic content can be generated. In some embodiments, an inpainting model can be a trained machine learning model, such as a trained convolutional neural network. In other embodiments, the model can be based on a generative adversarial network (GAN).
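
As a rough illustration of the replace-by-inpainting step, the sketch below uses OpenCV’s classical (non-learned) inpainting in place of a trained inpainting model; the file paths and inpainting radius are assumptions:

```python
import cv2
import numpy as np

def remove_lens_artifact(image: np.ndarray, lens_mask: np.ndarray) -> np.ndarray:
    """Replace the masked lens-artifact region with content that blends in
    with the remainder of the image.

    image: HxWx3 8-bit image (a left or right image of a stereoscopic pair).
    lens_mask: HxW 8-bit mask, nonzero where the lens artifact appears.
    """
    # cv2.inpaint is a classical diffusion-based algorithm; a trained CNN
    # or GAN inpainting model could be substituted at this step.
    return cv2.inpaint(image, lens_mask, 5, cv2.INPAINT_TELEA)

# Hypothetical usage: the right lens artifact appears in the left image.
left = cv2.imread("left.png")
mask = cv2.imread("right_lens_mask.png", cv2.IMREAD_GRAYSCALE)
revised_left = remove_lens_artifact(left, mask)
```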

[0038] In some embodiments, a device that captures stereoscopic content and generates revised stereoscopic content can utilize trained inpainting models to do so. The inpainting models can be implemented in dedicated hardware, such as one or more GPUs (graphics processing units), FPGAs (field-programmable gate arrays), or AI accelerators. Such dedicated hardware can be located in any device described or referenced herein that generates revised stereoscopic content.

[0039] In some embodiments, performing inpainting on one image (left/right) of a stereoscopic image can be based on a portion of the other image in the stereoscopic image (right/left). This approach takes advantage of the fact that lens artifacts are not captured stereoscopically. For example, a right lens artifact in a left image can be replaced with content based on a portion of the right image that corresponds to where the right lens artifact resides in the left image. Referring back to FIGS. 3A & 3B, the content replacing the right lens artifact 340 can be based on a portion 360 of the right image 320. Similarly, content replacing the left lens artifact 330 can be based on a portion 370 of the left image 310.
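
A minimal sketch of this cross-image replacement, assuming a fixed horizontal disparity between the two views (a real system would warp using the calibrated lens geometry, as discussed below):

```python
import numpy as np

def replace_from_other_image(dst: np.ndarray, src: np.ndarray,
                             mask: np.ndarray, disparity_px: int) -> np.ndarray:
    """Fill the masked artifact region of dst with pixels sampled from the
    other image of the stereoscopic pair at horizontally offset locations."""
    out = dst.copy()
    ys, xs = np.nonzero(mask)
    # Shift the sample coordinates by an assumed constant disparity and
    # clamp them to the image bounds.
    src_xs = np.clip(xs + disparity_px, 0, src.shape[1] - 1)
    out[ys, xs] = src[ys, src_xs]
    return out
```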

[0040] In some embodiments, inpainting comprises copying information from one image to another. For example, the left lens artifact 330 in FIG. 3A can be replaced with content from the region 370. In other embodiments, inpainting can comprise transforming content from the image space of a source image (left/right) to the image space of the destination image (right/left). This is necessary because the left and right lenses in a stereoscopic camera are physically offset, typically by a distance roughly equal to the spacing between the left and right eyes of a human. The transformation can take into account one or more intrinsic or extrinsic values of the stereoscopic lenses, such as focal length, optical center, radial distortion coefficients, and information indicating the orientation (e.g., rotation and translation) of the camera with respect to a world coordinate system. In some embodiments, intrinsic lens values related to distortion effects can be derived from images taken by the camera of calibration patterns, such as checkerboard patterns.
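
The checkerboard-based derivation of intrinsic values mentioned above can be sketched with OpenCV’s standard calibration routines; the 9x6 inner-corner count and file paths are assumptions for illustration:

```python
import glob

import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the assumed checkerboard target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix holds the focal length and optical center; dist_coeffs
# holds the radial (and tangential) distortion coefficients.
_, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```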

[0041] In some embodiments, inpainting comprises providing content taken from a source image to an inpainting model to generate the content to replace a lens artifact. For example, inpainting a left image of a stereoscopic image can comprise providing to an inpainting model the left image (or a portion thereof) along with a portion of the right image corresponding to the region of the left image occupied by the right lens artifact.

[0042] In embodiments involving the use of inpainting models, only a portion of the image may be provided to the inpainting model. For example, with reference to FIG. 3A, only a portion of the left image 310 may be provided to an inpainting model to remove or replace the right lens artifact 340. The size of the image portion provided to an inpainting model can depend upon factors such as the size of a left or right image, characteristics of the model (e.g., speed, complexity), and the size of the image portions that are missing or to be replaced. An inpainting model that takes as input a portion of a left or right image may be implemented using fewer computing resources or be faster than a model that takes an entire left or right image as input.
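
One way to hand the model only a portion of the image is to crop a padded bounding box around the lens mask, as in the sketch below (the padding margin is an assumption):

```python
import numpy as np

def crop_for_inpainting(image: np.ndarray, mask: np.ndarray, pad: int = 32):
    """Crop a padded bounding box around the lens mask so that only a
    portion of the image is provided to the inpainting model; the crop
    offsets are returned so the inpainted patch can be pasted back."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, y1, x0, x1)
```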

[0043] In some embodiments, the portion of an image to be inpainted can be defined by a mask. FIGS. 5A & 5B illustrate exemplary lens masks. Image portion 500 shows a portion of the right image 320 of FIGS. 3A & 3B and comprises a left lens mask 510. The left lens mask 510 identifies a region of the right image 320 to be inpainted.

[0044] In some embodiments, as the location, size, and shape of a lens artifact in stereoscopic content are fixed for a particular camera, the shape, size, and location of a lens mask for the particular camera can be fixed as well. Lens mask information can identify a lens mask in various fashions. For example, lens mask information can comprise a plurality of (x,y) coordinates that define a polygon. Although the mask 510 in FIG. 5A is a partial ellipse in shape and generally tracks the shape of the left lens artifact 330, a lens mask can take any shape. In some embodiments, the mask can be a rectangle defined by a set of (x,y) coordinates or by an origin (x,y) coordinate and height and width dimensions.
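
A polygon-style lens mask of the kind described above can be rasterized into a binary mask image in a few lines; the coordinates below are hypothetical:

```python
import cv2
import numpy as np

def rasterize_lens_mask(polygon_xy, height: int, width: int) -> np.ndarray:
    """Turn lens mask information stored as a list of (x, y) coordinates
    into a binary mask image marking the fixed artifact region."""
    mask = np.zeros((height, width), dtype=np.uint8)
    pts = np.array(polygon_xy, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)
    return mask

# Hypothetical partial-ellipse-like polygon near the left edge of an image.
mask = rasterize_lens_mask(
    [(0, 800), (90, 860), (120, 960), (90, 1060), (0, 1120)],
    height=1920, width=1920)
```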

[0045] In other examples, a mask can be a lens mask image in which pixels having one or more specified characteristics specify the region to be filled in or replaced during inpainting. For example, mask 550 of FIG. 5B is an image in which dark pixels 560 indicate the pixels that are to be replaced during inpainting. In some embodiments, a lens mask image can be combined (e.g., via a logical OR operation) with an image to be inpainted and the resulting combined image can be provided to an inpainting model. In other embodiments, an image to be inpainted and a lens mask image can be provided separately to an inpainting model. In yet other embodiments, a region of an image can be removed based on a lens mask and the resulting image with a missing portion to be filled can be provided to an inpainting model.
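
The three ways of pairing a lens mask image with the image to be inpainted might look like the following sketch (file paths are assumptions):

```python
import cv2
import numpy as np

image = cv2.imread("right.png")
mask = cv2.imread("left_lens_mask.png", cv2.IMREAD_GRAYSCALE)

# Option 1: combine the mask with the image via a logical OR (masked
# pixels are forced to white) and hand the combined image to the model.
combined = cv2.bitwise_or(image, cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR))

# Option 2: provide the image and the lens mask image separately to the
# inpainting model (model interface omitted here).

# Option 3: remove the masked region outright and provide the image with
# the missing portion to be filled in.
removed = image.copy()
removed[mask > 0] = 0
```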

[0046] In some embodiments, lens mask information can be stored with stereoscopic content for use during post-processing or playback. Lens mask information can be stored as part of the stereoscopic content as metadata or otherwise. In some embodiments, lens mask information can be provided to a device that is to generate revised stereoscopic content, such as the remote system 190 in FIG. 1.
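
A hypothetical sidecar file illustrating how lens mask information might travel with captured stereoscopic content; the field names and polygon coordinates are invented for illustration:

```python
import json

lens_mask_info = {
    "left_image_mask": {   # region of the left image hiding the right lens
        "type": "polygon",
        "points": [[1800, 800], [1710, 860], [1680, 960],
                   [1710, 1060], [1800, 1120]],
    },
    "right_image_mask": {  # region of the right image hiding the left lens
        "type": "polygon",
        "points": [[0, 800], [90, 860], [120, 960],
                   [90, 1060], [0, 1120]],
    },
}

with open("clip0001_maskinfo.json", "w") as f:
    json.dump(lens_mask_info, f, indent=2)
```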

[0047] In some embodiments, stereoscopic content is played back on a device with a display having a field of view less than that of the stereoscopic content. For example, existing HMDs or other VR viewers generally have a field of view (FOV) narrower than that of wide-angle stereoscopic cameras, such as cameras conforming to the VR180 format. If a user is viewing stereoscopic content having a FOV greater than the display FOV of the device showing the stereoscopic content, lens artifacts would be displayed when the viewer looks to the far left (and would see the left lens artifact in the right image with their right eye) or to the far right (and would see the right lens artifact in the left image with their left eye). Lens artifacts would not be displayed when a viewer is looking generally straight ahead or moves their head to the left or the right within a certain range from center. In such embodiments, a device can generate and display revised stereoscopic content if it determines that regions of stereoscopic content containing lens artifacts would otherwise be shown on the display. In some embodiments, a device can make such a determination based on the orientation of the viewing device. The orientation of a viewing device can be determined based on sensor data provided by one or more viewing device sensors, such as an accelerometer or gyroscope.
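
A simplified sketch of the orientation check: given a head yaw angle derived from the viewing device’s accelerometer and gyroscope, decide whether the visible window overlaps an artifact region near either edge of the content (all FOV and margin values below are assumptions):

```python
def artifact_would_be_visible(yaw_deg: float,
                              display_fov_deg: float = 90.0,
                              content_fov_deg: float = 180.0,
                              artifact_margin_deg: float = 10.0) -> bool:
    """Return True if the current head orientation would bring a lens
    artifact region into view, in which case revised stereoscopic content
    should be generated and displayed instead."""
    half_view = display_fov_deg / 2.0
    # Artifacts sit within artifact_margin_deg of the content's far edges.
    artifact_edge = content_fov_deg / 2.0 - artifact_margin_deg
    return (yaw_deg + half_view > artifact_edge or
            yaw_deg - half_view < -artifact_edge)

# Looking straight ahead keeps artifacts out of view; looking far right
# brings the right-edge artifact region into the visible window.
assert not artifact_would_be_visible(0.0)
assert artifact_would_be_visible(50.0)
```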

[0048] As previously discussed, the inpainting of stereoscopic content to remove and replace lens artifacts can be performed locally at a capturing device (e.g., device 100 of FIG. 1) or remotely at a remote device or system (e.g., remote system 190). The generation of revised stereoscopic content can comprise splitting a stereoscopic image into its constituent left and right images, generating revised left and right images using the technologies described herein, and recombining the revised left and right images to generate revised stereoscopic content. In some embodiments, lens artifacts are removed from stereoscopic video by converting the video into individual frames, removing the lens artifacts from the individual frames, and converting the individual frames back to a video.
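
A sketch of this split-revise-recombine flow for side-by-side stereoscopic video, again using OpenCV’s classical inpainting as a stand-in for an inpainting model; the file names, masks, and frame rate are assumptions:

```python
import cv2
import numpy as np

# The right lens artifact appears in the left image and vice versa.
left_mask = cv2.imread("right_lens_artifact_mask.png", cv2.IMREAD_GRAYSCALE)
right_mask = cv2.imread("left_lens_artifact_mask.png", cv2.IMREAD_GRAYSCALE)

def revise_stereo_frame(frame: np.ndarray) -> np.ndarray:
    """Split a side-by-side stereoscopic frame into its constituent left
    and right images, inpaint the lens artifact in each, and recombine."""
    w = frame.shape[1] // 2
    left, right = frame[:, :w], frame[:, w:]
    left = cv2.inpaint(left, left_mask, 5, cv2.INPAINT_TELEA)
    right = cv2.inpaint(right, right_mask, 5, cv2.INPAINT_TELEA)
    return np.hstack([left, right])

# Convert the video to individual frames, revise each frame, re-encode.
cap = cv2.VideoCapture("stereo.mp4")
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    revised = revise_stereo_frame(frame)
    if writer is None:
        h, w = revised.shape[:2]
        writer = cv2.VideoWriter("revised.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    writer.write(revised)
cap.release()
writer.release()
```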

[0049] In some embodiments, the technologies described herein can be used to remove additional artifacts from stereoscopic content. These additional artifacts can be caused by the presence of camera or device features other than lenses, such as buttons, switches, latches, housing portions, or any other camera or device feature located in the camera’s field of view. The technologies disclosed herein can be used to generate revised stereoscopic content in which these additional artifacts are removed and replaced with content that blends in with the stereoscopic content.

[0050] FIG. 6 illustrates a first exemplary stereoscopic content generation method. The method 600 can be performed by, for example, an HMD with an integrated stereoscopic camera that conforms to the VR180 format. At 610, a stereoscopic image is generated using a stereoscopic camera. The stereoscopic image comprises a left image and a right image. The left image includes a right lens artifact and the right image includes a left lens artifact. At 620, a revised stereoscopic image is generated by replacing the right lens artifact with left image content that blends in with the left image and replacing the left lens artifact with right image content that blends in with the right image. In other embodiments, the exemplary method illustrated in FIG. 6 can comprise fewer, alternative, or more actions than those shown and described above. For example, the method 600 can further comprise displaying the revised stereoscopic content on a display.

[0051] FIG. 7 is a second exemplary stereoscopic content generation method. The method 700 can be performed by, for example, an HMD that is displaying stereoscopic content. At 710, an orientation of a device comprising a display having a display field of view (FOV) is determined. At 720, a portion of stereoscopic content to be shown on the display is determined based on the orientation of the device, the stereoscopic content having a stereoscopic content FOV greater than the display FOV. At 730, if the portion of the stereoscopic content to be shown on the display includes at least a portion of a lens artifact, revised stereoscopic content is generated by replacing the lens artifact with content that blends in with the stereoscopic content, and the revised stereoscopic content is shown on the display instead of the stereoscopic content. At 740, if the portion of the stereoscopic content to be shown on the display does not include at least a portion of the lens artifact, the stereoscopic content is shown on the display.

[0052] FIG. 8 is a stereoscopic content display method. The method 800 can be performed by, for example, a cloud-based image or video processing service. At 810, a revised stereoscopic image is generated from a stereoscopic image captured with a stereoscopic camera, the stereoscopic image comprising a left image and a right image, the left image including a right lens artifact, the right image including a left lens artifact. The generating comprises replacing the right lens artifact with left image content that blends in with the left image and replacing the left lens artifact with content that blends in with the right image. At 820, the revised stereoscopic image is stored.

[0053] FIG. 9 is a block diagram of a second exemplary computing device 900 in which the technologies described herein may be implemented. The computing device 900 comprises a stereoscopic camera 910 and various modules and additional components. The stereoscopic camera 910 comprises a left image sensor 915 that receives light that passes through a left lens 920 and a right image sensor 925 that receives light that passes through a right lens 930. A stereoscopic generation module 940 generates stereoscopic content based on image sensor data provided by the stereoscopic camera 910. A revised stereoscopic generation module 950 can replace lens artifacts in stereoscopic content with image content that blends in with the remainder of the stereoscopic content. The revised stereoscopic generation module 950 comprises an inpainting module 960 that can be utilized to replace the lens artifacts in stereoscopic content using any of the inpainting approaches described or referenced herein. The computing device 900 further comprises a display 970, a rechargeable battery 980, a network interface 990, an antenna 994, and a stereoscopic content storage 998. The storage 998 can store stereoscopic content generated by the stereoscopic generation module 940 or the revised stereoscopic generation module 950, lens mask information, or any other information generated or used by the computing device 900.

[0054] FIG. 9 illustrates one example of a set of modules that can be included in a computing device in which the technologies described herein may be implemented. In other embodiments, a computing device can have more or fewer modules than those shown in FIG. 9. Further, separate modules can be combined into a single module, and a single module can be split into multiple modules. Moreover, any of the modules shown in FIG. 9 can be part of the operating system of the computing device 900, one or more software applications independent of the operating system, or operate at another software layer. The modules shown in FIG. 9 can be implemented in software, hardware, firmware, or combinations thereof. A computing device referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.

……