
Patent: Systems And Methods For Extracting A Surface Normal From A Depth Image

Publication Number: 20200242779

Publication Date: 20200730

Applicants: Qualcomm

Abstract

A method performed by an electronic device is described. The method includes obtaining a two-dimensional (2D) depth image. The method also includes extracting a 2D subset of the depth image. The 2D subset includes a center pixel and a set of neighboring pixels. The method further includes calculating a normal corresponding to the center pixel by calculating a covariance matrix based on the 2D subset.

FIELD OF DISCLOSURE

[0001] The present disclosure relates generally to electronic devices. More specifically, the present disclosure relates to systems and methods for extracting a surface normal from a depth image.

BACKGROUND

[0002] Some electronic devices (e.g., cameras, video camcorders, digital cameras, cellular phones, smart phones, computers, televisions, automobiles, personal cameras, wearable cameras, virtual reality devices (e.g., headsets), augmented reality devices (e.g., headsets), mixed reality devices, action cameras, surveillance cameras, mounted cameras, connected cameras, robots, drones, healthcare equipment, set-top boxes, etc.) capture and/or utilize sensor data. For example, a smart phone may capture and/or process still and/or video images. Processing sensor data may demand a relatively large amount of time, memory, and energy resources. The resources demanded may vary in accordance with the complexity of the processing.

[0003] In some cases, processing sensor data may consume a large amount of resources. As can be observed from this discussion, systems and methods that improve sensor data processing may be beneficial.

SUMMARY

[0004] A method performed by an electronic device is described. The method includes obtaining a two-dimensional (2D) depth image. The method also includes extracting a 2D subset of the depth image. The 2D subset includes a center pixel and a set of neighboring pixels. The method further includes calculating a normal corresponding to the center pixel by calculating a covariance matrix based on the 2D subset.

[0005] The method may include removing one or more background pixels from the 2D subset to produce a trimmed 2D subset. The normal may be calculated based on the trimmed 2D subset. Calculating the normal may include performing sharpening by calculating a difference between a neighboring pixel value and a center pixel value and calculating the covariance matrix based on the difference. Calculating the covariance matrix may be based on the difference and a transpose of the difference.
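
As a rough illustration of the difference-based ("sharpening") covariance described above, here is a minimal sketch assuming the 2D subset has already been lifted to 3D points; the array names are illustrative and not taken from the disclosure.

```python
import numpy as np

def covariance_from_differences(lifted_neighbors, lifted_center):
    """Covariance of a lifted 2D subset relative to its center pixel.

    lifted_neighbors: (M, 3) array of neighboring pixels lifted to 3D.
    lifted_center: (3,) array, the lifted center pixel value.
    """
    # "Sharpening": difference between each neighboring pixel value
    # and the center pixel value.
    diffs = lifted_neighbors - lifted_center      # (M, 3)
    # Covariance built from the differences and their transpose.
    return diffs.T @ diffs                        # (3, 3)
```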

[0006] Calculating the normal corresponding to the center pixel may include determining an eigenvector of the covariance matrix. The eigenvector may be associated with a smallest eigenvalue of the covariance matrix. Calculating the normal corresponding to the center pixel may include lifting the 2D subset into a three-dimensional (3D) space.
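
The eigenvector associated with the smallest eigenvalue can then be read off a symmetric eigendecomposition; a minimal sketch follows (np.linalg.eigh returns eigenvalues in ascending order).

```python
import numpy as np

def normal_from_covariance(cov):
    """Return the unit eigenvector of a 3x3 covariance matrix associated
    with its smallest eigenvalue, used here as the normal estimate."""
    _, eigenvectors = np.linalg.eigh(cov)   # columns ordered by ascending eigenvalue
    normal = eigenvectors[:, 0]             # eigenvector for the smallest eigenvalue
    return normal / np.linalg.norm(normal)
```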

[0007] The method may include extracting a set of 2D subsets of the depth image that includes the 2D subset. The set of 2D subsets may correspond to foreground pixels of the depth image. The method may include calculating a set of normals corresponding to the set of 2D subsets. A time complexity of extracting the set of 2D subsets and calculating the set of normals may be on an order of a number of the 2D subsets multiplied by a time complexity of calculating an eigenvector.

[0008] The method may include generating a surface based on the normal corresponding to the center pixel. The method may include registering the 2D depth image with a second depth image based on the normal corresponding to the center pixel.

[0009] An electronic device is also described. The electronic device includes a memory. The electronic device also includes a processor coupled to the memory. The processor is configured to obtain a two-dimensional (2D) depth image. The processor is also configured to extract a 2D subset of the depth image. The 2D subset includes a center pixel and a set of neighboring pixels. The processor is further configured to calculate a normal corresponding to the center pixel by calculating a covariance matrix based on the 2D subset.

[0010] A non-transitory tangible computer-readable medium storing computer executable code is also described. The computer-readable medium includes code for causing an electronic device to obtain a two-dimensional (2D) depth image. The computer-readable medium also includes code for causing the electronic device to extract a 2D subset of the depth image. The 2D subset includes a center pixel and a set of neighboring pixels. The computer-readable medium further includes code for causing the electronic device to calculate a normal corresponding to the center pixel by calculating a covariance matrix based on the 2D subset.

[0011] An apparatus is also described. The apparatus includes means for obtaining a two-dimensional (2D) depth image. The apparatus also includes means for extracting a 2D subset of the depth image. The 2D subset includes a center pixel and a set of neighboring pixels. The apparatus further includes means for calculating a normal corresponding to the center pixel by calculating a covariance matrix based on the 2D subset.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 is a block diagram illustrating one example of an electronic device in which systems and methods for extracting a surface normal from a depth image may be implemented;

[0013] FIG. 2 is a flow diagram illustrating one configuration of a method for extracting a surface normal from a depth image;

[0014] FIG. 3 is a diagram illustrating an example of a two-dimensional (2D) subset of a depth image;

[0015] FIG. 4 is a flow diagram illustrating one configuration of another method for extracting a surface normal from a depth image;

[0016] FIG. 5 is a diagram illustrating another example of a 2D subset of a depth image;

[0017] FIG. 6 is a flow diagram illustrating another configuration of a method for extracting a surface normal from a depth image;

[0018] FIG. 7 is a flow diagram illustrating another configuration of a method for extracting a surface normal from a depth image;

[0019] FIG. 8 is a flow diagram illustrating another configuration of a method for extracting a surface normal from a depth image;

[0020] FIG. 9 is a diagram illustrating an example of a depth image visualization and a surface normal visualization; and

[0021] FIG. 10 illustrates certain components that may be included within an electronic device configured to implement various configurations of the systems and methods disclosed herein.

DETAILED DESCRIPTION

[0022] Some configurations of the systems and methods disclosed herein may relate to fast surface normal extraction from a depth image. As used herein, a “normal” is a vector or an estimate of a vector that is perpendicular to a surface or plane. A “surface normal” is an estimate of a vector that is perpendicular to a surface. A depth image may be a two-dimensional (2D) set of depth values. For example, a depth sensor may capture a depth image by determining a distance between the depth sensor and the surface of one or more objects in an environment for a set of pixels. A surface normal may be estimated as a vector that is perpendicular to the surface. The surface normal may be utilized in computer vision, to render a representation of the surface (e.g., render a surface represented by the depth image), and/or to register depth images (e.g., register surfaces represented by the depth images), etc. One problem with extracting a surface normal is the time complexity and/or load utilized to determine the surface normal.

[0023] In some approaches, a surface normal may be calculated from a depth image as follows. A three-dimensional (3D) point cloud may be extracted from a depth image. For each point in the 3D point cloud, a local neighborhood is extracted via searching a k-dimensional (k-d) tree of the point cloud. A k-d tree is a data structure that organizes points in a space, where the points correspond to leaf nodes and non-leaf nodes of the tree represent divisions of the space. The local neighborhood may be extracted by searching the k-d tree for nearest neighbors. Then, a local plane may be fitted to the local neighborhood. Fitting a plane may include determining a plane corresponding to data. For example, fitting a plane to the local neighborhood may include determining a plane that minimizes a squared error between the plane and the points in the local neighborhood (e.g., a plane that best “fits” the points in the local neighborhood). The local plane may be utilized to find the normal of the surface, or an average of the cross-products of local tangent vectors may be taken to find the normal of the surface. In these approaches, the time complexity may be expressed in big O notation as O(N log N + NM^ω), where N is a number of points in the 3D point cloud, M is a number of points in each local neighborhood, ω is a constant number, and M^ω is the time complexity of computing the normal via eigen decomposition. Eigen decomposition is a factorization of a matrix into eigenvalues and eigenvectors. In an example, QR decomposition is an algorithm that may be utilized to compute the eigen decomposition, with a time complexity of M^ω, where ω is a constant between 2 and 3. Other approaches may be utilized to compute the eigen decomposition. For example, the complexity of eigen decomposition may be related to the complexity of the algorithm utilized and the data utilized. For instance, depending on the algorithm utilized and the data type (e.g., whether the data matrix can be diagonalized or not), ω may vary between 2 and 3. One portion of the time complexity (the N log N term) is due to extracting local neighborhoods from the 3D point cloud. Accordingly, one problem with this approach is that the local neighborhood extraction (e.g., the k-d tree searching) adds time complexity, which slows the surface normal calculation and/or consumes more processing resources.
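
For concreteness, here is a sketch of this conventional point-cloud pipeline using SciPy's k-d tree; it is shown only to illustrate where the N log N neighborhood-search cost arises and is not the method of this disclosure. The neighborhood size k is an arbitrary assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_via_kdtree(points, k=16):
    """Baseline: per-point normals via k-d tree neighborhood search
    followed by plane fitting (eigen decomposition).

    points: (N, 3) array of 3D points from the point cloud.
    """
    tree = cKDTree(points)                      # tree construction
    _, neighbor_idx = tree.query(points, k=k)   # nearest-neighbor search (the N log N part)
    normals = np.empty_like(points, dtype=float)
    for i, idx in enumerate(neighbor_idx):
        neighborhood = points[idx]
        centered = neighborhood - neighborhood.mean(axis=0)
        cov = centered.T @ centered
        _, vecs = np.linalg.eigh(cov)           # eigen decomposition (the M^omega part)
        normals[i] = vecs[:, 0]                 # smallest-eigenvalue eigenvector
    return normals
```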

[0024] In some approaches, the surface normal may be calculated from a gradient of the depth image as follows. Tangent vectors may be calculated via two directional derivatives. Then, the surface normal may be calculated as the cross product of the two directional derivative vectors. One problem with this approach is that the calculated surface normal may be less accurate than in other approaches, as it may be based on a single cross product rather than an average of cross products.
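
Here is a sketch of this gradient-based alternative, assuming the depth image has already been lifted to a per-pixel 3D point array of shape (H, W, 3); each normal comes from a single cross product, which is what limits accuracy relative to an averaged or covariance-based estimate.

```python
import numpy as np

def normals_via_gradient(points_3d):
    """Estimate per-pixel normals as the cross product of two directional
    derivative (tangent) vectors of the lifted depth image.

    points_3d: (H, W, 3) array of per-pixel 3D points.
    """
    # Directional derivatives along image rows (y) and columns (x).
    d_dy, d_dx = np.gradient(points_3d, axis=(0, 1))
    normals = np.cross(d_dx, d_dy)                       # one cross product per pixel
    norms = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norms, 1e-12, None)
```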

[0025] Some configurations of the systems and methods disclosed herein may address one or more of these problems. For example, some configurations of the systems and methods disclosed herein may provide improved speed and/or accuracy of a surface normal calculation. In some configurations, the systems and methods disclosed herein may provide improved speed and/or accuracy in generating (e.g., rendering) a surface and/or registering depth images. Accordingly, some configurations of the systems and methods disclosed herein improve the functioning of computing devices (e.g., computers) themselves by improving the speed at which computing devices are able to calculate a surface normal and/or by improving the accuracy with which computing devices are able to calculate a surface normal. Additionally or alternatively, some configurations of the systems and methods disclosed herein may provide improvements to various technologies and technical fields, such as automated environmental modeling, automated environmental navigation, scene rendering, and/or measurement fusion. In some configurations, the systems and methods disclosed herein may extract an accurate normal with improved speed. In some configurations, a local neighborhood may be extracted via a 2D depth image and a local plane may be fitted. The surface normal may be estimated as the normal of the local plane. In some configurations, a mask based trimmed window may be used along an object boundary to avoid error introduced by the background.

[0026] Various configurations are now described with reference to the Figures, where like reference numbers may indicate functionally similar elements. The systems and methods as generally described and illustrated in the Figures herein could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of several configurations, as represented in the Figures, is not intended to limit scope, as claimed, but is merely representative of the systems and methods.

[0027] FIG. 1 is a block diagram illustrating one example of an electronic device 102 in which systems and methods for extracting a surface normal from a depth image may be implemented. Examples of the electronic device 102 include cameras, video camcorders, digital cameras, cellular phones, smartphones, tablet devices, personal cameras, wearable cameras, virtual reality devices (e.g., headsets), augmented reality devices (e.g., headsets), mixed reality devices, action cameras, surveillance cameras, mounted cameras, connected cameras, vehicles (e.g., semi-autonomous vehicles, autonomous vehicles, etc.), automobiles, robots, aircraft, drones, unmanned aerial vehicles (UAVs), servers, computers (e.g., desktop computers, laptop computers, etc.), network devices, healthcare equipment, gaming consoles, appliances, etc. In some configurations, the electronic device 102 may be integrated into one or more devices (e.g., vehicles, drones, mobile devices, etc.). The electronic device 102 may include one or more components or elements. One or more of the components or elements may be implemented in hardware (e.g., circuitry), a combination of hardware and software (e.g., a processor with instructions), and/or a combination of hardware and firmware.

[0028] In some configurations, the electronic device 102 may include a processor 112, a memory 126, one or more displays 132, one or more image sensors 104, one or more optical systems 106, and/or one or more communication interfaces 108. The processor 112 may be coupled to (e.g., in electronic communication with) the memory 126, display(s) 132, image sensor(s) 104, optical system(s) 106, and/or communication interface(s) 108. It should be noted that one or more of the elements illustrated in FIG. 1 may be omitted in some configurations. In particular, the electronic device 102 may not include one or more of the elements illustrated in FIG. 1 in some configurations. For example, the electronic device 102 may or may not include an image sensor 104 and/or optical system 106. Additionally or alternatively, the electronic device 102 may or may not include a display 132. Additionally or alternatively, the electronic device 102 may or may not include a communication interface 108.

[0029] In some configurations, the electronic device 102 may be configured to perform one or more of the functions, procedures, methods, steps, etc., described in connection with one or more of FIGS. 1-10. Additionally or alternatively, the electronic device 102 may include one or more of the structures described in connection with one or more of FIGS. 1-10.

[0030] The memory 126 may store instructions and/or data. The processor 112 may access (e.g., read from and/or write to) the memory 126. Examples of instructions and/or data that may be stored by the memory 126 may include depth image data 128 (e.g., depth images, 2D arrays of depth measurements, etc.), normal data 130 (e.g., surface normal data, vector data indicating surface normals, etc.), sensor data obtainer 114 instructions, subset extractor 116 instructions, normal calculator 118 instructions, registration module 120 instructions, surface generator 122 instructions, and/or instructions for other elements, etc.

[0031] The communication interface 108 may enable the electronic device 102 to communicate with one or more other electronic devices. For example, the communication interface 108 may provide an interface for wired and/or wireless communications. In some configurations, the communication interface 108 may be coupled to one or more antennas 110 for transmitting and/or receiving radio frequency (RF) signals. For example, the communication interface 108 may enable one or more kinds of wireless (e.g., cellular, wireless local area network (WLAN), personal area network (PAN), etc.) communication. Additionally or alternatively, the communication interface 108 may enable one or more kinds of cable and/or wireline (e.g., Universal Serial Bus (USB), Ethernet, High Definition Multimedia Interface (HDMI), fiber optic cable, etc.) communication.

[0032] In some configurations, multiple communication interfaces 108 may be implemented and/or utilized. For example, one communication interface 108 may be a cellular (e.g., 3G, Long Term Evolution (LTE), Code-Division Multiple Access (CDMA), etc.) communication interface 108, another communication interface 108 may be an Ethernet interface, another communication interface 108 may be a universal serial bus (USB) interface, and yet another communication interface 108 may be a wireless local area network (WLAN) interface (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface). In some configurations, the communication interface(s) 108 may send information (e.g., normal data 130) to and/or receive information (e.g., depth image data 128) from another electronic device (e.g., a vehicle, a smart phone, a camera, a display, a robot, a remote server, etc.).

[0033] In some configurations, the electronic device 102 (e.g., sensor data obtainer 114) may obtain (e.g., receive) one or more frames (e.g., image frames, video, and/or depth image frames, etc.). The one or more frames may indicate data captured from an environment (e.g., one or more objects and/or background).

[0034] In some configurations, the electronic device 102 may include one or more image sensors 104 and/or one or more optical systems 106 (e.g., lenses). An optical system 106 may focus images of objects that are located within the field of view of the optical system 106 onto an image sensor 104. The optical system(s) 106 may be coupled to and/or controlled by the processor 112 in some configurations. The one or more image sensor(s) 104 may be used in conjunction with the optical system(s) 106 or without the optical system(s) 106 depending on the implementation. In some implementations, the electronic device 102 may include a single image sensor 104 and/or a single optical system 106. For example, a single depth camera with a particular resolution at a particular frame rate (e.g., 30 frames per second (fps), 60 fps, 120 fps, etc.) may be utilized. In other implementations, the electronic device 102 may include multiple optical system(s) 106 and/or multiple image sensors 104. For example, the electronic device 102 may include two or more lenses in some configurations. The lenses may have the same focal length or different focal lengths.

[0035] In some examples, the image sensor(s) 104 and/or the optical system(s) 106 may be mechanically coupled to the electronic device 102 or to a remote electronic device (e.g., may be attached to, mounted on, and/or integrated into the body of a vehicle, the hood of a car, a rear-view mirror mount, a side-view mirror, a bumper, etc., and/or may be integrated into a smart phone or another device, etc.). The image sensor(s) 104 and/or optical system(s) 106 may be linked to the electronic device 102 via a wired and/or wireless link in some configurations.

[0036] Examples of image sensor(s) 104 may include optical image sensors, depth image sensors, red-green-blue-depth (RGBD) sensors, etc. For example, the electronic device 102 may include one or more depth sensors (e.g., time-of-flight cameras, lidar sensors, etc.) and/or optical sensors (e.g., two-dimensional (2D) image sensors, 3D image sensors, etc.). The image sensor(s) 104 may capture one or more image frames (e.g., optical image frames, depth image frames, optical/depth frames, etc.). As used herein, the term “optical” may denote visual spectrum information. For example, an optical sensor may sense visual spectrum data. As used herein, the term “depth” may denote a distance between a depth sensor and an object. For example, a depth sensor may sense depth data (e.g., one or more distances between the depth sensor and an object). In some configurations, the depth image data 128 may include depth data (e.g., distance measurements) associated with one or more times or time ranges. For example, a “frame” may correspond to an instant of time or a range of time in which data corresponding to the frame is captured. Different frames may be separate or overlapping in time. Frames may be captured at regular periods, semi-regular periods, or aperiodically.

[0037] In some implementations, the electronic device 102 may include multiple optical system(s) 106 and/or multiple image sensors 104. Different lenses may each be paired with separate image sensors 104 in some configurations. Additionally or alternatively, two or more lenses may share the same image sensor 104. In some configurations, an image sensor 104 (e.g., depth image sensor) may not be paired with a lens and/or optical system(s) 106 may not be included in the electronic device 102. It should be noted that one or more other types of sensors may be included and/or utilized to produce frames in addition to or alternatively from the image sensor(s) 104 in some implementations.

[0038] In some configurations, a camera may include at least one sensor and at least one optical system. Accordingly, the electronic device 102 may be one or more cameras, may include one or more cameras, and/or may be coupled to one or more cameras in some implementations.

[0039] In some configurations, the electronic device 102 may request and/or receive the one or more depth images from another device (e.g., one or more external sensors coupled to the electronic device 102). In some configurations, the electronic device 102 may request and/or receive the one or more depth images via the communication interface 108. For example, the electronic device 102 may or may not include an image sensor 104 and may receive frames (e.g., optical image frames, depth image frames, etc.) from one or more remote devices.

[0040] The electronic device 102 may include one or more displays 132. The display(s) 132 may present optical content (e.g., one or more image frames, video, still images, graphics, virtual environments, three-dimensional (3D) image content, 3D models, symbols, characters, etc.). The display(s) 132 may be implemented with one or more display technologies (e.g., liquid crystal display (LCD), organic light-emitting diode (OLED), plasma, cathode ray tube (CRT), etc.). The display(s) 132 may be integrated into the electronic device 102 or may be coupled to the electronic device 102. For example, the electronic device 102 may be a virtual reality headset with integrated displays 132. In another example, the electronic device 102 may be a computer that is coupled to a virtual reality headset with the displays 132. In some configurations, the content described herein (e.g., surfaces, depth image data, frames, 3D models, etc.) or a visualization thereof may be presented on the display(s) 132. For example, the display(s) 132 may present an image depicting a surface and/or 3D model of an environment (e.g., one or more objects). In some configurations, all or portions of the frames that are being captured by the image sensor(s) 104 may be presented on the display 132. Additionally or alternatively, one or more representative images (e.g., icons, cursors, virtual reality images, augmented reality images, etc.) may be presented on the display 132.

[0041] In some configurations, the electronic device 102 may present a user interface 134 on the display 132. For example, the user interface 134 may enable a user to interact with the electronic device 102. In some configurations, the display 132 may be a touchscreen that receives input from physical touch (by a finger, stylus, or other tool, for example). Additionally or alternatively, the electronic device 102 may include or be coupled to another input interface. For example, the electronic device 102 may include a camera and may detect user gestures (e.g., hand gestures, arm gestures, eye tracking, eyelid blink, etc.). In another example, the electronic device 102 may be linked to a mouse and may detect a mouse click. In yet another example, the electronic device 102 may be linked to one or more other controllers (e.g., game controllers, joy sticks, touch pads, motion sensors, etc.) and may detect input from the one or more controllers.

[0042] In some configurations, the electronic device 102 and/or one or more components or elements of the electronic device 102 may be implemented in a headset. For example, the electronic device 102 may be a smartphone mounted in a headset frame. In another example, the electronic device 102 may be a headset with integrated display(s) 132. In yet another example, the display(s) 132 may be mounted in a headset that is coupled to the electronic device 102.

[0043] In some configurations, the electronic device 102 may be linked to (e.g., communicate with) a remote headset. For example, the electronic device 102 may send information to and/or receive information from a remote headset. For instance, the electronic device 102 may send information (e.g., depth image data 128, normal data 130, surface information, frame data, one or more images, video, one or more frames, 3D model data, etc.) to the headset and/or may receive information (e.g., captured frames) from the headset.

[0044] The processor 112 may include and/or implement a sensor data obtainer 114, a subset extractor 116, a normal calculator 118, a registration module 120, and/or a surface generator 122. It should be noted that one or more of the elements illustrated in the electronic device 102 and/or processor 112 may be omitted in some configurations. For example, the processor 112 may not include and/or implement the registration module 120 and/or the surface generator 122 in some configurations. Additionally or alternatively, one or more of the elements illustrated in the processor 112 may be implemented separately from the processor 112 (e.g., in other circuitry, on another processor, on a separate electronic device, etc.).

[0045] The processor 112 may include and/or implement a sensor data obtainer 114. The sensor data obtainer 114 may obtain sensor data from one or more sensors. For example, the sensor data obtainer 114 may obtain (e.g., receive) one or more images (e.g., depth images and/or optical images, etc.). For instance, the sensor data obtainer 114 may receive depth image data 128 from one or more image sensors 104 included in the electronic device 102 and/or from one or more remote image sensors. A depth image may be a two-dimensional (2D) depth image. A 2D depth image may be a 2D array of depths (e.g., distance measurements). For example, a depth image may include a vertical (e.g., height) dimension and a horizontal (e.g., width) dimension, where one or more pixels of the depth image include a depth (e.g., distance measurement) to one or more objects (e.g., 3D objects) in an environment. In some configurations, each pixel of a depth image may indicate a depth (e.g., distance measurement) to an object or may be a background pixel. A background pixel may have a value (e.g., 0, -1, etc.) indicating that no object was detected (within a distance from the depth sensor, for example). In some configurations, a foreground pixel may be a pixel indicating that an object was detected (within a distance from the depth sensor, for example). For example, a non-zero pixel value of the depth image may indicate a point on a surface observed by the image sensor 104.
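
By way of illustration only, a 2D depth image might be represented as follows; the choice of 0 as the background value is one of the conventions the paragraph mentions, not a requirement.

```python
import numpy as np

# A 2D depth image: one distance measurement (in meters, say) per pixel.
depth = np.zeros((480, 640), dtype=np.float32)   # 0 marks background here
depth[100:300, 200:400] = 1.5                    # an object roughly 1.5 m away

foreground_mask = depth > 0                      # pixels where an object was detected
```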

[0046] The processor 112 may include and/or implement a subset extractor 116. The subset extractor 116 may extract a 2D subset of a depth image. Each 2D subset of the depth image may include a center pixel and a set of neighboring pixels. As used herein, a “center pixel” may or may not be precisely in the center of the 2D subset. For example, a “center pixel” may be a pixel at the center of the 2D subset (e.g., halfway in one or both dimensions of the 2D subset), or may be a pixel offset from (e.g., next to or one or more pixels away from) the center of the 2D subset. Additionally or alternatively, the “center pixel” may be an anchor pixel relative to which the neighboring pixels may be determined. In some configurations, the center pixel may be selected and/or arbitrarily defined at any position of the 2D subset. The 2D subset may be uniform (e.g., rectangular, square, circular, symmetrical, etc.) or non-uniform (e.g., irregular, asymmetrical in one or more dimensions, etc.) in shape. The set of neighboring pixels may include all pixels in the 2D subset besides the center pixel and/or all pixels within a distance from the center pixel (e.g., all pixels within a range of ±1 pixel, ±2 pixels, ±3 pixels, etc., from the center pixel). In some configurations, a 2D subset may be extracted for each pixel in the depth image, for all foreground pixels in the depth image, and/or for another portion of pixels in the depth image. The 2D subset of a center pixel may correspond to a local neighborhood in three dimensions. For example, a three-dimensional local neighborhood may be determined directly from the structure of the 2D subset of the 2D depth image (e.g., neighboring coordinate locations in 2D may dictate nearest neighbors in 3D without searching). Accordingly, a local neighborhood may be directly extracted from the 2D depth image as a 2D subset of the 2D depth image. In some configurations, this approach may avoid searching a 3D point cloud for nearest neighbors, and thereby may reduce the time complexity of extracting a surface normal.
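
A minimal sketch of pulling one 2D subset straight off the image grid; the window half-width r is an assumed parameter, and image borders are ignored for brevity.

```python
def extract_2d_subset(depth, row, col, r=1):
    """Return the (2r+1) x (2r+1) window of depth values centered on
    (row, col). Grid adjacency stands in for 3D proximity, so no
    k-d tree search is needed."""
    return depth[row - r:row + r + 1, col - r:col + r + 1]
```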

[0047] In some configurations, the 2D subset may be extracted using a sliding window. For example, a sliding window may traverse the depth image (e.g., all pixels of the depth image, all foreground pixels of the depth image, or all pixels in a portion of the depth image). The sliding window at each pixel may include the 2D subset corresponding to that pixel (e.g., center pixel). The size of the sliding window may determine the number of neighboring pixels in each 2D subset. For example, the sliding window may have a range (e.g., ±1 pixel, ±2 pixels, ±3 pixels, etc.) within which the center pixel and the set of neighboring pixels are included.
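
Building on the window above, the sliding-window traversal over foreground pixels might be sketched like this (again with an assumed half-width r, a background value of 0, and interior pixels only):

```python
def iterate_2d_subsets(depth, r=1):
    """Yield (row, col, subset) for each interior foreground pixel, where
    subset is the (2r+1) x (2r+1) sliding window around that pixel."""
    rows, cols = depth.shape
    for row in range(r, rows - r):
        for col in range(r, cols - r):
            if depth[row, col] > 0:              # foreground pixels only
                yield row, col, depth[row - r:row + r + 1,
                                      col - r:col + r + 1]
```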

[0048] In some configurations, the subset extractor 116 may remove any background pixel from the 2D subset to produce a trimmed 2D subset. The normal may be calculated based on the trimmed 2D subset in some cases and/or configurations. More detail regarding removing background pixels is given in connection with FIGS. 4-5.
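
A sketch of the mask-based trimming, again assuming background pixels carry a value of 0; the returned offsets are the in-window coordinates of the surviving foreground pixels, which later steps may lift to 3D.

```python
import numpy as np

def trim_background(subset):
    """Drop background pixels from a 2D subset, returning the remaining
    depth values and their (row, col) offsets within the window."""
    mask = subset > 0                  # background assumed to be 0 here
    offsets = np.argwhere(mask)        # window coordinates of foreground pixels
    return subset[mask], offsets
```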

[0049] The processor 112 may include and/or implement a normal calculator 118. The normal calculator 118 may calculate a normal corresponding to the center pixel based on the 2D subset (or trimmed 2D subset, for example). In some configurations, calculating the normal may include calculating a covariance matrix based on a center pixel value and neighboring pixel values.

[0050] A center pixel value may be a pixel value based on the center pixel. An example of the center pixel value may be a pixel value lifted to a 3D space from the center pixel of the 2D subset. As used herein, the term “lift” and variations thereof may denote a mapping or transformation from a space or coordinate system to another space or coordinate system (e.g., from a 2D coordinate system to a 3D coordinate system). A neighboring pixel value may be a pixel value based on a neighboring pixel. An example of the neighboring pixel value may be a pixel value lifted to a 3D space from a neighboring pixel of the 2D subset. An example of a lifting function for lifting a pixel value from a pixel of the 2D subset is given in Equation (1).

lift(u) = [ (u_x - c_x) D(u) / f_x,  (u_y - c_y) D(u) / f_y,  D(u) ]'   (1)
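
The following is a direct transcription of Equation (1) as a Python helper; it assumes the conventional pinhole-camera reading of the symbols (focal lengths f_x and f_y, principal point c_x and c_y, and D(u) as the depth value at pixel u = (u_x, u_y)), which is an interpretation rather than something spelled out in this excerpt.

```python
import numpy as np

def lift(u, depth_value, fx, fy, cx, cy):
    """Lift a 2D pixel u = (u_x, u_y) with depth D(u) into 3D per Equation (1)."""
    ux, uy = u
    return np.array([(ux - cx) * depth_value / fx,
                     (uy - cy) * depth_value / fy,
                     depth_value])
```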

……
……
……
