
Facebook Patent | 3-D Head Mounted Display Based Environmental Modeling System

Patent: 3-D Head Mounted Display Based Environmental Modeling System

Publication Number: 10586343

Publication Date: 20200310

Applicants: Facebook

Abstract

A head mounted display (HMD) dynamically generates a model of an area. The HMD includes a depth camera assembly (DCA), a color camera, and processing circuitry. The processing circuitry receives, from the DCA, a frame of depth image data, generates a depth map of a portion of the area based on the frame of the depth image data, receives a frame of color image data from the camera, determines a location in a model of the area that corresponds with the portion of the area of the depth map based on the frame of the color image data, and updates the model of the area by combining the depth map of the portion of the area with one or more other depth maps of one or more other portions of the area based on the location in the model.

BACKGROUND

The present disclosure generally relates to artificial reality systems, and more specifically relates to generating three dimensional models of a local area using a depth camera assembly on a head-mounted display (HMD).

Artificial reality systems, such as virtual reality (VR), augmented reality (AR), or mixed reality (MR) systems, may provide rendered views of a local area or of objects surrounding a user wearing an HMD. For example, the HMD may provide a rendered view of an object near the user, or overlay a virtual object on a real object. To facilitate the rendering, a model of the local area needs to be generated.

SUMMARY

An HMD included in an artificial reality system includes a depth camera assembly (DCA), a camera, and processing circuitry connected to the DCA and the camera. The DCA generates depth image data of an area, and the camera generates color image data of the area. The processing circuitry is configured to: receive, from the DCA, a frame of the depth image data; generate a depth map of a portion of the area based on the frame of the depth image data; receive, from the camera, a frame of the color image data for the portion of the area; determine a location in a model of the area that corresponds with the portion of the area of the depth map based on the frame of the color image data; and update the model of the area by combining the depth map of the portion of the area with one or more other depth maps of one or more other portions of the area based on the location in the model.

Some embodiments include a method for generating a model of an area with an HMD. The method includes: receiving, by processing circuitry and from a depth camera assembly (DCA) of the HMD, a frame of depth image data captured by the DCA; generating, by the processing circuitry, a depth map of a portion of the area based on the frame of the depth image data; receiving, by the processing circuitry and from a camera of the HMD, a frame of color image data for the portion of the area; determining, by the processing circuitry, a location in a model of the area that corresponds with the portion of the area of the depth map based on the frame of the color image data; and updating, by the processing circuitry, the model of the area by combining the depth map of the portion of the area with one or more other depth maps of one or more other portions of the area based on the location in the model.
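The claimed update loop can be sketched compactly. The following Python sketch is only illustrative, not the patent's implementation: the helper functions, the dense-array depth maps, the coarse color-based localization key, and the simple averaging fusion are all assumptions made for the example.

```python
"""Minimal sketch of the per-frame model update described above."""
import numpy as np


def generate_depth_map(depth_frame: np.ndarray) -> np.ndarray:
    # Placeholder: in the patent this would come from matching SL elements
    # and triangulating; here the frame is simply passed through.
    return depth_frame.astype(np.float32)


def locate_in_model(color_frame: np.ndarray) -> tuple:
    # Placeholder localization: reduce the color frame to a coarse key.
    # The patent determines the location from the color image data.
    return tuple(np.round(color_frame.mean(axis=(0, 1))).astype(int))


def update_model(model: dict, location: tuple, depth_map: np.ndarray) -> dict:
    # Combine the new depth map with any prior map at the same location.
    if location in model:
        model[location] = (model[location] + depth_map) / 2.0
    else:
        model[location] = depth_map
    return model


# Per-frame loop: depth frame -> depth map, color frame -> location, then fuse.
model: dict = {}
depth_frame = np.random.rand(480, 640)
color_frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
model = update_model(model, locate_in_model(color_frame),
                     generate_depth_map(depth_frame))
```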

Other aspects include components, devices, systems, improvements, methods, processes, applications, computer readable mediums, and other technologies related to any of the above.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a system, in accordance with an embodiment.

FIG. 2 is a block diagram of an asymmetric stereo compute module of FIG. 1, in accordance with an embodiment.

FIG. 3 is an example arrangement of a depth camera assembly projecting a structured light pattern into a local area in accordance with an embodiment.

FIG. 4 is a flow chart of a process for generating a depth map for a model of an area, in accordance with an embodiment.

FIG. 5 is a flow chart of a process for generating a model of an area, in accordance with an embodiment.

FIG. 6 is a high-level block diagram illustrating physical components of a computer, in accordance with an embodiment.

FIG. 7 is a schematic diagram of a near-eye display (NED), in accordance with an embodiment.

FIG. 8 is a cross-section of the NED illustrated in FIG. 7, in accordance with an embodiment.

FIG. 9 is an isometric view of a waveguide display, in accordance with an embodiment.

FIG. 10 is a block diagram of a source assembly with a 1D source, the source assembly outputting a scanned light, in accordance with an embodiment.

The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.

DETAILED DESCRIPTION

System Overview

Embodiments may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

FIG. 1 is a block diagram of a system 100, in accordance with an embodiment. The system 100 may operate in a virtual reality (VR), an augmented reality (AR), a mixed reality (MR) environment, or some combination thereof. The system 100 shown by FIG. 1 comprises an HMD 110 and an input/output (I/O) interface 140 that is coupled to a console 145. While FIG. 1 shows an example system 100 including one HMD 110 and one I/O interface 140, in other embodiments any number of these components may be included in the system 100. For example, there may be multiple HMDs 110, each having an associated I/O interface 140, with each HMD 110 and I/O interface 140 communicating with the console 145. In alternative configurations, different and/or additional components may be included in the system 100. Additionally, functionality described in conjunction with one or more of the components shown in FIG. 1 may be distributed among the components in a different manner than described in conjunction with FIG. 1 in some embodiments. For example, some or all of the functionality of the console 145 may be provided by the HMD 110.

The HMD 110 is a head-mounted display that presents content to a user comprising augmented views of a physical, real-world local area with computer-generated elements (e.g., two-dimensional (2D) or three-dimensional (3D) images, 2D or 3D video, sound, etc.). In some embodiments, the presented content includes audio that is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the HMD 110, the console 145, or both, and presents audio data based on the audio information. The HMD 110 may comprise one or more rigid bodies, which may be rigidly or non-rigidly coupled to each other. A rigid coupling between rigid bodies causes the coupled rigid bodies to act as a single rigid entity. In contrast, a non-rigid coupling between rigid bodies allows the rigid bodies to move relative to each other. In some embodiments, the HMD 110 may also act as a VR headset that presents virtual content to the user that is based in part on a real local area surrounding the user. For example, virtual content may be presented to a user of the HMD. The user may physically be in a room, and virtual walls and a virtual floor of the room are rendered as part of the virtual content.

The HMD 110 includes an electronic display 115, an optics block 120, one or more position sensors 125, a depth camera assembly (DCA) 130, an inertial measurement unit (IMU) 135, and a passive camera assembly (PCA) 195. Some embodiments of the HMD 110 have different components than those described in conjunction with FIG. 1. Additionally, the functionality provided by various components described in conjunction with FIG. 1 may be differently distributed among the components of the HMD 110 in other embodiments, or be captured in separate assemblies remote from the HMD 110.

The electronic display 115 displays 2D or 3D images to the user in accordance with data received from the console 145. In various embodiments, the electronic display 115 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of the electronic display 115 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof.

The optics block 120 magnifies image light received from the electronic display 115, corrects optical errors associated with the image light, and presents the corrected image light to a user of the HMD 110. In various embodiments, the optics block 120 includes one or more optical elements. Example optical elements included in the optics block 120 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light. Moreover, the optics block 120 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 120 may have one or more coatings, such as partially reflective or anti-reflective coatings.

Magnification and focusing of the image light by the optics block 120 allows the electronic display 115 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display 115. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user’s field of view. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.

In some embodiments, the optics block 120 may be designed to correct one or more types of optical error. Examples of optical error include barrel or pincushion distortion, longitudinal chromatic aberrations, or transverse chromatic aberrations. Other types of optical errors may further include spherical aberrations, chromatic aberrations, or errors due to the lens field curvature, astigmatisms, or any other type of optical error. In some embodiments, content provided to the electronic display 115 for display is pre-distorted, and the optics block 120 corrects the distortion when it receives image light from the electronic display 115 generated based on the content.

The IMU 135 is an electronic device that generates data indicating a position of the HMD 110 based on measurement signals received from one or more of the position sensors 125. A position sensor 125 generates one or more measurement signals in response to motion of the HMD 110. Examples of position sensors 125 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 135, or some combination thereof. The position sensors 125 may be located external to the IMU 135, internal to the IMU 135, or some combination thereof.

Based on the one or more measurement signals from one or more position sensors 125, the IMU 135 generates data indicating an estimated current position of the HMD 110 relative to an initial position of the HMD 110. For example, the position sensors 125 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). In some embodiments, the IMU 135 rapidly samples the measurement signals and calculates the estimated current position of the HMD 110 from the sampled data. For example, the IMU 135 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on the HMD 110. Alternatively, the IMU 135 provides the sampled measurement signals to the console 145, which interprets the data to reduce error. The reference point is a point that may be used to describe the position of the HMD 110. The reference point may generally be defined as a point in space or a position related to an orientation and a position of the HMD 110.
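As a rough illustration of that double integration, the toy Python sketch below dead-reckons position from accelerometer samples under assumptions not stated in the patent (a fixed 1 kHz sample rate, no gravity compensation, no sensor bias, and no rotation):

```python
import numpy as np

# Double-integrate accelerometer samples to estimate position, as a toy
# illustration of the IMU's dead reckoning. Ignores gravity compensation,
# sensor bias, and orientation; a real IMU pipeline must handle all three.
dt = 0.001  # 1 kHz sample interval (assumed)
accel_samples = np.array([[0.0, 0.0, 0.1]] * 1000)  # m/s^2, constant accel

velocity = np.zeros(3)
position = np.zeros(3)
for a in accel_samples:
    velocity += a * dt          # integrate acceleration -> velocity
    position += velocity * dt   # integrate velocity -> position

print(position)  # ~[0, 0, 0.05] m after 1 s of 0.1 m/s^2 acceleration
```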

The IMU 135 receives one or more parameters from the console 145. As further discussed below, the one or more parameters are used to maintain tracking of the HMD 110. Based on a received parameter, the IMU 135 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain parameters cause the IMU 135 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated by the IMU 135. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to “drift” away from the actual position of the reference point over time. In some embodiments of the HMD 110, the IMU 135 may be a dedicated hardware component. In other embodiments, the IMU 135 may be a software component implemented in one or more processors.

The DCA 130 generates depth image data of the local area. Depth image data includes pixel values defining distance from the imaging device, and thus provides a (e.g., 3D) mapping of the locations captured in the depth image data. The DCA 130 includes a structured light (SL) projector 180, one or more imaging devices 185, and a controller 190. The SL projector 180 projects a structured light pattern that is reflected off objects in the local area and captured by the imaging device 185 to generate the depth image data.

For example, the SL projector 180 projects a plurality of structured light (SL) elements of different types (e.g., lines, grids, or dots) onto a portion of a local area surrounding the HMD (e.g., a local area). In various embodiments, the SL projector 180 comprises an emitter and a pattern plate. Here, the pattern plate is a diffractive optical element (DOE) associated with a specific pattern. In one or more embodiments, a pattern in a pattern plate is defined by the specific arrangement, size, and shape of holes on the pattern plate. As noted herein, a pattern space is a rectilinear space associated with the pattern plate. The emitter is configured to illuminate the pattern plate with light (e.g., infrared light). In various embodiments, the illuminated pattern plate projects a SL pattern comprising a plurality of SL elements into the local area. For example, each of the SL elements projected by the illuminated pattern plate is a dot associated with a particular location on the pattern plate. That is, in an example embodiment, the SL projector 180 illuminates a local area with a pattern of dots associated with the pattern plate. The SL projector 180, including the emitter and the pattern plate, is further described below in conjunction with FIG. 3.

Each SL element projected by the DCA 130 comprises light in the infrared part of the electromagnetic spectrum. In some embodiments, the illumination source is a laser configured to illuminate a pattern plate with infrared light such that it is invisible to a human. In some embodiments, the illumination source may be pulsed. In some embodiments, the illumination source may be visible and pulsed such that the light is not visible to the eye.

The SL pattern projected into the local area by the DCA 130 deforms as it encounters various surfaces and objects in the local area. The one or more imaging devices 185 are each configured to capture one or more images of the local area. Each of the one or more images captured may include a plurality of SL elements (e.g., dots) projected by the SL projector 180 and reflected by the objects in the local area. Each of the one or more imaging devices 185 may be a detector array, a camera, or a video camera.

In various embodiments, one of the one or more imaging devices 185 is configured to capture images of the local area in the infrared spectrum, or some other spectrum of light emitted by the SL projector 180.

The controller 190 generates the depth image data based on light captured by the imaging device 185. The controller 190 may further provide the depth image data to the ASC module 165 or some other component.

The passive camera assembly (PCA) 195 includes one or more passive cameras that generate color (e.g., RGB) image data. Unlike the DCA 130 that uses active light emission and reflection, the PCA 195 captures light from the environment of a local area to generate image data. Rather than pixel values defining depth or distance from the imaging device, the pixel values of the image data may define the visible color of objects captured in the imaging data. In some embodiments, the PCA 195 includes a controller that generates the image data based on light captured by the passive imaging device.

In some embodiments, the DCA 130 and the PCA 195 share a controller. For example, the controller 190 may map each of the one or more images captured in the visible spectrum (e.g., image data) and in the infrared spectrum (e.g., depth image data) to each other. In one or more embodiments, the controller 190 is configured to, additionally or alternatively, provide the one or more images of the local area to the console 145.

The I/O interface 140 is a device that allows a user to send action requests and receive responses from the console 145. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data, or an instruction to perform a particular action within an application. The I/O interface 140 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 145. An action request received by the I/O interface 140 is communicated to the console 145, which performs an action corresponding to the action request. In some embodiments, the I/O interface 140 includes an IMU 135, as further described above, that captures calibration data indicating an estimated position of the I/O interface 140 relative to an initial position of the I/O interface 140. In some embodiments, the I/O interface 140 may provide haptic feedback to the user in accordance with instructions received from the console 145. For example, haptic feedback is provided when an action request is received, or the console 145 communicates instructions to the I/O interface 140 causing the I/O interface 140 to generate haptic feedback when the console 145 performs an action.

The console 145 provides content to the HMD 110 for processing in accordance with information received from one or more of: the DCA 130, the PCA 195, the HMD 110, and the I/O interface 140. In the example shown in FIG. 1, the console 145 includes an application store 150, a tracking module 155, an engine 160, and an ASC module 165. Some embodiments of the console 145 have different modules or components than those described in conjunction with FIG. 1. Similarly, the functions further described below may be distributed among components of the console 145 in a different manner than described in conjunction with FIG. 1. In some embodiments, the functionality discussed herein with respect to the console 145 may be implemented in the HMD 110.

The application store 150 stores one or more applications for execution by the console 145. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the HMD 110 or the I/O interface 140. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.

The tracking module 155 calibrates the local area of the system 100 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the HMD 110 or of the I/O interface 140. For example, the tracking module 155 communicates a calibration parameter to the DCA 130 to adjust the focus of the DCA 130 to more accurately determine positions of SL elements captured by the DCA 130. Calibration performed by the tracking module 155 also accounts for information received from the IMU 135 in the HMD 110 and/or an IMU 135 included in the I/O interface 140. Additionally, if tracking of the HMD 110 is lost (e.g., the DCA 130 loses line of sight of at least a threshold number of the projected SL elements), the tracking module 155 may re-calibrate some or all of the system 100.

The tracking module 155 tracks movements of the HMD 110 or of the I/O interface 140 using information from the DCA 130, the one or more position sensors 125, the IMU 135, or some combination thereof. For example, the tracking module 155 determines a position of a reference point of the HMD 110 in a mapping of a local area based on information from the HMD 110. The tracking module 155 may also determine positions of the reference point of the HMD 110 or a reference point of the I/O interface 140 using data indicating a position of the HMD 110 from the IMU 135 or using data indicating a position of the I/O interface 140 from an IMU 135 included in the I/O interface 140, respectively. Additionally, in some embodiments, the tracking module 155 may use portions of data indicating a position of the HMD 110 from the IMU 135 as well as representations of the local area from the DCA 130 to predict a future location of the HMD 110. The tracking module 155 provides the estimated or predicted future position of the HMD 110 or the I/O interface 140 to the engine 160 and the asymmetric stereo compute (ASC) module 165.

The engine 160 generates a 3D mapping of the area surrounding the HMD 110 (i.e., the local area) based on information received from the HMD 110. In some embodiments, the engine 160 determines depth information for the 3D mapping of the local area based on information received from the DCA 130 that is relevant for techniques used in computing depth. The engine 160 may calculate depth information using one or more techniques to compute depth based on SL. A technique used to compute depth based on SL may include, e.g., using triangulation and/or perceived deformation of a SL pattern that is projected onto a surface to determine depth and surface information of objects within the scene. In various embodiments, the engine 160 uses different types of information determined by the DCA 130 or a combination of types of information determined by the DCA 130.
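As a hedged illustration of the triangulation technique, for a rectified projector-camera pair the depth of a matched SL element follows the standard relation z = f·B / disparity. The sketch below assumes hypothetical baseline and focal-length values and is not the engine 160's actual computation:

```python
import numpy as np

def depth_from_disparity(x_image: np.ndarray, x_pattern: np.ndarray,
                         baseline_m: float, focal_px: float) -> np.ndarray:
    """Toy SL triangulation for a rectified projector-camera pair.

    x_image:   horizontal dot positions observed by the camera (pixels)
    x_pattern: corresponding positions in the projected pattern (pixels)
    Depth follows the standard stereo relation z = f * B / disparity.
    """
    disparity = x_image - x_pattern
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return focal_px * baseline_m / disparity

# Example: assumed 50 mm projector-camera baseline, 600 px focal length.
z = depth_from_disparity(np.array([310.0, 305.0]), np.array([300.0, 300.0]),
                         baseline_m=0.05, focal_px=600.0)
print(z)  # ~[3.0, 6.0] metres
```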

The engine 160 also executes applications within the system 100 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the HMD 110 from the tracking module 155. Based on the received information, the engine 160 determines content to provide to the HMD 110 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 160 generates content for the HMD 110 that mirrors the user’s movement in a virtual local area or in a local area augmented with additional content. Additionally, the engine 160 performs an action within an application executing on the console 145 in response to an action request received from the I/O interface 140 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the HMD 110 or haptic feedback via the I/O interface 140.

The ASC module 165 generates a model of an area (or “local area”) based on color image data generated by the PCA 195 and depth image data generated by the DCA 130. Color image data refers to image data captured by a passive, color camera, such as the PCA 195. Color image data may include different colors that represent the color of the objects captured in the color image data. Depth image data refers to image data captured by the DCA 130, and may include SL elements reflected from objects. The ASC module 165 generates a depth map using the depth image data. The depth map for a frame of depth image data may include depth values. The ASC module 165 uses the color image data to locate the depth map in a model of the local area, and combines depth maps from multiple depth image data frames into a single depth map for the model of the local area. The model of the local area may also be texturized using, e.g., the color image data from the PCA 195. Some or all of the ASC module 165 may be on the HMD 110 or some other device (e.g., a server, the console 145, etc.). The HMD 110 may use the model of the local area to render content to a user of the HMD.

In some embodiments, the ASC module 165 uses an exact asymmetric stereo (EAS) algorithm to match each of the plurality of SL elements (e.g., dots) detected in the image space to its corresponding location in pattern space. The EAS algorithm disambiguates the plurality of SL elements projected by the DCA 130 into the local area. For example, in embodiments where each of the plurality of SL elements projected by the DCA 130 corresponds with an aperture or dot of a pattern plate, the ASC module 165 maps each of the plurality of dots in an image of the local area captured by the DCA 130 to a dot or aperture on the pattern plate. The ASC module 165 is further described below in conjunction with FIG. 2. In some embodiments, the ASC module 165 is included in the HMD 110, such as in the DCA 130.

FIG. 2 is a block diagram of the ASC module 165 of FIG. 1, in accordance with an embodiment. The ASC module 165 matches SL elements captured by the DCA 130 with the apertures on a pattern plate to generate a depth map. The ASC module 165 includes a hash table compute module 210, a SL detection module 220, a transform module 230, a matching module 240, a depth triangulation module 250, a normal estimation module 260, and a hash table store 270. Some embodiments of the ASC module 165 may have different components than those described in conjunction with FIG. 2. Additionally, the functionality provided by various components described in conjunction with FIG. 2 may be differently distributed among the components of the ASC module 165 in other embodiments, or be captured in separate assemblies remote from the ASC module 165.

The hash table compute module 210 generates a hash table for a pattern of a pattern plate. Each aperture of the pattern plate corresponds with a structured light (SL) element that is projected by the DCA 130. A pattern of the pattern plate may include an arrangement of multiple apertures. In some embodiments, the pattern is defined by a 9×9 pixel grid. Each pixel of the pattern may be defined by a bit, with a 1 value defining an aperture and a 0 value defining a location on the pattern plate without an aperture. As such, a hash table for a 9×9 pixel pattern may be defined by an 81-bit binary code in raster row-major fashion. The 81-bit binary code may be stored in a 128-bit integer used as a key. Other sizes may be used for the pattern and hash table. For example, a 6×6 pixel pattern may be defined by a 36-bit code. The generated hash table is stored in the hash table store 270 and associated with a key. In some embodiments, the hash table associated with a pattern of the pattern plate is stored in the hash table store 270. In some embodiments, the hash table compute module 210 may compute a hash table associated with a particular pattern of the pattern plate.
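A minimal sketch of packing a 9×9 binary pattern window into an integer key in raster row-major order, as described above; the exact encoding used by the hash table compute module 210 may differ, and the function and example window here are illustrative only:

```python
def pattern_key(window) -> int:
    """Pack a 9x9 binary pattern window into an 81-bit integer key.

    `window` is a list of 9 rows of 9 values, 1 for an aperture and 0 for
    no aperture; bits are packed in raster row-major order. The result
    fits in a 128-bit integer as noted above (Python ints are arbitrary
    precision, so no explicit 128-bit type is needed here).
    """
    key = 0
    for row in window:
        for bit in row:
            key = (key << 1) | (bit & 1)
    return key


# Example: a window with a single aperture in the top-left corner.
window = [[1] + [0] * 8] + [[0] * 9 for _ in range(8)]
print(hex(pattern_key(window)))  # equals 1 << 80
```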

The SL detection module 220 receives one or more images (e.g., depth image data) of a local area from the DCA 130. Here, each of the one or more images is a 2D image of a local area and includes the plurality of SL elements projected by the DCA onto the local area. The SL detection module 220 may apply a full frame blur on each of the one or more received images. In an embodiment, the full frame blur is a 2D Gaussian blur with a size that corresponds with the size of the pattern (e.g., 9×9 pixels) of the pattern plate. The SL detection module 220 may sub-divide one or more received images into individual patterns with dimensions of 9×9 pixels and apply a Gaussian blur on each pattern. In various embodiments, the Gaussian blur is applied to remove noise from each of the one or more captured images and to enable dot detection. In other embodiments, the Gaussian blur is applied for patterns that are larger or smaller than that noted here. For example, a Gaussian blur may be performed on a pattern size of 3×3 pixels or, alternatively, on a pattern size of 15×15 pixels. In some embodiments, the size of the pattern to which a frame blur is applied depends on the noise detected in the scene and the resolution of the received image. Here, the computational cost of the Gaussian blur scales linearly with the number of pixels in each of the one or more received images.
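A small sketch of the tile-wise Gaussian blur is shown below. The tile size and sigma are assumptions, and the use of scipy.ndimage.gaussian_filter is an illustrative choice rather than the module's actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_in_tiles(image: np.ndarray, tile: int = 9, sigma: float = 1.0) -> np.ndarray:
    """Apply a Gaussian blur tile-by-tile to suppress noise before dot detection.

    A sketch of the per-pattern blur described above; tile size and sigma are
    assumptions, and a real implementation may simply blur the whole frame.
    """
    out = image.astype(np.float32).copy()
    h, w = image.shape
    for y in range(0, h - h % tile, tile):
        for x in range(0, w - w % tile, tile):
            out[y:y + tile, x:x + tile] = gaussian_filter(
                out[y:y + tile, x:x + tile], sigma=sigma)
    return out

# Example on a synthetic noisy IR frame (dimensions chosen to fit 9x9 tiles).
frame = np.random.poisson(5, size=(486, 639)).astype(np.float32)
smoothed = blur_in_tiles(frame, tile=9, sigma=1.0)
```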

The SL detection module 220 determines the center of each of the plurality of SL elements in each of the one or more captured images. Determining the center of each dot in a particular captured image comprises determining a set of pixels surrounding the dot and performing a threshold parabola fit on the determined set of pixels. In an embodiment, the set of pixels is a 2D array with a size of 3×3 pixels. Here, the center of the dot is the pixel in the set of pixels corresponding to the maximum of the fitted parabola. In an embodiment, the parabola fit is determined based on a local intensity gradient in the array of pixels. For example, the SL detection module 220 determines that the center of a dot is the pixel with the highest intensity in the set of pixels. Generally the size of the 2D array defining the set of pixels is determined based on the resolution of an imaging device associated with the DCA 130.
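One common way to realize this sub-pixel refinement is a separable parabola fit around the brightest pixel of a 3×3 patch; the sketch below is such an approximation and may differ from the SL detection module 220's exact fit:

```python
import numpy as np

def refine_dot_center(patch: np.ndarray) -> tuple:
    """Sub-pixel dot center from a 3x3 intensity patch around the peak pixel.

    Fits a 1D parabola independently in x and y through the peak and its two
    neighbours; the offset of the parabola's maximum gives the sub-pixel shift.
    """
    cy, cx = 1, 1  # peak assumed at the patch centre

    def parabola_offset(m, c, p):
        denom = m - 2.0 * c + p
        return 0.0 if abs(denom) < 1e-9 else 0.5 * (m - p) / denom

    dx = parabola_offset(patch[cy, cx - 1], patch[cy, cx], patch[cy, cx + 1])
    dy = parabola_offset(patch[cy - 1, cx], patch[cy, cx], patch[cy + 1, cx])
    return cx + dx, cy + dy

# Example: a dot whose true centre is slightly right of the middle pixel.
patch = np.array([[1.0, 2.0, 1.0],
                  [2.0, 5.0, 3.0],
                  [1.0, 2.0, 1.0]])
print(refine_dot_center(patch))  # x slightly > 1.0, y = 1.0
```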

The SL detection module 220 may also determine a set of neighboring SL elements within a pattern for each of the plurality of SL elements in a received image. In an embodiment, the SL detection module 220 determines a 12×12 pixel window centered on a determined location of a first SL element. Each pixel in the determined window indicates a location of a neighboring dot. That is, the determined 12×12 pixel window indicates the locations, in image space, of one or more other SL elements adjacent to the first SL element within the determined window. Here, locations of neighboring SL elements may be determined by performing a parabola fit as described above. In other embodiments, different window sizes may be used to perform a threshold parabola fit and determine a list of neighbors.
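As an illustration, collecting the neighbors of a dot can amount to selecting all detected centers inside the window around it; the sketch below assumes dot centers are already available as an (N, 2) array and is not the module's literal procedure:

```python
import numpy as np

def neighbours_in_window(centers: np.ndarray, anchor: np.ndarray,
                         window: int = 12) -> np.ndarray:
    """Return detected dot centers that fall inside a window around `anchor`.

    `centers` is an (N, 2) array of (x, y) dot centers in image space; the
    12-pixel window size mirrors the description above but is configurable.
    """
    half = window / 2.0
    inside = np.all(np.abs(centers - anchor) <= half, axis=1)
    # Exclude the anchor itself (zero offset) from its own neighbour list.
    inside &= np.any(centers != anchor, axis=1)
    return centers[inside]

centers = np.array([[100.0, 100.0], [104.0, 103.0], [120.0, 100.0]])
print(neighbours_in_window(centers, np.array([100.0, 100.0])))
# -> [[104. 103.]]; the dot at x=120 lies outside the 12x12 window
```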

The transform module 230 applies warp functions to a window around each SL element in a frame of depth image data to generate candidate warps. Each of the warp functions applies at least one of a stretch or a skew to the window. In some embodiments, the transform module 230 generates a reverse transform from the image space to the pattern space for each dot and its neighbors. Here, the reverse transform maps each SL element and its neighbors within a window to dots on the pattern plate. A reverse transform from image space to pattern space for a given 12×12 pixel window comprises a set of candidate warps. The set of candidate warps may comprise stretches, skews, rotations, and any combination thereof. In an example embodiment, the transform module 230 determines 18 candidate warps for each 12×12 window associated with a SL element. Here, a warp is a plane-induced homography for each of the determined 12×12 windows, and the number of compute operations associated with determining the reverse transform scales linearly with the number of candidate warps. That is, the greater the number of candidate warps comprising a reverse transform, the slower the performance.
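The sketch below enumerates a small family of 2×2 stretch/skew transforms and applies them to neighbor offsets. It is a toy stand-in: the patent's candidate warps are plane-induced homographies, the counts and parameter ranges here are arbitrary, and the function names are invented for the example:

```python
import numpy as np

def candidate_warps(stretches=(0.8, 1.0, 1.2), skews=(-0.2, 0.0, 0.2)):
    """Enumerate a small family of 2x2 stretch/skew transforms.

    A toy stand-in for the candidate warps described above; this affine
    family only approximates a plane-induced homography for small windows.
    """
    warps = []
    for sx in stretches:
        for sy in stretches:
            for k in skews:
                warps.append(np.array([[sx, k], [0.0, sy]]))
    return warps

def warp_offsets(offsets: np.ndarray, warp: np.ndarray) -> np.ndarray:
    # Map neighbour offsets (image space) back toward pattern space.
    return offsets @ warp.T

offsets = np.array([[4.0, 3.0], [-2.0, 5.0]])   # neighbour offsets around a dot
for w in candidate_warps()[:3]:
    print(warp_offsets(offsets, w))
```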

The matching module 240 determines binary codes for the candidate warps, and matches SL elements to apertures by comparing the binary codes of the candidate warps with the hash table of an aperture. In some embodiments, the matching module 240 generates one or more codes for each of the one or more candidate warps generated by the transform module 230. In an embodiment, each of the one or more codes is a binary code. The one or more codes are used as keys to determine an exact match to the identified SL element and its neighbors via a hash table associated with the DCA. In various embodiments, the hash table associated with the DCA is stored in the hash table store 270.
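Conceptually, each candidate warp turns a dot's warped neighbor offsets into a binary code that can be looked up in the pattern hash table. The sketch below illustrates that lookup with an invented quantization and a hypothetical pattern_table; it is not the matching module 240's actual code generation:

```python
def code_from_offsets(offsets, grid: int = 9) -> int:
    """Rasterise warped neighbour offsets onto a grid and pack them into a key.

    Offsets are assumed to be expressed in pattern-plate pixel units and
    centred on the matched dot; the resulting integer is then used as a key
    into the hash table built by the hash table compute module.
    """
    half = grid // 2
    bits = [[0] * grid for _ in range(grid)]
    bits[half][half] = 1  # the dot itself sits at the window centre
    for dx, dy in offsets:
        x, y = int(round(dx)) + half, int(round(dy)) + half
        if 0 <= x < grid and 0 <= y < grid:
            bits[y][x] = 1
    key = 0
    for row in bits:
        for b in row:
            key = (key << 1) | b
    return key

# Hypothetical lookup: pattern_table maps 81-bit codes to aperture locations.
pattern_table = {code_from_offsets([(1, 0), (0, 2)]): (17, 42)}
match = pattern_table.get(code_from_offsets([(1, 0), (0, 2)]))
print(match)  # (17, 42) -> exact match to an aperture location
```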

In various embodiments, the matching module 240 may not find an exact match for each of the plurality of SL elements in the captured image. For example, the matching module 240 may determine an exact match for only a first subset of the plurality of SL elements in a captured image. If not all of the observed SL elements in a received image are mapped to an aperture on the pattern plate, the matching module 240 may iteratively augment the number of exact matches by performing fringe growth. That is, the matching module 240 may iteratively determine a match for a second subset of the plurality of SL elements by exploiting the fact that each exactly matched SL element also, implicitly, indicates the locations of its neighboring SL elements. For example, an SL element matched to an aperture may be used to match another SL element to another aperture based on the known distances or relative locations of the apertures. In some embodiments, additional SL elements may be matched based on the SL elements matched by fringe growing in an iterative process. As such, the matching module 240 can increase the number of matches based on one or more determined exact matches. Here, each iteration results in a growth in the number of matched SL elements, which are then recursively used to determine additional matches.
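A simplified sketch of such fringe growth is shown below. It propagates matches through a neighbor graph using a direction index, which is an assumption made for the example; the real matcher reasons about geometric offsets between apertures rather than an index:

```python
def fringe_grow(matches: dict, neighbours: dict, pattern_neighbours: dict) -> dict:
    """Iteratively propagate matches from matched dots to unmatched neighbours.

    matches:            image dot id -> pattern aperture id (exact matches so far)
    neighbours:         image dot id -> list of adjacent image dot ids
    pattern_neighbours: (aperture id, direction index) -> adjacent aperture id
    A simplified stand-in for the fringe growth described above.
    """
    grown = dict(matches)
    changed = True
    while changed:
        changed = False
        for dot, aperture in list(grown.items()):
            for direction, other_dot in enumerate(neighbours.get(dot, [])):
                other_aperture = pattern_neighbours.get((aperture, direction))
                if other_aperture is not None and other_dot not in grown:
                    grown[other_dot] = other_aperture
                    changed = True
    return grown

# Toy example: dot 0 is exactly matched; its neighbour dot 1 is inferred.
print(fringe_grow({0: "A"}, {0: [1]}, {("A", 0): "B"}))  # {0: 'A', 1: 'B'}
```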
