Patent: Biometric authentication system
Publication Number: 20230379564
Publication Date: 2023-11-23
Assignee: Apple Inc
Abstract
Methods and apparatus for biometric authentication in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person. An imaging system captures images of a person's iris, eye, periorbital region, and/or other regions of the person's face, and two or more features from the captured images are analyzed individually or in combination to identify and authenticate the person and/or to detect attempts to spoof the biometric authentication. Embodiments may improve the performance of biometric authentication systems, and may help to reduce false positives and false negatives by the biometric authentication algorithms.
Claims
What is claimed is:
[Claims 1-20: claim text not included in this extract.]
Description
This application is a 371 of PCT Application No. PCT/US2021/051615, filed Sep. 22, 2021, which claims benefit of priority to U.S. Provisional Patent Application No. 63/083,775, filed Sep. 25, 2020. The above applications are incorporated herein by reference. To the extent that any material in the incorporated application conflicts with material expressly set forth herein, the material expressly set forth herein controls.
BACKGROUND
An eye or gaze tracker is a device for estimating eye positions and eye movement. Eye tracking systems have been used in research on the visual system, in psychology, psycholinguistics, marketing, and as input devices for human-computer interaction. In the latter application, typically the intersection of a person's point of gaze with a desktop monitor is considered.
Biometric authentication technology uses one or more features of a person to identify that person, for example for secure, authenticated access to devices, systems, or rooms. In a typical scenario, in a registration process one or more images are captured of the features being tracked (e.g., images of a person's iris(es)), and the images are processed to generate a set or vector of metrics that are unique to, and thus uniquely identify, that person. When the person attempts to access the device, system, or room, images of the person's features are again captured and processed using a similar algorithm to the one used during registration. The extracted metrics are compared to the baseline metrics and, if the match is sufficiently good, the person is allowed access.
SUMMARY
Embodiments of imaging systems that implement flexible illumination methods are described. Embodiments may provide methods that improve the performance and robustness of an imaging system, and that make the imaging system adaptable to specific users, conditions, and setup for biometric authentication using the eyes and periorbital region, gaze tracking, and anti-spoofing. Embodiments of methods and apparatus for biometric authentication are described in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person.
In embodiments, an imaging system is used to capture images of a person's iris, eye, periorbital region, and/or other regions of the person's face, and two or more features from the captured images are analyzed individually or in combination to identify and authenticate the person (or to detect attempts to spoof the biometric authentication). Embodiments may improve the performance of biometric authentication systems, and may help to reduce false positives and false negatives by the biometric authentication algorithms, when compared to conventional systems that rely on only one feature for biometric authentication. Embodiments may be especially advantageous in imaging systems that have challenging hardware constraints (point of view, distortions, etc.) for individual biometric aspects or features (e.g., the iris), as additional biometric features (e.g., veins in the eye, portions or features of the periorbital region, or features of other parts of the face) may be used for biometric authentication if good images of one or more of the biometric features cannot be captured at a particular pose or under current conditions.
The biometric aspects that are used may include one or more of facial, periocular, or eye aspects. For each biometric aspect, one or more different features may be used to describe or characterize the aspect; the different features may, for example, include geometric features, qualitative features, and low-level, intermediate, or high-level 3D representations. The biometric aspects and features may include, but are not limited to, one or more of the eye surface, eye veins, eyelids, eyebrows, skin features, and nose features, as well as features of the iris such as color(s), pattern(s), and 3D musculature. In some embodiments, feature sizes and geometric relations to other features may be included as biometric aspects.
A similar method may be applied in a gaze tracking process in which two or more features of the eye are imaged and processed to obtain better information for gaze tracking at different poses and in different conditions.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A through 1D illustrate example eye camera systems, according to some embodiments.
FIG. 2 graphically illustrates tradeoffs between complexities in a biometric authentication system, according to some embodiments.
FIG. 3 is a block diagram of an imaging system that implements a flexible illumination method, according to some embodiments.
FIG. 4 is a flowchart of a method for providing flexible illumination in an imaging system, according to some embodiments.
FIGS. 5A and 5B illustrate a biometric authentication system that combines different biometric aspects, according to some embodiments.
FIG. 6 is a flowchart of a method for performing biometric authentication using multiple biometric aspects, according to some embodiments.
FIG. 7 illustrates a biometric authentication system that uses multiple cameras, according to some embodiments.
FIG. 8A is a flowchart of a method for biometric authentication using multiple cameras, according to some embodiments.
FIG. 8B is a flowchart of another method for biometric authentication using multiple cameras, according to some embodiments.
FIG. 9A illustrates a system that includes at least one additional optical element on the light path between the user's eye and the eye camera, according to some embodiments.
FIG. 9B illustrates a system that includes a diffractive optical element on the light path between the user's eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
FIG. 10 is a flowchart of a method for processing images in a system that includes at least one additional optical element on the light path between the user's eye and the eye camera, according to some embodiments.
FIG. 11 is a flowchart of a method for capturing and processing images in a system that includes a diffractive optical element on the light path between the user's eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
FIGS. 12A through 12C illustrate a system that includes light sources that emit light at multiple wavelengths to sequentially capture images at the multiple wavelengths, according to some embodiments.
FIGS. 13A and 13B illustrate a system that includes a camera with a photosensor that concurrently captures multiple images at different wavelengths, according to some embodiments.
FIG. 14 is a flowchart of a method for sequentially capturing and processing images at multiple wavelengths, according to some embodiments.
FIG. 15 is a flowchart of a method for concurrently capturing and processing images at multiple wavelengths, according to some embodiments.
FIG. 16 illustrates a system that provides feedback to the user and/or control signals to the imaging system to manually or mechanically adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments.
FIG. 17 is a flowchart of a method for providing feedback to the user to manually adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments.
FIG. 18 is a flowchart of a method for providing control signals to the imaging system to mechanically adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments.
FIGS. 19A and 19B are block diagrams illustrating a device that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
FIG. 20 illustrates an example head-mounted device (HMD) that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
FIG. 21 is a block diagram illustrating an example system that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
“Comprising.” This term is open-ended. As used in the claims, this term does not foreclose additional structure or steps. Consider a claim that recites: “An apparatus comprising one or more processor units . . . .” Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.).
“Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph (f), for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
“First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, a buffer circuit may be described herein as performing write operations for “first” and “second” values. The terms “first” and “second” do not necessarily imply that the first value must be written before the second value.
“Based On” or “Dependent On.” As used herein, these terms are used to describe one or more factors that affect a determination. These terms do not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
“Or.” When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof.
DETAILED DESCRIPTION
Various embodiments of methods and apparatus for flexible illumination in imaging systems are described. An imaging system as described herein may include two or more illumination sources (e.g., point light sources such as light-emitting diodes (LEDs)) that illuminate an object to be imaged (e.g., a person's eye or eye region), and at least one camera configured to capture images of light from the illumination sources reflected by the object when illuminated.
Embodiments of the imaging system may, for example, be used for biometric authentication, for example using features of the user's eyes such as the iris, the eye region (referred to as the periocular region), or other parts of the user's face such as the eyebrows. A biometric authentication system uses one or more of the features to identify a person, for example for secure, authenticated access to devices, systems, or rooms. In a registration process one or more images are captured of the features being tracked (e.g., images of a person's iris(es), periocular region, etc.), and the images are processed to generate a set or vector of metrics that are unique to, and thus uniquely identify, that person. When the person attempts to access the device, system, or room, images of the person's features are again captured and processed using a similar algorithm to the one used during registration. The extracted metrics are compared to the baseline metrics and, if the match is sufficiently good, the person may be allowed access.
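To make the registration-and-match flow concrete, the sketch below shows a minimal version of it in Python. The feature extractor (a simple intensity histogram), the distance metric, and the acceptance threshold are placeholder assumptions for illustration, not the algorithm described in this disclosure.

```python
import numpy as np

MATCH_THRESHOLD = 0.35  # placeholder acceptance threshold, not from the patent

def extract_metrics(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: reduce an eye/periocular image to a
    fixed-length metric vector (a real system would use iris codes,
    learned embeddings, etc.)."""
    hist, _ = np.histogram(image, bins=64, range=(0, 255), density=True)
    return hist

def enroll(images: list[np.ndarray]) -> np.ndarray:
    """Registration: average the metric vectors from several captures
    to form the stored baseline for this user."""
    return np.mean([extract_metrics(img) for img in images], axis=0)

def authenticate(baseline: np.ndarray, probe_image: np.ndarray) -> bool:
    """Later access attempt: extract metrics with the same algorithm and
    accept if the distance to the baseline is sufficiently small."""
    probe = extract_metrics(probe_image)
    distance = np.linalg.norm(baseline - probe)
    return distance < MATCH_THRESHOLD
```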
Another example use for embodiments of the imaging system is in gaze tracking. A gaze tracking system may, for example, be used to compute gaze direction and a visual axis using glints and eye features based on a three-dimensional (3D) geometric model of the eye.
Embodiments of the imaging system described herein may, for example, be used in a biometric authentication process, a gaze tracking process, or both. Another example is in anti-spoofing, which is related to biometric authentication in that “spoofing” refers to attempts to trick a biometric authentication system by, for example, presenting a picture or model of a valid user's eye, eye region, or face. More generally, embodiments of the imaging system may be implemented in any application or system in which images of an object illuminated by a light source are captured by one or more cameras for processing.
A non-limiting example application of the methods and apparatus for flexible illumination in imaging systems is in systems that include at least one eye camera (e.g., an infrared (IR) camera) positioned at each side of a user's face, and an illumination source (e.g., point light sources such as an array or ring of IR light-emitting diodes (LEDs)) that emits light towards the user's eyes. The imaging system may, for example, be a component of a head-mounted device (HMD), for example an HMD of an extended reality (XR) system such as a mixed or augmented reality (MR) system or virtual reality (VR) system. The HMD may, for example, be implemented as a pair of glasses, goggles, or a helmet. Other example applications for the imaging system include mobile devices such as smartphones, pad or tablet devices, desktop computers, and notebook computers, as well as stand-alone biometric authentication systems mounted on walls or otherwise located in rooms or on buildings. In any of these example systems, the imaging system may be used for biometric authentication, gaze tracking, or both.
FIGS. 1A through 1D illustrate example imaging systems, according to some embodiments. The imaging system may include, but is not limited to, one or more cameras 140, an illumination source 130, and a controller 160. FIG. 1A shows an imaging system in which the eye camera 140 images the eye 192 directly. However, in some embodiments the eye camera 140 may instead image a reflection of the eye 192 off of a hot mirror 150 as shown in FIG. 1B. In addition, in some embodiments, the eye camera 140 may image the eye through a lens 120 of an imaging system, for example as shown in FIG. 1C.
In some embodiments, a device (e.g., a head-mounted device (HMD)) may include an imaging system that includes at least one eye camera 140 (e.g., infrared (IR) cameras) positioned on one side or at each side of the user's face, and an illumination source 130 (e.g., point light sources such as an array or ring of IR light-emitting diodes (LEDs)) that emits light towards the user's eye(s) 192 or periorbital region.
FIG. 1D shows an example illumination source 130 that includes multiple LEDs 132. In this example, there are eight LEDs 132 arranged in a ring. Note, however, that the number and arrangement of the LEDs 132 in an illumination source 130 may be different.
In addition, in some embodiments, light-emitting elements other than LEDs may be used. In some embodiments, the LEDs 132 may be configured to emit light in the IR (including SWIR or NIR) range, for example at 740, 750, 840, 850, 940, or 950 nanometers.
The eye camera 140 may be pointed towards the eye 192 to receive light from the illumination source 130 reflected from the eye 192, as shown in FIG. 1A. However, in some embodiments the eye camera 140 may instead image a reflection of the eye 192 off of a hot mirror 150 as shown in FIG. 1B. In addition, in some embodiments, the eye camera 140 may image the eye 192 through a lens 120 or other optical element of the device, for example as shown in FIG. 1C.
The device that includes the imaging system may include a controller 160 comprising one or more processors and memory. Controller 160 may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images. In some embodiments, the controller 160 may be integrated in the device. In some embodiments, at least some of the functionality of the controller 160 may be implemented by an external device coupled to the device by a wired or wireless connection. While not shown in FIGS. 1A through 1C, in some embodiments controller 160 may be coupled to an external memory for storing and reading data and/or software.
The controller 160 may send control signals to the illumination source 130 and camera 140 to control the illumination of the eye 192 and capture of images of the eye 192. The controller 160 may use input 142 (e.g., captured images of the eyes 192) from the eye cameras 140 for various purposes, for example for biometric authentication or gaze tracking. The controller 160 may implement algorithms that estimate the user's gaze direction based on the input 142. For example, the controller 160 may implement algorithms that process images captured by the cameras 140 to identify features of the eye 192 (e.g., the pupil, iris, and sclera) or periorbital region to be used in biometric authentication algorithms. As another example, the controller 160 may implement gaze tracking algorithms that process images captured by the cameras 140 to identify glints (reflections of the LEDs 130) obtained from the eye cameras 140. The information obtained from the input 142 may, for example, be used to determine the direction in which the user is currently looking (the gaze direction), and may be used to construct or adjust a 3D model of the eye 192.
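As a much simplified illustration of one of these processing steps, the sketch below locates candidate glints as small saturated blobs in a captured IR image; the intensity threshold and blob-size limit are assumed values for illustration, not parameters from this disclosure.

```python
import numpy as np
from scipy import ndimage

def find_glints(ir_image: np.ndarray,
                intensity_threshold: int = 240,
                max_area: int = 50) -> list[tuple[float, float]]:
    """Return (x, y) centroids of small, saturated regions that are
    plausible specular reflections of the illumination source's LEDs."""
    bright = ir_image >= intensity_threshold       # candidate glint pixels
    labels, count = ndimage.label(bright)          # group them into connected blobs
    centroids = []
    for region in range(1, count + 1):
        ys, xs = np.nonzero(labels == region)
        if xs.size <= max_area:                    # keep only small blobs; reject large bright areas
            centroids.append((float(xs.mean()), float(ys.mean())))
    return centroids
```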
However, in a device that implements the imaging system, components of the device may result in unwanted reflections and stray light on the final image captured by camera 140. The more complex the imaging system, for example with optical surfaces (e.g., lenses 120 and/or mirrors 150) involved in the trajectory between the point light sources 130 and camera 140, the higher the likelihood of unwanted reflections and stray light in the final image captured by camera 140, caused for example by reflections in lenses, imperfections in lenses or other optical surfaces, or dust on optical surfaces. When using the imaging system for biometric authentication and/or for gaze tracking, components of the device (e.g., lenses) may block, refract, or reflect light, including a portion of the light from the illumination source 130 and ambient light, if present. In addition, the position of the device and imaging system with respect to the user's head may shift during use. Other aspects of the device and imaging system may also change. For example, a surface of a lens in the device may become smudged, or the user may add or change something such as clip-on lenses to the device. Thus, the quality of the images captured with the imaging system may vary depending on the current lighting conditions, the position of the device and imaging system with respect to the user's head, and other factors such as smudges or other changes to the device. The quality of the captured images may affect the efficiency and accuracy of algorithms used in various applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking.
Embodiments of the methods and apparatus for flexible illumination in imaging systems as described herein may improve the performance and robustness of an imaging system, and may help to adapt the imaging system to specific users, conditions, and setup for applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking.
FIG. 2 graphically illustrates tradeoffs between complexities in a biometric authentication system, according to some embodiments. Embodiments of an imaging system used for biometric authentication as described herein may trade off system complexity 210 for complexity in the enrollment 200 process. A more complex system 210 may reduce the complexity of the enrollment process for the user, for example by automating processes such as shifting the camera to get a better view of the eye rather than having the user move the device manually. Conversely, the enrollment 200 process could be made more complex to reduce system complexity 210. Similarly, biometric authentication may be improved by increasing the number of aspects 220 of the user's eyes and periorbital region that are used in the identification process at the expense of system complexity 210 and possibly enrollment complexity 200. Similar tradeoffs may apply in other applications such as gaze tracking.
Flexible Illumination for Imaging Systems
Embodiments of imaging systems that implement a flexible illumination method are described. Embodiments may provide methods that improve the performance and robustness of an imaging system, and that make the imaging system adaptable to specific users, conditions, and setup for biometric authentication using the eyes and periorbital region, gaze tracking, and anti-spoofing. While conventional eye tracking systems focus on specular reflections or glints for gaze tracking, embodiments may focus on other aspects such as providing uniform, good contrast on the iris or other regions of interest, reducing or eliminating shadows on regions of interest, and other improvements for biometric authentication applications.
In embodiments, two or more different lighting configurations for the imaging system in a device are pre-generated. Each lighting configuration may specify one or more aspects of lighting including, but not limited to, which LEDs or group of LEDs to enable or disable, intensity/brightness, wavelength, shapes and sizes of the lights, direction, sequences of lights, etc. One or more lighting configurations may be generated for each of two or more poses, where a pose is a 3D geometrical relationship between the eye camera and the user's current eye position and gaze direction. A lookup table may be generated via which each pose is associated with its respective lighting configuration(s). The lookup table and lighting configurations may, for example, be stored to memory of the device and/or to memory accessible to the device via a wired or wireless connection.
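One plausible way to represent a lighting configuration and the pose lookup table in software is sketched below; the specific fields, the coarse pose quantization, and the example entries are illustrative assumptions rather than details from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class LightingConfiguration:
    """One pre-generated lighting configuration for the illumination source."""
    enabled_leds: tuple[int, ...]                                 # LEDs (or LED groups) to turn on
    intensities: dict[int, float] = field(default_factory=dict)  # per-LED drive level, 0.0-1.0
    wavelength_nm: int = 850                                      # emission wavelength to use
    sequence: tuple[int, ...] = ()                                # optional firing order for sequenced capture

# Lookup table mapping a quantized pose (camera-to-eye geometry / gaze direction)
# to the lighting configuration(s) generated for that pose.
Pose = tuple[int, int]  # e.g., coarse (azimuth_bin, elevation_bin); an assumed encoding

lighting_lookup: dict[Pose, list[LightingConfiguration]] = {
    (0, 0): [LightingConfiguration(enabled_leds=(0, 2, 4, 6))],
    (1, 0): [LightingConfiguration(enabled_leds=(1, 3, 5, 7),
                                   intensities={1: 0.8, 3: 0.8})],
}

def configurations_for_pose(pose: Pose) -> list[LightingConfiguration]:
    """Return the pre-generated configurations for a pose, falling back to all of them."""
    return lighting_lookup.get(
        pose, [c for cfgs in lighting_lookup.values() for c in cfgs])
```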
In some embodiments, the lighting configurations may be pre-generated synthetically for a device and imaging system, for example using a 3D geometric model or representation of the device and imaging system to generate lighting configurations for a set of estimated poses. Alternatively, in some embodiments, the lighting configurations may be pre-generated using a data set of images of real-world user faces to obtain pose information. As another alternative, in some embodiments, the lighting configurations may be generated during an initialization process for a particular user. For example, in some embodiments, the user puts on or holds the device and moves their gaze around, and the system/controller runs through a process during which images are captured and processed with different light settings to determine optimal lighting configurations for this user when capturing images of the desired features at two or more different poses.
In some embodiments, after the lighting configurations and lookup table are generated, the user may put on, hold, or otherwise use the device. A biometric authentication process may be initiated in which different lighting configurations may be selected by the controller to capture optimal images of the desired features of the user's eye (e.g., iris, periorbital region, etc.) at different poses and in different conditions for use by the biometric authentication algorithms executed by the controller.
In some embodiments, the device may initiate a biometric authentication process when the user accesses the device. In some embodiments, the device's controller may begin the biometric authentication process with a default initial lighting configuration. One or more images may be captured by the imaging system using the respective setting for the illumination source, and the captured image(s) may be checked for quality. If the images are satisfactory for the algorithms that process the images to perform biometric authentication using one or more features of the user's eye, periorbital region, and/or other facial features, then the flexible illumination process may be done. Otherwise, the controller may select another lighting configuration, direct the illumination source to illuminate the subject according to the new lighting configuration, and direct the camera to capture one or more images that are checked for quality. This process may be repeated until a successful authentication has been achieved, or for a specified number of attempts until the authentication attempt is considered failed. In some embodiments, the user's current pose may be determined by the imaging system and controller, for example using a gaze tracking algorithm, and the user's current pose may be used to select an initial lighting configuration and, if necessary, one or more subsequent lighting configurations for the biometric authentication process.
A similar method may be applied in a gaze tracking process in which different lighting configurations are selected by the controller to obtain better images of the desired features of the user's eyes (e.g., glints) at different poses and in different conditions.
Embodiments of the flexible illumination method may improve the performance and robustness of an imaging system, and may help to adapt the imaging system to specific users, conditions, and setup for applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking. Embodiments may capture and process images of the eye or periorbital region using one or more different lighting configurations until a lighting configuration is found that provides optimal (or at least good enough) images to perform a particular function (e.g., biometric authentication, gaze tracking, etc.), thus improving the performance and robustness of the device, system, and/or algorithm that uses image(s) of the eye or periorbital region in performing the function (e.g., biometric authentication, gaze tracking, etc.).
By dynamically searching for and finding a good or optimal lighting configuration for current conditions, embodiments of the flexible illumination method may help to make an imaging system adaptable to one or more of, but not limited to:
environmental/ambient lighting conditions;
reflections, streaks, ghosts, stray light, etc. that may be visible in captured images of the eye or periorbital region;
changes in the optical path between at least one of the LEDs in the illumination source, the eye or periorbital region, and at least one eye camera, including but not limited to indirect optical paths with enclosures or other elements of a device that includes the imaging system, which may result in additional reflections or visual impairment of the captured images;
other changes in the device that includes the imaging system, such as the addition of clip-on lenses to the device; and
variations in prescriptions specific to particular users that may be used in optical elements of the device that are on the optical path between the LEDs of the illumination source and the eye camera(s).
Embodiments of the flexible illumination method may, for example, be implemented in any of the illumination systems as illustrated in FIGS. 1A through 1D. FIGS. 19A through 21 illustrate example devices and systems that may include imaging systems that implement embodiments of the flexible illumination method. An illumination system that implements the flexible illumination may include, but is not limited to:
an illumination source that includes multiple light-emitting elements which can be controlled individually or in groups (e.g., IR or NIR LEDs, or LEDs in other wavelengths).
In embodiments, a controller of the device that includes the imaging system may control one or more of, but not limited to, the following based on a current lighting configuration:
increasing or decreasing the intensity/current to individual, or groups of, the light-emitting elements; and
sequencing of individual, or groups of, the light-emitting elements.
In embodiments, the light-emitting elements, or groups of the light-emitting elements, may differ in one or more of, but not limited to, the following:
location and orientation (pose);
shape;
size; and
light emitting angular profile.
In some embodiments, individual light-emitting elements or groups of light-emitting elements may include additional optical elements, for example lenses, grids, etc., that affect light emitted by the elements or groups of light-emitting elements.
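A controller might apply a selected configuration to the illumination hardware roughly as follows. The LedDriver interface is hypothetical (standing in for whatever driver API the device actually exposes), and the config argument is assumed to be a LightingConfiguration like the one sketched earlier.

```python
from typing import Protocol

class LedDriver(Protocol):
    """Hypothetical LED driver interface; a real device would expose its own API."""
    def enable(self, led_index: int) -> None: ...
    def disable(self, led_index: int) -> None: ...
    def set_current(self, led_index: int, level: float) -> None: ...

def apply_lighting_configuration(driver: LedDriver, config, num_leds: int = 8) -> None:
    """Drive individual LEDs (or groups) according to a LightingConfiguration
    (see the earlier sketch): enable the listed LEDs at their drive levels,
    disable the rest."""
    for led in range(num_leds):
        if led in config.enabled_leds:
            driver.set_current(led, config.intensities.get(led, 1.0))
            driver.enable(led)
        else:
            driver.disable(led)
```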
The following broadly describes a method for selecting a lighting configuration, according to some embodiments. One or more images of a user's eye or periorbital region may be captured using a first lighting configuration. Additional images may be captured using at least one additional lighting configuration. One or more objective criteria (e.g., contrast, shadows, edges, undesirable streaks, etc.) may be selected or determined for analyzing the images. Based on an analysis of the captured images using the objective criteria, one of the lighting configurations that corresponds to one or more image(s) that best satisfies the objective criteria for this user may be selected. In some embodiments, if a change in the conditions under which the lighting configuration was selected is detected (e.g., some change in the user's position or appearance, a change in ambient lighting, a change to the device that includes the imaging system, etc.), then the method for selecting a lighting configuration may be repeated.
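A minimal sketch of that selection loop follows, assuming a crude contrast/saturation score as the objective criteria and a capture_image callable standing in for the illuminate-and-capture step; both are placeholders rather than the criteria a real application would use.

```python
import numpy as np

def image_quality_score(image: np.ndarray) -> float:
    """Placeholder objective criteria: reward global contrast and penalize
    saturated regions (a crude proxy for undesirable streaks/reflections)."""
    contrast = float(np.std(image))
    saturated_fraction = float(np.mean(image >= 250))
    return contrast - 100.0 * saturated_fraction

def select_lighting_configuration(configurations, capture_image):
    """Capture one image per candidate configuration and keep the configuration
    whose image best satisfies the objective criteria."""
    scored = []
    for config in configurations:
        image = capture_image(config)   # illuminate per this configuration, then capture
        scored.append((image_quality_score(image), config))
    return max(scored, key=lambda item: item[0])[1]
```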
The objective criteria used in selecting lighting configurations may differ based on the particular application. For example, in a biometric authentication process that uses the iris to authenticate users, the algorithm may need images of the iris with uniform, good contrast, no shadows, etc. In a gaze tracking process, the algorithm may need images that include specular reflections or glints in certain locations and/or of certain sizes and number.
In some embodiments, the objective criteria used in selecting lighting configurations may differ based on the environment (e.g., internal vs external ambient conditions). In some embodiments, the objective criteria used in selecting lighting configurations may differ based on varying gaze poses or adjustments to a user's face, for example eye relief (depth) and interpupillary distance (IPD).
FIG. 3 is a block diagram of an imaging system that implements a flexible illumination method, according to some embodiments. Two or more lighting configurations 372 may be generated in a configuration generation 310 process. In some embodiments, the lighting configurations may be pre-generated synthetically for a device and imaging system, for example using a 3D geometric model or representation of the device and imaging system to generate lighting configurations for a set of estimated poses. Alternatively, in some embodiments, the lighting configurations may be pre-generated using a data set of images of real-world user faces to obtain pose information. As another alternative, in some embodiments, the lighting configurations may be generated during an initialization process for a particular user. For example, in some embodiments, the user puts on or holds the device and moves their gaze around, and the system/controller runs through a process during which images are captured and processed with different light settings to determine optimal lighting configurations for this user when capturing images of the desired features at two or more different poses.
The pre-generated lighting configurations 372 may be stored 320 to memory 370 accessible to controller 360. In some embodiments, a lookup table 374 may be generated and stored to memory 370 that, for example, maps particular poses to particular lighting configurations.
In some embodiments, after the lighting configurations 372 and lookup table 374 are generated and stored, a user may put on, hold, or otherwise use a device that includes the controller 360, illumination source 330, and eye camera(s) 340. A biometric authentication process may be initiated in which different lighting configurations 372 may be selected by the controller 360 to capture optimal images of the desired features of the user's eye (e.g., iris, periorbital region, etc.) at different poses and in different conditions for use by the biometric authentication algorithms executed by the controller 360.
In some embodiments, the device may initiate a biometric authentication process when the user accesses the device. In some embodiments, the device's controller 360 may begin a biometric authentication process by directing 344 the illumination source 330 to use a default initial lighting configuration 372. One or more images may be captured 342 by the eye camera(s) 340 using the respective lighting provided by the illumination source 330, and the captured image(s) may be checked for quality according to one or more objective criteria or measures as previously described. If the images are satisfactory for the biometric authentication algorithms that rely on one or more features of the user's eye, periorbital region, and/or other facial features captured in the images, then the flexible illumination process may be done. Otherwise, the controller 360 may select another lighting configuration 372, direct the illumination source 330 to illuminate the subject according to the new lighting configuration 372, and direct the camera to capture 342 one or more images with the new lighting configuration 372 that are checked for quality according to one or more objective criteria. This process may be repeated until a successful authentication has been achieved, or for a specified number of attempts until the authentication attempt is considered failed. In some embodiments, the user's current pose may be determined by the imaging system and controller 360, for example using a gaze tracking algorithm, and the user's current pose may be used to select an initial lighting configuration 372 and, if necessary, one or more subsequent lighting configurations 372 for the biometric authentication process.
A similar method may be applied in a gaze tracking process in which different lighting configurations 372 are selected by the controller 360 to obtain better images of the desired features of the user's eyes (e.g., glints) at different poses and in different conditions using one or more objective criteria.
FIG. 4 is a flowchart of a method for providing flexible illumination in an imaging system, according to some embodiments. As indicated at 400, two or more lighting configurations may be generated and stored to a memory. In some embodiments, a lookup table that maps poses to lighting configurations may also be generated and stored. As indicated at 410, an initial lighting configuration may be selected. As indicated at 420, one or more images may be captured with the current lighting configuration and analyzed according to one or more objective criteria. At 430, if the image quality is determined to be not good enough for the algorithm that uses the images (e.g., a biometric authentication algorithm) according to the objective criteria, then another lighting configuration may be selected as indicated at 440, and the method returns to element 420 to capture and check additional images. At 430, if the image quality is determined to be good for the algorithm that uses the images (e.g., a biometric authentication algorithm), then the images may be processed by the algorithm as indicated at 450. At 460, if more images need to be processed (e.g., if the biometric authentication algorithm could not make an identification based on the images at 450), then the method returns to element 420. Otherwise, the method is done.
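Read as code, the flowchart of FIG. 4 corresponds roughly to the loop below; the retry limit and the quality-check and matching callables are passed in as placeholders and are assumptions for illustration, not values from the disclosure.

```python
MAX_ATTEMPTS = 5  # assumed retry limit before the attempt is considered failed

def authenticate_with_flexible_illumination(configurations, capture_image,
                                            is_good_enough, run_biometric_match) -> bool:
    """Try lighting configurations until a captured image satisfies the objective
    criteria and the biometric match succeeds, or the attempts run out."""
    for attempt, config in enumerate(configurations):
        if attempt >= MAX_ATTEMPTS:
            break
        image = capture_image(config)      # illuminate and capture (element 420)
        if not is_good_enough(image):      # objective-criteria check (element 430)
            continue                       # select another configuration (element 440)
        if run_biometric_match(image):     # process the image(s) (element 450)
            return True
    return False                           # authentication attempt failed
```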
Biometric Authentication Using Multiple Biometric Aspects
Embodiments of methods and apparatus for biometric authentication are described in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person. Conventionally, biometric authentication has been performed using a single biometric feature. For example, an image of a person's iris is captured and compared to a baseline image of the user's iris to identify and authenticate the person. In embodiments, an imaging system, for example as illustrated in FIGS. 1A through 1D, is used to capture images of a person's iris, eye, periorbital region, and/or other regions of the person's face, and two or more features from the captured images are analyzed individually or in combination to identify and authenticate the person (or to detect attempts to spoof the biometric authentication). Embodiments may improve the performance of biometric authentication systems, and may help to reduce false positives and false negatives by the biometric authentication algorithms, when compared to conventional systems that rely on only one feature for biometric authentication. Embodiments may be especially advantageous in imaging systems that have challenging hardware constraints (point of view, distortions, etc.) for individual biometric aspects or features (e.g., the iris), as additional biometric features (e.g., veins in the eye, portions or features of the periorbital region, or features of other parts of the face) may be used for biometric authentication if good images of one or more of the biometric features cannot be captured at a particular pose or under current conditions.
The biometric aspects that are used may include one or more of facial, periocular, or eye aspects. For each biometric aspect, one or more different features may be used to describe or characterize the aspect; the different features may, for example, include geometric features, qualitative features, and low-level, intermediate, or high-level 3D representations. The biometric aspects and features may include, but are not limited to, one or more of the eye surface, eye veins, eyelids, eyebrows, skin features, and nose features, as well as features of the iris such as color(s), pattern(s), and 3D musculature. In some embodiments, feature sizes and geometric relationships to other features may be included as biometric aspects.
FIGS. 5A and 5B illustrate a biometric authentication system that combines different biometric aspects, according to some embodiments. FIG. 5A illustrates an example imaging system that combines different biometric aspects, according to some embodiments. The imaging system may include, but is not limited to, one or more cameras 540, an illumination source 530, and a controller 560. In this example, the eye camera 540 is pointed towards the eye 592, periorbital region 580, and portions of the face 582 to receive reflected light from the illumination source 530. Note, however, that in some embodiments, the eye camera 540 may image a reflection off a hot mirror as shown in FIG. 1B. Further, in some embodiments, the eye camera 540 may image the user's facial region including the eye 592 through one or more intermediate optical elements as shown in FIG. 1C. The eye camera(s) 540 may capture 542 individual images of, or images that include, two or more biometric aspects of the eye 592, periorbital region 580, and portions of the face 582. The captured image(s) may be processed by controller 560 to analyze the quality of two or more of the biometric aspects captured in the image(s). Depending on the particular application, the controller 560 may select a best biometric aspect or feature from the images to be used for biometric authentication, or may select two or more of the biometric aspects or features to be used in combination for biometric authentication.
FIG. 5B is an illustration of the iris 594 and pupil 596 of the eye. In some embodiments, features of the iris 594 such as color(s), pattern(s), and a 3D reconstruction of muscle patterns in the iris 594 based on two or more images may be used as biometric aspects or features. An iris 594 feature may be used alone, in combination with one or more iris 594 features, or in combination with one or more other features of the eye 592, periorbital region 580, or face 582 to perform biometric authorization.
FIG. 6 is a flowchart of a method for performing biometric authentication using multiple biometric aspects, according to some embodiments. As indicated at 600, one or more images of the user's eye region (e.g., iris 594, eye 592, periorbital region 580, and/or face 582) may be captured by one or more eye cameras. As indicated at 610, the images may be processed to extract two or more biometric aspects of the user's iris 594, eye 592, periorbital region 580, and/or face 582. As indicated at 620, one or more of the biometric aspects may be selected for authentication. For example, objective criteria may be applied to the extracted biometric aspects to determine whether the biometric aspects meet thresholds of quality for the biometric authentication algorithms. One or more of the biometric aspects that meet respective thresholds may then be selected. As indicated at 630, biometric authentication may then be performed using the selected biometric aspect(s).
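A compact sketch of the method of FIG. 6 follows; the per-aspect extractors, quality thresholds, and the simple averaged score fusion are assumptions made for illustration, not the matching scheme of the disclosure.

```python
import numpy as np

def authenticate_multi_aspect(image: np.ndarray, extractors: dict, thresholds: dict,
                              match_aspect, accept_score: float = 0.8) -> bool:
    """Extract several biometric aspects (iris, eye veins, periorbital features, ...),
    keep those that meet their quality thresholds, and fuse their match scores."""
    selected = {}
    for name, extract in extractors.items():            # element 610: extract aspects
        aspect, quality = extract(image)                 # each extractor returns (aspect, quality)
        if quality >= thresholds.get(name, 0.5):         # element 620: select usable aspects
            selected[name] = aspect
    if not selected:
        return False
    scores = [match_aspect(name, aspect) for name, aspect in selected.items()]
    return float(np.mean(scores)) >= accept_score        # element 630: authenticate
```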
Biometric Imaging System Using Multiple Cameras
Embodiments of methods and apparatus for biometric authentication are described in which two or more cameras are used to capture images of biometric features or aspects for analysis to identify and authenticate a person. Conventionally, biometric authentication has been performed using a single camera to capture images of biometric features. For example, an image of a person's iris is captured by a single eye camera and compared to a baseline image of the user's iris to identify and authenticate the person. In embodiments, an imaging system, for example as illustrated in FIGS. 1A through 1D, includes at least two cameras that are used to capture images of a person's iris, eye, periorbital region, and/or other regions of the person's face, and one or more features from the captured images are analyzed to identify and authenticate the person (or to detect attempts to spoof the biometric authentication).
Embodiments may, for example, be used to capture images of the user's iris using two or more eye cameras for biometric authentication. In some embodiments, instead of or in addition to the iris, two or more cameras may be used to capture biometric aspects or features of the eye, periorbital region, or user's face including but not limited to the eye surface, eye veins, eyelids, eyebrows, skin, or nose, and use the biometrics alone or in combination to perform biometric authentication. In some embodiments, feature sizes and geometric relations to other features may be included as biometric aspects.
Embodiments of biometric systems or algorithms may use images from at least one of the two or more cameras (two or more per eye, in some systems) that capture images from different viewpoints of the user's eye, periorbital region, or face to perform biometric authentication. In conventional biometric systems, typically a single camera is pointed directly at the eye region. However, in some compact systems such as HMDs, the optical path from an eye camera to the target region may be more complex, with other elements such as lenses or hot mirrors on or near the optical path; the visibility of target aspects or features may thus be impaired, and the quality of the captured images may be less than optimal for the biometric authentication algorithms. Adding at least one additional camera per eye may, for example, allow the imaging system to capture images of the eye region from different angles, allow for switching to a more favorable point of view (pose as location and orientation), and in some embodiments allow for two or more images captured by two or more cameras to be combined for use in biometric authentication.
In some embodiments, an algorithm executing on a controller coupled to the two or more cameras may dynamically determine which image(s) captured by the two or more cameras are to be used for biometric authentication, for example using one or more objective criteria to evaluate the quality of the captured images. The objective criteria may include one or more of, but are not limited to, exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, absence of undesired reflections, etc. In addition, properties of the region being captured by a camera may be evaluated to determine quality; for example, an overlap of a part of the eye by an eyelid may obscure at least part of a feature in an image captured by one camera while the feature is more visible in an image captured by a second camera.
In some embodiments, an algorithm executing on a controller coupled to the two or more cameras may combine information from two or more images of an eye, the periorbital region, or portions of the face captured by at least two cameras to process aspects and features extracted from the combined images. The combination of information from two or more images may be performed at different stages of processing. For example, in some embodiments, two or more images may be combined early in processing to enhance the image quality of the resulting combined image from which aspects or features are extracted and evaluated. As another example, two or more images may be processed to extract aspects, features, or other information in an intermediate stage; the extracted information may then be processed in combination to determine a biometric authentication score. As yet another example, the information extracted from two or more images may be processed separately, and then combined in the computation of a final similarity/matching score.
FIG. 7 illustrates a biometric authentication system that uses multiple cameras, according to some embodiments. An imaging system may include, but is not limited to, two or more cameras 740, an illumination source 730, and a controller 760. In this example, the eye cameras 740 are each pointed towards the eye 792, periorbital region 780, and/or portions of the face 782 to receive reflected light from the illumination source 730. Each camera 740 has a different perspective or viewing angle. Also note that, while not shown, each camera 740 may center on or capture a different feature, aspect, or region of the user's face or eye 792. In some embodiments, at least one eye camera 740 may image a reflection off a hot mirror as shown in FIG. 1B. Further, in some embodiments, at least one eye camera 740 may image the user's facial region including the eye 792 through one or more intermediate optical elements as shown in FIG. 1C. Each eye camera 740 may capture 742 images of, or images that include, one or more biometric aspects of the eye 792, periorbital region 780, and portions of the face 782. The images captured by the two or more cameras 740 may be processed by controller 760 to analyze the quality of the image(s). Depending on the particular application, the controller 760 may select one or more of the images to be used for biometric authentication, or may select two or more of the biometric aspects or features from one or more of the images to be used in combination for biometric authentication.
FIG. 8A is a flowchart of a method for biometric authentication using multiple cameras, according to some embodiments. As indicated at 800, two or more images of the user's eye, periorbital region, or portions of the user's face are captured by two or more cameras. As indicated at 802, the captured images are analyzed using one or more objective criteria to determine a best image to use for biometric authentication. As indicated at 804, biometric authentication is performed using the selected image.
FIG. 8B is a flowchart of another method for biometric authentication using multiple cameras, according to some embodiments. As indicated at 820, two or more images of the user's eye, periorbital region, or portions of the user's face are captured by two or more cameras. As indicated at 822, information from two or more of the images is merged or combined. As indicated at 824, biometric authentication is performed using the merged image information.
The merging of information from two or more images may be performed at different stages of processing. For example, in some embodiments, two or more images may be combined early in processing to enhance the image quality of the resulting combined image from which aspects or features are to be extracted and evaluated. As another example, two or more images may be processed to extract aspects, features or other information in an intermediate stage; the extracted information may then be processed in combination to determine a biometric authentication score. As yet another example, the information extracted from two or more images may be processed separately, and then combined in the computation of a biometric authentication score.
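As one example of the last of these options (score-level fusion), the sketch below computes a per-image similarity score and combines the scores with a simple weighted average; the feature extractor, the similarity function, and the equal weighting are placeholder assumptions, not the fusion rule of the disclosure.

```python
import numpy as np

def score_level_fusion(images, extract_features, similarity_to_enrolled,
                       weights=None) -> float:
    """Process each camera's image separately, then combine the per-image
    similarity scores into a single matching score."""
    scores = np.array([similarity_to_enrolled(extract_features(img)) for img in images])
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))  # equal weighting by default
    return float(np.dot(weights, scores))
```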
Biometric Imaging Systems Including Additional Optical Elements
Embodiments of methods and apparatus for biometric authentication are described in which one or more additional optical elements are on the optical path from the illumination system, to the eye or eye region, and then to the eye camera.
In some embodiments, one or more optical elements such as a lens 120 as shown in FIG. 1C may be on the optical path between the eye 192 and the camera 140. The optical element has optical properties; in some embodiments the optical properties may be particular to a user, such as a diopter value. In some embodiments, a user may add an extra optical element, such as a prescription clip-on lens, to the device's optical system. The intervening optical element(s) necessarily affect light that passes through the element(s) to the camera. In some embodiments, information about the optical properties of the intervening optical element(s) may be obtained and stored, and the controller may adjust images captured by the camera(s) according to that information to improve image quality for use in biometric authentication.
In some embodiments, one or more optical elements such as lenses, prisms, or waveguides may be located on the optical path of the eye camera, for example in front of the camera and between the camera and the eye/eye region. In some devices, for example in an HMD with limitations on where eye cameras can be placed, an eye camera may view the eye or eye region from a non-optimal angle due to the physical configuration and limitations of the device the imaging system is implemented in. An image plane formed at the camera at the non-optimal angle may affect the quality of the captured images, for example by reducing pixel density. An optical element such as a lens, prism, or waveguide on the optical path between the eye/eye region and the eye camera may, for example, be used to “bend” the light rays coming from the eye/eye region, and thus tilt the image plane, to obtain better pixel density at the eye camera. In other words, the intervening optical element may compensate for perspective distortion caused by the camera's position. The intervening optical element may thus increase or improve the image space properties of the imaging system.
FIG. 9A illustrates a system that includes at least one additional optical element on the light path between the user's eye and the eye camera, according to some embodiments. An imaging system may include, but is not limited to, one or more cameras 940, an illumination source 930, and a controller 960. In this example, the eye camera 940 is pointed towards the eye 992; note, however, that an eye camera 940 may also or instead capture images of the periorbital region or portions of the face to receive reflected light from the illumination source 930. In some embodiments, the eye camera 940 may instead image a reflection off a hot mirror as shown in FIG. 1B. The eye camera 940 may image the user's facial region including the eye 992 through one or more intermediate optical elements 920A and 920B. Element 920A represents a lens that is a component of an optical system implemented in the device, and may, but does not necessarily, have optical properties particular to a user. Element 920B represents an optional optical element, such as a clip-on lens, that has been added to an optical system implemented in the device, and may, but does not necessarily, have optical properties particular to a user. The eye camera(s) 940 may capture 942 individual images of, or images that include, two or more biometric aspects of the eye 992, periorbital region 980, and portions of the face 982. However, the optical path from the eye region to the eye camera(s) 940 passes through the intervening optical element 920A and/or optical element 920B.
The intervening optical elements 920A and/or 920B necessarily affect light that passes through the element(s) to the camera 940. In some embodiments, information about the optical properties of the intervening optical element(s) (optical element description(s) 976) may be obtained and stored to memory 970, and the controller 960 may adjust images captured by the camera(s) 940 according to the information to improve image quality for use in biometric authentication.
The captured image(s) may be further processed by controller 960 to analyze the quality of one or more of the biometric aspects captured in the image(s). The image(s) or biometric aspect(s) or features(s) extracted from the image(s) may then be used in a biometric authentication process.
FIG. 9B illustrates a system that includes a diffractive optical element on the light path between the user's eye and the eye camera to improve the viewing angle of the camera, according to some embodiments. An imaging system may include, but is not limited to, one or more cameras 940, an illumination source 930, and a controller 960. In this example, the eye camera 940 is pointed towards the eye 992; note, however, that an eye camera 940 may also or instead capture images of the periorbital region or portions of the face to receive reflected light from the illumination source 930. In some embodiments, the eye camera 940 may instead image a reflection off a hot mirror as shown in FIG. 1B. The eye camera 940 may, but does not necessarily, image the user's facial region including the eye 992 through one or more intermediate optical elements 920. The eye camera(s) 940 may capture 942 individual images of, or images that include, two or more biometric aspects of the eye 992, periorbital region 980, and portions of the face 982.
One or more optical elements 924 such as lenses, prisms or waveguides may be located on the optical path of the eye camera 940, for example in front of the camera 940 and between the camera 940 and the eye 992. In some devices, for example in an HMD with limitations for where eye cameras 940 can be placed, an eye camera 940 may view the eye 992 or eye region from a non-optimal angle due to the physical configuration and limitations of the device the imaging system is implemented in. An image plane formed at the camera 940 at the non-optimal angle may affect the quality of the captured images, for example by reducing pixel density. An optical element 924 such as a lens, prism or waveguide on the optical path between the eye 992 and the eye camera 940 may, for example, be used to “bend” the light rays coming from the eye 992, and thus tilt the image plane, to obtain better pixel density at the eye camera 940. In other words, the intervening optical element 924 may compensate for perspective distortion caused by the camera 940's position. The intervening optical element 924 may thus increase or improve the image space properties of the imaging system.
The captured image(s) may be processed by controller 960 to analyze the quality of one or more of the biometric aspects captured in the image(s). The image(s) or biometric aspect(s) or feature(s) extracted from the image(s) may then be used in a biometric authentication process.
FIG. 10 is a flowchart of a method for processing images in a system that includes at least one additional optical element on the light path between the user's eye and the eye camera, according to some embodiments. As indicated at 1000, properties of one or more additional optical elements on the optical path between the eye camera and the eye or eye region may be obtained and stored as optical element descriptions to memory. As indicated at 1010, one or more images of the eye or eye region may be captured with the eye camera. As indicated at 1020, the captured images may be processed by the controller; the optical element description(s) may be applied to the images to adjust the image processing according to the optical properties of the additional optical element(s). At 1030, if processing is done, the method ends. Otherwise the method returns to element 1010.
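By way of illustration only, the flow of FIG. 10 can be expressed compactly in code. The following Python sketch is not part of the disclosure: the names (OpticalElementDescription, detect_element, capture_eye_image, authenticate) and the distortion-style parameters are assumptions introduced here, and the image adjustment is left as a placeholder for whatever correction a particular optical element description calls for.

```python
from dataclasses import dataclass
from typing import Callable, Dict

import numpy as np


@dataclass
class OpticalElementDescription:
    """Hypothetical record of an intervening optical element's properties."""
    element_id: str
    camera_matrix: np.ndarray      # 3x3 projection parameters for the camera/element pair
    distortion_coeffs: np.ndarray  # e.g., radial/tangential terms introduced by the element


def adjust_image(image: np.ndarray, desc: OpticalElementDescription) -> np.ndarray:
    """Compensate a captured image for the stored optical properties (element 1020).

    A real implementation might undistort, re-project, or re-sample the image;
    this placeholder only marks where that correction would be applied.
    """
    return image


def capture_and_process(
    descriptions: Dict[str, OpticalElementDescription],
    detect_element: Callable[[], str],
    capture_eye_image: Callable[[], np.ndarray],
    authenticate: Callable[[np.ndarray], bool],
    max_frames: int = 10,
) -> bool:
    """Sketch of the FIG. 10 flow: obtain description, capture, adjust, process."""
    # 1000: look up the stored description for the detected optical element.
    desc = descriptions[detect_element()]
    for _ in range(max_frames):
        # 1010: capture an image of the eye or eye region with the eye camera.
        image = capture_eye_image()
        # 1020: adjust processing according to the element's optical properties.
        corrected = adjust_image(image, desc)
        # 1030: stop once biometric authentication processing succeeds.
        if authenticate(corrected):
            return True
    return False
```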
FIG. 11 is a flowchart of a method for capturing and processing images in a system that includes a diffractive optical element on the light path between the user's eye and the eye camera to improve the viewing angle of the camera, according to some embodiments. As indicated at 1100, light sources (e.g., LEDs) emit light towards the subject's face. As indicated at 1110, a portion of the light reflected off the subject's face is diffracted towards the camera by an optical element on the optical path between the subject's eye and the camera. As indicated at 1120, one or more images are captured by the camera. As indicated at 1130, the images are processed, for example by a biometric authentication algorithm on a controller of the device that includes the imaging system. At 1140, if processing is done, the method ends. Otherwise the method returns to element 1100.
Biometric Imaging System Using Multiple Wavelengths
Embodiments of methods and apparatus for biometric authentication and anti-spoofing are described in which two or more different wavelengths are used in the illumination system. In embodiments, the illumination source (e.g., a ring of LEDs) may be configured to emit light at two or more different wavelengths, either continuously or selectively. For example, in some embodiments, a wavelength in the mid-800 nm range may be used for biometric authentication using the iris, and a wavelength in the mid-900 nm range may be used for anti-spoofing. Anti-spoofing is related to biometric authentication in that “spoofing” refers to attempts to trick a biometric authentication system by, for example, presenting a picture or model of a valid user's eye, eye region, or face.
In some embodiments, a method may be implemented in which a first wavelength is emitted by the illumination source for capturing an image or images for a first portion of algorithmic processing for biometric authentication, and a second wavelength is emitted by the illumination source for capturing another image or images for a second portion of algorithmic processing for biometric authentication.
FIGS. 12A through 12C illustrate a system that includes light sources that emit light at multiple wavelengths to sequentially capture images at the multiple wavelengths, according to some embodiments.
FIG. 12A shows an example illumination source 1230 that includes multiple LEDs 1232. In this example, there are eight LEDs 1232 arranged in a ring. Note, however, that the number and arrangement of the LEDs 1232 in an illumination source 1230 may be different. In addition, in some embodiments, light-emitting elements other than LEDs may be used. In some embodiments, some of the LEDs 1232A, represented by the shaded circles, may be configured to emit light at a first wavelength in the IR (including SWIR or NIR) range, for example at 740, 750, 840, 850, 940, or 950 nanometers. The other LEDs 1232B, represented by the white circles, may be configured to emit light at a different wavelength in the IR (including SWIR or NIR) range. Note that, in some embodiments, more than two wavelengths may be used. Further, in some embodiments, individual lighting elements may be configured to selectively emit light at two or more different wavelengths.
FIGS. 12B and 12C illustrate an example imaging system that includes light sources (e.g., LEDs) that emit light at multiple wavelengths, according to some embodiments. The imaging system may include, but is not limited to, one or more cameras 1240, an illumination source 1230, and a controller 1260. In this example, the eye camera 1240 is pointed towards the eye 1292 to receive reflected light from the illumination source 1230. However, in some embodiments, the eye camera 1240 may instead or also capture images of the periorbital region and portions of the face. Note that in some embodiments, the eye camera 1240 may image a reflection off a hot mirror as shown in FIG. 1B. Further, in some embodiments, the eye camera 1240 may image the eye 1292 through one or more intermediate optical elements as shown in FIG. 1C.
In FIG. 12B, the eye camera(s) 1240 may capture 1242A individual images of the eye 1292 with LEDs 1232A illuminating the eye at a first wavelength under control 1244A of the controller 1260. In FIG. 12C, the eye camera(s) 1240 may capture 1242B individual images of the eye 1292 with LEDs 1232B illuminating the eye at a second wavelength under control 1244B of the controller 1260.
The captured images may be processed by controller 1260 to analyze the quality of one or more of the biometric aspects captured in the images. Depending on the particular application, the controller 1260 may select a best biometric aspect or feature from the images to be used for biometric authentication, or may select two or more biometric aspects or features to be used in combination for biometric authentication.
In some embodiments, the first wavelength may be emitted by the illumination source 1230 for capturing an image or images for a first portion of algorithmic processing for biometric authentication, and the second wavelength may be emitted by the illumination source 1230 for capturing another image or images for a second portion of algorithmic processing for biometric authentication. In some embodiments, the first wavelength may be used to capture images (e.g., of the iris) for use in an anti-spoofing process, and the second wavelength may be used to capture images (e.g., of the iris) for use in biometric authentication.
FIGS. 13A and 13B illustrate a system that includes a camera with a photosensor that concurrently captures multiple images at different wavelengths, according to some embodiments. As illustrated in FIG. 13A, in some embodiments, as an alternative to sequentially capturing images at different wavelengths, a camera sensor 1350 may be provided that is configured to concurrently capture two (or more) images at different wavelengths. In this example, every other pixel is configured to capture light at a particular wavelength. For example, the white pixels 1352A may be configured to capture light in the mid-800 nm range, and the shaded pixels 1352B may be configured to capture light in the mid-900 nm range. For example, individual filters may be applied to each pixel 1352, with a first filter applied to pixels 1352A and a second filter applied to pixels 1352B.
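As a rough illustration of how such an interleaved readout could be separated into two per-wavelength images, consider the following sketch; the checkerboard assignment, the wavelength labels, and the simple neighbor fill are assumptions standing in for a real demosaicing/interpolation step, not details from the disclosure.

```python
from typing import Tuple

import numpy as np


def split_checkerboard_frame(raw: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    """Separate a raw frame from a dual-wavelength, filter-per-pixel sensor.

    Assumes pixels alternate in a checkerboard pattern between a first-wavelength
    filter ("A" pixels) and a second-wavelength filter ("B" pixels), as in FIG. 13A.
    Missing sites in each sub-image are filled from a horizontal neighbor as a
    crude placeholder for proper interpolation.
    """
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    mask_a = (rows + cols) % 2 == 0   # "white" pixels: first wavelength
    mask_b = ~mask_a                  # "shaded" pixels: second wavelength

    shifted = np.roll(raw, 1, axis=1)  # each pixel's left neighbor (other filter type)
    img_a = raw.astype(float)
    img_b = raw.astype(float)
    img_a[mask_b] = shifted[mask_b]    # fill B sites of the A image from A neighbors
    img_b[mask_a] = shifted[mask_a]    # fill A sites of the B image from B neighbors
    return img_a, img_b


# Example with a synthetic 4x4 frame:
frame = np.arange(16, dtype=float).reshape(4, 4)
image_first_wavelength, image_second_wavelength = split_checkerboard_frame(frame)
```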
FIG. 13B illustrates an example imaging system that includes light sources (e.g., LEDs) that emit light at multiple wavelengths, and in which the camera includes a camera sensor 1350 that is configured to concurrently capture two (or more) images at different wavelengths, according to some embodiments. The imaging system may include, but is not limited to, one or more cameras 1340, an illumination source 1330, and a controller 1360. In this example, the eye camera 1340 is pointed towards the eye 1392 to receive reflected light from the illumination source 1330. However, in some embodiments, the eye camera 1340 may instead or also capture images of the periorbital region and portions of the face. Note that in some embodiments, the eye camera 1340 may image a reflection off a hot mirror as shown in FIG. 1B. Further, in some embodiments, the eye camera 1340 may image the eye 1392 through one or more intermediate optical elements as shown in FIG. 1C. The illumination source 1330 may be configured to emit light at multiple wavelengths, for example as illustrated in FIG. 12A. The eye camera(s) 1340 may concurrently capture at least two images 1342A and 1342B of the eye 1392 at the multiple wavelengths using a sensor 1350 as illustrated in FIG. 13A with LEDs 1332A and 1332B concurrently illuminating the eye 1392 at both wavelengths under control 1344 of the controller 1360.
FIG. 14 is a flowchart of a method for sequentially capturing and processing images at multiple wavelengths, according to some embodiments. As indicated at 1400, light sources emit light at a first wavelength towards the user's eyes. As indicated at 1410, the camera captures images at the first wavelength. As indicated at 1420, the light sources emit light at a second wavelength towards the user's eyes. As indicated at 1430, the camera captures images at the second wavelength. As indicated at 1440, the images are processed. At 1450, if the method is not done, then the method returns to element 1410. Otherwise, the method ends.
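A minimal sketch of the sequential flow in FIG. 14, folding in the wavelength-per-stage split suggested above (one wavelength for anti-spoofing, the other for authentication), is shown below. The Illumination/Camera interfaces and the specific 940/850 nm assignment are assumptions for illustration only.

```python
from typing import Callable, Protocol

import numpy as np


class Illumination(Protocol):
    def set_wavelength(self, nm: int) -> None: ...


class Camera(Protocol):
    def capture(self) -> np.ndarray: ...


def dual_wavelength_pass(
    light: Illumination,
    camera: Camera,
    anti_spoof_check: Callable[[np.ndarray], bool],
    authenticate: Callable[[np.ndarray], bool],
    first_wavelength_nm: int = 940,
    second_wavelength_nm: int = 850,
) -> bool:
    """One pass through elements 1400-1440 of FIG. 14 (illustrative only)."""
    # 1400/1410: illuminate at the first wavelength and capture image(s),
    # here assumed to feed an anti-spoofing check.
    light.set_wavelength(first_wavelength_nm)
    spoof_frame = camera.capture()
    # 1420/1430: illuminate at the second wavelength and capture image(s),
    # here assumed to feed the biometric authentication itself.
    light.set_wavelength(second_wavelength_nm)
    auth_frame = camera.capture()
    # 1440: process the images; reject if either stage fails.
    return anti_spoof_check(spoof_frame) and authenticate(auth_frame)
```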
FIG. 15 is a flowchart of a method for concurrently capturing and processing images at multiple wavelengths, according to some embodiments. As indicated at 1500, light sources emit light at multiple wavelengths towards the user's eyes. As indicated at 1510, the camera concurrently captures images for each wavelength, for example using a photosensor 1350 as illustrated in FIG. 13A. As indicated at 1520, the images are processed. At 1530, if the method is not done, then the method returns to element 1510. Otherwise, the method ends.
Improving Eye Pose for Biometric Authentication
Embodiments of methods and apparatus for biometric authentication are described in which a current eye pose is determined and evaluated to check whether it is satisfactory; if it is not, the eye pose may be improved by the user manually adjusting the device or their pose/gaze direction in response to a signal from the controller, and/or the imaging system may be mechanically adjusted at the direction of the controller to improve the current view of the eye.
In embodiments, a method executed on the controller may identify the user's current eye location and/or orientation (pose), for example by capturing and evaluating one or more images of the eye(s). The controller may then evaluate how beneficial the current pose is for biometric authentication. In some embodiments, the controller may provide feedback to the user to prompt the user to adjust their pose (e.g., by changing the direction of their gaze) or to manually adjust the device (e.g., by manually moving the device's position in relation to their eyes). In some embodiments, instead of or in addition to prompting the user to manually adjust their pose or the device, the controller may direct the imaging system hardware to mechanically adjust the imaging system, for example by slightly moving or tilting the camera, or by zooming in or out. Adjusting the pose of the user with respect to the imaging system manually or mechanically may ensure a desired level of biometric authentication performance, as better images of the eye or eye region may be captured. Feedback to the user may be a haptic, audio, or visual signal, or a combination of two or more haptic, audio, or visual signals. The automatic adjustment of the imaging system directed by the controller may move a component or a combination of components, for example a module that includes at least the camera. The manual or automatic adjustments may be a single step in the biometric authentication process, or alternatively may be performed in a control loop until certain qualities or objective criteria are achieved in the captured images.
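Such a control loop might be organized along the following lines; this is a sketch only, and the pose-scoring function, the feedback message, the quality threshold, and the actuator hook are hypothetical placeholders rather than elements of the disclosure.

```python
from typing import Callable, Optional

import numpy as np


def pose_adjustment_loop(
    capture_eye_image: Callable[[], np.ndarray],
    score_pose: Callable[[np.ndarray], float],            # higher = more useful view of the eye
    prompt_user: Callable[[str], None],                    # haptic, audio, or visual feedback
    adjust_camera: Optional[Callable[[], None]] = None,    # optional mechanical adjustment
    quality_threshold: float = 0.8,
    max_attempts: int = 5,
) -> Optional[np.ndarray]:
    """Capture, evaluate the eye pose, adjust, and repeat until satisfactory."""
    for _ in range(max_attempts):
        image = capture_eye_image()
        if score_pose(image) >= quality_threshold:
            return image  # pose is good enough; hand the image to authentication
        if adjust_camera is not None:
            adjust_camera()  # e.g., slightly move, tilt, or zoom the camera module
        else:
            prompt_user("Adjust your gaze or reposition the device.")
    return None  # objective criteria were not met within the allowed attempts
```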
FIG. 16 illustrates a system that provides feedback to the user and/or control signals to the imaging system to manually or mechanically adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments. The imaging system may include, but is not limited to, one or more cameras 1640, an illumination source 1630, and a controller 1660. In this example, the eye camera 1640 is pointed towards the eye 1692 to receive reflected light from the illumination source 1630. However, in some embodiments, the eye camera 1640 may instead or also capture images of the periorbital region and/or portions of the face. Note also that, in some embodiments, the eye camera 1640 may image a reflection off a hot mirror as shown in FIG. 1B. Further, in some embodiments, the eye camera 1640 may image the user's eye 1692 through one or more intermediate optical elements as shown in FIG. 1C. The eye camera(s) 1640 may capture 1642 one or more images of the user's eye 1692. The captured image(s) may be processed by controller 1660 to determine a current eye pose and to determine if the current eye pose is satisfactory for the biometric authentication process. If the eye pose is not satisfactory, then the controller 1660 may provide feedback 1662 to the user to prompt the user to change their eye pose and/or to manually adjust the device. In some embodiments, instead of or in addition to the feedback 1662, the controller 1660 may signal 1646 the imaging system to mechanically adjust the imaging system, for example by moving or tilting the camera 1640.
FIG. 17 is a flowchart of a method for providing feedback to the user to manually adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments. The method may, for example, be performed in a biometric authentication process. As indicated at 1700, the camera captures image(s) of the user's eye region. As indicated at 1710, the controller determines from the image(s) if the alignment of the camera with the desired feature(s) is good. At 1720, if the alignment is not good, the controller may prompt the user to adjust the gaze and/or to manually adjust the device to obtain a better viewing angle, and the method returns to element 1700. At 1720, if the alignment is good, then one or more image(s) may be processed as indicated at 1740. At 1750, if not done processing, then the method returns to 1700. Otherwise, the method is done.
FIG. 18 is a flowchart of a method for providing control signals to the imaging system to mechanically adjust the viewing angle of the camera with respect to the user's eye or periocular region, according to some embodiments. The method may, for example, be performed in a biometric authentication process. As indicated at 1800, the camera captures image(s) of the user's eye region. As indicated at 1810, the controller determines from the image(s) if the alignment of the camera with the desired feature(s) is good. At 1820, if the alignment is not good, the controller may signal the imaging system to mechanically adjust the device/camera to obtain a better viewing angle, and the method returns to element 1800. At 1820, if the alignment is good, then one or more image(s) may be processed as indicated at 1840. At 1850, if not done processing, then the method returns to 1800. Otherwise, the method is done.
Example Systems
FIGS. 19A and 19B are block diagrams illustrating a device that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments. An example application of the methods for improving the performance of imaging systems used in biometric authentication processes as described herein is in a handheld device 3000 such as a smartphone, pad, or tablet. FIG. 19A shows a side view of an example device 3000, and FIG. 19B shows an example top view of the example device 3000. Device 3000 may include, but is not limited to, a display screen (not shown), a controller 3060 comprising one or more processors, memory 3070, pose, motion, and orientation sensors (not shown), and one or more cameras or sensing devices such as visible light cameras and depth sensors (not shown). A camera 3080 and illumination source 3040 as described herein may be attached to or integrated in the device 3000, and the device 3000 may be held and positioned by the user so that the camera 3080 can capture image(s) of the user's eye or eye region while illuminated by the illumination source 3040. The captured images may, for example, be processed by controller 3060 to authenticate the person, for example via an iris authentication process.
Note that device 3000 as illustrated in FIGS. 19A and 19B is given by way of example, and is not intended to be limiting. In various embodiments, the shape, size, and other features of a device 3000 may differ, and the locations, numbers, types, and other features of the components of a device 3000 may vary.
FIG. 20 illustrates an example head-mounted device (HMD) that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments. The HMD 4000 may, for example, be a component in a mixed or augmented reality (MR) system. Note that HMD 4000 as illustrated in FIG. 20 is given by way of example, and is not intended to be limiting. In various embodiments, the shape, size, and other features of an HMD 4000 may differ, and the locations, numbers, types, and other features of the components of an HMD 4000 may vary. In some embodiments, HMD 4000 may include, but is not limited to, a display and two optical lenses (eyepieces) (not shown), mounted in a wearable housing or frame. As shown in FIG. 20, HMD 4000 may be positioned on the user's head 4090 such that the display and eyepieces are disposed in front of the user's eyes 4092. The user looks through the eyepieces 4020 onto the display. HMD 4000 may also include sensors that collect information about the user's environment (video, depth information, lighting information, etc.) and about the user (e.g., eye tracking sensors). The sensors may include, but are not limited to, one or more eye cameras 4040 (e.g., infrared (IR) cameras) that capture views of the user's eyes 4092, one or more scene (visible light) cameras (e.g., RGB video cameras) that capture images of the real world environment in a field of view in front of the user (not shown), and one or more ambient light sensors that capture lighting information for the environment (not shown).
A controller 4060 for the MR system may be implemented in the HMD 4000, or alternatively may be implemented at least in part by an external device (e.g., a computing system) that is communicatively coupled to HMD 4000 via a wired or wireless interface. Controller 4060 may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images. Controller 4060 may render frames (each frame including a left and right image) that include virtual content based at least in part on inputs obtained from the sensors, and may provide the frames to the display. FIG. 21 further illustrates components of an HMD and MR system, according to some embodiments.
In some embodiments, an imaging system for the MR system may include, but is not limited to, one or more eye cameras 4040 and an IR light source 4030. IR light source 4030 (e.g., IR LEDs) may be positioned in the HMD 4000 (e.g., around the eyepieces 4020, or elsewhere in the HMD 4000) to illuminate the user's eyes 4092 with IR light. At least one eye camera 4040 (e.g., an IR camera, for example a 400×400 pixel count camera or a 600×600 pixel count camera, that operates at 850 nm or 940 nm, or at some other IR wavelength or combination of wavelengths, and that captures frames, for example at a rate of 60-120 frames per second (FPS)), is located at each side of the user 4090's face. In various embodiments, the eye cameras 4040 may be positioned in the HMD 4000 on each side of the user 4090's face to provide a direct view of the eyes 4092, a view of the eyes 4092 through the eyepieces 4020, or a view of the eyes 4092 via reflection off hot mirrors or other reflective components. Note that the location and angle of eye camera 4040 is given by way of example, and is not intended to be limiting. While FIG. 20 shows a single eye camera 4040 located on each side of the user 4090's face, in some embodiments there may be two or more eye cameras 4040 on each side of the user 4090's face.
A portion of IR light emitted by light source(s) 4030 reflects off the user 4090's eyes and is captured by the eye cameras 4040 to image the user's eyes 4092. Images captured by the eye cameras 4040 may be analyzed by controller 4060 to detect features (e.g., pupil), position, and movement of the user's eyes 4092, and/or to detect other information about the eyes 4092 such as pupil dilation. For example, the point of gaze on the display may be estimated from the eye tracking; the estimated point of gaze may be used to cause the scene camera(s) of the HMD 4000 to expose images of a scene based on a region of interest (ROI) corresponding to the point of gaze. As another example, the estimated point of gaze may enable gaze-based interaction with content shown on the display. As another example, in some embodiments, brightness of the displayed images may be modulated based on the user's pupil dilation as determined by the imaging system. The HMD 4000 may implement one or more of the methods for improving the performance of the imaging systems used in biometric authentication or gaze tracking processes as illustrated in FIGS. 1 through 18 to capture and process images of the user's eyes 4092.
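As a small illustration of the last example, modulating display brightness from a measured pupil diameter could be as simple as the following mapping; the diameter range, the linear curve, and the direction of the adjustment are assumptions made here for illustration, not values from the disclosure.

```python
def brightness_from_pupil(pupil_diameter_mm: float,
                          min_diameter_mm: float = 2.0,
                          max_diameter_mm: float = 8.0) -> float:
    """Map pupil dilation to a display brightness factor in [0.0, 1.0].

    A dilated pupil (dark-adapted eye) is assumed to map to a lower brightness
    factor and a constricted pupil to a higher one; a real system would tune
    both the range and the shape of this curve.
    """
    d = min(max(pupil_diameter_mm, min_diameter_mm), max_diameter_mm)
    return 1.0 - (d - min_diameter_mm) / (max_diameter_mm - min_diameter_mm)


# Example: a 6 mm pupil yields a brightness factor of about 0.33.
factor = brightness_from_pupil(6.0)
```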
Embodiments of an HMD 4000 as illustrated in FIG. 20 may, for example, be used in augmented reality (AR) or mixed reality (MR) applications to provide augmented or mixed reality views to the user 4090. HMD 4000 may include one or more sensors, for example located on external surfaces of the HMD 4000, which collect information about the user 4090's external environment (video, depth information, lighting information, etc.); the sensors may provide the collected information to controller 4060 of the MR system. The sensors may include one or more visible light cameras (e.g., RGB video cameras) that capture video of the user's environment that may be used to provide the user 4090 with a virtual view of their real environment. In some embodiments, video streams of the real environment captured by the visible light cameras may be processed by the controller 4060 of the HMD 4000 to render augmented or mixed reality frames that include virtual content overlaid on the view of the real environment, and the rendered frames may be provided to the HMD 4000's display system.
FIG. 21 is a block diagram illustrating an example MR system that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments. In some embodiments, an MR system may include an HMD 5000 such as a headset, helmet, goggles, or glasses. HMD 5000 may implement any of various types of display technologies. For example, the HMD 5000 may include a display system that displays frames including left and right images on screens or displays (not shown) that are viewed by a user through eyepieces (not shown). The display system may, for example, be a DLP (digital light processing), LCD (liquid crystal display), or LCoS (liquid crystal on silicon) technology display system. To create a three-dimensional (3D) effect in a 3D virtual view, objects at different depths or distances in the two images may be shifted left or right as a function of the triangulation of distance, with nearer objects shifted more than more distant objects. Note that other types of display systems may be used in some embodiments.
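The depth-dependent shift mentioned here follows the usual stereo triangulation relationship; the sketch below uses the standard pinhole relation (shift = interocular distance × focal length / depth), with an assumed interocular distance and focal length rather than values from the disclosure.

```python
def stereo_shift_px(depth_m: float,
                    interocular_m: float = 0.063,
                    focal_px: float = 1200.0) -> float:
    """Horizontal shift in pixels between the left and right images for a point
    at the given depth; nearer objects shift more than more distant ones."""
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return interocular_m * focal_px / depth_m


# A point 0.5 m away shifts roughly 151 px; one 5 m away shifts roughly 15 px.
near_shift = stereo_shift_px(0.5)
far_shift = stereo_shift_px(5.0)
```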
In some embodiments, HMD 5000 may include a controller 5060 configured to implement functionality of the MR system and to generate frames (each frame including a left and right image) that are provided to the HMD's displays. In some embodiments, HMD 5000 may also include a memory 5062 configured to store software (code 5064) of the MR system that is executable by the controller 5060, as well as data 5068 that may be used by the MR system when executing on the controller 5060. In some embodiments, HMD 5000 may also include one or more interfaces (e.g., a Bluetooth technology interface, USB interface, etc.) configured to communicate with an external device via a wired or wireless connection. In some embodiments, at least a part of the functionality described for the controller 5060 may be implemented by the external device. The external device may be or may include any type of computing system or computing device, such as a desktop computer, notebook or laptop computer, pad or tablet device, smartphone, hand-held computing device, game controller, game system, and so on.
In various embodiments, controller 5060 may be a uniprocessor system including one processor, or a multiprocessor system including several processors (e.g., two, four, eight, or another suitable number). Controller 5060 may include central processing units (CPUs) configured to implement any suitable instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. For example, in various embodiments controller 5060 may include general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of the processors may commonly, but not necessarily, implement the same ISA. Controller 5060 may employ any microarchitecture, including scalar, superscalar, pipelined, superpipelined, out of order, in order, speculative, non-speculative, etc., or combinations thereof. Controller 5060 may include circuitry to implement microcoding techniques. Controller 5060 may include one or more processing cores each configured to execute instructions. Controller 5060 may include one or more levels of caches, which may employ any size and any configuration (set associative, direct mapped, etc.). In some embodiments, controller 5060 may include at least one graphics processing unit (GPU), which may include any suitable graphics processing circuitry. Generally, a GPU may be configured to render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame). A GPU may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operation, or hardware acceleration of certain graphics operations. In some embodiments, controller 5060 may include one or more other components for processing and rendering video and/or images, for example image signal processors (ISPs), coder/decoders (codecs), etc.
Memory 5062 may include any type of memory, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM (including mobile versions of the SDRAMs such as mDDR3, etc., or low power versions of the SDRAMs such as LPDDR2, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), etc. In some embodiments, one or more memory devices may be coupled onto a circuit board to form memory modules such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc. Alternatively, the devices may be mounted with an integrated circuit implementing system in a chip-on-chip configuration, a package-on-package configuration, or a multi-chip module configuration.
In some embodiments, the HMD 5000 may include one or more sensors that collect information about the user's environment (video, depth information, lighting information, etc.). The sensors may provide the information to the controller 5060 of the MR system. In some embodiments, the sensors may include, but are not limited to, visible light cameras (e.g., video cameras) and ambient light sensors.
HMD 5000 may be positioned on the user's head such that the displays and eyepieces are disposed in front of the user's eyes 5092A and 5092B. IR light sources 5030A and 5030B (e.g., IR LEDs) may be positioned in the HMD 5000 (e.g., around the eyepieces, or elsewhere in the HMD 5000) to illuminate the user's eyes 5092A and 5092B with IR light. Eye cameras 5040A and 5040B (e.g., IR cameras, for example 400×400 pixel count cameras or 600×600 pixel count cameras that operate at 850 nm or 940 nm, or at some other IR wavelength, and that capture frames, for example at a rate of 60-120 frames per second (FPS)), may be located at each side of the user's face. In various embodiments, the eye cameras 5040 may be positioned in the HMD 5000 to provide a direct view of the eyes 5092, a view of the eyes 5092 through the eyepieces 5020, or a view of the eyes 5092 via reflection off hot mirrors or other reflective components. Note that the location and angle of eye cameras 5040A and 5040B is given by way of example, and is not intended to be limiting. In some embodiments, there may be a single eye camera 5040 located on each side of the user's face. In some embodiments there may be two or more eye cameras 5040 on each side of the user's face. For example, in some embodiments, a wide-angle camera 5040 and a narrower-angle camera 5040 may be used on each side of the user's face. A portion of IR light emitted by light sources 5030A and 5030B reflects off the user's eyes 5092A and 5092B, is received at respective eye cameras 5040A and 5040B, and is captured by the eye cameras 5040A and 5040B to image the user's eyes 5092A and 5092B. Eye information captured by the cameras 5040A and 5040B may be provided to the controller 5060. The controller 5060 may analyze the eye information (e.g., images of the user's eyes 5092A and 5092B) to determine eye position and movement and/or other features of the eyes 5092A and 5092B. In some embodiments, to accurately determine the location of the user's eyes 5092A and 5092B with respect to the eye cameras 5040A and 5040B, the controller 5060 may perform a 3D reconstruction using images captured by the eye cameras 5040A and 5040B to generate 3D models of the user's eyes 5092A and 5092B. The 3D models of the eyes 5092A and 5092B indicate the 3D position of the eyes 5092A and 5092B with respect to the eye cameras 5040A and 5040B, which allows eye tracking algorithms executed by the controller to accurately track eye movement. The HMD 5000 may implement one or more of the methods for improving the performance of the imaging systems used in biometric authentication or gaze tracking processes as illustrated in FIGS. 1 through 18 to capture and process images of the user's eyes 5092A and 5092B.
The eye information obtained and analyzed by the controller 5060 may be used by the controller in performing various VR or AR system functions. For example, the point of gaze on the displays may be estimated from images captured by the eye cameras 5040A and 5040B; the estimated point of gaze may be used to cause the scene camera(s) of the HMD 5000 to expose images of a scene based on a region of interest (ROI) corresponding to the point of gaze. As another example, the estimated point of gaze may enable gaze-based interaction with virtual content shown on the displays. As another example, in some embodiments, brightness of the displayed images may be modulated based on the user's pupil dilation as determined by the imaging system.
In some embodiments, the HMD 5000 may be configured to render and display frames to provide an augmented or mixed reality (MR) view for the user based at least in part according to sensor inputs. The MR view may include renderings of the user's environment, including renderings of real objects in the user's environment, based on video captured by one or more video cameras that capture high-quality, high-resolution video of the user's environment for display. The MR view may also include virtual content (e.g., virtual objects, virtual tags for real objects, avatars of the user, etc.) generated by the MR system and composited with the displayed view of the user's real environment.
Embodiments of the HMD 5000 as illustrated in FIG. 21 may also be used in virtual reality (VR) applications to provide VR views to the user. In these embodiments, the controller 5060 of the HMD 5000 may render or obtain virtual reality (VR) frames that include virtual content, and the rendered frames may be displayed to provide a virtual reality (as opposed to mixed reality) experience to the user. In these systems, rendering of the VR frames may be affected based on the point of gaze determined from the imaging system.
Extended Reality
A person can interact with and/or sense a physical environment or physical world without the aid of an electronic device. A physical environment can include physical features, such as a physical object or surface. An example of a physical environment is a physical forest that includes physical plants and animals. A person can directly sense and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell. In contrast, a person can use an electronic device to interact with and/or sense an extended reality (XR) environment that is wholly or partially simulated. The XR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like. With an XR system, some of a person's physical motions, or representations thereof, can be tracked and, in response, characteristics of virtual objects simulated in the XR environment can be adjusted in a manner that complies with at least one law of physics. For instance, the XR system can detect the movement of a user's head and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In another example, the XR system can detect movement of an electronic device that presents the XR environment (e.g., a mobile phone, tablet, laptop, or the like) and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In some situations, the XR system can adjust characteristic(s) of graphical content in response to other inputs, such as a representation of a physical motion (e.g., a vocal command).
Many different types of electronic systems can enable a user to interact with and/or sense an XR environment. A non-exclusive list of examples includes heads-up displays (HUDs), head mountable systems, projection-based systems, windows or vehicle windshields having integrated display capability, displays formed as lenses to be placed on users' eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers. A head mountable system can have one or more speaker(s) and an opaque display. Other head mountable systems can be configured to accept an opaque external display (e.g., a smartphone). The head mountable system can include one or more image sensors to capture images/video of the physical environment and/or one or more microphones to capture audio of the physical environment. A head mountable system may have a transparent or translucent display, rather than an opaque display. The transparent or translucent display can have a medium through which light is directed to a user's eyes. The display may utilize various display technologies, such as uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light source, digital light projection, or combinations thereof. An optical waveguide, an optical reflector, a hologram medium, an optical combiner, combinations thereof, or other similar technologies can be used for the medium. In some implementations, the transparent or translucent display can be selectively controlled to become opaque. Projection-based systems can utilize retinal projection technology that projects images onto users' retinas. Projection systems can also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
The following clauses provide descriptions of example embodiments of the techniques described above:
Clause 1. A system, comprising:
a camera configured to capture images of an eye region of a user;
a controller comprising one or more processors configured to:
access an optical element description that describes optical properties of an optical element located on an optical path between the eye region of the user and the camera, wherein the optical element affects light on the optical path between the eye region of the user and the camera;
receive one or more images of the eye region of the user from the camera;
adjust the one or more images according to the optical properties of the optical element to account for effects of the optical element on quality of the one or more images; and
perform biometric authentication for the user based at least in part on the adjusted one or more images.
Clause 2. The system as recited in clause 1, wherein the eye region includes one or more of an iris, an eye, a periorbital region, and a portion of the user's face.
Clause 3. The system as recited in clause 1, further comprising a memory that stores one or more different optical element descriptions for different optical elements, wherein the controller is configured to access the optical element description for the optical element from the memory upon detecting presence of the optical element on the optical path.
Clause 4. The system as recited in clause 1, wherein the optical element is a lens of an optical system in a device that includes the camera and the controller.
Clause 5. The system as recited in clause 1, wherein the optical element is a lens added to an optical system in a device that includes the camera and the controller.
Clause 6. The system as recited in clause 1, wherein the optical properties of the optical element are related to an optical prescription for the user.
Clause 7. The system as recited in clause 1, wherein, to perform biometric authentication for the user based at least in part on the adjusted one or more images, the controller is configured to:
analyze quality of one or more biometric aspects captured in the one or more images according to one or more objective criteria;
select at least one biometric aspect according to the analysis; and
perform the biometric authentication process based at least in part on the selected at least one biometric aspect.
Clause 8. The system as recited in clause 7, wherein the objective criteria include one or more of exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, and absence of undesired reflections.
Clause 9. The system as recited in clause 7, wherein the biometric aspects include one or more of an eye surface, eye veins, eyelids, eyebrows, skin features, nose features, and iris features, wherein the iris features include one or more of colors, patterns, and musculature.
Clause 10. The system as recited in clause 1, further comprising an illumination source comprising a plurality of light-emitting elements configured to emit light towards the eye region to be imaged by the camera.
Clause 11. The system as recited in clause 10, wherein the light-emitting elements include light-emitting diodes (LEDs).
Clause 12. The system as recited in clause 10, wherein the light-emitting elements include infrared (IR) light sources, and wherein the camera is an infrared camera.
Clause 13. The system as recited in clause 1, wherein the system is a component of a head-mounted device (HMD), a handheld device, or a wall-mounted device.
Clause 14. A method, comprising:
performing, by a controller comprising one or more processors:
accessing an optical element description that describes optical properties of an optical element located on an optical path between an eye region of a user and a camera, wherein the optical element affects light on the optical path between the eye region of the user and the camera;
receiving one or more images of the eye region of the user from the camera;
adjusting the one or more images according to the optical properties of the optical element to account for effects of the optical element on quality of the one or more images; and
performing biometric authentication for the user based at least in part on the adjusted one or more images.
Clause 15. The method as recited in clause 14, wherein the eye region includes one or more of an iris, an eye, a periorbital region, and a portion of the user's face.
Clause 16. The method as recited in clause 14, further comprising accessing the optical element description for the optical element from memory upon detecting presence of the optical element on the optical path, wherein the memory stores one or more different optical element descriptions for different optical elements.
Clause 17. The method as recited in clause 14, wherein the optical element is a lens of an optical system in a device that includes the camera and the controller.
Clause 18. The method as recited in clause 14, wherein the optical element is a lens added to an optical system in a device that includes the camera and the controller.
Clause 19. The method as recited in clause 14, wherein the optical properties of the optical element are related to an optical prescription for the user.
Clause 20. The method as recited in clause 14, wherein performing biometric authentication for the user based at least in part on the adjusted one or more images comprises:
analyzing quality of one or more biometric aspects captured in the one or more images according to one or more objective criteria;
selecting at least one biometric aspect according to the analysis; and
performing the biometric authentication process based at least in part on the selected at least one biometric aspect.
Clause 21. The method as recited in clause 20, wherein the objective criteria include one or more of exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, and absence of undesired reflections.
Clause 22. The method as recited in clause 20, wherein the biometric aspects include one or more of an eye surface, eye veins, eyelids, eyebrows, skin features, nose features, and iris features, wherein the iris features include one or more of colors, patterns, and musculature.
Clause 23. The method as recited in clause 14, further comprising a plurality of light-emitting elements emitting light towards the eye region that is imaged by the camera.
Clause 24. The method as recited in clause 23, wherein the light-emitting elements are light-emitting diodes (LEDs).
Clause 25. The method as recited in clause 23, wherein the light-emitting elements are infrared (IR) light sources, and wherein the camera is an infrared camera.
Clause 26. The method as recited in clause 14, wherein the camera and controller are components of a head-mounted device (HMD), a handheld device, or a wall-mounted device.
Clause 27. A system, comprising:
a camera configured to capture images of an eye region of a user;
an illumination source configured to emit light towards the eye region of the user to be imaged by the camera;
an optical element located on an optical path between the eye region of the user and the camera, wherein the optical element is configured to diffract the light reflected off of the eye region of the user towards the camera, wherein diffracting the light improves viewing angle of the camera with respect to the eye region; and
a controller comprising one or more processors configured to perform biometric authentication for the user based on one or more images of the eye region of the user captured by the camera.
Clause 28. The system as recited in clause 27, wherein the eye region includes one or more of an iris, an eye, a periorbital region, and a portion of the user's face.
Clause 29. The system as recited in clause 27, wherein the optical element is one of a prism, a lens, a waveguide, and a diffraction grating.
Clause 30. The system as recited in clause 27, wherein, to perform biometric authentication for the user based on one or more images of the eye region of the user captured by the camera, the controller is configured to:
process the one or more images of the eye region captured by the camera to select one or more biometric aspects of the eye region; and
perform the biometric authentication for the user based at least in part on the selected one or more biometric aspects.
Clause 31. The system as recited in clause 27, wherein the illumination source comprises a plurality of light-emitting elements configured to emit light towards the eye region to be imaged by the camera.
Clause 32. The system as recited in clause 31, wherein the light-emitting elements include light-emitting diodes (LEDs).
Clause 33. The system as recited in clause 31, wherein the light-emitting elements include infrared (IR) light sources, and wherein the camera is an infrared camera.
Clause 34. The system as recited in clause 27, wherein the system is a component of a head-mounted device (HMD), a handheld device, or a wall-mounted device.
Clause 35. A method, comprising:
emitting, by an illumination source, light towards an eye region of a user to be imaged by a camera;
diffracting, by an optical element located on an optical path between the eye region of the user and the camera, a portion of the light reflected off of the eye region of the user towards the camera, wherein diffracting the light improves viewing angle of the camera with respect to the eye region; and
performing, by a controller comprising one or more processors, biometric authentication for the user based on one or more images of the eye region of the user captured by the camera.
Clause 36. The method as recited in clause 35, wherein the eye region includes one or more of an iris, an eye, a periorbital region, and a portion of the user's face.
Clause 37. The method as recited in clause 35, wherein the optical element is one of a prism, a lens, a waveguide, and a diffraction grating.
Clause 38. The method as recited in clause 35, wherein performing biometric authentication for the user based on one or more images of the eye region of the user captured by the camera comprises:
processing the one or more images of the eye region captured by the camera to select one or more biometric aspects of the eye region; and
performing the biometric authentication for the user based at least in part on the selected one or more biometric aspects.
Clause 39. The method as recited in clause 35, wherein the illumination source comprises a plurality of light-emitting elements that emit the light towards the eye region to be imaged by the camera.
Clause 40. The method as recited in clause 39, wherein the light-emitting elements include light-emitting diodes (LEDs).
Clause 41. The method as recited in clause 39, wherein the light-emitting elements include infrared (IR) light sources, and wherein the camera is an infrared camera.
Clause 42. The method as recited in clause 35, wherein the camera and the controller are components of a head-mounted device (HMD), a handheld device, or a wall-mounted device.
The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. The various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.