Patent: Augmented Reality Systems And Methods For Tracking Biometric Data
Publication Number: 20160358181
Publication Date: 2016-12-08
Applicants: Magic Leap
Abstract
A method of conducting a transaction through an augmented reality display device comprises capturing biometric data from a user, determining, based at least in part on the captured biometric data, an identity of the user, and authenticating the user for the transaction based on the determined identity.
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims priority to U.S. Provisional Application Ser. No. 62/161,665 filed on May 14, 2015 entitled “AUGMENTED REALITY SYSTEMS AND METHOD FOR TRACKING BIOMETRIC DATA TO CONDUCT BUSINESS TRANSACTIONS,” under attorney docket number ML.30027.00. The contents of the aforementioned patent application are hereby expressly incorporated by reference in their entirety for all purposes as though set forth in full.
FIELD OF THE INVENTION
[0002] The present disclosure relates to systems and methods for utilizing biometric data to facilitate business transactions conducted through an augmented reality (AR) device.
BACKGROUND
[0003] Modern computing and display technologies have facilitated the development of systems for so-called “virtual reality” or “augmented reality” experiences, wherein digitally reproduced images or portions thereof are presented to a user in a manner wherein they seem to be, or may be perceived as, real. A virtual reality, or “VR”, scenario typically involves presentation of digital or virtual image information without transparency to other actual real-world visual input. An augmented reality, or “AR”, scenario typically involves presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user.
[0004] For example, referring to FIG. 1, an augmented reality scene is depicted wherein a user of an AR technology sees a real-world park-like setting featuring people, trees, buildings in the background, and a concrete platform 1120. In addition to these items, the user of the AR technology also perceives a robot statue 1110 standing upon the real-world platform 1120, and a cartoon-like avatar character 2 flying by, even though these elements (2, 1110) do not exist in the real world. The human visual perception system is very complex, and producing such an augmented reality scene that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging.
[0005] It is envisioned that such an AR device may be used to present all types of virtual content to the user. In one or more embodiments, the AR devices may be used in the context of various gaming applications, enabling users to participate in single-player or multi-player video/augmented reality games that mimic real-life situations. For example, rather than playing a video game at a personal computer, the AR user may play the game on a larger scale in conditions that very closely resemble real life (e.g., “true-to-scale” 3D monsters may appear from behind a real building when the AR user is taking a walk in the park, etc.). Indeed, this greatly enhances the believability and enjoyment of the gaming experience.
[0006] While FIG. 1 illustrates an example use of AR devices in the context of gaming applications, AR devices may be used in a myriad of other applications, and may be anticipated to take the place of everyday computing devices (e.g., personal computers, cell phones, tablet devices, etc.). By strategically placing virtual content in the field of view of the user, the AR device may be thought of as a walking personal computer that allows the user to perform a variety of computing tasks (e.g., check email, look up a term on the web, tele-conference with other AR users, watch a movie, etc.) while at the same time remaining connected to the user’s physical environment. For example, rather than being constrained to a physical device at a desk, the AR user may be “on the go” (e.g., on a walk, on a daily commute, at a physical location other than his/her office, away from his/her computer, etc.), yet still be able to pull up a virtual email screen to check email, hold a video conference with a friend by virtually populating a screen on the AR device, or construct a virtual office at a makeshift location. A myriad of similar virtual reality/augmented reality scenarios may be envisioned.
[0007] This shift in the nature of the underlying computing technology comes with its share of advantages and challenges. To present an augmented reality scene such as the ones described above that is sensitive to the physiological limitations of the human visual system, the AR device must be aware of the user’s physical surroundings in order to project desired virtual content in relation to one or more real objects in the user’s physical environment. To this end, the AR device is typically equipped with various tracking devices (e.g., eye-tracking devices, GPS, etc.), cameras (e.g., field-of-view cameras, infrared cameras, depth cameras, etc.) and sensors (e.g., accelerometers, gyroscopes, etc.) to assess the user’s position, orientation, distance, etc. in relation to various real objects in the user’s surroundings, to detect and identify objects of the real world and other such functionalities.
[0008] Given that the AR device is configured to track various types of data about the AR user and his/her surroundings, in one or more embodiments, this data may be advantageously leveraged to assist users with various types of transactions, while ensuring that minimal input is required from the user, and causing minimal or no interruption to the user’s AR experience.
[0009] To elaborate, traditional transactions (financial or otherwise) typically require users to physically carry some form of monetary token (e.g., cash, check, credit card, etc.) and in some cases, identification (e.g., driver’s license, etc.) and authentication (e.g., signature, etc.) to partake in business transactions. Consider a user walking into a department store: to make any kind of purchase, the user typically picks up the item(s), places the item in a cart, walks over to the register, waits in line for the cashier, waits for the cashier to scan a number of items, retrieves a credit card, provides identification, signs the credit card receipt, and stores the receipt for a future return of the item(s). In traditional financial transactions, these steps, although necessary, are time-consuming and inefficient, and in some cases discourage or prohibit a user from making a purchase (e.g., the user does not have the monetary token or identification card on his person, etc.). However, in the context of AR devices, these steps are redundant and unnecessary. In one or more embodiments, the AR devices may be configured to allow users to seamlessly perform many types of transactions without requiring the user to perform the onerous procedures described above.
[0010] There is thus a need for a better solution to assist AR users in participating in everyday transactions.
SUMMARY
[0011] Embodiments of the present invention are directed to devices, systems and methods for facilitating virtual reality and/or augmented reality interaction for one or more users.
[0012] In one aspect, a method of conducting a transaction through an augmented reality device comprises capturing biometric data from a user, determining, based at least in part on the captured biometric data, an identity of the user, and authenticating the user for the transaction based on the determined identity.
[0013] In one or more embodiments, the method further comprises transmitting a set of data regarding the transaction to a financial institution. In one or more embodiments, the biometric data is an iris pattern. In one or more embodiments, the biometric data is a voice recording of the user. In one or more embodiments, the biometric data is a retinal signature. In one or more embodiments, the biometric data is a characteristic associated with the user’s skin.
[0014] In one or more embodiments, the biometric data is captured through one or more eye tracking cameras that capture a movement of the user’s eyes. In one or more embodiments, the biometric data is a pattern of movement of the user’s eyes. In one or more embodiments, the biometric data is a blinking pattern of the user’s eyes.
[0015] In one or more embodiments, the augmented reality device is head mounted, and the augmented reality device is individually calibrated for the user. In one or more embodiments, the biometric data is compared to a predetermined data pertaining to the user. In one or more embodiments, the predetermined data is a known signature movement of the user’s eyes.
[0016] In one or more embodiments, the predetermined data is a known iris pattern. In one or more embodiments, the predetermined data is a known retinal pattern. In one or more embodiments, the method further comprises detecting a desire of the user to make a transaction, requesting the biometric data from the user based at least in part on the detected desire, and comparing the biometric data with a predetermined biometric data to generate a result, wherein the user is authenticated based at least in part on the result.
[0017] In one or more embodiments, the transaction is a business transaction. In one or more embodiments, the method further comprises communicating an authentication of the user to a financial institution associated with the user, wherein the financial institution releases payment on behalf of the user based at least in part on the authentication. In one or more embodiments, the financial institution transmits the payment to one or more vendors indicated by the user.
[0018] In one or more embodiments, the method further comprises detecting an interruption event or transaction event associated with the augmented reality device. In one or more embodiments, the method further comprises capturing new biometric data from the user in order to re-authenticate the user based at least in part on the detected event. In one or more embodiments, the interruption of activity is detected based at least in part on a removal of the augmented reality device from the user’s head.
[0019] In one or more embodiments, the interruption of activity is detected based at least in part on a loss of connectivity of the augmented reality device with a network. In one or more embodiments, the transaction event is detected based at least in part on an express approval of a transaction by the user. In one or more embodiments, the transaction event is detected based at least in part on a heat map associated with the user’s gaze.
[0020] In one or more embodiments, the transaction event is detected based at least in part on user input received through the augmented reality device. In one or more embodiments, the user input comprises an eye gesture. In one or more embodiments, the user input comprises a hand gesture.
[0021] In another aspect, an augmented reality display system comprises a biometric data tracking device to capture biometric data from a user, a processor operatively coupled to the biometric data tracking device to process the captured biometric data, and to determine an identity of the user based at least in part on the captured biometric data, and a server to communicate with at least a financial institution to authenticate the user for a transaction.
[0022] In one or more embodiments, the biometric data is eye movement data. In one or more embodiments, the biometric data corresponds to an image of an iris of the user. In one or more embodiments, the server also transmits a set of data regarding the transaction to a financial institution. In one or more embodiments, the biometric data is an iris pattern.
[0023] In one or more embodiments, the biometric data is a voice recording of the user. In one or more embodiments, the biometric data is a retinal signature. In one or more embodiments, the biometric data is a characteristic associated with the user’s skin. In one or more embodiments, the biometric tracking device comprises one or more eye tracking cameras to capture a movement of the user’s eyes. In one or more embodiments, the biometric data is a pattern of movement of the user’s eyes.
[0024] In one or more embodiments, the biometric data is a blinking pattern of the user’s eyes. In one or more embodiments, the augmented reality display system is head mounted, and the augmented reality display system is individually calibrated for the user. In one or more embodiments, the processor also compares the biometric data to a predetermined data pertaining to the user. In one or more embodiments, the predetermined data is a known signature movement of the user’s eyes. In one or more embodiments, the predetermined data is a known iris pattern. In one or more embodiments, the predetermined data is a known retinal pattern. In one or more embodiments, the processor detects that a user desires to make a transaction, and further comprising a user interface to request the biometric data from the user based at least in part on the detection, the processor comparing the biometric data with a predetermined biometric data, and authenticating the user based at least in part on the comparison.
[0025] In one or more embodiments, the transaction is a business transaction. In one or more embodiments, the processor communicates the authentication of the user to a financial institution associated with the user, and wherein the financial institution releases payment on behalf of the user based at least in part on the authentication. In one or more embodiments, the financial institution transmits the payment to one or more vendors indicated by the user.
[0026] In one or more embodiments, the processor detects an interruption event or transaction event associated with the augmented reality device, and wherein the biometric tracking device captures new biometric data from the user in order to re-authenticate the user based at least in part on the detected event. In one or more embodiments, the interruption of activity is detected based at least in part on a removal of the augmented reality device from the user’s head.
[0027] In one or more embodiments, the interruption of activity is detected based at least in part on a loss of connectivity of the augmented reality device with a network. In one or more embodiments, the transaction event is detected based at least in part on an express approval of a transaction by the user. In one or more embodiments, the transaction event is detected based at least in part on a heat map associated with the user’s gaze. In one or more embodiments, the transaction event is detected based at least in part on user input received through the augmented reality device. In one or more embodiments, the user input comprises an eye gesture. In one or more embodiments, the user input comprises a hand gesture.
[0028] In one or more embodiments, the biometric tracking device comprises an eye tracking system. In one or more embodiments, the biometric tracking device comprises a haptic device. In one or more embodiments, the biometric tracking device comprises a sensor that measures physiological data pertaining to a user’s eye.
[0029] Additional and other objects, features, and advantages of the invention are described in the detailed description, figures, and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] The drawings illustrate the design and utility of various embodiments of the present invention. It should be noted that the figures are not necessarily drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. In order to better appreciate how to obtain the above-recited and other advantages and objects of various embodiments of the invention, a more detailed description of the present invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0031] FIG. 1 illustrates an example augmented reality scene being displayed to a user.
[0032] FIGS. 2A-2D illustrate various configurations of an example augmented reality device.
[0033] FIG. 3 illustrates an augmented reality device communicating with one or more servers in the cloud, according to one embodiment.
[0034] FIGS. 4A-4D illustrate various eye and head measurements taken in order to configure the augmented reality device for a particular user.
[0035] FIG. 5 shows a plan view of various components of an augmented reality device according to one embodiment.
[0036] FIG. 6 shows a system architecture of the augmented reality system for conducting business transactions, according to one embodiment.
[0037] FIG. 7 is an example flowchart depicting a method for conducting a business transaction through the augmented reality device.
[0038] FIGS. 8A and 8B illustrate an example eye-identification method to identify a user, according to one embodiment.
[0039] FIG. 9 illustrates an example flowchart depicting a method of using eye-movements to authenticate a user, according to one embodiment.
[0040] FIGS. 10A-10I illustrate a series of process flow diagrams depicting an example scenario of conducting a business transaction using an augmented reality device.
DETAILED DESCRIPTION
[0041] Various embodiments of the invention are directed to methods, systems, and articles of manufacture for tracking biometric data to facilitate transactions conducted through an augmented reality device, in a single embodiment or in multiple embodiments. Other objects, features, and advantages of the invention are described in the detailed description, figures, and claims.
[0042] Various embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples of the invention so as to enable those skilled in the art to practice the invention. Notably, the figures and the examples below are not meant to limit the scope of the present invention. Where certain elements of the present invention may be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present invention will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the invention. Further, various embodiments encompass present and future known equivalents to the components referred to herein by way of illustration.
[0043] Disclosed are methods and systems for tracking biometric data associated with AR users and utilizing the biometric data to assist in business transactions. In one or more embodiments, the AR device may utilize eye identification techniques (e.g., iris patterns, eye vergence, eye motion, patterns of cones and rods, patterns in eye movements, etc.) to authenticate a user for a purchase. Advantageously, this type of user authentication minimizes friction costs in conducting business transactions, and allows the user to make purchases (e.g., in brick and mortar stores, online stores, in response to an advertisement, etc.) seamlessly with minimal effort and/or interruption. Although the following disclosure will mainly focus on authentication based on eye-related biometric data, it should be appreciated that other types of biometric data may be similarly used for authentication purposes in other embodiments as well. Various embodiments as will be described below discuss the new paradigm of conducting business in the context of augmented reality (AR) systems, but it should be appreciated that the techniques disclosed here may be used independently of any existing and/or known AR systems. Thus, the examples discussed below are for illustrative purposes only and the invention should not be read to be limited to AR systems.
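As a concrete illustration of the eye-based authentication just described, the following Python sketch compares a freshly captured iris code against a code enrolled for the authorized user and authenticates the wearer for a transaction when the two are sufficiently similar. The names, data layout, and 0.90 threshold are hypothetical illustrations, not part of the disclosed system.

```python
from dataclasses import dataclass


@dataclass
class AuthResult:
    authenticated: bool
    score: float


def hamming_similarity(a: bytes, b: bytes) -> float:
    """Fraction of identical bits between two equal-length iris codes."""
    assert len(a) == len(b) and len(a) > 0
    differing = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 1.0 - differing / (8 * len(a))


def authenticate_for_transaction(captured_code: bytes,
                                 enrolled_code: bytes,
                                 threshold: float = 0.90) -> AuthResult:
    """Authenticate the wearer if the freshly captured iris code is close
    enough to the code enrolled when the device was fitted to the user."""
    score = hamming_similarity(captured_code, enrolled_code)
    return AuthResult(authenticated=score >= threshold, score=score)
```

In this toy comparison, a real system would of course use a robust iris-encoding pipeline and account for capture noise; the point is only that the comparison and thresholding step is lightweight once the device already has trusted access to the user's eyes.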
[0044] Referring to FIGS. 2A-2D, some general componentry options are illustrated. In the portions of the detailed description which follow the discussion of FIGS. 2A-2D, various systems, subsystems, and components are presented for addressing the objectives of providing a high-quality, comfortably-perceived display system for human VR and/or AR.
[0045] As shown in FIG. 2A, an AR system user 60 is depicted wearing a frame 64 structure coupled to an AR display system 62 positioned in front of the eyes of the user. A speaker 66 is coupled to the frame 64 in the depicted configuration and positioned adjacent the ear canal of the user (in one embodiment, another speaker, not shown, is positioned adjacent the other ear canal of the user to provide for stereo/shapeable sound control). The display 62 is operatively coupled 68, such as by a wired lead or wireless connectivity, to a local processing and data module 70 which may be mounted in a variety of configurations, such as fixedly attached to the frame 64, fixedly attached to a helmet or hat 80 as shown in the embodiment of FIG. 2B, embedded in headphones, removably attached to the torso 82 of the user 60 in a backpack-style configuration as shown in the embodiment of FIG. 2C, or removably attached to the hip 84 of the user 60 in a belt-coupling style configuration as shown in the embodiment of FIG. 2D.
[0046] The local processing and data module 70 may comprise a power-efficient processor or controller, as well as digital memory, such as flash memory, both of which may be utilized to assist in the processing, caching, and storage of data a) captured from sensors which may be operatively coupled to the frame 64, such as image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, GPS units, radio devices, and/or gyros; and/or b) acquired and/or processed using the remote processing module 72 and/or remote data repository 74, possibly for passage to the display 62 after such processing or retrieval. The local processing and data module 70 may be operatively coupled (76, 78), such as via wired or wireless communication links, to the remote processing module 72 and remote data repository 74 such that these remote modules (72, 74) are operatively coupled to each other and available as resources to the local processing and data module 70.
[0047] In one embodiment, the remote processing module 72 may comprise one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. In one embodiment, the remote data repository 74 may comprise a relatively large-scale digital data storage facility, which may be available through the Internet or other networking configuration in a “cloud” resource configuration. In one embodiment, all data is stored and all computation is performed in the local processing and data module, allowing fully autonomous use without any remote modules.
[0048] As described with reference to FIGS. 2A-2D, the AR system continually receives input from various devices that collect data about the AR user and the surrounding environment. Referring now to FIG. 3, the various components of an example augmented reality display device will be described. It should be appreciated that other embodiments may have additional components. Nevertheless, FIG. 3 provides a basic idea of the various components, and the types of data that may be collected by the AR device.
[0049] Referring now to FIG. 3, a schematic illustrates coordination between the cloud computing assets 46 and local processing assets (308, 120). In one embodiment, the cloud 46 assets are operatively coupled, such as via wired or wireless networking (wireless being preferred for mobility, wired being preferred for certain high-bandwidth or high-data-volume transfers that may be desired), directly to (40, 42) one or both of the local computing assets (120, 308), such as processor and memory configurations which may be housed in a structure configured to be coupled to a user’s head mounted device 120 or belt 308. These computing assets local to the user may be operatively coupled to each other as well, via wired and/or wireless connectivity configurations 44. In one embodiment, to maintain a low-inertia and small-size head mounted subsystem 120, primary transfer between the user and the cloud 46 may be via the link between the belt-based subsystem 308 and the cloud, with the head mounted subsystem 120 primarily data-tethered to the belt-based subsystem 308 using wireless connectivity, such as ultra-wideband (“UWB”) connectivity, as is currently employed, for example, in personal computing peripheral connectivity applications. Through the cloud 46, the AR display system 120 may interact with one or more AR servers 110 hosted in the cloud. The various AR servers 110 may have communication links 115 that allow the servers 110 to communicate with one another.
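The data path described above, with the head-mounted subsystem tethered to a belt pack that in turn carries the primary link to the cloud-hosted AR servers, might be modeled roughly as in the following sketch. The class names and the relay interface are assumptions made for illustration only; the patent does not specify software interfaces for these links.

```python
class CloudARServer:
    """Stand-in for an AR server 110 hosted in the cloud 46."""
    def __init__(self) -> None:
        self.frames = []

    def ingest(self, frame: dict) -> None:
        self.frames.append(frame)


class BeltSubsystem:
    """Belt-based subsystem 308: carries the primary link to the cloud."""
    def __init__(self, cloud: CloudARServer) -> None:
        self.cloud = cloud

    def relay_to_cloud(self, frame: dict) -> None:
        self.cloud.ingest(frame)


class HeadMountedSubsystem:
    """Head-mounted subsystem 120: low-inertia, data-tethered to the belt."""
    def __init__(self, belt: BeltSubsystem) -> None:
        self.belt = belt

    def send_sensor_frame(self, frame: dict) -> None:
        self.belt.relay_to_cloud(frame)


# Usage: the head unit forwards a sensor frame through the belt pack to the cloud.
cloud = CloudARServer()
head = HeadMountedSubsystem(BeltSubsystem(cloud))
head.send_sensor_frame({"pose": (0.0, 0.0, 0.0), "timestamp": 0.0})
```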
[0050] With efficient local and remote processing coordination, and an appropriate display device for a user, such as a user interface or user “display device”, or variations thereof, aspects of one world pertinent to a user’s current actual or virtual location may be transferred or “passed” to the user and updated in an efficient fashion. In other words, a map of the world is continually updated at a storage location which may partially reside on the user’s AR system and partially reside in the cloud resources. The map (also referred to as a passable world model) may be a large database comprising raster imagery, 3D and 2D points, parametric information and other information about the real world. As more and more AR users continually capture information about their real environment (e.g., through cameras, sensors, IMUs, etc.), the map becomes more and more accurate.
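A minimal sketch of such a shared map store, assuming simple point lists and raster tiles as the stored content, is shown below. The passable world model is not specified at this level of detail in the text, so the structure and the merge step are illustrative assumptions only.

```python
from dataclasses import dataclass, field


@dataclass
class Observation:
    """One device's contribution to the map."""
    device_id: str
    points_3d: list   # e.g., [(x, y, z), ...]
    points_2d: list   # e.g., [(u, v), ...] image keypoints
    raster_tiles: dict  # tile_id -> image bytes


@dataclass
class PassableWorldModel:
    """Shared map, partially resident on devices and partially in the cloud."""
    points_3d: list = field(default_factory=list)
    points_2d: list = field(default_factory=list)
    raster_tiles: dict = field(default_factory=dict)

    def merge(self, obs: Observation) -> None:
        """Fold a device's observation into the shared map; as more users
        contribute, the map becomes more complete (a real system would also
        deduplicate, register, and refine points)."""
        self.points_3d.extend(obs.points_3d)
        self.points_2d.extend(obs.points_2d)
        self.raster_tiles.update(obs.raster_tiles)
```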
[0051] More pertinent to the current disclosure, AR systems similar to those described in FIGS. 2A-2D provide unique access to a user’s eyes, which may be advantageously used to uniquely identify the user based on a set of biometric data tracked through the AR system. This unprecedented access to the user’s eyes naturally lends itself to various applications. Given that the AR device interacts crucially with the user’s eyes to allow the user to perceive 3D virtual content, and in many embodiments, tracks various biometrics related to the user’s eyes (e.g., eye vergence, eye motion, cones and rods, patterns of eye movements, etc.), the resultant tracked data may be advantageously used in user identification and authentication for various transactions, as will be described in further detail below.
[0052] The AR device is typically fitted for a particular user’s head, and the optical components are aligned to the user’s eyes. These configuration steps may be used in order to ensure that the user is provided with an optimum augmented reality experience without causing any physiological side-effects, such as headaches, nausea, discomfort, etc. Thus, in one or more embodiments, the AR device is configured (both physically and digitally) for each individual user, and may be calibrated specifically for the user. In other scenarios, a loose-fitting AR device may be used comfortably by a variety of users. For example, in some embodiments, the AR device knows the distance between the user’s eyes, the distance from the head-worn display to the user’s eyes, and the curvature of the user’s forehead. All of these measurements may be used to provide the appropriate head-worn display system for a given user. In other embodiments, such measurements may not be necessary in order to perform the identification and authentication functions described in this application.
[0053] For example, referring to FIGS. 4A-4D, the AR device may be customized for each user. The user’s head shape 402 may be taken into account when fitting the head-mounted AR system, in one or more embodiments, as shown in FIG. 4A. Similarly, the eye components 404 (e.g., optics, structure for the optics, etc.) may be adjusted horizontally and vertically, or rotated, for the user’s comfort, as shown in FIG. 4B. In one or more embodiments, as shown in FIG. 4C, a rotation point 406 of the head set with respect to the user’s head may be adjusted based on the structure of the user’s head. Similarly, the inter-pupillary distance (IPD) 408 (i.e., the distance between the user’s eyes) may be compensated for, as shown in FIG. 4D.
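The per-user fit measurements discussed with respect to FIGS. 4A-4D could be gathered into a calibration record along the lines of the following hypothetical sketch. The field names and the coarse IPD check are assumptions for illustration, not the disclosed calibration procedure; a real system would combine several signals before deciding whether the current wearer matches the stored profile.

```python
from dataclasses import dataclass


@dataclass
class FitProfile:
    """Hypothetical per-user fit/calibration record (FIGS. 4A-4D)."""
    user_id: str
    head_width_mm: float              # head shape 402
    optics_rotation_deg: float        # eye component 404 adjustment
    rotation_point_offset_mm: float   # rotation point 406
    ipd_mm: float                     # inter-pupillary distance 408
    eye_relief_mm: float              # display-to-eye distance


def matches_wearer(profile: FitProfile, measured_ipd_mm: float,
                   tolerance_mm: float = 1.5) -> bool:
    """Coarse check that the person currently wearing the device is the
    user the device was calibrated for."""
    return abs(profile.ipd_mm - measured_ipd_mm) <= tolerance_mm
```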
[0054] Advantageously, in the context of user identification and authentication, this aspect of the head-worn AR devices is crucial because the system already possesses a set of measurements about the user’s physical features (e.g., eye size, head size, distance between eyes, etc.), and other data that may be used to easily identify the user, and allow the user to complete one or more business transactions. Additionally, the AR system may easily be able to detect when the AR system is being worn by a different AR user other than a user that is authorized to use the AR system. This allows the AR system to constantly monitor the user’s eyes, and thus be aware of the user’s identity as needed.
[0055] In addition to the various measurements and calibrations performed on the user, the AR device may be configured to track a set of biometric data about the user. For example, the system may track eye movements, eye movement patterns, blinking patterns, eye vergence, eye color, iris patterns, retinal patterns, fatigue parameters, changes in eye color, changes in focal distance, and many other parameters that may be used in providing an optimal augmented reality experience to the user.
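One of the signals listed above, the pattern of the user's eye movements, could be compared against a stored "signature" movement along the lines of the sketch below (this also corresponds to the eye-movement authentication of FIG. 9). The distance metric and threshold are illustrative assumptions rather than the disclosed matching method.

```python
import math


def pattern_distance(sample: list, reference: list) -> float:
    """Mean Euclidean distance between two equal-length gaze traces,
    each a list of (x, y) gaze positions in normalized display coordinates."""
    assert len(sample) == len(reference) and len(sample) > 0
    total = sum(math.dist(p, q) for p, q in zip(sample, reference))
    return total / len(sample)


def matches_signature(sample: list, reference: list,
                      max_distance: float = 0.05) -> bool:
    """True if the tracked eye-movement pattern is close enough to the
    user's known signature movement."""
    return pattern_distance(sample, reference) <= max_distance
```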
[0056] Referring to FIG. 5, one simplified embodiment of a suitable user display device 62 is shown, comprising a display lens 106 which may be mounted to a user’s head or eyes by a housing or frame 108. The display lens 106 may comprise one or more transparent mirrors positioned by the housing 108 in front of the user’s eyes 20 and configured to bounce projected light 38 into the eyes 20 and facilitate beam shaping, while also allowing for transmission of at least some light from the local environment. In the depicted embodiment, two wide-field-of-view machine vision cameras 16 are coupled to the housing 108 to image the environment around the user; in one embodiment these cameras 16 are dual capture visible light/infrared light cameras.
[0057] The depicted embodiment also comprises a pair of scanned-laser shaped-wavefront (i.e., for depth) light projector modules 18 (e.g., spatial light modulators such as DLP, fiber scanning devices (FSDs), LCDs, etc.) with display mirrors and optics configured to project light 38 into the eyes 20 as shown. The depicted embodiment also comprises two miniature infrared cameras 24 paired with infrared light sources 26, such as light emitting diodes “LED”s, which are configured to be able to track the eyes 20 of the user to support rendering and user input. The display system 62 further features a sensor assembly 39, which may comprise X, Y, and Z axis accelerometer capability as well as a magnetic compass and X, Y, and Z axis gyro capability, preferably providing data at a relatively high frequency, such as 200 Hz. The depicted system 62 also comprises a head pose processor 36, such as an ASIC (application specific integrated circuit), FPGA (field programmable gate array), and/or ARM processor (advanced reduced-instruction-set machine), which may be configured to calculate real or near-real time user head pose from wide field of view image information output from the cameras 16. The head pose processor 36 is operatively coupled (90, 92, 94; e.g., via wired or wireless connectivity) to the cameras 16 and the rendering engine 34.
[0058] Also shown is another processor 32 configured to execute digital and/or analog processing to derive pose from the gyro, compass, and/or accelerometer data from the sensor assembly 39. The depicted embodiment also features a GPS 37 subsystem to assist with pose and positioning.
[0059] Finally, the depicted embodiment comprises a rendering engine 34 which may feature hardware running a software program configured to provide rendering information local to the user to facilitate operation of the scanners and imaging into the eyes of the user, for the user’s view of the world. The rendering engine 34 is operatively coupled (105, 94, 100/102, 104; i.e., via wired or wireless connectivity) to the sensor pose processor 32, the image pose processor 36, the eye tracking cameras 24, and the projecting subsystem 18 such that rendered light 38 is projected using a scanned laser arrangement 18 in a manner similar to a retinal scanning display. The wavefront of the projected light beam 38 may be bent or focused to coincide with a desired focal distance of the projected light 38.
[0060] The mini infrared cameras 24 may be utilized to track the eyes to support rendering and user input (i.e., where the user is looking and at what depth he or she is focusing; as discussed below, eye vergence may be utilized to estimate depth of focus). The GPS 37, gyros, compass, and accelerometers 39 may be utilized to provide coarse and/or fast pose estimates. The camera 16 images and pose data, in conjunction with data from an associated cloud computing resource, may be utilized to map the local world and share user views with a virtual or augmented reality community.
[0061] While much of the hardware in the display system 62 featured in FIG. 5 is depicted directly coupled to the housing 108 which is adjacent the display 106 and the eyes 20 of the user, the hardware components depicted may be mounted to or housed within other components, such as a belt-mounted component 70, as shown, for example, in FIG. 2D.
[0062] In one embodiment, all of the components of the system 62 featured in FIG. 5 are directly coupled to the display housing 108 except for the image pose processor 36, sensor pose processor 32, and rendering engine 34, and communication between the latter three and the remaining components of the system may be by wireless communication, such as ultra wideband, or wired communication. The depicted housing 108 preferably is head-mounted and wearable by the user. It may also feature speakers, such as those which may be inserted into the ears of a user and utilized to provide sound to the user.
[0063] Having described the principal components of a standard AR device, it should be appreciated that the AR device may comprise many components that are configured to collect data from the user and his/her surroundings. For example, as described above, some embodiments of the AR device collect GPS information to determine a location of the user. In other embodiments, the AR device comprises infrared cameras to track the eyes of the user. In yet other embodiments, the AR device may comprise field-of-view cameras to capture images of the user’s environment, which may, in turn, be used to construct a map (contained in one of the servers 110, as described in FIG. 3) of the user’s physical space, which allows the system to render virtual content in relation to appropriate real-life objects, as described briefly with respect to FIG. 3.
[0064] Regarding the projection of light 38 into the eyes 20 of the user, in one embodiment the mini cameras 24 may be utilized to measure where the centers of a user’s eyes 20 are geometrically verged to, which, in general, coincides with a position of focus, or “depth of focus”, of the eyes 20. A three dimensional surface of all points the eyes verge to is called the “horopter”. The focal distance may take on a finite number of depths, or may be infinitely varying. Light projected from the vergence distance appears to be focused to the subject eye 20, while light in front of or behind the vergence distance is blurred.
[0065] Further, it has been discovered that spatially coherent light with a beam diameter of less than about 0.7 millimeters is correctly resolved by the human eye regardless of where the eye focuses; given this understanding, to create an illusion of proper focal depth, the eye vergence may be tracked with the mini cameras 24, and the rendering engine 34 and projection subsystem 18 may be utilized to render all objects on or close to the horopter in focus, and all other objects at varying degrees of defocus (i.e., using intentionally-created blurring). A see-through light guide optical element configured to project coherent light into the eye may be provided by suppliers such as Lumus, Inc. Preferably the system 62 renders to the user at a frame rate of about 60 frames per second or greater. As described above, preferably the mini cameras 24 may be utilized for eye tracking, and software may be configured to pick up not only vergence geometry but also focus location cues to serve as user inputs. Preferably such a system is configured with brightness and contrast suitable for day or night use.
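The relationship between vergence and depth of focus described in the preceding paragraphs can be approximated with simple geometry: the gaze rays of the two eyes, separated by the inter-pupillary distance, cross at the vergence distance, and virtual content away from that distance can be rendered with increasing defocus. The formulas below are a toy approximation for illustration, not the disclosed rendering method.

```python
import math


def vergence_depth_m(ipd_m: float, vergence_angle_rad: float) -> float:
    """Distance at which the two gaze rays cross, given the total vergence
    angle between them; parallel gaze corresponds to optical infinity."""
    if vergence_angle_rad <= 0.0:
        return float("inf")
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)


def defocus_amount(object_depth_m: float, focus_depth_m: float,
                   gain: float = 1.0) -> float:
    """Blur grows with the dioptric distance between the object and the
    current vergence (focus) depth; objects on the horopter get no blur."""
    return gain * abs(1.0 / object_depth_m - 1.0 / focus_depth_m)


# Example: a 64 mm IPD with roughly 3.66 degrees of vergence puts the
# focus depth at about 1 meter.
depth = vergence_depth_m(0.064, math.radians(3.66))
```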
[0066] In one embodiment such a system preferably has latency of less than about 20 milliseconds for visual object alignment, less than about 0.1 degree of angular alignment, and about 1 arc minute of resolution, which is approximately the limit of the human eye. The display system 62 may be integrated with a localization system, which may involve GPS elements, optical tracking, compass, accelerometers, and/or other data sources, to assist with position and pose determination; localization information may be utilized to facilitate accurate rendering in the user’s view of the pertinent world (i.e., such information helps the glasses know where they are with respect to the real world). Having described the general components of the AR device, additional embodiments specifically pertinent to user identification and authentication for conducting business transactions will be discussed below.
[0067] As discussed in some detail above, the traditional model(s) for conducting business transactions tend to be inefficient and onerous, and often have the effect of deterring users from engaging in transactions. For example, consider a user at a department store. In traditional models, the user is required to physically go to a store, select items, stand in line, wait for the cashier, provide payment information and/or identification, and authorize payment. Even online shopping, which is arguably less cumbersome, comes with its share of drawbacks. Although the user does not have to physically be at the store location and can easily select items of interest, payment still often requires credit card information and authentication. With the advent of AR devices, however, the traditional models of payment (e.g., cash, credit card, monetary tokens, etc.) may be rendered unnecessary, because the AR device can easily confirm the user’s identity and authenticate a business transaction.
[0068] For example, an AR user may leisurely stroll into a retail store and pick up an item. The AR device may confirm the user’s identity and confirm whether the user wants to make the purchase, and the user may simply walk out of the store. In one or more embodiments, the AR device may interface with a financial institution that will transfer money from the user’s account to an account associated with the retail store based on the confirmed purchase. Or, in another example, the AR user may watch an advertisement for a particular brand of shoes. The user may indicate, through the AR device, that the user wants to purchase the shoes. The AR device may confirm the identity of the user, and authenticate the purchase. On the back-end, an order may be placed at the retailer of the brand of shoes, and the retailer may simply ship a pair of the desired shoes to the user. As illustrated by the above examples, since the AR device “knows” the identity of the user (and AR devices are typically built and customized for every individual user), financial transactions are easily authenticated, thereby greatly reducing the friction costs typically associated with conducting business.
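The purchase flow in these examples, where the device authenticates the wearer, the wearer confirms the purchase, and a financial institution releases payment to the vendor, might look schematically like the following sketch. The client class and its release_payment call are hypothetical stand-ins; the patent does not specify a concrete back-end interface to the financial institution.

```python
from dataclasses import dataclass


@dataclass
class PurchaseRequest:
    user_id: str
    vendor_id: str
    item_id: str
    amount: float


class FinancialInstitutionClient:
    """Stand-in for the server-side link to the user's financial institution."""

    def release_payment(self, request: PurchaseRequest) -> bool:
        # In a real deployment this would be a secure network call to the
        # institution, which transfers funds to the indicated vendor.
        print(f"Transferring {request.amount} from {request.user_id} "
              f"to {request.vendor_id} for item {request.item_id}")
        return True


def complete_purchase(authenticated: bool, confirmed_by_user: bool,
                      request: PurchaseRequest,
                      bank: FinancialInstitutionClient) -> bool:
    """Release payment only after the AR device has confirmed the wearer's
    identity and the wearer has confirmed the purchase."""
    if not (authenticated and confirmed_by_user):
        return False
    return bank.release_payment(request)
```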
……