Magic Leap Patent | Methods And Systems For Creating Virtual And Augmented Reality

Patent: Methods And Systems For Creating Virtual And Augmented Reality

Publication Number: 20160026253

Publication Date: 20160128

Applicants: Magic Leap

Abstract

Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the one or more images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims priority to U.S. Provisional Patent App. Ser. No. 62/012,273 filed on Jun. 14, 2014 entitled “METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY,” under Attorney Docket No. ML.30019.00. This application is a continuation-in-part of U.S. patent application Ser. No. 14/331,218 filed on Jul. 14, 2014 entitled “PLANAR WAVEGUIDE APPARATUS WITH DIFFRACTION LENSING ELEMENT(S) AND SYSTEM EMPLOYING SAME,” under Attorney Docket No. ML.20020.00. This application is cross-related to U.S. patent application Ser. No. 14/555,585 filed on Nov. 27, 2014 entitled “VIRTUAL AND AR SYSTEMS AND METHODS,” under Attorney Docket No. ML.20011.00, U.S. patent application Ser. No. 14/690,401 filed on Apr. 18, 2015 entitled “SYSTEMS AND METHOD FOR AUGMENTED REALITY” under attorney docket number ML.200V7.00, and to U.S. patent application Ser. No. 14/205,126 filed on Mar. 11, 2014 entitled “SYSTEM AND METHOD FOR AUGMENTED AND VIRTUAL REALITY,” under attorney docket number ML.20005.00. The contents of the aforementioned patent applications are hereby expressly incorporated by reference in their entirety for all purposes.

BACKGROUND

[0002] Modern computing and display technologies have facilitated the development of systems for so-called “virtual reality” or “augmented reality” experiences, wherein digitally reproduced images or portions thereof are presented to a user in a manner wherein they seem to be, or may be perceived as, real. A virtual reality, or “VR”, scenario typically involves presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality, or “AR”, scenario typically involves presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user. For example, an augmented reality scene may allow a user of AR technology to see one or more virtual objects super-imposed on or amidst real world objects (e.g., a real-world park-like setting featuring people, trees, buildings in the background, etc.).

[0003] The human visual perception system is very complex, and producing a VR or AR technology that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging. Traditional stereoscopic wearable glasses generally feature two displays that are configured to display images with slightly different element presentation such that a three-dimensional perspective is perceived by the human visual system. Such configurations have been found to be uncomfortable for many users due to a mismatch between vergence and accommodation which must be overcome to perceive the images in three dimensions. Indeed, some users are not able to tolerate stereoscopic configurations.

[0004] Although a few optical configurations (e.g., head-mounted glasses) are available (e.g., GoogleGlass®, Oculus Rift®, etc.), none of these configurations is optimally suited for presenting a rich, binocular, three-dimensional augmented reality experience in a manner that will be comfortable and maximally useful to the user, in part because prior systems fail to address some of the fundamental aspects of the human perception system, including the photoreceptors of the retina and their interoperation with the brain to produce the perception of visualization to the user.

[0005] The human eye is an exceedingly complex organ, and typically comprises a cornea, an iris, a lens, a macula, a retina, and optic nerve pathways to the brain. The macula is the center of the retina, which is utilized to see moderate detail. At the center of the macula is a portion of the retina that is referred to as the “fovea”, which is utilized for seeing the finest details of a scene, and which contains more photoreceptors (approximately 120 cones per visual degree) than any other portion of the retina.

[0006] The human visual system is not a passive sensor type of system; it actively scans the environment. In a manner somewhat akin to use of a flatbed scanner to capture an image, or use of a finger to read Braille from a paper, the photoreceptors of the eye fire in response to changes in stimulation, rather than constantly responding to a constant state of stimulation. Thus, motion is required to present photoreceptor information to the brain.

[0007] Indeed, experiments with substances such as cobra venom, which has been utilized to paralyze the muscles of the eye, have shown that a human subject will experience blindness if positioned with eyes open, viewing a static scene with venom-induced paralysis of the eyes. In other words, without changes in stimulation, the photoreceptors do not provide input to the brain and blindness is experienced. It is believed that this is at least one reason that the eyes of normal humans have been observed to move back and forth, or dither, in side-to-side motion, also known as “microsaccades”.

[0008] As noted above, the fovea of the retina contains the greatest density of photoreceptors. While it is typically perceived that humans have high-resolution visualization capabilities throughout a field of view, in actuality humans have only a small high-resolution center that is mechanically swept around almost constantly, along with a persistent memory of the high-resolution information recently captured with the fovea. In a somewhat similar manner, the focal distance control mechanism of the eye (e.g., ciliary muscles operatively coupled to the crystalline lens in a manner wherein ciliary relaxation causes taut ciliary connective fibers to flatten out the lens for more distant focal lengths; ciliary contraction causes loose ciliary connective fibers, which allow the lens to assume a more rounded geometry for more close-in focal lengths) dithers back and forth by approximately 1/4 to 1/2 diopter to cyclically induce a small amount of “dioptric blur” on both the close side and far side of the targeted focal length. This is utilized by the accommodation control circuits of the brain as cyclical negative feedback that helps to constantly correct course and keep the retinal image of a fixated object approximately in focus.

[0009] The visualization center of the brain also gains valuable perception information from the motion of both eyes and components thereof relative to each other. Vergence movements (e.g., rolling movements of the pupils toward or away from each other to converge the lines of sight of the eyes to fixate upon an object) of the two eyes relative to each other are closely associated with focusing (or “accommodation”) of the lenses of the eyes. Under normal conditions, changing the focus of the lenses of the eyes, or accommodating the eyes, to focus upon an object at a different distance will automatically cause a matching change in vergence to the same distance, under a relationship known as the “accommodation-vergence reflex.” Likewise, a change in vergence will trigger a matching change in accommodation, under normal conditions. Working against this reflex (as is the case with most conventional stereoscopic AR or VR configurations) is known to produce eye fatigue, headaches, or other forms of discomfort in users.
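
To make the accommodation-vergence relationship concrete, the sketch below (not part of the patent; it assumes symmetric, straight-ahead fixation and a nominal interpupillary distance of about 63 mm) computes the vergence angle and the accommodative demand that correspond to a given fixation distance. A conventional stereoscopic display drives vergence toward the simulated distance while accommodation stays locked to the physical screen, which is the mismatch discussed above.

```python
import math

def vergence_angle_deg(fixation_distance_m: float, ipd_m: float = 0.063) -> float:
    """Total vergence angle (degrees) for both eyes fixating a point straight
    ahead at the given distance. ipd_m is an assumed interpupillary distance."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / fixation_distance_m))

def accommodation_diopters(fixation_distance_m: float) -> float:
    """Accommodative demand in diopters for a focus target at the given distance."""
    return 1.0 / fixation_distance_m

for d in (0.5, 1.0, 2.0, 6.0):
    print(f"{d:4.1f} m -> vergence {vergence_angle_deg(d):5.2f} deg, "
          f"accommodation {accommodation_diopters(d):4.2f} D")
```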

[0010] Movement of the head, which houses the eyes, also has a key impact upon visualization of objects. Humans tend to move their heads to visualize the world around them, and are often in a fairly constant state of repositioning and reorienting the head relative to an object of interest. Further, most people prefer to move their heads when their eye gaze needs to move more than about 20 degrees off center to focus on a particular object (e.g., people do not typically like to look at things “from the corner of the eye”). Humans also typically scan or move their heads in relation to sounds, to improve audio signal capture and utilize the geometry of the ears relative to the head. The human visual system gains powerful depth cues from what is called “head motion parallax”, which is related to the relative motion of objects at different distances as a function of head motion and eye vergence distance. In other words, if a person moves his head from side to side and maintains fixation on an object, items farther out from that object will move in the same direction as the head, and items in front of that object will move opposite the head motion. These may be very salient cues for where objects are spatially located in the environment relative to the person. Head motion also is utilized to look around objects, of course.
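
The parallax cue described above can be approximated with simple geometry. The sketch below is illustrative only (the head shift, distances, and sign convention are assumptions, not taken from the patent); it shows that, for a sideways head translation with fixation maintained, objects beyond the fixation point shift in the same direction as the head and nearer objects shift against it.

```python
import math

def parallax_shift_deg(head_shift_m: float, fixation_m: float, object_m: float) -> float:
    """Approximate angular shift of an object relative to a fixated point when
    the head translates sideways by head_shift_m while fixation is maintained.
    Positive: the object appears to move with the head (farther than fixation);
    negative: against the head (closer than fixation)."""
    # Angle subtended by the head shift at each distance.
    fixation_angle = math.degrees(math.atan(head_shift_m / fixation_m))
    object_angle = math.degrees(math.atan(head_shift_m / object_m))
    return fixation_angle - object_angle

print(parallax_shift_deg(0.1, 2.0, 4.0))   # farther object: positive, moves with the head
print(parallax_shift_deg(0.1, 2.0, 1.0))   # nearer object: negative, moves against the head
```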

[0011] Further, head and eye motion are coordinated with the “vestibulo-ocular reflex”, which stabilizes image information relative to the retina during head rotations, thus keeping the object image information approximately centered on the retina. In response to a head rotation, the eyes are reflexively and proportionately rotated in the opposite direction to maintain stable fixation on an object. As a result of this compensatory relationship, many humans can read a book while shaking their head back and forth. Interestingly, if the book is panned back and forth at the same speed with the head approximately stationary, the same generally is not true–the person is not likely to be able to read the moving book. The vestibulo-ocular reflex is one of head and eye motion coordination, and is generally not developed for hand motion. This paradigm may be important for AR systems, because head motions of the user may be associated relatively directly with eye motions, and an ideal system preferably will be ready to work with this relationship.

[0012] Indeed, given these various relationships, when placing digital content (e.g., 3-D content such as a virtual chandelier object presented to augment a real-world view of a room; or 2-D content such as a planar/flat virtual oil painting object presented to augment a real-world view of a room), design choices may be made to control behavior of the objects. For example, a 2-D oil painting object may be head-centric, in which case the object moves around along with the user’s head (e.g., as in a GoogleGlass® approach). In another example, an object may be world-centric, in which case it may be presented as though it is part of the real world coordinate system, such that the user may move his head or eyes without moving the position of the object relative to the real world.

[0013] Thus, when placing virtual content into the augmented reality world presented with an AR system, choices are made as to whether the object should be presented as world-centric, body-centric, head-centric, or eye-centric. In world-centric approaches, the virtual object stays in position in the real world so that the user may move his body, head, or eyes around it without changing its position relative to the real world objects surrounding it, such as a real world wall. In body-centric approaches, a virtual element may be fixed relative to the user’s torso, so that the user can move his head or eyes without moving the object, but the object is slaved to torso movements. In head-centric approaches, the displayed object (and/or the display itself) may be moved along with head movements, as described above in reference to GoogleGlass®. In eye-centric approaches, as in a “foveated display” configuration described below, content is slewed around as a function of the eye position.
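
A minimal sketch of these four placement choices is shown below, assuming a rendering loop that already has 4x4 world-space poses for the body, head, and eye frames from tracking; the function and dictionary names are hypothetical and are not the patent's API.

```python
import numpy as np

def object_world_pose(frame: str, local_pose: np.ndarray, poses: dict) -> np.ndarray:
    """Resolve a virtual object's world-space pose from the frame it is anchored to.

    frame      -- 'world', 'body', 'head', or 'eye'
    local_pose -- 4x4 pose of the object expressed in that frame
    poses      -- 4x4 world-space poses of the tracked frames (from head/eye tracking)
    """
    if frame == "world":
        return local_pose                       # fixed in the real-world coordinate system
    if frame == "body":
        return poses["body"] @ local_pose       # follows the torso, ignores head/eye motion
    if frame == "head":
        return poses["head"] @ local_pose       # follows head motion (GoogleGlass-style)
    if frame == "eye":
        return poses["eye"] @ local_pose        # slewed with gaze, as in a foveated display
    raise ValueError(f"unknown frame: {frame}")
```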

[0014] With world-centric configurations, it may be desirable to have inputs such as accurate head pose measurement, accurate representation and/or measurement of real world objects and geometries around the user, low-latency dynamic rendering in the augmented reality display as a function of head pose, and a generally low-latency display.

[0015] The U.S. patent applications listed above present systems and techniques to work with the visual configuration of a typical human to address various challenges in virtual reality and augmented reality applications. The design of these virtual reality and/or AR systems presents numerous challenges, including the speed of the system in delivering virtual content, quality of virtual content, eye relief of the user, size and portability of the system, and other system and optical challenges.

[0016] The systems and techniques described herein are configured to work with the visual configuration of the typical human to address these challenges.

SUMMARY

[0017] Embodiments of the present invention are directed to devices, systems and methods for facilitating virtual reality and/or augmented reality interaction for one or more users. In one aspect, a system for displaying virtual content is disclosed.

[0018] In one aspect, an augmented reality system comprises an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the one or more images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
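
As a rough illustration of the pipeline recited above, the sketch below extracts candidate map points from a grayscale frame, splits them into sparse and dense sets, and normalizes their coordinates. The gradient-based detector, the split threshold, and the zero-mean/unit-scale normalization are assumptions made for illustration; the patent does not specify these choices.

```python
import numpy as np

def process_frame(image: np.ndarray, corner_threshold: float = 0.1):
    """Extract candidate map points from a grayscale frame, split them into
    sparse (high-gradient, trackable) and dense (remaining) sets, and normalize
    the coordinates. Illustrative only; not the patent's actual implementation."""
    # Gradient magnitude as a crude "interest" measure for each pixel.
    gy, gx = np.gradient(image.astype(np.float64))
    strength = np.hypot(gx, gy)

    ys, xs = np.nonzero(strength > 0)
    points = np.stack([xs, ys], axis=1).astype(np.float64)
    scores = strength[ys, xs]

    cutoff = corner_threshold * scores.max()
    sparse = points[scores >= cutoff]
    dense = points[scores < cutoff]

    # Normalize: zero mean, unit average distance (a common choice in geometry pipelines).
    mean = points.mean(axis=0)
    scale = np.mean(np.linalg.norm(points - mean, axis=1)) or 1.0
    normalized = (points - mean) / scale
    return sparse, dense, normalized
```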

[0019] Additional and other objects, features, and advantages of the invention are described in the detailed description, figures, and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] The drawings illustrate the design and utility of various embodiments of the present invention. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. In order to better appreciate how to obtain the above-recited and other advantages and objects of various embodiments of the invention, a more detailed description of the present inventions briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0021] FIG. 1 illustrates a system architecture of an augmented reality (AR) system interacting with one or more servers, according to one illustrated embodiment.

[0022] FIG. 2 illustrates a detailed view of a cell phone used as an AR device interacting with one or more servers, according to one illustrated embodiment.

[0023] FIG. 3 illustrates a plan view of an example AR device mounted on a user’s head, according to one illustrated embodiment.

[0024] FIGS. 4A-4D illustrate one or more embodiments of various internal processing components of the wearable AR device.

[0025] FIGS. 5A-5H illustrate embodiments of transmitting focused light to a user through a transmissive beamsplitter substrate.

[0026] FIGS. 6A and 6B illustrate embodiments of coupling a lens element with the transmissive beamsplitter substrate of FIGS. 5A-5H.

[0027] FIGS. 7A and 7B illustrate embodiments of using one or more waveguides to transmit light to a user.

[0028] FIGS. 8A-8Q illustrate embodiments of a diffractive optical element (DOE).

[0029] FIGS. 9A and 9B illustrate a wavefront produced from a light projector, according to one illustrated embodiment.

[0030] FIG. 10 illustrates an embodiment of a stacked configuration of multiple transmissive beamsplitter substrates coupled with optical elements, according to one illustrated embodiment.

[0031] FIGS. 11A-11C illustrate a set of beamlets projected into a user’s pupil, according to the illustrated embodiments.

[0032] FIGS. 12A and 12B illustrate configurations of an array of microprojectors, according to the illustrated embodiments.

[0033] FIGS. 13A-13M illustrate embodiments of coupling microprojectors with optical elements, according to the illustrated embodiments.

[0034] FIGS. 14A-14F illustrate embodiments of spatial light modulators coupled with optical elements, according to the illustrated embodiments.

[0035] FIGS. 15A-15C illustrate the use of wedge-type waveguides along with a plurality of light sources, according to the illustrated embodiments.

[0036] FIGS. 16A-16O illustrate embodiments of coupling optical elements to optical fibers, according to the illustrated embodiments.

[0037] FIG. 17 illustrates a notch filter, according to one illustrated embodiment.

[0038] FIG. 18 illustrates a spiral pattern of a fiber scanning display, according to one illustrated embodiment.

[0039] FIGS. 19A-19N illustrate occlusion effects in presenting a darkfield to a user, according to the illustrated embodiments.

[0040] FIGS. 20A-20O illustrate embodiments of various waveguide assemblies, according to the illustrated embodiments.

[0041] FIGS. 21A-21N illustrate various configurations of DOEs coupled to other optical elements, according to the illustrated embodiments.

[0042] FIGS. 22A-22Y illustrate various configurations of freeform optics, according to the illustrated embodiments.

[0043] FIG. 23 illustrates a top view of components of a simplified individual AR device.

[0044] FIG. 24 illustrates an example embodiment of the optics of the individual AR system.

[0045] FIG. 25 illustrates a system architecture of the individual AR system, according to one embodiment.

[0046] FIG. 26 illustrates a room based sensor system, according to one embodiment.

[0047] FIG. 27 illustrates a communication architecture of the augmented reality system and the interaction of the augmented reality systems of many users with the cloud.

[0048] FIG. 28 illustrates a simplified view of the passable world model, according to one embodiment.

[0049] FIG. 29 illustrates an example method of rendering using the passable world model, according to one embodiment.

[0050] FIG. 30 illustrates a high level flow diagram for a process of recognizing an object, according to one embodiment.

[0051] FIG. 31 illustrates a ring buffer approach employed by object recognizers to recognize objects in the passable world, according to one embodiment.

[0052] FIG. 32 illustrates an example topological map, according to one embodiment.

[0053] FIG. 33 illustrates a high level flow diagram for a process of localization using the topological map, according to one embodiment.

[0054] FIG. 34 illustrates a geometric map as a connection between various keyframes, according to one embodiment.

[0055] FIG. 35 illustrates an example embodiment of the topological map layered on top of the geometric map, according to one embodiment.

[0056] FIG. 36 illustrates a high level flow diagram for a process of performing a wave propagation bundle adjust, according to one embodiment.

[0057] FIG. 37 illustrates map points and render lines from the map points to the keyframes as seen through a virtual keyframe, according to one embodiment.

[0058] FIG. 38 illustrates a high level flow diagram for a process of finding map points based on render rather than search, according to one embodiment.

[0059] FIG. 39 illustrates a high level flow diagram for a process of rendering a virtual object based on a light map, according to one embodiment.

[0060] FIG. 40 illustrates a high level flow diagram for a process of creating a light map, according to one embodiment.

[0061] FIG. 41 depicts a user-centric light map, according to one embodiment.

[0062] FIG. 42 depicts an object-centric light map, according to one embodiment.

[0063] FIG. 43 illustrates a high level flow diagram for a process of transforming a light map, according to one embodiment.

[0064] FIG. 44 illustrates a library of autonomous navigation definitions or objects, according to one embodiment.

[0065] FIG. 45 illustrates an interaction of various autonomous navigation objects, according to one embodiment.

[0066] FIG. 46 illustrates a stack of autonomous navigation definitions or objects, according to one embodiment.

[0067] FIGS. 47A-47B illustrate using the autonomous navigation definitions to identify emotional states, according to one embodiment.

[0068] FIG. 48 illustrates a correlation threshold graph to be used to define an autonomous navigation definition or object, according to one embodiment.

[0069] FIG. 49 illustrates a system view of the passable world model, according to one embodiment.

[0070] FIG. 50 illustrates an example method of displaying a virtual scene, according to one embodiment.

[0071] FIG. 51 illustrates a plan view of various modules of the AR system, according to one illustrated embodiment.

[0072] FIG. 52 illustrates an example of objects viewed by a user when the AR device is operated in an augmented reality mode, according to one illustrated embodiment.

[0073] FIG. 53 illustrates an example of objects viewed by a user when the AR device is operated in a virtual mode, according to one illustrated embodiment.

[0074] FIG. 54 illustrates an example of objects viewed by a user when the AR device is operated in a blended virtual interface mode, according to one illustrated embodiment.

[0075] FIG. 55 illustrates an embodiment wherein two users located in different geographical locations each interact with the other user and a common virtual world through their respective user devices, according to one embodiment.

[0076] FIG. 56 illustrates an embodiment wherein the embodiment of FIG. 55 is expanded to include the use of a haptic device, according to one embodiment.

[0077] FIGS. 57A-57B illustrate an example of mixed mode interfacing, according to one or more embodiments.

[0078] FIG. 58 illustrates an example illustration of a user’s view when interfacing the AR system, according to one embodiment.

[0079] FIG. 59 illustrates an example illustration of a user’s view showing a virtual object triggered by a physical object when the user is interfacing the system in an augmented reality mode, according to one embodiment.

[0080] FIG. 60 illustrates one embodiment of an augmented and virtual reality integration configuration wherein one user in an augmented reality experience visualizes the presence of another user in a virtual reality experience.

[0081] FIG. 61 illustrates one embodiment of a time and/or contingency event based augmented reality experience configuration.

[0082] FIG. 62 illustrates one embodiment of a user display configuration suitable for virtual and/or augmented reality experiences.

[0083] FIG. 63 illustrates one embodiment of local and cloud-based computing coordination.

[0084] FIG. 64 illustrates various aspects of registration configurations, according to one illustrated embodiment.

[0085] FIG. 65 illustrates an example scenario of interacting with the AR system, according to one embodiment.

[0086] FIG. 66 illustrates another perspective of the example scenario of FIG. 65, according to another embodiment.

[0087] FIG. 67 illustrates yet another perspective view of the example scenario of FIG. 65, according to another embodiment.

[0088] FIG. 68 illustrates a top view of the example scenario according to one embodiment.

[0089] FIG. 69 illustrates a game view of the example scenario of FIGS. 65-68, according to one embodiment.

[0090] FIG. 70 illustrates a top view of the example scenario of FIGS. 65-68, according to one embodiment.

[0091] FIG. 71 illustrates an augmented reality scenario including multiple users, according to one embodiment.

[0092] FIGS. 72A-72B illustrate using a smartphone or tablet as an AR device, according to one embodiment.

[0093] FIG. 73 illustrates an example method of using localization to communicate between users of the AR system, according to one embodiment.

[0094] FIGS. 74A-74B illustrate an example office scenario of interacting with the AR system, according to one embodiment.

[0095] FIG. 75 illustrates an example scenario of interacting with the AR system in a house, according to one embodiment.

[0096] FIG. 76 illustrates another example scenario of interacting with the AR system in a house, according to one embodiment.

[0097] FIG. 77 illustrates another example scenario of interacting with the AR system in a house, according to one embodiment.

[0098] FIGS. 78A-78B illustrate yet another example scenario of interacting with the AR system in a house, according to one embodiment.

[0099] FIGS. 79A-79E illustrate another example scenario of interacting with the AR system in a house, according to one embodiment.

[0100] FIGS. 80A-80O illustrate another example scenario of interacting with the AR system in a virtual room, according to one embodiment.

[0101] FIG. 81 illustrates another example user interaction scenario, according to one embodiment.

[0102] FIG. 82 illustrates another example user interaction scenario, according to one embodiment.

[0103] FIGS. 83A-83B illustrate yet another example user interaction scenario, according to one or more embodiments.

[0104] FIGS. 84A-84C illustrate the user interacting with the AR system in a virtual space, according to one or more embodiments.

[0105] FIGS. 85A-85C illustrate various user interface embodiments.

[0106] FIGS. 86A-86C illustrate other embodiments to create a user interface, according to one or more embodiments.

[0107] FIGS. 87A-87C illustrate other embodiments to create and move a user interface, according to one or more embodiments.

[0108] FIGS. 88A-88C illustrate user interfaces created on the user’s hand, according to one or more embodiments.

[0109] FIGS. 89A-89J illustrate an example user shopping experience with the AR system, according to one or more embodiments.

[0110] FIG. 90 illustrates an example library experience with the AR system, according to one or more embodiments.

[0111] FIGS. 91A-91F illustrate an example healthcare experience with the AR system, according to one or more embodiments.

[0112] FIG. 92 illustrates an example labor experience with the AR system, according to one or more embodiments.

[0113] FIGS. 93A-93L illustrate an example workspace experience with the AR system, according to one or more embodiments.

[0114] FIG. 94 illustrates another example workspace experience with the AR system, according to one or more embodiments.

[0115] FIGS. 95A-95E illustrate another AR experience, according to one or more embodiments.

[0116] FIGS. 96A-96D illustrate yet another AR experience, according to one or more embodiments.

[0117] FIGS. 97A-97H illustrate a gaming experience with the AR system, according to one or more embodiments.

[0118] FIGS. 98A-98D illustrate a web shopping experience with the AR system, according to one or more embodiments.

[0119] FIG. 99 illustrates a block diagram of various games in a gaming platform, according to one or more embodiments.

[0120] FIG. 100 illustrates a variety of user inputs to communicate with the augmented reality system, according to one embodiment.

[0121] FIG. 101 illustrates LED lights and diodes tracking a movement of the user’s eyes, according to one embodiment.

[0122] FIG. 102 illustrates a Purkinje image, according to one embodiment.

[0123] FIG. 103 illustrates a variety of hand gestures that may be used to communicate with the augmented reality system, according to one embodiment.

[0124] FIG. 104 illustrates an example totem, according to one embodiment.

[0125] FIGS. 105A-105C illustrate other example totems, according to one or more embodiments.

[0126] FIGS. 106A-106C illustrate other totems that may be used to communicate with the augmented reality system.

[0127] FIGS. 107A-107D illustrate other example totems, according to one or more embodiments.

[0128] FIGS. 108A-108O illustrate example embodiments of ring and bracelet totems, according to one or more embodiments.

[0129] FIGS. 109A-109C illustrate more example totems, according to one or more embodiments.

[0130] FIGS. 110A-110B illustrate a charms totem and a keychain totem, according to one or more embodiments.

[0131] FIG. 111 illustrates a high level flow diagram for a process of determining user input through a totem, according to one embodiment.

[0132] FIG. 112 illustrates a high level flow diagram for a process of producing a sound wavefront, according to one embodiment.

[0133] FIG. 113 is a block diagram of components used to produce a sound wavefront, according to one embodiment.

[0134] FIG. 114 is an example method of determining sparse and dense points, according to one embodiment.

[0135] FIG. 115 is a block diagram of projecting textured light, according to one embodiment.

[0136] FIG. 116 is an example block diagram of data processing, according to one embodiment.

[0137] FIG. 117 is a schematic of an eye for gaze tracking, according to one embodiment.

[0138] FIG. 118 shows another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.

[0139] FIG. 119 shows yet another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.

[0140] FIG. 120 shows yet another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.

[0141] FIG. 121 shows a translational matrix view for gaze tracking, according to one embodiment.

[0142] FIG. 122 illustrates an example method of gaze tracking, according to one embodiment.

[0143] FIGS. 123A-123D illustrate a series of example user interface flows using avatars, according to one embodiment.

[0144] FIGS. 124A-124M illustrate a series of example user interface flows using extrusion, according to one embodiment.

[0145] FIGS. 125A-125M illustrate a series of example user interface flows using gauntlet, according to one embodiment.

[0146] FIGS. 126A-126L illustrate a series of example user interface flows using grow, according to one embodiment.

[0147] FIGS. 127A-127E illustrate a series of example user interface flows using brush, according to one embodiment.

[0148] FIGS. 128A-128P illustrate a series of example user interface flows using fingerbrush, according to one embodiment.

[0149] FIGS. 129A-129M illustrate a series of example user interface flows using pivot, according to one embodiment.

[0150] FIGS. 130A-130I illustrate a series of example user interface flows using strings, according to one embodiment.

[0151] FIGS. 131A-131I illustrate a series of example user interface flows using spiderweb, according to one embodiment.

[0152] FIG. 132 is a plan view of various mechanisms by which a virtual object relates to one or more physical objects.

[0153] FIG. 133 is a plan view of various types of AR rendering, according to one or more embodiments.

[0154] FIG. 134 illustrates various types of user input in an AR system, according to one or more embodiments.

[0155] FIGS. 135A-135J illustrate various embodiments pertaining to using gestures in an AR system, according to one or more embodiments.

[0156] FIG. 136 illustrates a plan view of various components for a calibration mechanism of the AR system, according to one or more embodiments.

[0157] FIG. 137 illustrates a view of an AR device on a user’s face, the AR device having eye tracking cameras, according to one or more embodiments.

[0158] FIG. 138 illustrates an eye identification image of the AR system, according to one or more embodiments.

[0159] FIG. 139 illustrates a retinal image taken with an AR system, according to one or more embodiments.

[0160] FIG. 140 is a process flow diagram of an example method of generating a virtual user interface, according to one illustrated embodiment.

[0161] FIG. 141 is another process flow diagram of an example method of generating a virtual user interface based on a coordinate frame, according to one illustrated embodiment.

[0162] FIG. 142 is a process flow diagram of an example method of constructing a customized user interface, according to one illustrated embodiment.

[0163] FIG. 143 is a process flow diagram of an example method of retrieving information from the passable world model and interacting with other users of the AR system, according to one illustrated embodiment.

[0164] FIG. 144 is a process flow diagram of an example method of retrieving information from a knowledge base in the cloud based on received input, according to one illustrated embodiment.

[0165] FIG. 145 is a process flow diagram of an example method of calibrating the AR system, according to one illustrated embodiment.

DETAILED DESCRIPTION

[0166] Various embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples of the invention so as to enable those skilled in the art to practice the invention. Notably, the figures and the examples below are not meant to limit the scope of the present invention. Where certain elements of the present invention may be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present invention will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the invention. Further, various embodiments encompass present and future known equivalents to the components referred to herein by way of illustration.

[0167] In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.

[0168] Disclosed are methods and systems for generating virtual and/or augmented reality. In order to provide a realistic and enjoyable virtual reality (VR) or augmented reality (AR) experience, virtual content may be strategically delivered to the user’s eyes in a manner that is respectful of the human eye’s physiology and limitations. The following disclosure will provide various embodiments of such optical systems that may be integrated into an AR system. Although most of the disclosures herein will be discussed in the context of AR systems, it should be appreciated that the same technologies may be used for VR systems also, and the following embodiments should not be read as limiting.

[0169] The following disclosure will provide details on various types of systems in which AR users may interact with each other through a creation of a map that comprises comprehensive information about the physical objects of the real world in real-time. The map may be advantageously consulted in order to project virtual images in relation to known real objects. The following disclosure will provide various approaches to understanding information about the real world, and using this information to provide a more realistic and enjoyable AR experience. Additionally, this disclosure will provide various user scenarios and applications in which AR systems such as the ones described herein may be realized.

System Overview

[0170] In one or more embodiments, the AR system 10 comprises a computing network 5, comprised of one or more computer servers 11 connected through one or more high bandwidth interfaces 15. The servers 11 in the computing network may or may not be co-located. The one or more servers 11 each comprise one or more processors for executing program instructions. The servers may also include memory for storing the program instructions and data that is used and/or generated by processes being carried out by the servers 11 under direction of the program instructions.

[0171] The computing network 5 communicates data between the servers 11 and between the servers and one or more user devices 12 over one or more data network connections 13. Examples of such data networks include, without limitation, any and all types of public and private data networks, both mobile and wired, including for example the interconnection of many of such networks commonly referred to as the Internet. No particular media, topology or protocol is intended to be implied by the figure.

[0172] User devices are configured for communicating directly with computing network 5, or any of the servers 11. Alternatively, user devices 12 communicate with the remote servers 11, and, optionally, with other user devices locally, through a specially programmed, local gateway 14 for processing data and/or for communicating data between the network 5 and one or more local user devices 12.

[0173] As illustrated, gateway 14 is implemented as a separate hardware component, which includes a processor for executing software instructions and memory for storing software instructions and data. The gateway has its own wired and/or wireless connection to data networks for communicating with the servers 11 comprising computing network 5. Alternatively, gateway 14 can be integrated with a user device 12, which is worn or carried by a user. For example, the gateway 14 may be implemented as a downloadable software application installed and running on a processor included in the user device 12. The gateway 14 provides, in one embodiment, one or more users access to the computing network 5 via the data network 13.
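
A toy model of the gateway's role as a local intermediary between user devices 12 and the computing network 5 is sketched below; the class and method names are hypothetical, and the caching behavior is only one plausible reading of the description above.

```python
class Gateway:
    """Toy model of the local gateway 14: it sits between user devices 12 and the
    computing network 5, forwarding requests and caching responses locally.
    Names and behavior are illustrative, not taken from the patent."""

    def __init__(self, network_client, cache=None):
        self.network = network_client          # connection to servers 11 over data network 13
        self.cache = cache if cache is not None else {}

    def fetch_object(self, object_id: str) -> dict:
        # Serve locally buffered object data when possible; otherwise go to the cloud.
        if object_id not in self.cache:
            self.cache[object_id] = self.network.get_object(object_id)
        return self.cache[object_id]

class FakeNetwork:
    def get_object(self, object_id: str) -> dict:
        return {"id": object_id, "source": "computing network 5"}

gateway = Gateway(FakeNetwork())
print(gateway.fetch_object("obj-001"))   # fetched once, then served from the local cache
```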

[0174] Servers 11 each include, for example, working memory and storage for storing data and software programs, microprocessors for executing program instructions, graphics processors and other special processors for rendering and generating graphics, images, video, audio and multi-media files. Computing network 5 may also comprise devices for storing data that is accessed, used or created by the servers 11.

[0175] Software programs running on the servers, and optionally user devices 12 and gateways 14, are used to generate digital worlds (also referred to herein as virtual worlds) with which users interact using user devices 12. A digital world (or map) (as will be described in further detail below) is represented by data and processes that describe and/or define virtual, non-existent entities, environments, and conditions that can be presented to a user through a user device 12 for users to experience and interact with. For example, some type of object, entity or item that will appear to be physically present when instantiated in a scene being viewed or experienced by a user may include a description of its appearance, its behavior, how a user is permitted to interact with it, and other characteristics.

[0176] Data used to create an environment of a virtual world (including virtual objects) may include, for example, atmospheric data, terrain data, weather data, temperature data, location data, and other data used to define and/or describe a virtual environment. Additionally, data defining various conditions that govern the operation of a virtual world may include, for example, laws of physics, time, spatial relationships and other data that may be used to define and/or create various conditions that govern the operation of a virtual world (including virtual objects).

[0177] The entity, object, condition, characteristic, behavior or other feature of a digital world will be generically referred to herein, unless the context indicates otherwise, as an object (e.g., digital object, virtual object, rendered physical object, etc.). Objects may be any type of animate or inanimate object, including but not limited to, buildings, plants, vehicles, people, animals, creatures, machines, data, video, text, pictures, and other users. Objects may also be defined in a digital world for storing information about items, behaviors, or conditions actually present in the physical world. The data that describes or defines the entity, object or item, or that stores its current state, is generally referred to herein as object data. This data is processed by the servers 11 or, depending on the implementation, by a gateway 14 or user device 12, to instantiate an instance of the object and render the object in an appropriate manner for the user to experience through a user device.
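
The sketch below shows one plausible shape for the object data described above, with fields for appearance, behavior, interaction rules, and current state. The field names are hypothetical; the patent does not prescribe a schema.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectData:
    """Illustrative record for a digital-world object; field names are hypothetical."""
    object_id: str
    kind: str                                            # e.g. "virtual", "rendered_physical"
    appearance: dict = field(default_factory=dict)       # mesh/texture/material references
    behavior: dict = field(default_factory=dict)         # scripted or physics-driven behavior
    interaction_rules: dict = field(default_factory=dict)  # what users may do with it
    state: dict = field(default_factory=dict)            # current computational state

chandelier = ObjectData(
    object_id="obj-001",
    kind="virtual",
    appearance={"mesh": "chandelier.glb"},
    interaction_rules={"movable": False},
    state={"pose": [0.0, 2.4, 1.0]},
)
```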

[0178] Programmers who develop and/or curate a digital world create or define objects, and the conditions under which they are instantiated. However, a digital world can allow for others to create or modify objects. Once an object is instantiated, the state of the object may be permitted to be altered, controlled or manipulated by one or more users experiencing a digital world.

[0179] For example, in one embodiment, development, production, and administration of a digital world are generally provided by one or more system administrative programmers. In some embodiments, this may include development, design, and/or execution of story lines, themes, and events in the digital worlds as well as distribution of narratives through various forms of events and media such as, for example, film, digital, network, mobile, augmented reality, and live entertainment. The system administrative programmers may also handle technical administration, moderation, and curation of the digital worlds and user communities associated therewith, as well as other tasks typically performed by network administrative personnel.

[0180] Users interact with one or more digital worlds using some type of a local computing device, which is generally designated as a user device 12. Examples of such user devices include, but are not limited to, a smart phone, tablet device, head-mounted display (HMD), gaming console, or any other device capable of communicating data and providing an interface or display to the user, as well as combinations of such devices. In some embodiments, the user device 12 may include, or communicate with, local peripheral or input/output components such as, for example, a keyboard, mouse, joystick, gaming controller, haptic interface device, motion capture controller, an optical tracking device, audio equipment, voice equipment, projector system, 3D display, and/or holographic 3D contact lens.

[0181] An example of a user device 12 for interacting with the system 10 is illustrated in FIG. 2. In the example embodiment shown in FIG. 2, a user 21 may interface one or more digital worlds through a smart phone 22. The gateway is implemented by a software application 23 stored on and running on the smart phone 22. In this particular example, the data network 13 includes a wireless mobile network connecting the user device (e.g., smart phone 22) to the computer network 5.

[0182] In one implementation of a preferred embodiment, system 10 is capable of supporting a large number of simultaneous users (e.g., millions of users), each interfacing with the same digital world, or with multiple digital worlds, using some type of user device 12.

[0183] The user device provides to the user an interface for enabling a visual, audible, and/or physical interaction between the user and a digital world generated by the servers 11, including other users and objects (real or virtual) presented to the user. The interface provides the user with a rendered scene that can be viewed, heard or otherwise sensed, and the ability to interact with the scene in real-time. The manner in which the user interacts with the rendered scene may be dictated by the capabilities of the user device. For example, if the user device is a smart phone, the user interaction may be implemented by a user contacting a touch screen. In another example, if the user device is a computer or gaming console, the user interaction may be implemented using a keyboard or gaming controller. User devices may include additional components that enable user interaction such as sensors, wherein the objects and information (including gestures) detected by the sensors may be provided as input representing user interaction with the virtual world using the user device.

[0184] The rendered scene can be presented in various formats such as, for example, two-dimensional or three-dimensional visual displays (including projections), sound, and haptic or tactile feedback. The rendered scene may be interfaced by the user in one or more modes including, for example, augmented reality, virtual reality, and combinations thereof. The format of the rendered scene, as well as the interface modes, may be dictated by one or more of the following: user device, data processing capability, user device connectivity, network capacity and system workload. Having a large number of users simultaneously interacting with the digital worlds, and the real-time nature of the data exchange, is enabled by the computing network 5, servers 11, the gateway component 14 (optionally), and the user device 12.

[0185] In one example, the computing network 5 is comprised of a large-scale computing system having single and/or multi-core servers (e.g., servers 11) connected through high-speed connections (e.g., high bandwidth interfaces 15). The computing network 5 may form a cloud or grid network. Each of the servers includes memory, or is coupled with computer readable memory for storing software for implementing data to create, design, alter, or process objects of a digital world. These objects and their instantiations may be dynamic, come in and out of existence, change over time, and change in response to other conditions. Examples of dynamic capabilities of the objects are generally discussed herein with respect to various embodiments. In some embodiments, each user interfacing the system 10 may also be represented as an object, and/or a collection of objects, within one or more digital worlds.

[0186] The servers 11 within the computing network 5 also store computational state data for each of the digital worlds. The computational state data (also referred to herein as state data) may be a component of the object data, and generally defines the state of an instance of an object at a given instance in time. Thus, the computational state data may change over time and may be impacted by the actions of one or more users and/or programmers maintaining the system 10. As a user impacts the computational state data (or other data comprising the digital worlds), the user directly alters or otherwise manipulates the digital world. If the digital world is shared with, or interfaced by, other users, the actions of the user may affect what is experienced by other users interacting with the digital world. Thus, in some embodiments, changes to the digital world made by a user will be experienced by other users interfacing with the system 10.
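
A toy model of how a change to the shared computational state could propagate to other users of the same digital world is sketched below; the publish/subscribe mechanism is an assumption for illustration, not a protocol described in the patent.

```python
from collections import defaultdict
from typing import Callable

class SharedWorldState:
    """Toy model of shared computational state: a user's change to an object's
    state is applied on the server side and broadcast to every other session
    interfacing the same digital world. Purely illustrative."""

    def __init__(self) -> None:
        self.state: dict = {}
        self.subscribers = defaultdict(list)

    def subscribe(self, world_id: str, callback: Callable[[str, dict], None]) -> None:
        self.subscribers[world_id].append(callback)

    def update(self, world_id: str, object_id: str, new_state: dict) -> None:
        self.state[object_id] = new_state
        for notify in self.subscribers[world_id]:
            notify(object_id, new_state)   # other users see the change

world = SharedWorldState()
world.subscribe("world-1", lambda oid, s: print("user B sees", oid, s))
world.update("world-1", "obj-001", {"pose": [1.0, 2.4, 1.0]})
```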

[0187] The data stored in one or more servers 11 within the computing network 5 is, in one embodiment, transmitted or deployed at high speed, and with low latency, to one or more user devices 12 and/or gateway components 14. In one embodiment, object data shared by servers may be complete or may be compressed, and contain instructions for recreating the full object data on the user side, rendered and visualized by the user’s local computing device (e.g., gateway 14 and/or user device 12). Software running on the servers 11 of the computing network 5 may, in some embodiments, adapt the data it generates and sends to a particular user’s device 12 for objects within the digital world (or any other data exchanged by the computing network 5) as a function of the user’s specific device and bandwidth.

[0188] For example, when a user interacts with the digital world or map through a user device 12, a server 11 may recognize the specific type of device being used by the user, the device’s connectivity and/or available bandwidth between the user device and server, and appropriately size and balance the data being delivered to the device to optimize the user interaction. An example of this may include reducing the size of the transmitted data to a low resolution quality, such that the data may be displayed on a particular user device having a low resolution display. In a preferred embodiment, the computing network 5 and/or gateway component 14 deliver data to the user device 12 at a rate sufficient to present an interface operating at 15 frames/second or higher, and at a resolution that is high definition quality or greater.
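
A minimal sketch of device- and bandwidth-aware delivery, consistent with the example above of down-sizing data for a low-resolution display, follows; the specific thresholds and profiles are invented for illustration, while the 15 frames/second floor comes from the preferred embodiment described above.

```python
def select_stream_profile(display_height_px: int, bandwidth_mbps: float) -> dict:
    """Pick a delivery profile for a user device; thresholds are illustrative.
    The preferred embodiment above only requires >= 15 frames/second at
    high-definition quality or greater."""
    if display_height_px >= 1080 and bandwidth_mbps >= 25:
        return {"resolution": "1080p", "frame_rate": 60, "texture_quality": "full"}
    if display_height_px >= 720 and bandwidth_mbps >= 10:
        return {"resolution": "720p", "frame_rate": 30, "texture_quality": "compressed"}
    # Fall back to a low-resolution stream for constrained devices or links.
    return {"resolution": "480p", "frame_rate": 15, "texture_quality": "compressed"}

print(select_stream_profile(display_height_px=1440, bandwidth_mbps=40))
```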

[0189] The gateway 14 provides local connection to the computing network 5 for one or more users. In some embodiments, it may be implemented by a downloadable software application that runs on the user device 12 or another local device, such as that shown in FIG. 2. In other embodiments, it may be implemented by a hardware component (with appropriate software/firmware stored on the component, the component having a processor) that is either in communication with, but not incorporated with or attached to, the user device 12, or incorporated with the user device 12. The gateway 14 communicates with the computing network 5 via the data network 13, and provides data exchange between the computing network 5 and one or more local user devices 12. As discussed in greater detail below, the gateway component 14 may include software, firmware, memory, and processing circuitry, and may be capable of processing data communicated between the network 5 and one or more local user devices 12.

[0190] In some embodiments, the gateway component 14 monitors and regulates the rate of the data exchanged between the user device 12 and the computer network 5 to allow optimum data processing capabilities for the particular user device 12. For example, in some embodiments, the gateway 14 buffers and downloads both static and dynamic aspects of a digital world, even those that are beyond the field of view presented to the user through an interface connected with the user device. In such an embodiment, instances of static objects (structured data, software implemented methods, or both) may be stored in memory (local to the gateway component 14, the user device 12, or both) and are referenced against the local user’s current position, as indicated by data provided by the computing network 5 and/or the user’s device 12.

[0191] Instances of dynamic objects, which may include, for example, intelligent software agents and objects controlled by other users and/or the local user, are stored in a high-speed memory buffer. Dynamic objects representing a two-dimensional or three-dimensional object within the scene presented to a user can be, for example, broken down into component shapes, such as a static shape that is moving but is not changing, and a dynamic shape that is changing. The part of the dynamic object that is changing can be updated by a real-time, threaded high priority data stream from a server 11, through computing network 5, managed by the gateway component 14.

[0192] As one example of a prioritized threaded data stream, data that is within a 60 degree field-of-view of the user’s eye may be given higher priority than data that is more peripheral. Another example includes prioritizing dynamic characters and/or objects within the user’s field-of-view over static objects in the background.
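
A small sketch of the prioritization just described follows, assigning higher priority to data inside a 60-degree field of view and to dynamic objects; the numeric weights are assumptions, not values from the patent.

```python
def stream_priority(angle_from_gaze_deg: float, is_dynamic: bool) -> float:
    """Assign a relative priority to a piece of scene data. Weights are
    illustrative; the text above only says that data within roughly a
    60-degree field of view and dynamic objects should be favored."""
    priority = 1.0
    if abs(angle_from_gaze_deg) <= 30.0:   # inside a 60-degree field of view
        priority += 2.0
    if is_dynamic:
        priority += 1.0
    return priority

updates = [
    {"id": "npc",        "angle": 10.0, "dynamic": True},
    {"id": "background", "angle": 70.0, "dynamic": False},
]
updates.sort(key=lambda u: stream_priority(u["angle"], u["dynamic"]), reverse=True)
print([u["id"] for u in updates])   # the dynamic, in-view object is sent first
```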

[0193] In addition to managing a data connection between the computing network 5 and a user device 12, the gateway component 14 may store and/or process data that may be presented to the user device 12. For example, the gateway component 14 may, in some embodiments, receive compressed data describing, for example, graphical objects to be rendered for viewing by a user, from the computing network 5 and perform advanced rendering techniques to alleviate the data load transmitted to the user device 12 from the computing network 5. In another example, in which gateway 14 is a separate device, the gateway 14 may store and/or process data for a local instance of an object rather than transmitting the data to the computing network 5 for processing.

[0194] Referring now to FIG. 3, virtual worlds may be experienced by one or more users in various formats that may depend upon the capabilities of the user’s device. In some embodiments, the user device 12 may include, for example, a smart phone, tablet device, head-mounted display (HMD), gaming console, or a wearable device. Generally, the user device will include a processor for executing program code stored in memory on the device, coupled with a display, and a communications interface.

[0195] An example embodiment of a user device is illustrated in FIG. 3, wherein the user device comprises a mobile, wearable device, namely a head-mounted display system 30. In accordance with an embodiment of the present disclosure, the head-mounted display system 30 includes a user interface 37, user-sensing system 34, environment-sensing system 36, and a processor 38. Although the processor 38 is shown in FIG. 3 as an isolated component separate from the head-mounted system 30, in an alternate embodiment, the processor 38 may be integrated with one or more components of the head-mounted system 30, or may be integrated into other system 10 components such as, for example, the gateway 14, as shown in FIG. 1 and FIG. 2.

[0196] The user device 30 presents to the user an interface 37 for interacting with and experiencing a digital world. Such interaction may involve the user and the digital world, one or more other users interfacing the system 10, and objects within the digital world. The interface 37 generally provides image and/or audio sensory input (and in some embodiments, physical sensory input) to the user. Thus, the interface 37 may include speakers (not shown) and a display component 33 capable, in some embodiments, of enabling stereoscopic 3D viewing and/or 3D viewing which embodies more natural characteristics of the human vision system.

[0197] In some embodiments, the display component 33 may comprise a transparent interface (such as a clear OLED) which, when in an “off” setting, enables an optically correct view of the physical environment around the user with little-to-no optical distortion or computing overlay. As discussed in greater detail below, the interface 37 may include additional settings that allow for a variety of visual/interface performance and functionality.

[0198] The user-sensing system 34 may include, in some embodiments, one or more sensors 31 operable to detect certain features, characteristics, or information related to the individual user wearing the system 30. For example, in some embodiments, the sensors 31 may include a camera or optical detection/scanning circuitry capable of detecting real-time optical characteristics/measurements of the user.

[0199] The real-time optical characteristics/measurements of the user may, for example, be one or more of the following: pupil constriction/dilation, angular measurement/positioning of each pupil, spherocity, eye shape (as eye shape changes over time) and other anatomic data. This data may provide, or be used to calculate, information (e.g., the user’s visual focal point) that may be used by the head-mounted system 30 and/or interface system 10 to optimize the user’s viewing experience. For example, in one embodiment, the sensors 31 may each measure a rate of pupil contraction for each of the user’s eyes. This data may be transmitted to the processor 38 (or the gateway component 14 or to a server 11), wherein the data is used to determine, for example, the user’s reaction to a brightness setting of the interface display 33.

[0200] The interface 37 may be adjusted in accordance with the user’s reaction by, for example, dimming the display 33 if the user’s reaction indicates that the brightness level of the display 33 is too high. The user-sensing system 34 may include other components other than those discussed above or illustrated in FIG. 3. For example, in some embodiments, the user-sensing system 34 may include a microphone for receiving voice input from the user. The user sensing system 34 may also include one or more infrared camera sensors, one or more visible spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, GPS sensors, ultrasonic emitters and detectors and haptic interfaces.
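
The brightness feedback described above might look like the following sketch, which maps a measured pupil diameter to a small brightness adjustment; the comfortable range and step size are illustrative assumptions, since the patent only describes using the pupils' reaction to judge whether the display is too bright.

```python
def adjust_brightness(current_level: float, pupil_diameter_mm: float,
                      comfortable_range_mm: tuple = (3.0, 6.0)) -> float:
    """Nudge display brightness based on a measured pupil diameter.
    The mapping and thresholds are illustrative, not taken from the patent."""
    low, high = comfortable_range_mm
    if pupil_diameter_mm < low:          # strongly constricted: display likely too bright
        return max(0.0, current_level - 0.1)
    if pupil_diameter_mm > high:         # dilated: the display can be brightened
        return min(1.0, current_level + 0.1)
    return current_level

level = 0.8
level = adjust_brightness(level, pupil_diameter_mm=2.4)   # dims the display a little
print(level)
```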

[0201] The environment-sensing system 36 includes one or more sensors 32 for obtaining data from the physical environment around a user. Objects or information detected by the sensors may be provided as input to the user device. In some embodiments, this input may represent user interaction with the virtual world. For example, a user viewing a virtual keyboard on a desk may gesture with fingers as if typing on the virtual keyboard. The motion of the fingers may be captured by the sensors 32 and provided to the user device or system as input, wherein the input may be used to change the virtual world or create new virtual objects.

[0202] For example, the motion of the fingers may be recognized (e.g., using a software program of the processor, etc.) as typing, and the recognized gesture of typing may be combined with the known location of the virtual keys on the virtual keyboard. The system may then render a virtual monitor displayed to the user (or other users interfacing the system) wherein the virtual monitor displays the text being typed by the user.
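As an illustration of this step only, the following Python sketch combines a detected fingertip contact with known virtual-key locations and appends the resulting character to a virtual monitor's text buffer. The key layout, distance threshold, and class names are assumptions made for this sketch, not details from the patent.

```python
# Illustrative sketch only: mapping recognized fingertip contacts to the
# nearest virtual key and accumulating text for a rendered virtual monitor.

import math

# Assumed centers of a few virtual keys on the desk plane (x, y in metres).
VIRTUAL_KEYS = {
    "h": (0.10, 0.02),
    "i": (0.14, 0.02),
    "!": (0.22, 0.04),
}

class VirtualMonitor:
    """Accumulates the text 'typed' on the virtual keyboard for rendering."""
    def __init__(self) -> None:
        self.text = ""

    def append(self, char: str) -> None:
        self.text += char

def resolve_keypress(touch_xy, max_dist=0.02):
    """Return the virtual key whose centre is nearest the detected fingertip
    contact point, or None if no key is close enough."""
    best_key, best_dist = None, max_dist
    for key, (kx, ky) in VIRTUAL_KEYS.items():
        dist = math.hypot(touch_xy[0] - kx, touch_xy[1] - ky)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key

monitor = VirtualMonitor()
for contact in [(0.101, 0.021), (0.139, 0.019)]:   # contacts reported by sensors 32
    key = resolve_keypress(contact)
    if key:
        monitor.append(key)
print(monitor.text)   # "hi"
```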

[0203] The sensors 32 may include, for example, a generally outward-facing camera or a scanner for interpreting scene information, for example, through continuously and/or intermittently projected infrared structured light. The environment-sensing system 36 may be used for mapping one or more elements of the physical environment around the user by detecting and registering the local environment, including static objects, dynamic objects, people, gestures and various lighting, atmospheric and acoustic conditions. Thus, in some embodiments, the environment-sensing system 36 may include image-based 3D reconstruction software embedded in a local computing system (e.g., gateway component 14 or processor 38) and operable to digitally reconstruct one or more objects or information detected by the sensors 32.
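Purely as an illustration of what "detecting and registering the local environment" might look like in software, the sketch below keeps a minimal registry of detected objects and distinguishes static from dynamic entries. The field names and the static/dynamic flag are assumptions introduced here.

```python
# Illustrative sketch only: a minimal registry into which objects detected by
# the outward-facing sensors might be entered while mapping the environment.

from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    label: str                     # e.g. "table", "person"
    position: tuple                # (x, y, z) in the device's map frame
    is_dynamic: bool = False       # people/gestures vs. static furniture

@dataclass
class EnvironmentMap:
    objects: list = field(default_factory=list)

    def register(self, obj: DetectedObject) -> None:
        self.objects.append(obj)

    def static_objects(self):
        return [o for o in self.objects if not o.is_dynamic]

env_map = EnvironmentMap()
env_map.register(DetectedObject("table", (1.0, 0.0, -0.5)))
env_map.register(DetectedObject("person", (2.0, 0.0, 0.3), is_dynamic=True))
print([o.label for o in env_map.static_objects()])   # ['table']
```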

[0204] In one example embodiment, the environment-sensing system 36 provides one or more of the following: motion capture data (including gesture recognition), depth sensing, facial recognition, object recognition, unique object feature recognition, voice/audio recognition and processing, acoustic source localization, noise reduction, infrared or similar laser projection, as well as monochrome and/or color CMOS sensors (or other similar sensors), field-of-view sensors, and a variety of other optical-enhancing sensors.

[0205] It should be appreciated that the environment-sensing system 36 may include components other than those discussed above or illustrated in FIG. 3. For example, in some embodiments, the environment-sensing system 36 may include a microphone for receiving audio from the local environment. The environment-sensing system 36 may also include one or more infrared camera sensors, one or more visible spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, GPS sensors, ultrasonic emitters and detectors, and haptic interfaces.

[0206] As discussed above, the processor 38 may, in some embodiments, be integrated with other components of the head-mounted system 30, integrated with other components of the interface system 10, or may be an isolated device (wearable or separate from the user) as shown in FIG. 3. The processor 38 may be connected to various components of the head-mounted system 30 and/or components of the interface system 10 through a physical, wired connection, or through a wireless connection such as, for example, mobile network connections (including cellular telephone and data networks), Wi-Fi or Bluetooth.

[0207] In one or more embodiments, the processor 38 may include a memory module, integrated and/or additional graphics processing unit, wireless and/or wired internet connectivity, and codec and/or firmware capable of transforming data from a source (e.g., the computing network 5, the user-sensing system 34, the environment-sensing system 36, or the gateway component 14) into image and audio data, wherein the images/video and audio may be presented to the user via the interface 37.

[0208] In one or more embodiments, the processor 38 handles data processing for the various components of the head-mounted system 30 as well as data exchange between the head-mounted system 30 and the gateway component 14 and, in some embodiments, the computing network 5. For example, the processor 38 may be used to buffer and process data streaming between the user and the computing network 5, thereby enabling a smooth, continuous and high fidelity user experience.
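For illustration only, the sketch below shows one simple way such buffering of streamed data might be modelled: a bounded queue that absorbs bursty network delivery so content can be consumed at a steady rate. The buffer size and API are assumptions, not details from the specification.

```python
# Illustrative sketch only: a bounded buffer of the kind a processor might use
# to smooth data streaming between the user device and the computing network.

from collections import deque
from typing import Optional

class StreamBuffer:
    """Holds incoming frames so consumption can continue at a steady rate
    even when network delivery arrives in bursts."""
    def __init__(self, capacity: int = 8):
        self._frames = deque(maxlen=capacity)   # oldest frames dropped when full

    def push(self, frame: bytes) -> None:
        self._frames.append(frame)

    def pop(self) -> Optional[bytes]:
        return self._frames.popleft() if self._frames else None

buf = StreamBuffer(capacity=3)
for i in range(5):                     # bursty arrival of five frames
    buf.push(f"frame-{i}".encode())
print(buf.pop())                       # b'frame-2' (older frames were dropped)
```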

[0209] In some embodiments, the processor 38 may process data at a rate sufficient to achieve anywhere from 8 frames/second at 320×240 resolution to 24 frames/second at high-definition resolution (1280×720), or greater, such as 60-120 frames/second at 4K resolution and higher (10K+ resolution and 50,000 frames/second). Additionally, the processor 38 may store and/or process data that may be presented to the user, rather than streamed in real-time from the computing network 5.
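As back-of-the-envelope arithmetic only, the following sketch computes the raw (uncompressed) pixel throughput for a few of the operating points mentioned above, assuming 24-bit RGB; the bit depth and the specific 3840×2160 "4K" format are assumptions not stated in the specification.

```python
# Illustrative arithmetic only: uncompressed video bandwidth at several of the
# operating points cited above, assuming 24 bits per pixel.

def raw_throughput_gbps(width: int, height: int, fps: float,
                        bits_per_pixel: int = 24) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

for label, (w, h, fps) in {
    "320x240 @ 8 fps":     (320, 240, 8),
    "1280x720 @ 24 fps":   (1280, 720, 24),
    "3840x2160 @ 120 fps": (3840, 2160, 120),   # one common "4K" format
}.items():
    print(f"{label}: {raw_throughput_gbps(w, h, fps):.3f} Gbit/s")
# 320x240 @ 8 fps:     0.015 Gbit/s
# 1280x720 @ 24 fps:   0.531 Gbit/s
# 3840x2160 @ 120 fps: 23.888 Gbit/s
```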

[0210] For example, the processor 38 may, in some embodiments, receive compressed data from the computing network 5 and perform advanced rendering techniques (such as lighting or shading) to alleviate the data load transmitted to the user device 12 from the computing network 5. In another example, the processor 38 may store and/or process local object data rather than transmitting the data to the gateway component 14 or to the computing network 5.

[0211] The head-mounted system 30 may, in some embodiments, include various settings, or modes, that allow for a variety of visual/interface performance and functionality. The modes may be selected manually by the user, or automatically by components of the head-mounted system 30 or the gateway component 14. As previously described, one example mode of the head-mounted system 30 includes an “off” mode, wherein the interface 37 provides substantially no digital or virtual content. In the off mode, the display component 33 may be transparent, thereby enabling an optically correct view of the physical environment around the user with little-to-no optical distortion or computing overlay.

[0212] In one example embodiment, the head-mounted system 30 includes an “augmented” mode, wherein the interface 37 provides an augmented reality interface. In the augmented mode, the interface display 33 may be substantially transparent, thereby allowing the user to view the local, physical environment. At the same time, virtual object data provided by the computing network 5, the processor 38, and/or the gateway component 14 is presented on the display 33 in combination with the physical, local environment. The following section will go through various embodiments of example head-mounted user systems that may be used for virtual and augmented reality purposes.
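For illustration only, the sketch below models the "off" and "augmented" modes described above as a simple enumeration that determines whether any virtual content is composed over the see-through view. The enum and function names are invented for this sketch; the patent describes the modes, not this code.

```python
# Illustrative sketch only: modelling the "off" and "augmented" interface modes.

from enum import Enum, auto

class InterfaceMode(Enum):
    OFF = auto()         # transparent display, no digital or virtual content
    AUGMENTED = auto()   # transparent display plus virtual object data

def compose_frame(mode: InterfaceMode, virtual_objects: list) -> list:
    """Return the virtual content to overlay on the see-through view."""
    if mode is InterfaceMode.OFF:
        return []                      # optically correct pass-through only
    return virtual_objects             # physical view plus virtual objects

print(compose_frame(InterfaceMode.OFF, ["virtual monitor"]))        # []
print(compose_frame(InterfaceMode.AUGMENTED, ["virtual monitor"]))  # ['virtual monitor']
```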

User Systems

[0213] Referring to FIGS. 4A-4D, some general componentry options are illustrated. In the portions of the detailed description which follow the discussion of FIGS. 4A-4D, various systems, subsystems, and components are presented for addressing the objectives of providing a high-quality, comfortably-perceived display system for human VR and/or AR.

[0214] As shown in FIG. 4A, a user 60 of a head-mounted augmented reality system (“AR system”) is depicted wearing a frame 64 structure coupled to a display system 62 positioned in front of the eyes of the user. A speaker 66 is coupled to the frame 64 in the depicted configuration and positioned adjacent the ear canal of the user 60 (in one embodiment, another speaker, not shown, is positioned adjacent the other ear canal of the user to provide for stereo/shapeable sound control). The display 62 is operatively coupled 68, such as by a wired lead or wireless connectivity, to a local processing and data module 70, which may be mounted in a variety of configurations, such as fixedly attached to the frame 64, fixedly attached to a helmet or hat 80 as shown in the embodiment of FIG. 4B, embedded in headphones, removably attached to the torso 82 of the user 60 in a backpack-style configuration (e.g., placed in a backpack, not shown) as shown in the embodiment of FIG. 4C, or removably attached to the hip 84 of the user 60 in a belt-coupling style configuration as shown in the embodiment of FIG. 4D.

[0215] The local processing and data module 70 may comprise a power-efficient processor or controller, as well as digital memory, such as flash memory, both of which may be utilized to assist in the processing, caching, and storage of data (a) captured from sensors which may be operatively coupled to the frame 64, such as image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, GPS units, radio devices, and/or gyros; and/or (b) acquired and/or processed using the remote processing module 72 and/or remote data repository 74, possibly for passage to the display 62 after such processing or retrieval.

[0216] The local processing and data module 70 may be operatively coupled (76, 78), such as via wired or wireless communication links, to the remote processing module 72 and remote data repository 74 such that these remote modules (72, 74) are operatively coupled to each other and available as resources to the local processing and data module 70. The processing module 70 may control the optical systems and other systems of the AR system, and execute one or more computing tasks, including retrieving data from memory or one or more databases (e.g., a cloud-based server) in order to provide virtual content to the user.

[0217] In one embodiment, the remote processing module 72 may comprise one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. In one embodiment, the remote data repository 74 may comprise a relatively large-scale digital data storage facility, which may be available through the internet or other networking configuration in a “cloud” resource configuration. In one embodiment, all data is stored and all computation is performed in the local processing and data module, allowing fully autonomous use independent of any remote modules.
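For illustration only, the sketch below shows one way the local processing and data module might prefer locally cached data and fall back to a remote repository, or operate fully autonomously with remote resources disabled. All names and the caching policy are assumptions introduced for this sketch.

```python
# Illustrative sketch only: local-first data access with an optional remote
# repository, and a fully autonomous (local-only) configuration.

from typing import Callable, Dict, Optional

class LocalDataModule:
    def __init__(self, fetch_remote: Optional[Callable[[str], bytes]] = None):
        self._cache: Dict[str, bytes] = {}       # local digital memory (e.g. flash)
        self._fetch_remote = fetch_remote        # None => fully autonomous operation

    def get(self, key: str) -> Optional[bytes]:
        if key in self._cache:                   # serve from local storage first
            return self._cache[key]
        if self._fetch_remote is None:           # autonomous: no remote access
            return None
        data = self._fetch_remote(key)           # e.g. a cloud repository
        self._cache[key] = data                  # cache locally for later reuse
        return data

# With a (stubbed) remote repository available:
module = LocalDataModule(fetch_remote=lambda k: f"asset:{k}".encode())
print(module.get("tree_model"))    # b'asset:tree_model' (fetched, then cached)

# Fully autonomous configuration, no remote modules:
offline = LocalDataModule(fetch_remote=None)
print(offline.get("tree_model"))   # None (only locally stored data is available)
```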
……
……
……
