
Qualcomm Patent | Content stabilization for head-mounted displays

Publication Number: 20210183343

Publication Date: 2021-06-17

Applicant: Qualcomm

Abstract

A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance or change in distance between the head-mounted device and the point of reference on the user’s face. The point of reference on the user’s face may be one or both of the user’s eyes.

Claims

  1. A head-mounted device for use in an augmented reality system, comprising: a memory; a sensor; and a processor coupled to the memory and the sensor, wherein the processor is configured to: receive information from the sensor, wherein the information is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position.

  2. The head-mounted device of claim 1, wherein the information received from the sensor relates to a position of the head-mounted device relative to an eye of the user, and wherein the processor is configured to adjust the rendering of the item of virtual content based on the position of the head-mounted device relative to the eye of the user.

  3. The head-mounted device of claim 2, wherein the sensor comprises an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face.

  4. The head-mounted device of claim 2, wherein the sensor comprises an ultrasound sensor configured to emit a pulse of ultrasound towards the user’s face.

  5. The head-mounted device of claim 2, wherein the sensor comprises a first camera.

  6. The head-mounted device of claim 5, wherein the sensor further comprises a second camera.

  7. The head-mounted device of claim 1, further comprising an image rendering device coupled to the processor and configured to render the item of virtual content.

  8. The head-mounted device of claim 1, wherein the processor is further configured to: determine an angle to the user’s eyes from an image rendering device on the head-mounted device; and adjust the rendering of the item of virtual content based on the determined angle to the user’s eyes and the determined distance between the head-mounted device and the reference point on the user’s face.

  9. A method of adjusting rendering of an item of virtual content in an augmented reality system to compensate for movement of a head-mounted device on a user, comprising: determining, by a processor, a position of the head-mounted device relative to a point of reference on the user’s face; and adjusting, by the processor, the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face.

  10. The method of claim 9, wherein the point of reference on the user’s face comprises an eye of the user.

  11. The method of claim 9, further comprising receiving information from a sensor that relates to a position of the head-mounted device relative to an eye of the user, wherein adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises adjusting the rendering of the item of virtual content based on the position of the head-mounted device relative to the eye of the user.

  12. The method of claim 11, wherein determining the position of the head-mounted device relative to the point of reference on the user’s face comprises determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face.

  13. The method of claim 11, wherein determining the position of the head-mounted device relative to the point of reference on the user’s face comprises determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an ultrasound sensor configured to emit a pulse of ultrasound towards the user’s face.

  14. The method of claim 9, wherein: determining the position of the head-mounted device relative to a point of reference on the user’s face comprises determining, by the processor, a change in the position of the head-mounted device relative to the point of reference on the user’s face; and adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises adjusting, by the processor, the rendering of the item of virtual content based on the change in position of the head-mounted device relative to the point of reference on the user’s face.

  15. The method of claim 9, wherein determining the position of the head-mounted device relative to a point of reference on the user’s face comprises performing a time-of-flight measurement by the processor based on signals emitted by a sensor on the head-mounted device.

  16. The method of claim 9, wherein determining the position of the head-mounted device relative to a point of reference on the user’s face comprises performing a triangulation operation by the processor based on images captured by an imaging sensor on the head-mounted device.

  17. A non-volatile processor-readable medium having stored thereon processor-executable instructions configured to cause a processor of a head-mounted device to perform operations comprising: determining a position of the head-mounted device relative to a point of reference on a user’s face; and adjusting a rendering of an item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face.

  18. The non-volatile processor-readable medium of claim 17, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations further comprising receiving information from a sensor relating to a position of the head-mounted device relative to an eye of the user, and wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises adjusting rendering of the item of virtual content based on the position of the head-mounted device relative to the eye of the user.

  19. The non-volatile processor-readable medium of claim 18, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that determining the position of the head-mounted device relative to the point of reference on the user’s face comprises determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face.

  20. The non-volatile processor-readable medium of claim 18, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that determining the position of the head-mounted device relative to the point of reference on the user’s face comprises determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an ultrasound sensor configured to emit a pulse of ultrasound towards the user’s face.

  21. The non-volatile processor-readable medium of claim 17, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that: determining the position of the head-mounted device relative to a point of reference on the user’s face comprises determining a change in the position of the head-mounted device relative to the point of reference on the user’s face; and adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises adjusting the rendering of the item of virtual content based on the change in position of the head-mounted device relative to the point of reference on the user’s face.

  22. The non-volatile processor-readable medium of claim 17, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that determining, by a processor, the position of the head-mounted device relative to a point of reference on the user’s face comprises performing a time-of-flight measurement by the processor based on signals emitted by a sensor on the head-mounted device.

  23. The non-volatile processor-readable medium of claim 17, wherein the stored processor-executable instructions are configured to cause a processor of a head-mounted device to perform operations such that determining, by a processor, the position of the head-mounted device relative to a point of reference on the user’s face comprises performing a triangulation operation by the processor based on images captured by an imaging sensor on the head-mounted device.

  24. A head-mounted device, comprising: means for determining a position of the head-mounted device relative to a point of reference on a user’s face; means for rendering an item of virtual content; and means for adjusting rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face.

  25. The head-mounted device of claim 24, wherein means for determining a position of the head-mounted device relative to a point of reference on the user’s face comprises means for determining a position of the head-mounted device relative to an eye of a user, and wherein means for adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises means for adjusting the rendering of the item of virtual content based on the position of the head-mounted device relative to the eye of the user.

  26. The head-mounted device of claim 25, wherein means for determining the position of the head-mounted device relative to the point of reference on the user’s face comprises an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face.

  27. The head-mounted device of claim 25, wherein means for determining the position of the head-mounted device relative to the point of reference on the user’s face comprises means for emitting a pulse of ultrasound towards the user’s face.

  28. The head-mounted device of claim 24, wherein: means for determining the position of the head-mounted device relative to a point of reference on the user’s face comprises means for determining a change in the position of the head-mounted device relative to the point of reference on the user’s face; and means for adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face comprises means for adjusting the rendering of the item of virtual content based on the change in position.

  29. The head-mounted device of claim 24, wherein means for determining the position of the head-mounted device relative to a point of reference on the user’s face comprises means for performing a time-of-flight measurement by the processor based on signals emitted by a sensor on the head-mounted device.

  30. The head-mounted device of claim 24, wherein means for determining the position of the head-mounted device relative to a point of reference on the user’s face comprises means for performing a triangulation operation by the processor based on images captured by an imaging sensor on the head-mounted device.

Description

BACKGROUND

[0001] In recent years, augmented reality software applications that combine real-world images from a user’s physical environment with computer-generated imagery or virtual objects (VOs) have grown in popularity and use. An augmented reality software application may add graphics, sounds, and/or haptic feedback to the natural world that surrounds a user of the application. Images, video streams and information about people and/or objects may be presented to the user superimposed on the visual world as an augmented scene on a wearable electronic display or head-mounted device (e.g., smart glasses, augmented reality glasses, etc.).

SUMMARY

[0002] Various aspects include head-mounted devices for use in an augmented reality system that are configured to compensate for movement of the device on a user’s face. In various aspects a head-mounted device may include a memory, a sensor, and a processor coupled to the memory and the sensor, in which the processor may be configured to receive information from the sensor, in which the information may be indicative of a position of the head-mounted device relative to a reference point on a face of a user, and adjust a rendering of an item of virtual content based on the position.

[0003] In some aspects, the information received from the sensor relates to a position of the head-mounted device relative to an eye of the user, in which the processor may be configured to adjust the rendering of the item of virtual content based on the position of the head-mounted device relative to the eye of the user. In some aspects, the sensor may include an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face. In some aspects, the sensor may include an ultrasound sensor configured to emit a pulse of ultrasound towards the user’s face and determine the position of the head-mounted device relative to the reference point on a user’s face. In some aspects, the sensor may include a first camera, and in some aspects the sensor further may include a second camera. In some aspects, the head-mounted device may further include an image rendering device coupled to the processor and configured to render the item of virtual content.

[0004] In some aspects, the processor may be further configured to determine an angle to the user’s eyes from an image rendering device on the head-mounted device and adjust the rendering of the item of virtual content based on the determined angle to the user’s eyes and the determined distance between the head-mounted device and the point of reference on the user’s face.

[0005] Some aspects may include a method of adjusting rendering of an item of virtual content in an augmented reality system to compensate for movement of a head-mounted device on a user, which may include determining a position of the head-mounted device relative to a point of reference on the user’s face, and adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face. In some aspects, the point of reference on the user’s face may include an eye of the user.

[0006] Some aspects may further include receiving information from a sensor that relates to a position of the head-mounted device relative to an eye of the user, in which adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face may include adjusting the rendering of the item of virtual content based on a distance and angle to an eye of the user determined based on the position of the head-mounted device relative to the eye of the user.

[0007] In some aspects, determining the position of the head-mounted device relative to the point of reference on the user’s face may include determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an infrared (IR) sensor and IR light source configured to emit IR light towards the user’s face.

[0008] In some aspects, determining the position of the head-mounted device relative to a point of reference on the user’s face may include determining the position of the head-mounted device relative to the point of reference on the user’s face based on information received from an ultrasound sensor configured to emit a pulse of ultrasound towards the user’s face.

[0009] In some aspects, determining the position of the head-mounted device relative to a point of reference on the user’s face may include determining a change in the position of the head-mounted device relative to the point of reference on the user’s face, and adjusting the rendering of the item of virtual content based on the determined position of the head-mounted device relative to the point of reference on the user’s face may include adjusting the rendering of the item of virtual content based on the change in position of the head-mounted device relative to the point of reference on the user’s face.

[0010] In some aspects, determining the position of the head-mounted device relative to a point of reference on the user’s face may include performing a time-of-flight measurement by the processor based on signal emitted by a sensor on the head-mounted device. In some aspects, determining the position of the head-mounted device relative to a point of reference on the user’s face may include performing a triangulation operation by the processor based on images captured by an imaging sensor on the head-mounted device.

[0011] Further aspects include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor in a head-mounted device or an associated computing device to perform operations of any of the methods summarized above. Further aspects include a head-mounted device or an associated computing device having means for accomplishing functions of any of the methods summarized above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.

[0013] FIG. 1A is an illustration of a head-mounted device (e.g., augmented reality glasses) that may be configured to perform vision-based registration operations that account for changes in distances/angles between the cameras of the head-mounted device and the eyes of the user in accordance with various embodiments.

[0014] FIG. 1B is a system block diagram that illustrates the computer architecture and sensors that could be included in a head-mounted device that is configured to perform vision-based registration operations that account for changes in distances/angles between the cameras of the head-mounted device and the eyes of the user in accordance with various embodiments.

[0015] FIGS. 2A-2E are illustrations of imaging systems suitable for displaying electronically generated images or items of virtual content on a heads-up display system.

[0016] FIGS. 3-5 are processor flow diagrams illustrating methods of performing vision-based registration operations that account for changes in distances/angles between the cameras of the head-mounted device and the eyes of the user in accordance with various embodiments.

[0017] FIG. 6 is a component block diagram of a mobile device suitable for implementing some embodiments.

[0018] FIG. 7 is a component diagram of an example computing device suitable for use with the various embodiments.

DETAILED DESCRIPTION

[0019] Various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.

[0020] Augmented reality systems work by displaying an element of virtual content so that it appears on, near or associated with an object or volume in the real world. The term “augmented reality system” refers to any system that renders items of virtual content within a scene that includes real-world objects, including systems that render items of virtual content so they appear suspended within the real world, mixed reality systems, and video pass-through systems that render images of real-world objects (e.g., obtained by an outward-facing camera) combined with items of virtual content. In a common form of augmented reality system, a user wears a head-mounted device that includes an outward facing camera that captures images of the real world; a processor (which may be a separate computing device or a processor within the head-mounted device) that generates items of virtual content (e.g., images, text, icons, etc.) and uses images from the outward facing camera to determine how or where the items of virtual content should be rendered to appear on, near or associated with selected real world object(s); and an image rendering device (e.g., a display or projector) that renders images so that, to the user, the items of virtual content appear to be in the determined location(s) with respect to real world objects.

[0021] For ease of describing various embodiments, the positioning of items of virtual content with respect to selected real world object(s) is referred to herein as “registration” and an item of virtual content is “registered” with a real-world object when the item appears to the user to be on, near or associated with selected real world object(s). As used herein, an item of virtual content is “associated with” a real-world object when the augmented reality system attempts to render the item such that the item appears registered (i.e., appearing to be on, near or remaining at a fixed relative position) with the real-world object.

[0022] In registering items of virtual content associated with distant real world objects, a head-mounted device renders the virtual content so that it appears to the user to be at the same distance as the real world objects, even though the virtual content is being rendered by an image rendering device (e.g., projector or display) that is only a few centimeters from the user’s eyes. This may be accomplished through the use of lenses in the image rendering device that refocus light from the projection or display of virtual content so that the light is focused by the lens of the user’s eye on the retina when the user is looking at the distant real-world objects. Thus, even though the image of virtual content is generated within millimeters of a user’s eyes, the virtual content appears in focus as if it were at the same distance from the user as the real-world objects with which it is associated by the augmented reality system.

[0023] Conventional head-mounted devices used for augmented reality applications typically include a thick or cumbersome nose bridge and frame, or are designed to be secured to the head of the user via a head strap. As augmented reality software applications continue to grow in popularity and use, it is expected that there will be an increased consumer demand for new types of head-mounted devices that have thinner or lighter nose bridges and frames, and which may be worn without a head strap, similar to reading glasses or spectacles. Due to these and other new characteristics, it may be more likely that the nose bridge will slide down the nose of the user, that the frame will move or shift on the face of the user, and that the user will frequently adjust the location, position and orientation of the head-mounted device on the user’s nose and face (similar to how people currently adjust their reading glasses or spectacles). Such movements and adjustments of the devices may change the positions, orientations, distances and angles between the camera of the head-mounted device, the eyes of the user, and the electronic display of head-mounted device.

[0024] In a head-mounted device, projecting or lensing items of virtual content from an image rendering device (e.g., a projector or display) close to the user’s eyes, so that the virtual content appears registered with a real world object and in focus when the user looks at distant real world objects, involves the use of waveguides, laser projections, lenses or projectors. Such rendering techniques make the apparent position of rendered content sensitive to changes in the distance and angles between the user’s eyes and the image rendering device (e.g., projector or display). If such distance and angles remain fixed, then the items of virtual content may remain in focus and appear to remain registered with the real-world objects in a fixed relative position determined by the augmented reality system. However, if the distance and angles between the user’s eye and the image rendering device (e.g., projector or display) change (e.g., if the head-mounted device slips down the user’s nose or the user repositions the head-mounted device on the user’s face), the apparent depth and/or location of items of virtual content will change, while the distance to and location of real world objects do not appear to change (i.e., items of virtual content appear to move with respect to real-world objects). Due to the short distance from the image rendering device (e.g., projector or display) to the user’s eye compared to the distance to real world objects, even small changes in distance and angle of the head-mounted device will appear to move items of virtual content through large angles and distances compared to the distant objects.
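The size of this effect is easy to sketch. The following snippet (illustrative numbers only, not taken from the patent) compares the apparent angular shift of content rendered roughly 20 mm from the eye with that of a real object 5 m away when the device slips 2 mm sideways:

```python
import math

def apparent_shift_deg(lateral_mm: float, distance_mm: float) -> float:
    """Angular shift, in degrees, of a point viewed at distance_mm
    when the viewing geometry is displaced laterally by lateral_mm."""
    return math.degrees(math.atan2(lateral_mm, distance_mm))

# A 2 mm lateral slip of a display sitting ~20 mm from the eye moves the
# rendered content through a large visual angle...
shift_virtual = apparent_shift_deg(2.0, 20.0)

# ...while the same 2 mm slip is negligible against a real object 5 m away.
shift_real = apparent_shift_deg(2.0, 5000.0)
```

The virtual content shifts by several degrees while the distant object shifts by hundredths of a degree, which is why uncorrected slippage makes virtual items appear to jump relative to the real world.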

[0025] Even when not directly interacting with the virtual objects, the user may make subtle movements (head, neck or facial movements, etc.) and abrupt movements (e.g., running, jumping, bending over, etc.) that may impact the apparent positions and orientations of items of virtual content with respect to real-world objects. Such user movements may also cause a head-mounted device to move or shift on the user’s nose or face, which changes the distance and angle between the virtual object image rendering device (e.g., projector or display) and the user’s eyes. This may cause apparent changes in positions, apparent distances and/or orientations of the items of virtual content with respect to real-world objects. Similarly, when the user manually adjusts the position of the head-mounted device on the user’s face, any movement changes the orientation, distance and/or position of the display optics relative to the user’s eyes, depending upon the amount and direction/rotation of movement. These movements of the head-mounted device relative to the user’s eyes can result in the virtual objects appearing at a different distance than real world objects (e.g., appearing out of focus when the user looks at the real-world objects), as well as at different angular locations compared to the real world objects. Such sensitivity of virtual object apparent distance and angular position to movement of the head-mounted device on the user’s face may impact the fidelity of the augmented scene and degrade the user experience.

[0026] Some conventional solutions attempt to improve the accuracy of the registration by collecting information from external sensing devices, such as magnetic or ultrasonic sensors communicatively coupled to the head-mounted device, to determine position and orientation relative to the user’s eyes, and use this information during the positioning phase of the vision-based registration to adjust the locations in which the virtual objects will be rendered. For example, a conventional head-mounted device may include gyroscopes and accelerometers that can sense rotations of the device through three axes of rotation and movement through three dimensions (i.e., 6 degrees of freedom). While these conventional solutions, particularly in combination with images provided by outward facing cameras, provide information to the augmented reality system that enables realigning (e.g., updating the registration of) items of virtual content with associated real world objects, such sensors do not account for movements of the head-mounted device relative to the eyes of the user. Rather, most conventional vision-based registration techniques/technologies presume a fixed position and orientation of the outward facing camera and the inward facing image rendering device (e.g., waveguide, projector or display) relative to the user’s eyes. Consequently, conventional augmented reality head-mounted devices may exhibit frequent changes in the apparent distance and angular position of items of virtual content with respect to distant objects due to movements of the devices on the user’s face.

[0027] In overview, various embodiments include a head-mounted device that is equipped with both outward facing world-view image sensors/cameras and inward facing gaze-view sensors/cameras. The inward facing gaze-view sensors/cameras may be configured to determine or measure changes in the distance and angle (referred to herein as “position” as defined below) between the image rendering device (e.g., projector or display) and the user’s eyes. In a typical head-mounted device, the image rendering device (e.g., projector or display) will be a fixed distance from the outward facing camera (or more accurately, the image plane of the outward facing camera). Thus, while the augmented reality system determines the appropriate rendering of items of virtual content to appear associated with (i.e., appear to be on or located near) a real world object or objects, this process presumes a fixed distance and angular relationship between the outward facing camera image plane and the image rendering device (e.g., projector or display) for this purpose. To correct for changes in position of the image rendering device relative to the user’s eyes due to movement of the head-mounted device on the user’s face, a processor within or in communication with (e.g., via a wireless or wired link) the head-mounted device may be configured to use distance and angle measurements from a sensor configured to determine changes in distance and angle to the user’s face or eyes, and adjust the rendering of an item of virtual content (e.g., augmented imagery) to account for changes in distances/angles between the head-mounted device and the eyes of the user. Such adjustments may function to stabilize the apparent location of the item of virtual content with respect to a distant real world object so that the virtual content remains in the same apparent location with respect to the observed real world as determined by the augmented reality system when the head-mounted device shifts on the user’s head.

[0028] In some embodiments, the head-mounted device may be configured to determine the distance and angle (or change in distance and/or angle) between a point of registration on the head-mounted device (e.g., a distance/angle sensor) and a point of registration on the user’s face (e.g., the user’s eyes) about six axes or degrees of freedom, namely the X, Y, Z, roll, pitch and yaw axes and dimensions. For ease of reference the terms “position” and “change in position” are used herein as a general reference to the distance and angular orientation between the head-mounted device and the user’s eyes, and are intended to encompass any dimensional or angular measurement about the six axes or degrees of freedom. For example, movement of the head-mounted device down the nose of the user will result in changes in distance along the X and Z axes (for example) as well as rotation about the pitch axis, the combination of all of which may be referred to herein as a change in position of the head-mounted device relative to the user’s eyes.
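As a rough sketch, the six-degree-of-freedom position and change in position described above might be modeled as follows (the `DevicePose` type, its field names, and the numeric values are illustrative, not taken from the patent):

```python
from dataclasses import dataclass, astuple

@dataclass
class DevicePose:
    """Position of the head-mounted device relative to a reference point on
    the user's face: three translations (mm) and three rotations (degrees),
    i.e. the six degrees of freedom named in the text."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

    def delta(self, later: "DevicePose") -> "DevicePose":
        # Change in position: per-axis difference across all six DoF
        return DevicePose(*(b - a for a, b in zip(astuple(self), astuple(later))))

# The device slipping down the nose shows up as combined translation along
# two axes plus a pitch rotation, as the paragraph describes.
baseline = DevicePose()
slipped = DevicePose(x=1.5, z=-3.0, pitch=4.0)
change = baseline.delta(slipped)
```

A single `change in position` value like this is what the rendering adjustment would consume, regardless of which sensor produced it.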

[0029] Assuming the head-mounted device is rigid, there will be a constant distance and angular relationship between the outward facing image sensor, the projector or display that renders images of items of virtual content, and an inward facing sensor configured to determine the position of the head-mounted device relative to a reference point on the user’s face. For example, the sensor may measure the distance (or change in distance) and angles (or change in angles) to the point on the user’s face to determine the position (or change of position) along six axes or degrees of freedom. Further, a measurement of the position of the sensor relative to a point of registration on the user’s face can be related through a fixed geometric transformation to both the outward facing image sensor and the projector or display. Therefore, the inward-facing distance and angle measuring sensor may be positioned anywhere on the head-mounted device, and distance and angle measurements may be used by the processor to determine the position of the head-mounted device relative to the user’s eyes, and adjust the rendering of an item of virtual content so that it appears to the user to remain registered with an associated real-world object.
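
The fixed geometric transformation described above can be illustrated as a rigid homogeneous transform applied to the sensor’s measurement. The sensor-to-display offset below is a made-up stand-in for a device-specific calibration constant, not a value from the disclosure.

```python
import numpy as np

def translation(tx: float, ty: float, tz: float) -> np.ndarray:
    """4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Fixed rigid-body offset from the inward-facing sensor to the image
# rendering device, known from the (assumed rigid) device geometry.
# Values are illustrative, in millimetres.
SENSOR_TO_DISPLAY = translation(10.0, 0.0, -5.0)

def eye_in_display_frame(eye_in_sensor_frame) -> np.ndarray:
    """Map a reference point measured in the sensor frame into the display
    frame through the fixed transform, so the renderer can compensate."""
    p = np.append(np.asarray(eye_in_sensor_frame, dtype=float), 1.0)
    return (SENSOR_TO_DISPLAY @ p)[:3]

# Eye measured 55 mm in front of the sensor along Z.
eye = eye_in_display_frame([0.0, 0.0, 55.0])
```

Because the transform is constant for a rigid device, the sensor can sit anywhere on the frame: one matrix multiplication relates its measurement to the display (and, with a second fixed transform, to the outward facing camera).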

[0030] Various types of sensors may be used in various embodiments to determine the relative position, or change in position, of the head-mounted device relative to the user’s eyes. In an example embodiment, the sensor may be an inward facing infrared sensor that generates small flashes of infrared light, detects reflections of the small flashes of infrared light off the eye of the user, and determines the distance and angle (or change in distance and/or angle) between the outward facing image sensor and the eye of the user by performing a time-of-flight measurement of the detected reflections. In another example, a single visible light camera may be configured to determine changes in distance and/or angle between the outward facing image sensor and the eye of the user based on changes in observed positions of features between two or more images. In another example, two spaced apart imaging sensors (i.e., a binocular image sensor or stereo camera) may be used to determine distance through image processing, in which the processor determines the angles from each sensor to a common point of reference on the user’s face (e.g., the pupil of one eye) and calculates the distance using triangulation. In another example embodiment, the sensor may be a capacitance touch sensing circuit or circuits, which may be embedded on the interior of the head-mounted device so as to make contact with a user’s face (e.g., the nose bridge, brow region, or the temple area), and configured to output capacitance data that the processor may analyze to determine whether the device has moved or shifted on the user’s face. In another example embodiment, the sensor may be an ultrasonic transducer that generates pulses of ultrasound, detects echoes of the ultrasound pulses off the face of the user, and determines the distance (or change in distance) between the outward facing image sensor and the user’s face by performing a time-of-flight measurement of the detected echoes.
Distance or change in distance may be determined from the elapsed time between emission of the IR flash or ultrasound pulse and detection of its reflection, together with the speed of light or sound. The sensor may be configured to determine an angle to the user’s eyes, and thus also measure a change in angular orientation between the head-mounted device (and thus the outward facing camera) and the user’s eyes. The processor may then use such measurements to determine the position (or change in position) of the head-mounted device relative to the user’s eye, and determine adjustments to make to the rendering of an item of virtual content so that the item appears to remain registered with an associated real-world object.
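
The time-of-flight arithmetic above can be sketched in a few lines: the pulse travels to the face and back, so the one-way distance is half the round-trip time multiplied by the propagation speed. The echo time used in the example is illustrative.

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s, for the IR sensor
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 deg C, for ultrasound

def tof_distance(round_trip_seconds: float, speed: float) -> float:
    """One-way distance from a time-of-flight echo: the flash or pulse
    travels out and back, so halve the round trip."""
    return speed * round_trip_seconds / 2.0

# An ultrasound echo returning after ~291.5 microseconds implies a
# sensor-to-face distance of roughly 5 cm.
d_ultra = tof_distance(291.5e-6, SPEED_OF_SOUND)
```

The same helper works for the IR case; only the speed constant changes, which is why the paragraph treats light and sound sensing interchangeably.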

[0031] The head-mounted device processor may render the adjusted or updated image of the virtual content at the updated display location (i.e., distance and angle) to generate an augmented scene so that items of virtual content remain registered with real world objects as determined by the augmented reality system.
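
How large a display-location adjustment a given shift implies can be estimated with a simple pinhole-projection model. This is a simplifying sketch, not the disclosed rendering pipeline, and the focal length and angle values are illustrative assumptions.

```python
import math

def pixel_shift(angle_change_deg: float, focal_length_px: float) -> float:
    """Pixel offset needed to keep an item of virtual content registered
    with a real-world object when the display rotates by a small angle
    relative to the eye (pinhole camera model)."""
    return focal_length_px * math.tan(math.radians(angle_change_deg))

# A 1-degree pitch of the device on the nose, with an assumed 1000-pixel
# focal length, calls for shifting the rendered content by ~17.5 pixels.
shift = pixel_shift(1.0, 1000.0)
```

Even a small angular slip therefore produces a visually significant registration error, which is why the processor re-renders at an updated display location rather than ignoring device movement.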

[0032] The term “mobile device” is used herein to refer to any one or all of cellular telephones, smartphones, Internet-of-things (IoT) devices, personal or mobile multi-media players, laptop computers, tablet computers, ultrabooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, wireless gaming controllers, head-mounted devices, and similar electronic devices which include a programmable processor, a memory and circuitry for sending and/or receiving wireless communication signals to/from wireless communication networks. While the various embodiments are particularly useful in mobile devices, such as smartphones and tablets, the embodiments are generally useful in any electronic device that includes communication circuitry for accessing cellular or wireless communication networks.

[0033] The phrase “head-mounted device” and the acronym “HMD” are used herein to refer to any electronic display system that presents the user with a combination of computer-generated imagery and real-world images from a user’s physical environment (i.e., what the user would see without the glasses) and/or enables the user to view the generated image in the context of the real-world scene. Non-limiting examples of head-mounted devices include, or may be included in, helmets, eyeglasses, virtual reality glasses, augmented reality glasses, electronic goggles, and other similar technologies/devices. As described herein, a head-mounted device may include a processor, a memory, a display, one or more cameras (e.g., world-view camera, gaze-view camera, etc.), one or more six degree-of-freedom triangulation scanners, and a wireless interface for connecting with the Internet, a network, or another computing device. In some embodiments, the head-mounted device processor may be configured to perform or execute an augmented reality software application.

[0034] In some embodiments a head-mounted device may be an accessory for and/or receive information from a mobile device (e.g., desktop, laptop, smartphone, tablet computer, etc.), with all or portions of the processing being performed on the processor of that mobile device (e.g., the computing devices illustrated in FIGS. 6 and 7, etc.). As such, in various embodiments, the head-mounted device may be configured to perform all processing locally on the processor in the head-mounted device, offload all of the main processing to a processor in another computing device (e.g., a laptop present in the same room as the head-mounted device, etc.), or split the main processing operations between the processor in the head-mounted device and the processor in the other computing device. In some embodiments, the processor in the other computing device may be a server in “the cloud” with which the processor in the head-mounted device or in an associated mobile device communicates via a network connection (e.g., a cellular network connection to the Internet).

[0035] The phrase “six degrees of freedom (6-DOF)” is used herein to refer to the freedom of movement of a head-mounted device or its components (relative to the eyes/head of the user, a computer-generated image or virtual object, a real-world object, etc.) in three-dimensional space or with respect to three perpendicular axes relative to the user’s face. The position of a head-mounted device on a user’s head may change, such as moving in a forward/backward direction or along the X-axis (surge), in a left/right direction or along the Y-axis (sway), and in an up/down direction or along the Z-axis (heave). The orientation of a head-mounted device on a user’s head may change, such as rotating about the three perpendicular axes. The term “roll” may refer to rotation about the longitudinal axis, or tilting side to side on the X-axis. The term “pitch” may refer to rotation about the transverse axis, or tilting forward and backward on the Y-axis. The term “yaw” may refer to rotation about the normal axis, or turning left and right on the Z-axis.
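
Under the axis convention defined above, the three rotations correspond to the standard 3x3 rotation matrices. The helper below is a minimal sketch of that convention, not part of the disclosure.

```python
import math

def rotation(axis: str, degrees: float):
    """3x3 rotation matrix using the document's naming convention:
    roll = rotation about X, pitch = about Y, yaw = about Z."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    if axis == "roll":    # about the longitudinal X-axis
        return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]
    if axis == "pitch":   # about the transverse Y-axis
        return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    if axis == "yaw":     # about the normal Z-axis
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    raise ValueError(f"unknown axis: {axis}")
```

Composing these three matrices with the three translations (surge, sway, heave) covers all six degrees of freedom in which the device can shift on the head.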

[0036] A number of different methods, technologies, solutions, and/or techniques (herein collectively “solutions”) may be used for determining the location, position, or orientation of a point on the user’s face (e.g., a point on the facial structure surrounding the user’s eyes, the eye, eye socket, corner of the eye, cornea, pupil, etc.), any or all of which may be implemented by, included in, and/or used by the various embodiments. As noted above, various types of sensors, including IR, image sensors, binocular image sensors, capacitance touch sensing circuits, and ultrasound sensors, may be used to measure a distance and angle from the sensor on the head-mounted device to the point on the user’s face. The processor may apply trilateration or multilateration to the measurements by the sensor, as well as accelerometer and gyroscope sensor data, to determine changes in the position (i.e., distance and angular orientation) of the head-mounted device relative to the user’s face through six degrees of freedom (DOF). For example, a head-mounted device may be configured to transmit sound (e.g., ultrasound), light or a radio signal to a target point, measure how long it takes for a reflection of the sound, light or radio signal to be detected by a sensor on the head-mounted device, and use any or all of the above techniques (e.g., time of arrival, angle of arrival, etc.) to estimate the distance and angle between a lens or camera of the head-mounted device and the target point. In some embodiments, a processor, such as the processor of the head-mounted device, may use a three-dimensional (3D) model of the user’s face (e.g., a 3D reconstruction) in processing images taken by an inward-facing image sensor (e.g., a digital camera) to determine the position of the head-mounted device relative to the user’s eyes.
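
The binocular triangulation mentioned here and in paragraph [0030] can be sketched with the law of sines: two sensors a known baseline apart each report an angle to the same reference point (e.g., a pupil), which fixes the triangle. The 60 mm baseline and the angles below are illustrative assumptions.

```python
import math

def stereo_distance(baseline_mm: float,
                    angle_left_deg: float,
                    angle_right_deg: float) -> float:
    """Perpendicular distance from the sensor baseline to a common
    reference point on the user's face, by triangulation.  Each angle is
    measured at its sensor, between the baseline and the line of sight."""
    a = math.radians(angle_left_deg)
    b = math.radians(angle_right_deg)
    # The baseline is one side of the triangle; the apex angle at the
    # reference point is pi - a - b.  The law of sines gives the range
    # from the left sensor, and its sine component gives the distance.
    apex = math.pi - a - b
    range_left = baseline_mm * math.sin(b) / math.sin(apex)
    return range_left * math.sin(a)

# Symmetric case: both sensors see the pupil at 45 degrees across a
# 60 mm baseline, placing it 30 mm in front of the baseline midpoint.
d = stereo_distance(60.0, 45.0, 45.0)
```

Because the geometry depends only on the two measured angles and the fixed baseline, no active emission is needed, which distinguishes this passive approach from the time-of-flight sensors described earlier.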
