

Patent: Real-World Anchor In A Virtual-Reality Environment

Publication Number: 20200111256

Publication Date: 20200409

Applicants: Microsoft

Abstract

A virtual-reality (“VR”) system renders a virtual anchor object within the VR environment that correlates to a real-world anchor object. The anchor object’s real-world location relative to a computer system is determined, and the virtual anchor object is rendered at a location within the VR environment in such a manner that it is world-locked relative to the real-world environment, as opposed to being world-locked relative to the VR environment. In response to movements of the computer system, the virtual anchor object’s location is updated in order to maintain the real-world world-locked relationship. Objects having known properties can also be compared against captured images to determine the relative positioning of the VR device.

BACKGROUND

[0001] Virtual-reality (VR) systems have received significant attention because of their ability to create truly unique experiences for their users. For reference, conventional VR systems create a completely immersive experience by restricting their users’ views to only VR environments/scenes.

[0002] A VR environment is typically presented to a user through a head-mounted device (HMD), which completely blocks any view of the real world. In contrast, conventional augmented-reality (AR) systems create an AR experience by visually presenting virtual images that are placed in or that interact with the real world. As used herein, the terms “virtual image” and “virtual object” may be used interchangeably and are used to collectively refer to any image rendered within a VR environment/scene.

[0003] Some VR systems also utilize one or more on-body devices (including the HMD), a handheld device, and other peripherals. The HMD provides a display that enables a user to view overlapping and/or integrated visual information (i.e. virtual images) within the VR environment. The user can often interact with virtual objects in the VR environment by using one or more peripherals and sometimes even their own body.

[0004] Continued advances in hardware capabilities and rendering technologies have greatly improved how VR systems render virtual objects. In fact, the rendering technology of VR systems has improved so much that users often forget they are still physically located in the real world. One negative result of providing such an immersive experience is that users can become disoriented relative to the real world and can lose their balance and/or collide with objects in the real world while engaging with the VR environment/scene.

[0005] The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.

BRIEF SUMMARY

[0006] Disclosed embodiments relate to computer systems, methods, and devices (e.g., HMDs) that deliver a better virtual-reality (VR) user experience by providing virtual content designed to make the user at least partially cognizant of his/her real-world environment while not significantly distracting or degrading the user’s VR experience. As used herein, the phrase “anchor object” refers to a real-world object (e.g., a couch, TV, display screen, etc.) physically located within the user’s real-world environment, and the phrase “virtual anchor object” refers to a virtual image that is rendered in a VR environment/scene and that corresponds to the anchor object.

[0007] In some embodiments, a real-world object (e.g., a piece of furniture, a fixture, a computer screen, TV screen, etc.) is selected to operate as an anchor object. Selecting a particular object to serve as the anchor object is performed by analyzing the attributes of any number of candidate objects and then choosing a particular candidate having suitable attributes. Once selected, the anchor object’s position and orientation relative to the user’s computer system (e.g., an HMD) are then determined. As used herein, “position” and “orientation” may individually or collectively refer to any one or more of location/position, depth, angular alignment, perspective, and/or orientation. A virtual anchor object is rendered within a VR environment, which is being rendered by the HMD and which is viewable by the user. This virtual anchor object is rendered at a placement location indicative of the determined position and orientation of the anchor object. In this regard, the virtual anchor object’s placement location is world-locked relative to the real-world environment as opposed to being world-locked relative to the VR environment. In response to a tracked movement of the HMD, the position information is updated to track the changes to the HMD’s position relative to the anchor object’s actual real-world position. Concurrently with those updates, the virtual anchor object’s placement location is updated in accordance with the updated information so as to reflect the world-locked relationship between the virtual anchor object’s placement location in the VR environment and the anchor object’s real-world location.

[0008] Some embodiments are also provided for calibrating an HMD to the real-world environment anchor. Initially, an instruction is issued to a separate computer system (e.g., a PC display) that is determined to be located within the same environment as the user’s VR computer system (e.g., an HMD). This instruction, when executed by the separate computer system, causes the separate computer system to display one or more known images (e.g., a calibration marker image or a buffered video recording) on an associated display screen. Once these known images are displayed, the HMD then detects attributes of those images. These attributes are used to generate information describing a positional relationship between the HMD and the separate computer system’s display screen. As used herein, “positional relationship” may also refer to any one or combination of location/position, depth, angular alignment, perspective, and/or orientation of an object (e.g., the display screen) relative to the HMD. Thereafter, when the HMD moves, the positional relationship information is updated to reflect the movements. In conjunction with this updated information, a virtual anchor object is also rendered in the VR environment. The virtual anchor object’s visual appearance may be representative of the separate computer system’s display screen (e.g., an outline of the screen), and the virtual anchor object is rendered at a placement location that visually reflects the positional relationship between the HMD and the separate computer system’s display screen.

[0009] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

[0010] Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0012] FIG. 1 illustrates a flowchart of an example method for displaying, within a VR environment, a virtual anchor object corresponding to a real-world anchor object.

[0013] FIG. 2 illustrates an example real-world environment with a number of candidate anchor objects.

[0014] FIG. 3 illustrates how an HMD is able to scan a real-world environment to identify any number of candidate anchor objects.

[0015] FIG. 4 illustrates how an anchor object may be selected within the real-world to act as a visual cue or reference within a VR environment.

[0016] FIG. 5 illustrates an example of a VR environment, including the HMD’s field of view (“FOV”) of that VR environment.

[0017] FIG. 6 illustrates how a real-world anchor object may be used to help a user engage with a VR environment while still enabling the user to remain aware/cognizant of the real-world objects (i.e. obstacles) that are present within the real-world environment but that are not visible because of the HMD.

[0018] FIG. 7 illustrates how a virtual anchor object, which corresponds to the selected real-world anchor object, may be rendered within the VR environment to assist the user in remaining cognizant of his/her real-world environment.

[0019] FIG. 8 illustrates how, regardless of where the HMD moves, the virtual anchor object remains world-locked relative to the real world as opposed to being world-locked relative to the VR environment.

[0020] FIG. 9 illustrates how the virtual anchor object may be displayed using different visual characteristics in order to either enhance or reduce its visual impact on the VR environment.

[0021] FIG. 10 illustrates how, in the event that the HMD moves in a manner such that the virtual anchor object leaves the HMD’s FOV, a direction indicator may be displayed to show how the HMD would have to be moved in order to bring the virtual anchor object back into the HMD’s FOV.

[0022] FIG. 11 illustrates how, regardless of how the HMD is oriented within the real-world environment, a direction indicator may be provided to indicate where the real-world anchor object is located relative to the HMD.

[0023] FIG. 12 illustrates a flowchart of an example method for selecting a computer screen to operate as an anchor object and for displaying a corresponding virtual anchor object within a VR environment.

[0024] FIG. 13 illustrates a real-world environment in which an Internet of Things (“IoT”) device (e.g., a smart TV) is selected to serve as the anchor object.

[0025] FIG. 14 illustrates how a calibration marker image may be used to facilitate the process of determining the relative position between the HMD and the display screen of an IoT device.

[0026] FIG. 15 illustrates how the HMD is able to record an image of the calibration marker image while the calibration marker image is being displayed on the display screen of the IoT device in order to determine the relative position and orientation between the HMD and the display screen.

[0027] FIGS. 16A, 16B, and 16C show how the distances between each marker included within a calibration marker image may be used to facilitate the calibration process (i.e. the process of determining the relative position between the IoT device’s display, which comprises an anchor, and the HMD). FIG. 16D illustrates a flowchart of an example method for calibrating the marker image or anchor with the HMD.

[0028] FIG. 17 illustrates how a buffered video may be used during the calibration process.

[0029] FIG. 18 illustrates a scenario where multiple real-world anchor objects are provided and where multiple corresponding virtual anchor objects (including one shaped as a star) are rendered in a VR environment.

[0030] FIG. 19 illustrates an example computer system specially configured to perform any of the disclosed operations.

DETAILED DESCRIPTION

[0031] Disclosed embodiments relate to computer systems, methods, and devices (e.g., HMDs) that provide, within a VR environment, a virtual anchor object representative of a real-world anchor object. As used herein, the phrase “anchor object” refers to a real-world object (e.g., a couch, TV, display screen, etc.) physically located within the user’s real-world environment, and the phrase “virtual anchor object” refers to a virtual image that is rendered in a VR environment/scene and that corresponds to the anchor object. As also used herein, the terms “position”, “positional relationship,” and “orientation” are generalized terms that may individually or collectively refer to any one or combination of location/position, depth, angular alignment, perspective, and/or orientation between one object (e.g., an anchor object) and another object (e.g., the HMD).

[0032] In some embodiments, a real-world object is selected to operate as an anchor object. Once the anchor object is selected, then a corresponding virtual anchor object is rendered within the VR environment. This corresponding virtual anchor object is world-locked within the VR environment relative to the anchor object’s real-world location. Therefore, regardless of how the HMD moves or the VR environment changes, the corresponding virtual anchor object is projected within the VR environment at a location indicative/reflective of the anchor object’s real-world location. As such, the user of the HMD can remain cognizant of his/her real-world environment (even when immersed in the VR environment) by remaining aware of the location of the anchor object. This cognizance helps the user avoid colliding with real-world objects.

[0033] In some embodiments, a display screen (e.g., a computer screen, smartphone screen, television (“TV”) screen, gaming console screen, etc.) is selected to operate as a real-world anchor object. In this case, an HMD issues an instruction to the computer system controlling the display screen to cause the display screen to display one or more known images (e.g., a calibration marker image, a buffered video recording, etc.). Once the known image(s) is displayed, the HMD captures/records an image of the displayed known image(s) as the known image(s) is being displayed on the display screen, and the HMD determines certain attributes of the known image(s). These attributes are then used to generate information describing the positional relationship between the display screen and the HMD. Additionally, a virtual anchor object corresponding to the display screen is rendered within a VR environment projected by the HMD. In response to movements of the HMD, the virtual anchor object’s location within the VR environment is updated so as to reflect the positional relationship between the HMD and the display screen.
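One piece of the positional-relationship computation described above can be illustrated with a minimal pinhole-camera sketch. The marker width, focal length, and function name below are assumptions for illustration, not details taken from the patent:

```python
# Hedged sketch of one step of the described calibration: estimating the
# distance to the display screen from the apparent size of a known image,
# using a simple pinhole-camera model. All values are invented examples.

def estimate_distance(real_width_m, observed_width_px, focal_length_px):
    """Pinhole model: distance = focal_length * real_width / observed_width."""
    return focal_length_px * real_width_m / observed_width_px

# A 0.30 m wide calibration marker spanning 600 px in the HMD's captured
# image, with a 1200 px focal length, is roughly 0.6 m from the camera.
distance_m = estimate_distance(0.30, 600, 1200)
```

In practice, comparing several known points of the marker (as with the distances discussed for FIGS. 16A-16C) also yields angular alignment and perspective, not just depth.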

[0034] By performing these and other operations, the disclosed embodiments are able to significantly improve the user’s experience. For instance, one of the primary allures of VR headsets is that they provide a truly immersive experience. There is a price that comes with being fully immersed in the virtual world, however, because the user is blind to the real world. It has been shown that as users interact with VR environments, users often collide with real-world objects. These collisions abruptly break the users’ VR immersion experiences. The disclosed embodiments provide technical solutions to these technical problems, as well as others, by providing a virtual anchor object (within the VR environment) associated with a static, or rather fixed, real-world anchor object. Using this virtual anchor object, the user is able to extrapolate the position of real-world obstacles (e.g., walls, fixtures, furniture, etc.) in his/her mind and then avoid those obstacles while engaging with the VR environment. Consequently, the user’s VR experience may not be abruptly interrupted.

[0035] The disclosed calibration methods (e.g., those described in reference to FIG. 16D) also facilitate presenting the anchor with the proper positioning within the VR environment (e.g., with the proper orientation, distance, size, and angular alignment).

Example Method(s)

[0036] Attention will now be directed to FIG. 1 which refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed. The method presented in FIG. 1 is provided to introduce the disclosed embodiments while subsequent portions of the disclosure will more fully clarify different aspects of the disclosed embodiments.

[0037] FIG. 1 illustrates a flowchart of an example method 100 for providing, within a VR environment, a virtual anchor object corresponding to a real-world anchor object. Initially, method 100 includes an act 105 of selecting a particular real-world object located within a real-world environment (as opposed to a VR environment) to operate as an anchor object. Determining which real-world object will operate as the anchor object is based on one or more detected attributes of the real-world object.

[0038] For example, FIG. 2 shows a real-world environment 200 in which a user 205 is located. User 205 is currently using HMD 210. Within real-world environment 200, there may be any number of candidate anchor objects, such as, for example, candidate 215A, candidate 215B, and candidate 215C. The HMD 210 is able to use any number of cameras to scan the real-world environment 200 and to classify/segment objects from one another. As a part of this segmentation process, HMD 210 is able to identify certain attributes of those objects. Take, for example, candidate 215A (i.e. the picture frame). HMD 210 is able to determine that candidate 215A has a high probability of being a very stationary, static, or fixed object because it is a picture frame mounted on the wall (e.g., HMD 210 is able to determine that such objects typically do not move). Because the detected attributes of candidate 215A (i.e. it being a picture frame, it being mounted to the wall, etc.) highly suggest that candidate 215A is unlikely to move, HMD 210 will regard candidate 215A as being a good candidate to operate as an anchor object.

[0039] Candidate 215B, on the other hand, may be identified as being only a moderately acceptable candidate. More specifically, candidate 215B is a bed with a bedspread. Here, HMD 210 may determine that because bedspreads sometimes move (e.g., as a result of a person sitting on the bed), the bed (including the bedspread) may be identified by HMD 210 as being only moderately acceptable to act as an anchor object.

[0040] Candidate 215C, however, may be identified as being a poor candidate. More specifically, candidate 215C is a soccer ball. HMD 210 may determine that the soccer ball is highly unlikely to remain stationary in one location for a prolonged period of time. Based on analyzing the type and determined characteristics/attributes of candidate 215C, HMD 210 may categorize candidate 215C as being a poor candidate. It will be appreciated that this analysis may be performed by a separate computer system, such as, for example, a computer or service running in a cloud environment.

[0041] FIG. 3 shows an example scenario in which HMD 300 is performing a scanning operation 305 on the real-world environment 310. Here, HMD 300 is representative of HMD 210 from FIG. 2, and the real-world environment 310 is representative of the real-world environment 200. From this, it will be appreciated that the disclosed embodiments are able to scan the real-world environment 310 to detect real-world objects. Once detected, the embodiments are able to analyze and characterize/segment those objects based on their detected attributes. In some instances, a machine learning algorithm may be used to characterize/segment objects. Additionally, classification information obtained from the Internet or some other data repository may be used to better gauge the attributes of the real-world objects within the real-world environment 310. Based on these characterizations, the embodiments are then able to classify objects as being good candidates, moderately acceptable candidates, or poor candidates (or some other classification scheme). Grouping objects into different candidate tiers may be based on how stable a particular object is determined to be. That is, it is beneficial to select objects (e.g., to act as the anchor object) whose determined stability attributes satisfy at least a threshold level of stability. Different thresholds may be used for the different tiers (e.g., good, moderate, and poor).
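The tiering described above can be sketched as a simple threshold classifier. The stability scores, thresholds, and tier names below are hypothetical illustrations of the "good/moderate/poor" scheme, not values from the patent:

```python
# Hypothetical sketch of the candidate-tiering scheme: stability scores,
# thresholds, and tier labels are invented for illustration.

GOOD_THRESHOLD = 0.8
MODERATE_THRESHOLD = 0.5

def classify_candidate(stability_score):
    """Map a stability score in [0.0, 1.0] to a candidate tier."""
    if stability_score >= GOOD_THRESHOLD:
        return "good"
    if stability_score >= MODERATE_THRESHOLD:
        return "moderate"
    return "poor"

# Example scores mirroring FIG. 2: a wall-mounted picture frame rarely
# moves, a bed with a bedspread sometimes moves, a soccer ball moves often.
candidates = {"picture frame": 0.95, "bed": 0.60, "soccer ball": 0.10}
tiers = {name: classify_candidate(score) for name, score in candidates.items()}
```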

[0042] As demonstrated above, in some embodiments, the process of selecting a particular real-world object to operate as the anchor object may initially include identifying multiple real-world objects from within the real-world environment. Each of these real-world objects may then be classified based on a designated criterion (e.g., a stability criterion). Thereafter, the embodiments may select one (or more) real-world objects to operate as the anchor object based on a determination that the designated criterion (e.g., the stability criterion) of the selected real-world object adequately satisfies a pre-established criterion threshold (e.g., a stability threshold). This selection may occur automatically by the HMD or, alternatively, it may occur in response to user input. For instance, the user may be presented with any number of selectable candidate anchor objects. From this, the user can select one (or more) of those candidate anchor objects to actually operate as the anchor object.

[0043] Returning to FIG. 1, in act 110, a position and orientation of the anchor object relative to the computer system is determined. FIG. 4, for example, shows a real-world environment 400, which is representative of the real-world environments from FIGS. 2 and 3, as well as an indication regarding the selection of a particular anchor 405 (i.e. the picture frame). In the scenario presented in FIG. 4, the HMD is able to determine its position and orientation relative to anchor 405. Determining position and orientation will be discussed in more detail in connection with FIGS. 13 through 17. Very briefly, however, it will be appreciated that the position and orientation information may include any one or more of location/position information, depth information, angular alignment information, perspective information, and/or orientation information.

[0044] Returning to FIG. 1, in act 115, a particular virtual anchor object is also rendered within a VR environment, which is being rendered by the computer system (e.g., the HMD). This virtual anchor object is rendered at a placement location in the VR environment indicative/reflective of the determined position and orientation of the anchor object relative to the computer system. For example, the virtual anchor object is rendered as having a depth, perspective, angular alignment (e.g., corresponding pitch, yaw, and roll), obliqueness, and orientation (e.g., both vertical and horizontal) representative of the real-world anchor object’s depth, angular alignment, obliqueness, perspective, and orientation relative to the HMD. Such features are discussed in more detail later in connection with FIGS. 16A-16D.
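The world-locked placement of act 115 can be illustrated with a minimal two-dimensional pose transform. The coordinate conventions and example values below are assumptions for illustration:

```python
import math

# Minimal 2-D sketch (assumed, not from the patent) of world-locking: the
# anchor keeps a fixed real-world position, and its placement in the HMD's
# local frame is recomputed from the HMD's current pose.

def anchor_in_hmd_frame(anchor_xy, hmd_xy, hmd_yaw_rad):
    """Express a world-fixed anchor position in the HMD's local frame."""
    dx = anchor_xy[0] - hmd_xy[0]
    dy = anchor_xy[1] - hmd_xy[1]
    # Rotate the world-frame offset by the inverse of the HMD's yaw.
    cos_y, sin_y = math.cos(-hmd_yaw_rad), math.sin(-hmd_yaw_rad)
    return (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)

# An anchor 2 m ahead of an HMD facing along +x stays directly ahead;
# after the HMD yaws 90 degrees, the same world-fixed anchor ends up to
# the side of the view, much like the shift between FIGS. 7 and 8.
ahead = anchor_in_hmd_frame((2.0, 0.0), (0.0, 0.0), 0.0)
to_side = anchor_in_hmd_frame((2.0, 0.0), (0.0, 0.0), math.pi / 2)
```

A full implementation would use 3-D poses (position plus pitch/yaw/roll) rather than this planar simplification, but the principle is the same: the anchor's world coordinates never change, only its expression in the HMD's frame.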

[0045] Turning briefly to FIG. 5, here there is shown a VR environment 500 that is being rendered by the HMD 505 (e.g., the computer system in act 115 of FIG. 1). In this example scenario, VR environment 500 is representative of a rollercoaster experience where the user seems to be sitting in a rollercoaster as the rollercoaster travels along a set of tracks. In this regard, the VR environment 500 can be thought of as a non-stationary moving environment such that VR environment 500 appears to be moving relative to the user who is wearing the HMD, and where the VR environment 500 moves regardless of any movements of the user or HMD (i.e. even if the user sits perfectly still, it still seems that the environment is moving). In other embodiments, the VR environment 500 may be a stationary environment (e.g., a room) that does not move if the user remains still. For instance, if the VR environment 500 were a room, then the user could walk about the virtual room, but the virtual room would appear to be stationary. Accordingly, as will be demonstrated next, a virtual anchor object may be rendered in a locked position relative to the real-world environment as opposed to being locked relative to the VR environment (even a non-stationary VR environment) such that the virtual anchor object is fixedly displayed irrespective/independent of changes to the VR environment or even to movements of the HMD.

[0046] While the VR environment 500 may be very expansive, it will be appreciated that the user of the HMD 505 will be able to see only the content presented within HMD 505’s field of view (FOV) 510. By repositioning/moving HMD 505, different portions of the VR environment 500 will be displayed in the FOV 510. As shown, VR environment 500 may include any number of virtual objects, such as, for example, VR object 515 (e.g., a rollercoaster track), VR object 520 (e.g., a tree), and VR object 525 (e.g., a rollercoaster train).
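The FOV relationship can be sketched as a simple angular test, along the lines of the direction indicator described for FIGS. 10 and 11. The FOV angle, poses, and left/right convention below are invented examples:

```python
import math

# Assumed sketch of an FOV test: decide whether the world-locked anchor
# falls inside the HMD's horizontal FOV and, if not, which way to turn.

def anchor_direction(anchor_xy, hmd_xy, hmd_yaw_rad, fov_rad):
    """Return 'in-view', 'turn-left', or 'turn-right' for the anchor."""
    bearing = math.atan2(anchor_xy[1] - hmd_xy[1], anchor_xy[0] - hmd_xy[0])
    # Signed angle between the HMD's facing direction and the anchor,
    # wrapped into [-pi, pi).
    offset = (bearing - hmd_yaw_rad + math.pi) % (2 * math.pi) - math.pi
    if abs(offset) <= fov_rad / 2:
        return "in-view"
    return "turn-left" if offset > 0 else "turn-right"

# An anchor straight ahead of a forward-facing HMD is in view; one off to
# the side falls outside a 90-degree FOV and requires a turn.
status = anchor_direction((2.0, 0.0), (0.0, 0.0), 0.0, math.radians(90.0))
```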

[0047] FIG. 6 shows how, even though the user is physically located within the real-world environment 600, which is representative of the previously described real-world environments, the user may be engaged with the VR environment 605, which is representative of the previously described VR environments. Because the user may be moving while immersed in the VR environment 605, it is beneficial to remind the user that he/she is still in the real-world environment 600 and that the user should avoid colliding with real-world objects. Consequently, anchor 610 was selected to help the user remain cognizant of the real-world environment 600.

[0048] To do so, as described earlier in act 115 of FIG. 1, the disclosed embodiments render a particular virtual anchor object within the VR environment, where the virtual anchor object corresponds to the anchor object. Such a scenario is shown in FIG. 7.

[0049] More specifically, FIG. 7 shows a VR environment 700 and a rendered virtual anchor object (labeled as anchor 705) that corresponds to the anchor 610 from FIG. 6 and anchor 405 from FIG. 4. It will be appreciated that anchor 705 (corresponding to the picture frame) in FIG. 7 is rendered within VR environment 700 at a placement location reflective of the picture frame’s actual real-world location and orientation. That is, regardless of how the HMD moves and regardless of the content displayed in the HMD’s FOV, the anchor 705 is always rendered at a location within the VR environment 700 coinciding with the real-world anchor’s position and orientation. Furthermore, anchor 705 is rendered in a manner to reflect the real-world anchor object’s position, depth, orientation, angular alignment, obliqueness, and/or perspective relative to the HMD.

[0050] For example, returning to FIG. 1, in response to a tracked movement of the computer system, the information describing the relative location and relative orientation of the anchor object is updated (act 120). These updates are performed to track one or more changes of the computer system’s position relative to the anchor object’s position.

[0051] With these updates, the virtual anchor object’s placement location within the VR environment is updated in accordance with the updated information (act 125 in FIG. 1). That is, the virtual anchor object’s placement location is updated in order to reflect a world-locked relation between the virtual anchor object’s placement location and the anchor object’s position.
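Acts 120 and 125 together amount to an update loop: each tracked HMD movement updates the stored relative position, and the virtual anchor's placement is recomputed so it stays locked to the unchanged real-world anchor. A minimal sketch, with invented poses (in metres):

```python
# Hedged sketch of acts 120 and 125 as an update loop. Poses and data
# shapes are invented example values.

def update_placement(anchor_world_xy, hmd_world_xy):
    """Anchor's offset from the HMD after a tracked movement."""
    return (anchor_world_xy[0] - hmd_world_xy[0],
            anchor_world_xy[1] - hmd_world_xy[1])

anchor = (3.0, 1.0)  # fixed real-world anchor position
tracked_hmd_poses = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
# The anchor's world position never changes; only its placement relative
# to the moving HMD does.
placements = [update_placement(anchor, pose) for pose in tracked_hmd_poses]
```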

[0052] FIGS. 7 and 8 more fully clarify this aspect. For instance, in FIG. 7, anchor 705 (corresponding to the picture frame) is displayed in the right-hand area of the HMD’s FOV. In contrast, FIG. 8 shows a VR environment 800, which is representative of VR environment 700 from FIG. 7, and the same anchor 805. Here, however, anchor 805 is displayed on the left-hand area of the HMD’s FOV. This change in placement location occurred as a result of the HMD shifting position relative to the picture frame. As an example, in the scenario presented in FIG. 7, the user of the HMD was physically positioned within the real-world environment so that the picture frame was within the user’s right-hand peripheral view. Later, as shown by the scenario presented in FIG. 8, the user and HMD shifted position thereby causing the picture frame to now be located within the user’s left-hand peripheral view. It will be appreciated that in some circumstances, the virtual objects in the VR environment 800 may also have changed based on the user’s new position, but for simplicity’s sake, the same virtual objects as FIG. 7 are used in FIG. 8. In this manner, the VR environment was updated so that the virtual anchor object associated with the picture frame (i.e. anchor 705 and 805 from FIGS. 7 and 8, respectively) was rendered at a location so as to maintain the world-locked relationship with the real-world environment, as opposed to being world-locked relative to the VR environment.

[0053] Accordingly, the disclosed embodiments beneficially provide a virtual anchor object within a VR environment, where the virtual anchor object is rendered within the VR environment at a location that always corresponds to the real-world anchor object. This rendering of the virtual anchor object helps the user remain aware of his/her real-world physical environment. By maintaining this awareness, the user will be able to intuitively recall where real-world obstacles (e.g., furniture, fixtures, walls, etc.) are located and can avoid those obstacles, even when immersed in a VR environment.

Modifying the Virtual Anchor Object

[0054] Attention will now be directed to FIG. 9, which shows another example VR environment 900 with a rendered anchor 905. Here, anchor 905 is rendered as being at least partially transparent in VR environment 900 so that anchor 905 only partially occludes other virtual content in VR environment 900. For instance, anchor 905 is shown as being displayed overtop a portion of the rollercoaster track and a tree. Because anchor 905 is transparent, the underlying rollercoaster track and tree are still visible to the user. In this regard, various visual properties of anchor 905 may be modified in different manners. In some instances, the visual properties may be changed automatically while in other instances the properties may be changed manually. Modifications to anchor 905’s visual appearance may be made to its transparency, color, shape, outline, fill, three-dimensional characteristics, continuously displayed state, and blinking state.
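The partial transparency described above corresponds to ordinary alpha blending of the anchor's pixels over the VR scene's pixels. The colors and opacity below are example values, not taken from the patent:

```python
# Illustrative sketch of partial transparency as standard alpha blending
# of an anchor pixel over the underlying VR scene pixel.

def blend(anchor_rgb, scene_rgb, alpha):
    """Alpha-blend the anchor color over the underlying VR content."""
    return tuple(alpha * a + (1.0 - alpha) * s
                 for a, s in zip(anchor_rgb, scene_rgb))

# A 30%-opaque white anchor only partially occludes the green track
# underneath it, as in FIG. 9.
pixel = blend((255, 255, 255), (20, 120, 20), 0.3)
```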

[0055] As an example, FIG. 9 shows that the shape of anchor 905 corresponds to the shape of the picture frame from FIG. 3. That is, the attributes of the real-world object (e.g., the rectangular picture frame) may be used to determine the shape or outline of anchor 905. Furthermore, the visual differences in shape between anchor 705 from FIG. 7 and anchor 805 from FIG. 8 show that the rendered shape may be dependent on the current depth, angular alignment (e.g., pitch, yaw, and roll), obliqueness, orientation, and perspective of the real-world anchor object relative to the HMD. For instance, if the picture frame were immediately in front of the HMD, then the rendered anchor object would be rendered in a rectangular shape. If the user were to move the HMD so that the picture frame progressively moved away and towards the user’s peripheral vision, then the shape of the rendered anchor object would also progressively change (e.g., perhaps from that of a rectangle to that of an angled polygon to match the peripheral view of the picture frame). In this manner, the shape of the anchor 905 may dynamically change to coincide with the depth, orientation, angular alignment, obliqueness, and perspective of the real-world anchor relative to the HMD.
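The perspective-dependent shape change can be approximated with a simple foreshortening model. The frame width and viewing angles below are example values:

```python
import math

# Assumed geometric sketch: as the picture frame moves toward the user's
# peripheral view, its apparent (projected) width shrinks roughly with the
# cosine of the viewing angle, which is why the rendered rectangle becomes
# a narrower, angled polygon.

def apparent_width(frame_width_m, viewing_angle_rad):
    """Projected width of a flat frame viewed at an oblique angle."""
    return frame_width_m * math.cos(viewing_angle_rad)

head_on = apparent_width(0.6, 0.0)                 # full 0.6 m width
oblique = apparent_width(0.6, math.radians(60.0))  # roughly half as wide
```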
