Facebook Patent | Perspective Shuffling In Virtual Co-Experiencing Systems
Patent: Perspective Shuffling In Virtual Co-Experiencing Systems
Publication Number: 20200169586
Publication Date: 2020-05-28
Applicants: Facebook
Abstract
In one embodiment, a method includes connecting, by a first computing device associated with a first user, to a virtual session for co-experiencing digital media content with one or more other users in a virtual reality environment, where the virtual reality environment comprises a screen for displaying the digital media content; receiving relative-position information indicating relative positions between the first user and the one or more other users in the virtual reality environment; rendering the screen based on a first position of the first user in the virtual reality environment, wherein the screen and the first position of the first user have a predefined spatial relationship in the virtual reality environment; and rendering, based on the received relative-position information and the first position of the first user, a second avatar representing a second user in the virtual reality environment, wherein the second user is one of the one or more other users.
TECHNICAL FIELD
[0001] This disclosure generally relates to Virtual Reality (VR) systems, and in particular related to consuming digital content in a virtual environment.
BACKGROUND
[0002] Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
SUMMARY OF PARTICULAR EMBODIMENTS
[0003] In particular embodiments, a computing device associated with an artificial reality system may provide a distortion-free viewing position (e.g., centered) to a user who is co-experiencing digital content with the other users in a virtual environment. An artificial reality system may allow a plurality of users associated with virtual reality (VR) devices to co-experience digital content, such as a sports event, a movie or a TV show. Because co-experiencing is a social event among the participating users, the participating users may need to be able to look at each other while the users are talking to each other even though a visual presentation of a user in the virtual environment may be a digital avatar, not the user herself. Thus, avatars representing respective users may be placed on a curved seat in the virtual environment. If the screen is placed right in front of the centered user, the other users on each side may experience image distortion because the users are viewing the screen at an angle. Furthermore, users sitting on the extreme right or left may be too close to the screen. The virtual co-experiencing system may allow each user to face the screen right in front of the user. When a first user joins a virtual digital content co-experiencing event, a first computing device associated with a VR device for the first user may determine a first position of the first user in the virtual environment rendered by the first computing device. The first computing device may render a screen in the virtual environment rendered by the first computing device such that the screen and the first position may have a predefined spatial relationship. The predefined spatial relationship between the screen and the first position of the first user may be that the screen may be positioned at a predetermined distance from the first position and the screen may be centered at and perpendicular to a sightline of the user when the user faces forward. 
The first computing device may render a second avatar representing a second user that is also participating in the virtual digital content co-experiencing event at a second position, where a spatial relationship between the first position and the second position may be received from a computing device, and where the screen and the second position may not have the predefined spatial relationship. A second computing device associated with a VR device for the second user may determine a third position of the second user in the virtual environment rendered by the second computing device. The second computing device may render a screen in the virtual environment rendered by the second computing device such that the screen and the third position may have the predefined spatial relationship. The second computing device may render a first avatar representing the first user at a fourth position, where a spatial relationship between the fourth position and the third position in the virtual environment rendered by the second computing device may be identical to the spatial relationship between the first position and the second position in the virtual environment rendered by the first computing device. The screen and the fourth position may not have the predefined spatial relationship in the virtual environment rendered by the second computing device.
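The predefined spatial relationship described above (a screen at a predetermined distance, centered on and perpendicular to the user's forward sightline) can be sketched in a few lines. The following is an illustrative approximation only, not part of the disclosure; the function name `place_screen`, the coordinate convention, and the default distance are assumptions chosen for the example:

```python
import math

def place_screen(user_pos, user_forward, distance=3.0):
    """Place the screen center `distance` units along the user's forward
    sightline; the screen plane is perpendicular to that sightline, i.e.
    the predefined spatial relationship described above."""
    # Normalize the forward (sightline) direction.
    norm = math.sqrt(sum(c * c for c in user_forward))
    fx, fy, fz = (c / norm for c in user_forward)
    center = (user_pos[0] + distance * fx,
              user_pos[1] + distance * fy,
              user_pos[2] + distance * fz)
    normal = (-fx, -fy, -fz)  # the screen faces back toward the user
    return center, normal
```

Because each computing device evaluates this relationship against its own user's position, every participant sees the screen centered directly in front of themselves.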
[0004] While the users are co-experiencing the digital content in the virtual environment, the users may communicate with each other by talking to each other and looking at each other (more specifically, looking at each other's avatar). An avatar needs to represent the current state of the corresponding user as closely as possible at any given point in time. When a first user and a second user are watching the screen in their respective virtual environments rendered by their respective computing devices, each user may perceive that the screen is directly in front of them, and each may therefore face the screen directly. From the first user's perspective, however, the second user is not positioned directly in front of the screen rendered for the first user, so the avatar for the second user should appear to turn its face slightly toward that screen. The computing device associated with the first user may therefore render the avatar for the second user as if the second user were turning his face toward the first user's screen, even while the second user is directly facing his own screen. The computing devices may communicate with each other to share the current facial directions of the respective users.
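The facing-direction correction described in this paragraph reduces to computing, on each device, the yaw that points an off-axis avatar at the locally rendered screen, then adding the remote user's shared head turn. The sketch below is illustrative only and not from the disclosure; the name `rendered_head_yaw` and the yaw convention (0 means facing +z) are assumptions:

```python
import math

def rendered_head_yaw(avatar_pos, screen_center, remote_yaw_offset=0.0):
    """Yaw (radians about the vertical axis, 0 == facing +z) that makes
    an avatar at `avatar_pos` face `screen_center`.  `remote_yaw_offset`
    is the remote user's reported head turn relative to their own screen,
    so glances away from the screen are reproduced as well."""
    dx = screen_center[0] - avatar_pos[0]
    dz = screen_center[2] - avatar_pos[2]
    return math.atan2(dx, dz) + remote_yaw_offset
```

An avatar seated on the local user's sightline needs no correction, while an avatar seated to the side is rendered with its head turned toward the screen even though the corresponding user is looking straight ahead at their own screen.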
[0005] A first computing device associated with a first user may connect to a virtual session for co-experiencing digital media content with one or more other users in a virtual reality environment, wherein the virtual reality environment may comprise a screen for displaying the digital media content. The first computing device may receive relative-position information indicating relative positions between the first user and the one or more other users in the virtual reality environment. The first computing device may render the screen based on a first position of the first user in the virtual reality environment, wherein the screen and the first position of the first user may have a predefined spatial relationship in the virtual reality environment. The first computing device may render, based on the received relative-position information and the first position of the first user, a second avatar representing a second user in the virtual reality environment, wherein the second user may be one of the one or more other users. On a second computing device associated with the second user, the screen and a first avatar representing the first user may be rendered based on a second position associated with the second user in the virtual reality environment. The screen rendered by the second computing device and the second position of the second user may have the predefined spatial relationship in the virtual reality environment rendered by the second computing device. The screen rendered by the second computing device and the first avatar representing the first user may have a different spatial relationship than the predefined spatial relationship in the virtual reality environment rendered by the second computing device.
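The relative-position information exchanged above can be pictured as a per-participant offset in the local user's frame, which each device adds to its own anchor position. This is an illustrative sketch, not the disclosed wire format; the class name, field names, and units are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RelativePosition:
    """One entry of the relative-position information: the offset of
    another participant from the local user, in the local user's
    seat-centered frame (names and units are illustrative)."""
    user_id: str
    dx: float  # meters to the local user's right
    dz: float  # meters in front of (+) or behind (-) the local user

def place_avatars(local_pos, offsets):
    """Anchor each remote avatar by adding its shared relative offset
    to the local user's own position."""
    x, z = local_pos
    return {o.user_id: (x + o.dx, z + o.dz) for o in offsets}
```

Because the offsets are shared while each device chooses its own anchor, the seating arrangement between any two avatars is consistent across all participants' renderings.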
[0006] The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed above. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 illustrates an example artificial reality system.
[0008] FIG. 2 illustrates example interactions between a control computing device and a computing device connected to a VR device.
[0009] FIGS. 3A-3C illustrate example virtual environments for co-experiencing digital content rendered by computing devices associated with participating users.
[0010] FIGS. 4A-4B illustrate example re-renderings of avatars to synchronize facing directions of the avatars to the facing directions of corresponding users.
[0011] FIG. 5 illustrates an example method for rendering a virtual environment for co-experiencing digital content.
[0012] FIG. 6 illustrates an example network environment associated with a social-networking system.
[0013] FIG. 7 illustrates an example social graph.
[0014] FIG. 8 illustrates an example computer system.
DESCRIPTION OF EXAMPLE EMBODIMENTS
[0015] FIG. 1 illustrates an example artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user 105, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The example artificial reality system illustrated in FIG. 1 may comprise a head-mounted display (HMD) 101, a controller 102, and a computing device 103. A user 105 may wear a head-mounted display (HMD) 101 that may provide visual artificial reality content to the user 105. The HMD 101 may include an audio device that may provide audio artificial reality content to the user 105. A controller 102 may comprise a trackpad and one or more buttons. The controller 102 may receive input from the user 105 and relay the input to the computing device 103. The controller 102 may also provide haptic feedback to the user 105. The computing device 103 may be connected to the HMD 101 and the controller 102. The computing device 103 may control the HMD 101 and the controller 102 to provide the artificial reality content to the user and receive input from the user 105. 
The computing device 103 may be a standalone host computer system, combined with the HMD 101, a mobile device, or any other hardware platform capable of providing artificial reality content to one or more users 105 and receiving input from the users 105.
[0016] In particular embodiments, a computing device 103 associated with an artificial reality system may provide a distortion-free viewing position (e.g., centered) to a user 105 who is co-experiencing digital content with the other users 105 in a virtual environment. An artificial reality system may allow a plurality of users 105 associated with virtual reality (VR) devices to co-experience digital content, such as a sports event, a movie or a TV show. Because co-experiencing is a social event among the participating users 105, the participating users 105 may need to be able to look at each other while the users 105 are talking to each other even though a visual presentation of a user 105 in the virtual environment may be a digital avatar, not the user herself. Thus, avatars representing respective users 105 may be placed on a curved seat in the virtual environment. If the screen is placed right in front of the centered user, the other users on each side may experience image distortion because the users are viewing the screen at an angle. Furthermore, users sitting on the extreme right or left may be too close to the screen. The virtual co-experiencing system may allow each user to face the screen right in front of the user. When a first user 105 joins a virtual digital content co-experiencing event, a first computing device 103 associated with a VR device for the first user 105 may determine a first position of the first user 105 in the virtual environment rendered by the first computing device. The first computing device 103 may render a screen in the virtual environment rendered by the first computing device 103 such that the screen and the first position may have a predefined spatial relationship. 
The predefined spatial relationship between the screen and the first position of the first user 105 may be that the screen may be positioned at a predetermined distance from the first position and the screen may be centered at and perpendicular to a sightline of the user 105 when the user 105 faces forward. The first computing device 103 may render a second avatar representing a second user 105 that is also participating in the virtual digital content co-experiencing event at a second position, where a spatial relationship between the first position and the second position may be received from a computing device, and where the screen and the second position may not have the predefined spatial relationship. A second computing device 103 associated with a VR device for the second user 105 may determine a third position of the second user in the virtual environment rendered by the second computing device 103. The second computing device 103 may render a screen in the virtual environment rendered by the second computing device 103 such that the screen and the third position may have the predefined spatial relationship. The second computing device 103 may render a first avatar representing the first user 105 at a fourth position, where a spatial relationship between the fourth position and the third position in the virtual environment rendered by the second computing device 103 may be identical to the spatial relationship between the first position and the second position in the virtual environment rendered by the first computing device 103. The screen and the fourth position may not have the predefined spatial relationship in the virtual environment rendered by the second computing device 103.
Although this disclosure describes providing a distortion-free viewing position to a user in a virtual environment for co-experiencing digital content in a particular manner, this disclosure contemplates providing a distortion-free viewing position to a user in a virtual environment for co-experiencing digital content in any suitable manner.
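The reciprocity described in this paragraph, in which each device anchors the screen at its own user's seat while preserving a shared seating layout, can be checked with a small sketch. The seating table, names, and coordinates below are illustrative assumptions, not part of the disclosure:

```python
def avatar_offset_on(local_user, remote_user, shared_seats):
    """Offset from `local_user` to `remote_user`, derived from a single
    shared table of seat positions so every device agrees on the
    relative layout regardless of where it anchors its own screen."""
    return tuple(r - l for l, r in zip(shared_seats[local_user],
                                       shared_seats[remote_user]))

# One shared seating layout (meters along the seat, depth).
seats = {"alice": (0.0, 0.0), "bob": (1.2, 0.0)}

# On Alice's device, Bob's avatar appears 1.2 m to Alice's right ...
alice_view = avatar_offset_on("alice", "bob", seats)

# ... and on Bob's device, Alice's avatar appears 1.2 m to Bob's left:
# the spatial relationship between the two avatars is identical in both
# renderings, even though each device places the screen in front of its
# own user.
bob_view = avatar_offset_on("bob", "alice", seats)
```

The two views are exact negations of one another, which is the identity between the first/second and third/fourth position relationships stated above.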
[0017] In particular embodiments, a first computing device 103 connected to a virtual reality (VR) device may be associated with a first user 105. The first computing device 103 may receive an invitation to a virtual digital content co-experiencing event from a control computing device. A user 105 may want to have the virtual digital content co-experiencing event with one or more other users 105. The user 105 may initiate sending invitations to one or more computing devices 103 associated with the one or more other users 105. The one or more computing devices 103 may be connected to respective VR devices. The first computing device 103 associated with the first user 105 may be one of the one or more computing devices. FIG. 2 illustrates example interactions between a control computing device and a computing device connected to a VR device. The computing device 103 connected to a VR device may receive an invitation 210 from a control computing device 201. As an example and not by way of limitation, Alice may want to have a watching party for a World Cup match with her friends. Alice may cause a system to invite Bob, Charles, David, and Esther to a virtual co-experiencing event. A computing device 103 associated with Bob may receive the invitation 210 to the event from a control computing device 201. The computing devices 103 associated with Charles, David and Esther may also receive the invitation 210. Although this disclosure describes receiving an invitation to a virtual digital content co-experiencing event in a particular manner, this disclosure contemplates receiving an invitation to a virtual digital content co-experiencing event in any suitable manner.
[0018] In particular embodiments, the first computing device 103 may, in response to the invitation, connect to a virtual session for co-experiencing digital media content with one or more other users 105 in a virtual reality environment. To connect to the virtual session, the first computing device may send a join request 220 to the control computing device 201. The join request 220 may comprise an identifier of the first user 105 and an identifier for a first avatar selected by the first user. As an example and not by way of limitation, continuing with a prior example, the computing device 103 associated with Bob may present a message indicating that an invitation for a co-experiencing event from Alice has arrived. If Bob accepts the invitation by clicking an “Accept” button on the screen, the computing device 103 associated with Bob may ask Bob to select one of a plurality of avatars that may represent Bob during a virtual session for the co-experiencing. The computing device 103 associated with Bob may have received the plurality of avatars from the control computing device 201. The computing device 103 associated with Bob may connect to the virtual session by sending a join request 220 to the control computing device 201. The join request 220 may comprise an identifier for Bob and an identifier for the avatar that Bob has selected. As another example and not by way of limitation, continuing with a prior example, the computing device 103 associated with Charles may be configured to accept any invitation for a co-experiencing event. On receiving the invitation, the computing device 103 associated with Charles may send a join request 220 to the control computing device 201 without acquiring a confirmation from Charles. The computing device 103 associated with Charles may use a pre-determined avatar to represent Charles during the virtual session. The join request 220 may comprise an identifier for Charles and an identifier for the pre-determined avatar.
Although this disclosure describes joining a virtual session for a virtual co-experiencing event in a particular manner, this disclosure contemplates joining a virtual session for a virtual co-experiencing event in any suitable manner.
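The join request 220 described above carries two identifiers: one for the joining user and one for the selected avatar. The sketch below shows one way such a message could be serialized; the JSON wire format and the field names are illustrative assumptions, not the disclosed protocol:

```python
import json

def make_join_request(user_id, avatar_id):
    """Build a join request carrying the two identifiers named in the
    disclosure: the joining user and the avatar chosen (or pre-determined)
    to represent that user during the virtual session."""
    return json.dumps({"type": "join_request",
                       "user_id": user_id,
                       "avatar_id": avatar_id})

# E.g., Bob joining with the avatar he selected from the plurality
# received from the control computing device.
request = make_join_request("bob", "avatar_7")
```

A device configured to auto-accept invitations, like Charles's in the example above, would send the same message immediately on receipt of the invitation, substituting its pre-determined avatar identifier.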