Facebook Patent | Effective streaming of augmented-reality data from third-party systems

Publication Number: 20210097762

Publication Date: 2021-04-01

Applicant: Facebook

Abstract

In one embodiment, a method includes receiving an augmented-reality object and an associated display rule from each of a plurality of third-party systems, receiving one or more signals associated with a current view of an environment of a first user from a client system associated with the first user, selecting at least one of the augmented-reality objects received from the plurality of third-party systems based on the one or more signals and the display rule associated with the selected augmented-reality object, and sending instructions for presenting the selected augmented-reality object with the current view of the environment to the client system.

Claims

  1. A method comprising, by one or more computing systems: receiving, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule; receiving, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user; selecting at least one of the augmented-reality objects received from the plurality of third-party systems based on (1) the one or more signals, (2) a way the first user is likely to interact with the selected augmented-reality object, and (3) the display rule associated with the selected augmented-reality object; and sending, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment.

  2. The method of claim 1, wherein the one or more signals comprise one or more of: location information of the environment; social graph information associated with the environment; social graph information associated with the first user; contextual information associated with the environment; or time information.

  3. The method of claim 1, wherein each of the plurality of third-party systems is associated with a third-party content provider.

  4. The method of claim 3, wherein each third-party content provider is registered to the one or more computing systems.

  5. The method of claim 1, further comprising: generating, for each of the plurality of third-party systems, a declarative model; and receiving, via the declarative model from the corresponding third-party system, one or more preferences for one or more types of augmented-reality objects.

  6. The method of claim 5, wherein selecting the at least one of the augmented-reality objects received from the plurality of third-party systems is further based on the one or more preferences received from each third-party system.

  7. The method of claim 1, further comprising: generating, for at least one of the plurality of third-party systems, a discovery model; and sending, to the client system, a prompt via the discovery model, wherein the prompt comprises an executable link for installing a third-party application associated with the at least one third-party system.

  8. The method of claim 1, further comprising: receiving, from the client system, one or more user interactions with the selected augmented-reality object from the first user.

  9. The method of claim 1, wherein the augmented-reality object comprises one or more of an interactive digital element, a visual overlay, or a sensory projection.

  10. One or more computer-readable non-transitory storage media embodying software that is operable when executed to: receive, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule; receive, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user; select at least one of the augmented-reality objects received from the plurality of third-party systems based on (1) the one or more signals, (2) a way the first user is likely to interact with the selected augmented-reality object, and (3) the display rule associated with the selected augmented-reality object; and send, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment.

  11. The media of claim 10, wherein the one or more signals comprise one or more of: location information of the environment; social graph information associated with the environment; social graph information associated with the first user; contextual information associated with the environment; or time information.

  12. The media of claim 10, wherein each of the plurality of third-party systems is associated with a third-party content provider.

  13. The media of claim 12, wherein each third-party content provider is registered to the one or more computing systems.

  14. The media of claim 10, wherein the software is further operable when executed to: generate, for each of the plurality of third-party systems, a declarative model; and receive, via the declarative model from the corresponding third-party system, one or more preferences for one or more types of augmented-reality objects.

  15. The media of claim 14, wherein selecting the at least one of the augmented-reality objects received from the plurality of third-party systems is further based on the one or more preferences received from each third-party system.

  16. The media of claim 10, wherein the software is further operable when executed to: generate, for at least one of the plurality of third-party systems, a discovery model; and send, to the client system, a prompt via the discovery model, wherein the prompt comprises an executable link for installing a third-party application associated with the at least one third-party system.

  17. The media of claim 10, wherein the software is further operable when executed to: receive, from the client system, one or more user interactions with the selected augmented-reality object from the first user.

  18. The media of claim 10, wherein the augmented-reality object comprises one or more of an interactive digital element, a visual overlay, or a sensory projection.

  19. A system comprising: one or more processors; and a non-transitory memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to: receive, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule; receive, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user; select at least one of the augmented-reality objects received from the plurality of third-party systems based on (1) the one or more signals, (2) a way the first user is likely to interact with the selected augmented-reality object, and (3) the display rule associated with the selected augmented-reality object; and send, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment.

  20. The system of claim 19, wherein the one or more signals comprise one or more of: location information of the environment; social graph information associated with the environment; social graph information associated with the first user; contextual information associated with the environment; or time information.

Description

TECHNICAL FIELD

[0001] This disclosure generally relates to virtual reality and augmented reality.

BACKGROUND

[0002] Virtual reality (VR) is an experience taking place within a computer-generated, immersive environment that can be similar to or completely different from the real world. Applications of virtual reality include entertainment (e.g., gaming) and education (e.g., medical or military training). Other distinct types of VR-style technology include augmented reality and mixed reality. Currently, standard virtual-reality systems use either virtual-reality headsets or multi-projected environments to generate realistic images, sounds, and other sensations that simulate a user’s physical presence in a virtual environment. Virtual reality typically incorporates auditory and video feedback but may also allow other types of sensory and force feedback through haptic technology.

[0003] Augmented reality (AR) is an interactive experience of a real-world environment in which the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory. The overlaid sensory information can be constructive (i.e., additive to the natural environment) or destructive (i.e., masking the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. Augmented reality is used to enhance natural environments or situations and to offer perceptually enriched experiences. With the help of advanced AR technologies (e.g., adding computer vision and object recognition), information about the user’s surrounding real world becomes interactive and digitally manipulable.

SUMMARY OF PARTICULAR EMBODIMENTS

[0004] In particular embodiments, a reality-stream server may efficiently stream augmented-reality data to a client system, such as AR glasses, for different applications using reality-stream. The generation of the augmented-reality data may be based on contextual information associated with the client system. In particular embodiments, different applications may be useful for a user to explore his/her surroundings. However, installing many applications on a client system such as AR glasses may be impractical, as these client systems may operate with limited computing power that cannot support many applications running on them. To address this issue, the embodiments disclosed herein may enable application providers, in other words third-party content providers, to register with the reality-stream server for streaming services, which may also enrich the user's experience with the client system, e.g., AR glasses. The user may not need to install those applications. Instead, when the reality-stream server obtains information such as location and context via the AR glasses, the server may determine what information associated with the applications may be useful for the user and then stream augmented-reality data based on that information to the user. As a result, the user may enjoy augmented-reality data associated with different applications without the burden of increased computing demands on the client system. Although this disclosure describes streaming particular data via a particular system in a particular manner, this disclosure contemplates streaming any suitable data via any suitable system in any suitable manner.

[0005] In particular embodiments, the reality-stream server may receive, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule. The reality-stream server may then receive, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user. In particular embodiments, the reality-stream server may select at least one of the augmented-reality objects received from the plurality of third-party systems based on the one or more signals and the display rule associated with the selected augmented-reality object. The reality-stream server may further send, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment.

[0006] Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented-reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

[0007] The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 illustrates an example diagram flow of streaming augmented-reality data for a user.

[0009] FIG. 2 illustrates an example method for streaming augmented-reality data.

[0010] FIG. 3 illustrates an example social graph.

[0011] FIG. 4 illustrates an example computer system.

DESCRIPTION OF EXAMPLE EMBODIMENTS

[0012] Effective Streaming of Augmented-Reality Data from Third-Party Systems

[0013] In particular embodiments, a reality-stream server may efficiently stream augmented-reality data to a client system, such as AR glasses, for different applications using reality-stream. The generation of the augmented-reality data may be based on contextual information associated with the client system. In particular embodiments, different applications may be useful for a user to explore his/her surroundings. However, installing many applications on a client system such as AR glasses may be impractical, as these client systems may operate with limited computing power that cannot support many applications running on them. To address this issue, the embodiments disclosed herein may enable application providers, in other words third-party content providers, to register with the reality-stream server for streaming services, which may also enrich the user's experience with the client system, e.g., AR glasses. The user may not need to install those applications. Instead, when the reality-stream server obtains information such as location and context via the AR glasses, the server may determine what information associated with the applications may be useful for the user and then stream augmented-reality data based on that information to the user. As a result, the user may enjoy augmented-reality data associated with different applications without the burden of increased computing demands on the client system. Although this disclosure describes streaming particular data via a particular system in a particular manner, this disclosure contemplates streaming any suitable data via any suitable system in any suitable manner.

[0014] In particular embodiments, the reality-stream server may receive, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule. The reality-stream server may then receive, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user. In particular embodiments, the reality-stream server may select at least one of the augmented-reality objects received from the plurality of third-party systems based on the one or more signals and the display rule associated with the selected augmented-reality object. The reality-stream server may further send, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment.

[0015] FIG. 1 illustrates an example diagram flow 100 of streaming augmented-reality data for a user. In particular embodiments, a user may wear AR/VR glasses 105 as a smart client system to get useful data. The AR/VR glasses 105 may capture one or more signals, in other words sensor stream 110 (e.g., pictures, videos, or audio), based on one or more sensors. The sensor stream 110 may be sent to an event processing module 115. The event processing module 115 may analyze what events are associated with the sensor stream 110, e.g., arriving at a restaurant. The event processing module 115 may additionally filter or transform the sensor stream to extract key information such as location, objects, people, faces, etc., that are associated with the sensor stream 110. The event processing module 115 may further send the filtered/transformed sensor stream 120 to a stream processing service module 125. During the event processing, the current status of the reality stream generation may be stored in a stage unit 130. The stream processing service module 125 may communicate with a cloud computing platform 135 to retrieve relevant streaming data, i.e., augmented-reality objects, for the user. The augmented-reality objects may be provided by a plurality of third-party systems. In particular embodiments, each of the plurality of third-party systems may be associated with a third-party content provider. Each third-party content provider may be registered to the one or more computing systems, i.e., the reality-stream server disclosed herein. The cloud computing platform 135 may have information of which third-party content providers have registered with the reality-stream server to stream their augmented-reality data to end users. The cloud computing platform 135 may also have other information, such as social graphs, which may be useful for determining what event stream data should be sent to the user.
Based on the accessed information from the cloud computing platform 135, the stream processing service module 125 may generate event stream 140, which may include reality augmentation and people information. Such event stream 140 may be sent back to the event processing module 115. Upon receiving the event stream 140, the event processing module 115 may process it so that the event stream 140 can be displayed to the user via the AR/VR glasses 105 effectively.
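The FIG. 1 flow above can be sketched in a few lines of Python. This is purely illustrative: the patent specifies no data structures, so the frame type, the per-location provider catalog, and the module functions are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sensor frame produced by the AR/VR glasses (105).
@dataclass
class SensorFrame:
    location: str                              # e.g., from GPS / scene recognition
    objects: list = field(default_factory=list)

def event_processing(frame):
    # Event processing module 115: filter/transform the sensor stream 110
    # down to key information (location, objects, people, faces, ...).
    return {"location": frame.location, "objects": frame.objects}

def stream_processing(filtered, registered_providers):
    # Stream processing service module 125: match the filtered stream 120
    # against content from registered third-party providers (via the cloud
    # computing platform 135) and build the event stream 140.
    return [obj
            for provider in registered_providers
            for obj in provider.get(filtered["location"], [])]

# Usage: one registered provider keys its AR objects by location.
providers = [{"restaurant": ["famous-dish-overlay"]}]
frame = SensorFrame(location="restaurant", objects=["menu", "table"])
event_stream = stream_processing(event_processing(frame), providers)
```

The event stream would then flow back through the event processing module for display on the glasses.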

[0016] In particular embodiments, the reality-stream server may determine what augmented-reality data may be most relevant to the user based on the one or more signals, including location, time, social graph, available content, the way the user may interact with the augmented-reality data, etc. As a result, it may stream augmented-reality data to the AR glasses without overwhelming the user, showing only the most relevant data to enrich the user experience efficiently. As an example and not by way of limitation, when a user wearing AR glasses gets close to a restaurant, the reality-stream server may stream an augmented-reality object, such as a famous dish of this restaurant, to the AR glasses. The augmented-reality object of the dish may be provided by a third-party system associated with a third-party content provider. Instead of having a corresponding third-party application running on the client system with expensive computations, the reality-stream server may showcase the augmentation via the AR glasses as long as the third-party content provider is registered with the server. As another example and not by way of limitation, the reality-stream server may leverage social context, e.g., a dish the user’s friend has shared, to stream an augmented-reality object of the shared dish to the AR glasses of the user.

[0017] In particular embodiments, the one or more signals may comprise one or more of location information of the environment, social graph information associated with the environment, social graph information associated with the first user, contextual information associated with the environment, or time information. As an example and not by way of limitation, the location information of the environment may indicate the user is at a movie theater, based on which the server may select a trailer of a movie now playing as the augmented-reality object. As another example and not by way of limitation, the environment may be Times Square and the social graph information associated with the environment may indicate most people took pictures of Times Square. Accordingly, the server may select a picture of Times Square as the augmented-reality object. As another example and not by way of limitation, the user may be at a shopping mall with many merchants, but the social graph information associated with the user indicates that the user checked in at a particular bakery many times before. As a result, the server may select an augmented-reality object, such as a picture of a new cake provided by this bakery, from the augmented-reality objects provided by all the merchants in the shopping mall. As another example and not by way of limitation, the user may be at Stanford University. The social graph information associated with the user may indicate the user has a high social-graph affinity with Stanford Law School (e.g., the user attended the Law School). As a result, the server may select an augmented-reality object such as a picture of a newly published book by Stanford Law School. As another example and not by way of limitation, the user may be at a museum and the contextual information associated with the environment may indicate that the museum is having a temporary exhibition. Accordingly, the server may select augmented-reality objects such as a virtual tour of the temporary exhibition. As another example and not by way of limitation, the user may be at the office and the time information may indicate that it is early morning. Correspondingly, the server may select augmented-reality objects such as calendar information and a work schedule.
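One simple way to realize this kind of signal-driven selection is tag matching, sketched below. The flat tags and the one-point-per-match scoring are assumptions for illustration, not the patent's actual ranking method.

```python
# Hypothetical scoring: each candidate AR object carries a set of tags,
# and each matching signal contributes one point.
def score(candidate_tags, signals):
    return sum(1 for s in signals if s in candidate_tags)

def select(candidates, signals):
    # Pick the AR object whose tags best match the current signals.
    return max(candidates, key=lambda c: score(c["tags"], signals))

candidates = [
    {"name": "movie-trailer", "tags": {"movie_theater", "evening"}},
    {"name": "bakery-cake", "tags": {"shopping_mall", "bakery_checkins"}},
]
# Signals: the user is at a shopping mall and has repeatedly checked in
# at the bakery (location plus social-graph signals, as in the examples).
signals = {"shopping_mall", "bakery_checkins"}
best = select(candidates, signals)
```

A production ranker would weight signal types differently (e.g., social-graph affinity versus raw location), but the shape of the decision is the same.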

[0018] In particular embodiments, the augmented-reality object may comprise one or more of an interactive digital element, a visual overlay, or a sensory projection. The augmented-reality object may be two-dimensional (2D) or three-dimensional (3D). The augmented-reality object may even include animated objects. In particular embodiments, the augmented-reality object may be augmented onto real physical objects. The reality-stream server may further receive, from the client system, one or more user interactions with the selected augmented-reality object from the first user.
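The properties listed above (element type, 2D/3D, animation, anchoring to a physical object) suggest a small schema for an augmented-reality object. The field names below are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for an augmented-reality object: an interactive
# digital element, visual overlay, or sensory projection; 2D or 3D;
# optionally animated and anchored onto a real physical object.
@dataclass
class ARObject:
    kind: str                      # "interactive", "overlay", or "projection"
    dimensions: str                # "2d" or "3d"
    animated: bool = False
    anchor: Optional[str] = None   # id of the real physical object, if any

# A 3D animated overlay augmented onto a physical storefront.
dish = ARObject(kind="overlay", dimensions="3d", animated=True,
                anchor="restaurant-storefront")
```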

[0019] In particular embodiments, client systems like AR glasses usually do not have enough computation power for multiple applications to efficiently execute the tasks of generating augmented-reality data. However, it is important to provide augmented-reality data to the client system quickly for a good user experience. Using the reality-stream server to stream augmented-reality data to the AR glasses based on location and context (e.g., social context) captured by the user's AR glasses addresses this limitation well, as no applications are required to run on the AR glasses. As an example and not by way of limitation, a user may be walking in a particular direction. By using the reality-stream server, the augmented-reality data may be pre-loaded in a radius around the user so that when the user physically gets to a certain place, the augmented-reality data may be displayed via the AR glasses immediately.
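A radius-based pre-load can be sketched as a distance filter over a catalog of geolocated AR objects. The planar (equirectangular) distance approximation and the 200 m default radius are assumptions; the patent does not specify either.

```python
import math

def within_radius(user, place, radius_m):
    # Equirectangular approximation, adequate for small distances.
    # Points are (lat, lon) in degrees.
    lat1, lon1 = user
    lat2, lon2 = place
    dy = (lat2 - lat1) * 110_540                              # m per degree latitude
    dx = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
    return math.hypot(dx, dy) <= radius_m

def preload(user_pos, catalog, radius_m=200):
    # Pre-load every AR object whose place falls inside the radius, so it
    # can be displayed immediately when the user physically arrives.
    return [obj for pos, obj in catalog
            if within_radius(user_pos, pos, radius_m)]

catalog = [((37.7749, -122.4194), "cafe-overlay"),
           ((37.8000, -122.5000), "distant-overlay")]
ready = preload((37.7750, -122.4195), catalog)
```

As the user moves, the server would re-run the filter and stream newly in-range objects ahead of arrival.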

[0020] In particular embodiments, the reality-stream server may generate, for each of the plurality of third-party systems, a declarative model. The reality-stream server may then receive, via the declarative model from the corresponding third-party system, one or more preferences for one or more types of augmented-reality objects. In particular embodiments, selecting the at least one of the augmented-reality objects received from the plurality of third-party systems may be further based on the one or more preferences received from each third-party system. As a result, the reality-stream server may only stream the data a third-party system declared to the user. As an example and not by way of limitation, Instagram may declare to stream augmented-reality data if the user is tagged in a picture/video posted by the user’s friend.
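A declarative model of this kind can be represented as a per-provider predicate over the streaming context, as sketched below. The provider names and context keys are hypothetical (the Instagram tagging condition mirrors the example above).

```python
# Hypothetical declarative model: each registered third-party system
# states, up front, the condition under which its AR objects should stream.
declarations = {
    "instagram": lambda ctx: ctx.get("tagged_by_friend", False),
    "eatery-app": lambda ctx: ctx.get("near_restaurant", False),
}

def eligible(ctx):
    # Only stream data a third-party system has declared for this context;
    # everything else is filtered out server-side before selection.
    return sorted(name for name, rule in declarations.items() if rule(ctx))

# The user was tagged in a friend's post but is not near a restaurant.
who = eligible({"tagged_by_friend": True})
```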

[0021] In particular embodiments, the reality-stream server may generate, for at least one of the plurality of third-party systems, a discovery model. The reality-stream server may then send, to the client system, a prompt via the discovery model. The prompt may comprise an executable link for installing a third-party application associated with the at least one third-party system. As an example and not by way of limitation, without installing a particular gaming application, a user may not see a cartoon character from the game in augmented reality. The discovery model may enable the user to see such a cartoon character via the AR glasses when his/her friend is playing this game, even if the user did not install the application. The user may be prompted to download and install the application if he/she wants to play the game. As a result, the discovery model may provide a user with a whole different way of discovering content and applications.
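The discovery flow above reduces to a simple check: if a friend is using an application the user has not installed, surface its content and offer an install prompt. The link format below is a placeholder, not a real install URL.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscoveryPrompt:
    app_name: str
    install_link: str  # executable link for installing the application

def maybe_prompt(installed, friend_app) -> Optional[DiscoveryPrompt]:
    # If a friend is using an app the user lacks, the discovery model shows
    # the app's AR content and offers this install prompt alongside it.
    if friend_app not in installed:
        return DiscoveryPrompt(friend_app,
                               f"https://example.com/install/{friend_app}")
    return None

prompt = maybe_prompt(installed={"maps"}, friend_app="ar-game")
```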

[0022] FIG. 2 illustrates an example method 200 for streaming augmented-reality data. The method may begin at step 210, where the reality-stream server may receive, from each of a plurality of third-party systems, an augmented-reality object and an associated display rule. At step 220, the reality-stream server may receive, from a client system associated with a first user, one or more signals associated with a current view of an environment of the first user. At step 230, the reality-stream server may select at least one of the augmented-reality objects received from the plurality of third-party systems based on the one or more signals and the display rule associated with the selected augmented reality object. At step 240, the reality-stream server may send, to the client system, instructions for presenting the selected augmented-reality object with the current view of the environment. Particular embodiments may repeat one or more steps of the method of FIG. 2, where appropriate. Although this disclosure describes and illustrates particular steps of the method of FIG. 2 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of FIG. 2 occurring in any suitable order. Moreover, although this disclosure describes and illustrates an example method for streaming augmented-reality data including the particular steps of the method of FIG. 2, this disclosure contemplates any suitable method for streaming augmented-reality data including any suitable steps, which may include all, some, or none of the steps of the method of FIG. 2, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method of FIG. 2, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method of FIG. 2.
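Steps 210-240 of method 200 can be condensed into one server-side function. Modeling each display rule as a predicate over the signals is an assumption of this sketch; the claims leave the rule format open.

```python
def stream_ar(submissions, signals):
    # Steps 210/220: the third-party (object, display_rule) pairs and the
    # client system's signals are both in hand at this point.
    # Step 230: select objects whose display rule matches the signals.
    selected = [obj for obj, rule in submissions if rule(signals)]
    # Step 240: return presentation instructions for the client system.
    return [{"present": obj, "view": "current"} for obj in selected]

submissions = [("dish-overlay", lambda s: "restaurant" in s),
               ("movie-trailer", lambda s: "theater" in s)]
instructions = stream_ar(submissions, signals={"restaurant"})
```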

[0023] Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented-reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

Social Graphs

[0024] FIG. 3 illustrates example social graph 300. In particular embodiments, there may be one or more social graphs 300 stored in one or more data stores. In particular embodiments, social graph 300 may include multiple nodes (which may include multiple user nodes 302 or multiple concept nodes 304) and multiple edges 306 connecting the nodes. Each node may be associated with a unique entity (i.e., user or concept), each of which may have a unique identifier (ID), such as a unique number or username. Example social graph 300 illustrated in FIG. 3 is shown, for didactic purposes, in a two-dimensional visual map representation. In particular embodiments, a reality-stream server, a client system, or a third-party system may access social graph 300 and related social-graph information for suitable applications. The nodes and edges of social graph 300 may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph 300.
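As a concrete (and deliberately minimal) sketch, the nodes and edges described above can be stored as plain data objects with a small queryable index over edges. The node IDs, names, and edge types here are invented for illustration.

```python
# Social graph 300: user nodes 302, concept nodes 304, edges 306.
nodes = {
    "u1": {"type": "user", "name": "Alice"},    # user node 302
    "u2": {"type": "user", "name": "Bob"},      # user node 302
    "c1": {"type": "concept", "name": "Bakery"},  # concept node 304
}
edges = [("u1", "u2", "friend"), ("u1", "c1", "checked_in")]

def neighbors(node_id, edge_type=None):
    # Queryable index over edges 306: every node connected to node_id,
    # optionally restricted to a single edge type.
    return [b for a, b, t in edges
            if a == node_id and (edge_type is None or t == edge_type)]
```

A reality-stream server could use such a query to pull check-in or friendship signals when selecting augmented-reality objects.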

[0025] In particular embodiments, a user node 302 may correspond to a user of an online social network. As an example and not by way of limitation, a user may be an individual (human user), an entity (e.g., an enterprise, business, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with or over the online social network. In particular embodiments, when a user registers for an account with the online social network, a social-networking system may create a user node 302 corresponding to the user, and store the user node 302 in one or more data stores. Users and user nodes 302 described herein may, where appropriate, refer to registered users and user nodes 302 associated with registered users. In addition or as an alternative, users and user nodes 302 described herein may, where appropriate, refer to users that have not registered with the social-networking system. In particular embodiments, a user node 302 may be associated with information provided by a user or information gathered by various systems, including the social-networking system. As an example and not by way of limitation, a user may provide his or her name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. In particular embodiments, a user node 302 may be associated with one or more data objects corresponding to information associated with a user. In particular embodiments, a user node 302 may correspond to one or more webpages.

[0026] In particular embodiments, a concept node 304 may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with social-networking system or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within social-networking system or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; an object in an augmented/virtual reality environment; another suitable concept; or two or more such concepts. A concept node 304 may be associated with information of a concept provided by a user or information gathered by various systems, including social-networking system. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node 304 may be associated with one or more data objects corresponding to information associated with concept node 304. In particular embodiments, a concept node 304 may correspond to one or more webpages.

[0027] In particular embodiments, a node in social graph 300 may represent or be represented by a webpage (which may be referred to as a “profile page”). Profile pages may be hosted by or accessible to a social-networking system. Profile pages may also be hosted on third-party websites associated with a third-party system. As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage and the profile page may correspond to a particular concept node 304. Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 302 may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. As another example and not by way of limitation, a concept node 304 may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 304.

[0028] In particular embodiments, a concept node 304 may represent a third-party webpage or resource hosted by a third-party system. The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or other interactable object (which may be implemented, for example, in JavaScript, AJAX, or PHP code) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as “like,” “check-in,” “eat,” “recommend,” or another suitable action or activity. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., “check-in”), causing a client system to send to social-networking system a message indicating the user’s action. In response to the message, social-networking system may create an edge (e.g., a check-in-type edge) between a user node 302 corresponding to the user and a concept node 304 corresponding to the third-party webpage or resource and store edge 306 in one or more data stores.

[0029] In particular embodiments, a pair of nodes in social graph 300 may be connected to each other by one or more edges 306. An edge 306 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 306 may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a “friend” of the first user. In response to this indication, social-networking system may send a “friend request” to the second user. If the second user confirms the “friend request,” social-networking system may create an edge 306 connecting the first user’s user node 302 to the second user’s user node 302 in social graph 300 and store edge 306 as social-graph information in one or more data stores. In the example of FIG. 3, social graph 300 includes an edge 306 indicating a friend relation between user nodes 302 of user “A” and user “B” and an edge indicating a friend relation between user nodes 302 of user “C” and user “B.” Although this disclosure describes or illustrates particular edges 306 with particular attributes connecting particular user nodes 302, this disclosure contemplates any suitable edges 306 with any suitable attributes connecting user nodes 302. As an example and not by way of limitation, an edge 306 may represent a friendship, family relationship, business or employment relationship, fan relationship (including, e.g., liking, etc.), follower relationship, visitor relationship (including, e.g., accessing, viewing, checking-in, sharing, etc.), subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships.
Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph 300 by one or more edges 306. The degree of separation between two objects represented by two nodes, respectively, is a count of edges in a shortest path connecting the two nodes in the social graph 300. As an example and not by way of limitation, in the social graph 300, the user node 302 of user “C” is connected to the user node 302 of user “A” via multiple paths including, for example, a first path directly passing through the user node 302 of user “B,” a second path passing through the concept node 304 of company “Acme” and the user node 302 of user “D,” and a third path passing through the user nodes 302 and concept nodes 304 representing school “Stanford,” user “G,” company “Acme,” and user “D.” User “C” and user “A” have a degree of separation of two because the shortest path connecting their corresponding nodes (i.e., the first path) includes two edges 306.
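The degree-of-separation computation described above is a shortest-path edge count, which can be sketched as a breadth-first search. This is an illustrative sketch; the adjacency-map representation is an assumption for illustration, not part of the disclosure:

```python
from collections import deque


def degree_of_separation(adjacency, start, goal):
    """Count the edges on a shortest path between two nodes.

    adjacency maps a node ID to the set of its directly connected node IDs.
    Returns None if the two nodes are not connected at all.
    """
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])  # (node, distance from start)
    while queue:
        node, dist = queue.popleft()
        for neighbor in adjacency.get(node, ()):
            if neighbor == goal:
                return dist + 1
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None
```

Using the FIG. 3 example, the shortest path from user “C” to user “A” passes through user “B,” so their degree of separation is two.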

[0030] In particular embodiments, an edge 306 between a user node 302 and a concept node 304 may represent a particular action or activity performed by a user associated with user node 302 toward a concept associated with a concept node 304. As an example and not by way of limitation, as illustrated in FIG. 3, a user may “like,” “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” a concept, each of which may correspond to an edge type or subtype. A concept-profile page corresponding to a concept node 304 may include, for example, a selectable “check in” icon (such as, for example, a clickable “check in” icon) or a selectable “add to favorites” icon. After a user clicks one of these icons, social-networking system may create a “favorite” edge or a “check in” edge corresponding to the respective action. As another example and not by way of limitation, a user (user “C”) may listen to a particular song using a particular application (e.g., an online music application). In this case, social-networking system may create a “listened” edge 306 and a “used” edge (as illustrated in FIG. 3) between user nodes 302 corresponding to the user and concept nodes 304 corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, social-networking system may create a “played” edge 306 (as illustrated in FIG. 3) between concept nodes 304 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, “played” edge 306 corresponds to an action performed by an external application (e.g., the online music application) on an external audio file (the song).
Although this disclosure describes particular edges 306 with particular attributes connecting user nodes 302 and concept nodes 304, this disclosure contemplates any suitable edges 306 with any suitable attributes connecting user nodes 302 and concept nodes 304. Moreover, although this disclosure describes edges between a user node 302 and a concept node 304 representing a single relationship, this disclosure contemplates edges between a user node 302 and a concept node 304 representing one or more relationships. As an example and not by way of limitation, an edge 306 may represent both that a user likes and has used a particular concept. Alternatively, another edge 306 may represent each type of relationship (or multiples of a single relationship) between a user node 302 and a concept node 304 (as illustrated in FIG. 3 between user node 302 for user “E” and concept node 304 for “Online Music App”).

[0031] In particular embodiments, social-networking system may create an edge 306 between a user node 302 and a concept node 304 in social graph 300. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user’s client system) may indicate that he or she likes the concept represented by the concept node 304 by clicking or selecting a “Like” icon, which may cause the user’s client system to send to social-networking system a message indicating the user’s liking of the concept associated with the concept-profile page. In response to the message, social-networking system may create an edge 306 between user node 302 associated with the user and concept node 304, as illustrated by “like” edge 306 between the user and concept node 304. In particular embodiments, social-networking system may store an edge 306 in one or more data stores. In particular embodiments, an edge 306 may be automatically formed by social-networking system in response to a particular user action. As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 306 may be formed between user node 302 corresponding to the first user and concept nodes 304 corresponding to those concepts. Although this disclosure describes forming particular edges 306 in particular manners, this disclosure contemplates forming any suitable edges 306 in any suitable manner.
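The automatic edge formation described above, together with the earlier point that one user node and one concept node may carry multiple relationships, can be sketched as follows. The dictionary-of-sets edge store and the function name are assumptions made for illustration only:

```python
def record_action(edge_store, user_id, concept_id, action):
    """Form an edge in response to a user action (e.g., "like", "listened",
    "watched").

    edge_store maps a (user_id, concept_id) pair to the set of edge types
    connecting that pair of nodes, so a single pair may carry multiple
    relationships (e.g., both "like" and "used").
    """
    edge_store.setdefault((user_id, concept_id), set()).add(action)
    return edge_store[(user_id, concept_id)]
```

For example, if user “E” both likes and uses the “Online Music App,” two calls accumulate two edge types between the same pair of nodes.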

Social Graph Affinity and Coefficient

[0032] In particular embodiments, social-networking system may determine the social-graph affinity (which may be referred to herein as “affinity”) of various social-graph entities for each other. Affinity may represent the strength of a relationship or level of interest between particular objects associated with the online social network, such as users, concepts, content, actions, advertisements, other objects associated with the online social network, or any suitable combination thereof. Affinity may also be determined with respect to objects associated with third-party systems or other suitable systems. An overall affinity for a social-graph entity for each user, subject matter, or type of content may be established. The overall affinity may change based on continued monitoring of the actions or relationships associated with the social-graph entity. Although this disclosure describes determining particular affinities in a particular manner, this disclosure contemplates determining any suitable affinities in any suitable manner.

[0033] In particular embodiments, social-networking system may measure or quantify social-graph affinity using an affinity coefficient (which may be referred to herein as “coefficient”). The coefficient may represent or quantify the strength of a relationship between particular objects associated with the online social network. The coefficient may also represent a probability or function that measures a predicted probability that a user will perform a particular action based on the user’s interest in the action. In this way, a user’s future actions may be predicted based on the user’s prior actions, where the coefficient may be calculated at least in part on the history of the user’s actions. Coefficients may be used to predict any number of actions, which may be within or outside of the online social network. As an example and not by way of limitation, these actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of observation actions, such as accessing or viewing profile pages, media, or other suitable content; various types of coincidence information about two or more social-graph entities, such as being in the same group, tagged in the same photograph, checked-in at the same location, or attending the same event; or other suitable actions. Although this disclosure describes measuring affinity in a particular manner, this disclosure contemplates measuring affinity in any suitable manner.
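One way to sketch such a coefficient is as a weighted sum over the user's prior actions, squashed into a bounded range. The per-action weights and the logistic squashing here are illustrative assumptions, not the method claimed in the disclosure:

```python
import math


def affinity_coefficient(action_history, action_weights):
    """Quantify affinity between a user and a target social-graph entity.

    action_history maps an action type (e.g., "message", "view") to how many
    times the user performed it toward the target; action_weights maps each
    action type to its relative importance. The weighted sum is squashed
    into (0, 1) with a logistic function, so the result can be read as a
    predicted probability of future interaction.
    """
    score = sum(action_weights.get(action, 0.0) * count
                for action, count in action_history.items())
    return 1.0 / (1.0 + math.exp(-score))
```

With no prior actions the coefficient sits at 0.5; a history of heavily weighted actions pushes it toward 1.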

