
Facebook Patent | Systems And Methods To Present Reactions To Media Content In A Virtual Environment

Patent: Systems And Methods To Present Reactions To Media Content In A Virtual Environment

Publication Number: 20180189554

Publication Date: 2018-07-05

Applicants: Facebook

Abstract

Systems, methods, and non-transitory computer readable media are configured to receive a recording of an expression of a content provider in response to a digital environment. The expression can be based on at least one of gestures, body movement, speech, and sounds of the content provider. An animation can be created based on the recording. A reaction based on the animation can be presented to a user in the digital environment.

FIELD OF THE INVENTION

[0001] The present technology relates to virtual environments. More particularly, the present technology relates to techniques for presenting reactions to media content in virtual environments.

BACKGROUND

[0002] Users often utilize computing devices for a wide variety of purposes. Users can use their computing devices to, for example, interact with one another, access media content, share media content, and create media content. In some cases, media content can be provided by users of a social networking system. The media content can include one or a combination of, for example, text, images, videos, and audio. The media content may be published to the social networking system for consumption by others.

[0003] Under conventional approaches, media content provided through a social networking system can be accessed by users of the social networking system in various manners. In some cases, various media content can be provided to a user based on selections of the user or interests of the user as determined by the social networking system. In some instances, the user can provide information in response to media content accessed by the user.

SUMMARY

[0004] Various embodiments of the present technology can include systems, methods, and non-transitory computer readable media configured to receive a recording of an expression of a content provider in response to a digital environment. The expression can be based on at least one of gestures, body movement, speech, and sounds of the content provider. An animation can be created based on the recording. A reaction based on the animation can be presented to a user in the digital environment.

[0005] In some embodiments, the animation can comprise at least one of a coin or an avatar exhibiting motion that mirrors the expression of the content provider.

[0006] In some embodiments, the coin can comprise an identifying picture of the content provider and the avatar can comprise a generic sketch of at least a portion of a human figure.

[0007] In some embodiments, a form of the reaction to be presented to the user can be determined based on a type of the digital environment.

[0008] In some embodiments, the reaction can be associated with a time stamp relating to a portion of media content providing the digital environment. Playback of the reaction to the user can be automatically initiated in response to the portion of the media content being presented to the user.

[0009] In some embodiments, a plurality of reactions associated with a scene in the digital environment can be indicated to the user for selection by the user in response to the scene being presented to the user.

[0010] In some embodiments, a plurality of reactions can be ranked for potential presentation to the user in the digital environment. The plurality of reactions can be presented in rank order to the user.

[0011] In some embodiments, the digital environment can comprise at least one of a virtual reality (VR) environment, an augmented reality (AR) environment, or a mixed reality (MR) environment.

[0012] In some embodiments, the digital environment can be provided through media content presented through an interface of a computing device, the media content comprising at least one of a panoramic photo, a 360 photo, a photo sphere, a 360 video, a three-dimensional (3D) simulation, or a 3D animation.

[0013] In some embodiments, the digital environment can be provided through a viewfinder of a computing device.

[0014] It should be appreciated that many other features, applications, embodiments, and/or variations of the disclosed technology will be apparent from the accompanying drawings and from the following detailed description. Additional and/or alternative implementations of the structures, systems, non-transitory computer readable media, and methods described herein can be employed without departing from the principles of the disclosed technology.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG. 1 illustrates a system including an example content provision module, according to an embodiment of the present technology.

[0016] FIG. 2 illustrates an example reactions module, according to an embodiment of the present technology.

[0017] FIG. 3A illustrates an example first scenario, according to an embodiment of the present technology.

[0018] FIG. 3B illustrates an example second scenario, according to an embodiment of the present technology.

[0019] FIG. 4 illustrates an example first method, according to an embodiment of the present technology.

[0020] FIG. 5 illustrates an example second method, according to an embodiment of the present technology.

[0021] FIG. 6 illustrates a network diagram of an example system that can be utilized in various scenarios, according to an embodiment of the present technology.

[0022] FIG. 7 illustrates an example of a computer system that can be utilized in various scenarios, according to an embodiment of the present technology.

[0023] The figures depict various embodiments of the disclosed technology for purposes of illustration only, wherein the figures use like reference numerals to identify like elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the figures can be employed without departing from the principles of the disclosed technology described herein.

DETAILED DESCRIPTION

Animated Reactions in a Virtual Reality Environment

[0024] As mentioned, users often utilize computing devices for a wide variety of purposes. Users can use their computing devices to, for example, interact with one another, access media content, share media content, and create media content. In some cases, media content can be provided by users of a social networking system. The media content can include one or a combination of, for example, text, images, videos, and audio. The media content may be published to the social networking system for consumption by others.

[0025] Under conventional approaches, media content provided through a social networking system can be accessed by users of the social networking system in various manners. In some cases, various media content can be provided to a user based on selections of the user or interests of the user as determined by the social networking system. In some instances, the social networking system can present media content for the user in support of a digital environment. A digital environment can include any experience or environment provided to a user in which the user can access content and otherwise interact. In many instances, a content provider can be permitted to provide a response to accessed media content. In conventional approaches, the content provider can provide a response to accessed media content by posting a text-based message, such as a comment. While informative to some degree, a text-based message often fails to convey the full meaning and sentiment intended by the content provider who authored it. Relatedly, a text-based message often fails to generate enough interest or enthusiasm among the users who view it to engender full discussion about the media content. Accordingly, communications about media content can be undesirably muted in the social networking system.

[0026] An improved approach rooted in computer technology overcomes the foregoing and other disadvantages associated with conventional approaches specifically arising in the realm of computer technology. Systems, methods, and computer readable media of the present technology can allow a user of a social networking system, such as a content provider, to provide a reaction to a digital environment accessed by the content provider. The digital environment can be presented through media content. To create a reaction, the content provider can record his or her gestures, speech, body movement, and other expression in response to the media content. For example, the expression of the user can be recorded by a camera, a microphone, sensors, or other equipment through which the content provider can interact in the digital environment. Contextual data, such as a time stamp relating to a portion of the media content to which the reaction relates, can be associated with the reaction. In addition, the content provider can specify access rights to restrict access to the reaction to designated users. A user of the social networking system can potentially access the media content. User interactions with the media content can be monitored. When it is determined that the user has accessed the portion of the media content associated with the reaction, the reaction can be provided to the user if the user enjoys permission to access the reaction based on the access rights. In some instances, at or around the time the portion of the media content to which the reaction relates is provided to the user, the user can be automatically provided with the reaction or an option to access the reaction. The reaction can be presented as an overlay in the digital environment. The reaction can be provided in different forms. For example, the reaction can be presented as a moving “coin” that includes an image of a body portion of the content provider, such as a face. Movement of the coin can be animated to reflect the expression of the content provider when the reaction was created. As another example, the reaction can be presented as an avatar whose animated movements reflect the expression of the content provider when the reaction was created. The form of the reaction can be selectively determined based on a type of the digital environment through which the reaction is to be presented. More details regarding the present technology are described herein.

[0027] FIG. 1 illustrates an example system 100 including an example content provision module 102 configured to provide reactions in a digital environment, according to an embodiment of the present technology. The digital environment can be presented through media content. The content provision module 102 can allow a content provider to create and record reactions for presentation to users in a digital environment. A reaction can constitute a response to media content through which the digital environment is provided that conveys meaning and sentiment of the content provider through, for example, verbal communication and body language. As used herein, a reaction can include a reflection of gestures, body movement, speech, sounds, and any other types of expression of a content provider in response to the media content. The content provision module 102 can allow a user experiencing the digital environment to access reactions in the digital environment. The content provision module 102 can include a digital environment module 104 and a reactions module 106. The components (e.g., modules, elements, steps, blocks, etc.) shown in this figure and all figures herein are exemplary only, and other implementations may include additional, fewer, integrated, or different components. Some components may not be shown so as not to obscure relevant details. In various embodiments, one or more of the functionalities described in connection with the content provision module 102 can be implemented in any suitable combinations.

[0028] The digital environment module 104 can provide a digital environment for a user. As used herein, a digital environment can include any medium, channel, platform, experience, or surrounding through which a content provider or a user, as appropriate, can create, configure, access, manage, or otherwise interact with reactions. In some embodiments, a digital environment can be provided to a content provider or a user through an interface of a computing device associated with the content provider or the user. The interface can include, for example, a desktop computer, a touchscreen of a mobile device, a viewport mounted in headgear, a camera view or viewfinder of a mobile device, and the like. The content provider or the user can interact through the interface in the digital environment by appropriate user inputs and commands, such as mouse clicks, touch gestures, controller commands, body gestures, voice commands, etc. In some embodiments, a digital environment can include, for example, a two-dimensional (2D) environment, a virtual reality (VR) environment, an augmented reality (AR) environment, a mixed reality (MR) environment, or other types of digital environments. In some embodiments, a digital environment can be provided through media content presented through an interface. In some embodiments, equipment through which a content provider or a user can interact in a digital environment can be in whole or in part included in or implemented by a user device 610, as discussed in more detail herein.

[0029] The media content through which a digital environment can be presented can be any suitable type of media content. The media content can include, for example, 2D images, 2D video, panoramic photos, 360 photos, photo spheres, 360 (or spherical) videos, three-dimensional (3D) simulations, 3D animations, and the like. The media content also can include, for example, a combination of different types of media content. For example, the media content can include any content that in whole or in part reflects 360 degree views or presents 3D content. In one instance, the media content can include a 360 photo or a 360 video that captures a 360 degree view of a scene. In another instance, the media content can include virtual reality (VR) content through which 3D environments can be presented to the user. As used herein, media content also includes presentation of environmental surroundings through a camera view or viewfinder of a camera or other device. 360 or spherical videos are referenced herein for ease of illustration. However, in various embodiments, the present technology can be adapted for any type of media content supportive of an immersive user experience including, for example, half sphere videos (e.g., 180 degree videos), arbitrary partial sphere videos, 225 degree videos, 3D 360 videos, to name some examples. In various embodiments, the present technology described herein can be adapted for any media content that partially or wholly encompasses (or surrounds) a viewer (or user). Moreover, such media content need not be limited to, for example, videos that are formatted using a spherical shape but may also be applied to immersive media content (e.g., videos) formatted using other shapes including, for example, cubes, pyramids, and other shape representations of a video-recorded three-dimensional world.
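
To make the taxonomy above concrete, the environment types of paragraph [0028] and the media content types of paragraph [0029] could be modeled as simple enumerations. The sketch below is illustrative only; the Python names are assumptions chosen for readability, not identifiers used by the disclosure.

```python
from enum import Enum, auto


class DigitalEnvironmentType(Enum):
    """Digital environment types described in paragraph [0028]."""
    TWO_DIMENSIONAL = auto()
    VIRTUAL_REALITY = auto()
    AUGMENTED_REALITY = auto()
    MIXED_REALITY = auto()


class MediaContentType(Enum):
    """Media content types described in paragraph [0029]."""
    IMAGE_2D = auto()
    VIDEO_2D = auto()
    PANORAMIC_PHOTO = auto()
    PHOTO_360 = auto()
    PHOTO_SPHERE = auto()
    VIDEO_360 = auto()
    SIMULATION_3D = auto()
    ANIMATION_3D = auto()
    VIEWFINDER_PASSTHROUGH = auto()  # environmental surroundings through a camera view
```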

[0030] The reactions module 106 can allow a content provider to create a reaction to a digital environment for access by a user. The content provider can record a reaction in response to media content, or a portion of the media content, through which the digital environment is presented. As used herein, a portion of media content can include, for example, a scene, segment, component, element, theme, concept, or other selection of or in the media content. The reaction can convey an expression of the content provider through, for example, verbal communications and body movement in response to the portion of the media content. The content provider can specify access rights for the reaction. When a user accesses the portion of media content associated with the reaction, the user can be provided access to the reaction based on the access rights. A form of the reaction presented to the user can be based on a type of digital environment in which the user is interacting. As some examples, the reaction can be presented as an animated coin or an animated avatar that mirrors the recorded expression of the content provider. Functionality of the reactions module 106 is described in more detail herein.

[0031] In some embodiments, the content provision module 102 can be implemented, in part or in whole, as software, hardware, or any combination thereof. In general, a module as discussed herein can be associated with software, hardware, or any combination thereof. In some implementations, one or more functions, tasks, and/or operations of modules can be carried out or performed by software routines, software processes, hardware, and/or any combination thereof. In some cases, the content provision module 102 can be, in part or in whole, implemented as software running on one or more computing devices or systems, such as on a server or a client computing device. For example, the content provision module 102 can be, in part or in whole, implemented within or configured to operate in conjunction with or be integrated with a social networking system (or service), such as a social networking system 630 of FIG. 6. As another example, the content provision module 102 can be implemented as or within a dedicated application (e.g., app), a program, or an applet running on a user computing device or client computing system. In some instances, the content provision module 102 can be, in part or in whole, implemented within or configured to operate in conjunction with or be integrated with a client computing device, such as a user device 610 of FIG. 6. It should be understood that many variations are possible.

[0032] The system 100 can include a data store 108 configured to store and maintain various types of data, such as the data relating to support of and operation of the content provision module 102. The data store 108 also can maintain other information associated with a social networking system. The information associated with the social networking system can include data about users, social connections, social interactions, locations, geo-fenced areas, maps, places, events, groups, posts, communications, content, account settings, privacy settings, and a social graph. The social graph can reflect all entities of the social networking system and their interactions. As shown in the example system 100, the content provision module 102 can be configured to communicate and/or operate with the data store 108.

[0033] FIG. 2 illustrates an example reactions module 202, according to an embodiment of the present technology. In some embodiments, the reactions module 106 of FIG. 1 can be implemented with the reactions module 202. The reactions module 202 can include a configuration module 204, a user interaction module 206, and a presentation module 208.

[0034] The configuration module 204 can allow a content provider to configure and create a reaction to media content, or a portion thereof, for presentation in a digital environment. The configuration module 204 can provide an option through an interface for the content provider to create a reaction. In some embodiments, the option can be provided as a selectable element of the interface. When the content provider selects the element, a recording of the content provider in response to the portion of the media content can be performed. In some embodiments, the recording can have a predetermined time duration (e.g., 5 seconds, 10 seconds, 30 seconds, etc.). In other embodiments, the recording can have a time duration selected by the content provider. The recording can proceed with a countdown, displayed to the content provider through the interface, during which the recording should be completed. The content provider can record his or her gestures, body movement, speech, sounds, and other expression in response to the portion of the media content. For example, the content provider can speak and move to convey his or her expression for the recording. The expression of the content provider can be recorded by a camera, a microphone, sensors, or other equipment. For example, the expression can be recorded by a “selfie” camera and a microphone of a computing device providing an interface for presenting media content to a content provider. As another example, the expression can be recorded by a camera, a microphone, or other monitoring equipment that can capture expression of a content provider through sensors attached or adjacent to the body of the content provider. The recordings can capture some or all of the body movement of the content provider, sound of the content provider, or both. For example, the recordings can capture only facial gestures and head movements of the content provider along with audio of the content provider. As another example, the recordings can capture all of the body movements of the content provider, including movement of the hands, arms, feet, legs, etc., along with audio of the content provider.
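
A minimal sketch of how the recording step might be parameterized follows, assuming a fixed or provider-selected duration and independently switchable video, audio, and body-tracking capture. The dataclass and field names are hypothetical.

```python
import time
from dataclasses import dataclass, field


@dataclass
class RecordingConfig:
    """Illustrative recording options for capturing an expression (paragraph [0034])."""
    max_duration_s: float = 10.0          # predetermined or provider-selected duration
    capture_video: bool = True            # e.g., a "selfie" camera
    capture_audio: bool = True            # microphone
    capture_body_tracking: bool = False   # sensors attached or adjacent to the body


@dataclass
class ExpressionRecording:
    """Raw captured expression from which an animation can later be created."""
    provider_id: str
    started_at: float = field(default_factory=time.time)
    video_frames: list = field(default_factory=list)
    audio_samples: list = field(default_factory=list)
    body_poses: list = field(default_factory=list)

    def within_countdown(self, config: RecordingConfig) -> bool:
        """True while the countdown shown to the content provider has not expired."""
        return (time.time() - self.started_at) <= config.max_duration_s
```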

[0035] A content provider can record any type of expression as a reaction in response to media content or a portion thereof. As just one example, to reflect or convey a sense or sentiment of happiness, a content provider can sing with elation while performing a spirited dance to constitute his or her expression in response to a portion of media content. As another example, to reflect or convey a serious observation, remark, comment, or other information about a portion of media content, a content provider can speak with a serious tone and gesture emphatically with his or her hands. As yet another example, a content provider can select an option not to record video and only permit recording of audio. Likewise, a content provider can select an option not to record audio and only permit recording of video. Many different expressions are possible. Upon conclusion of the recording of the expression, a reaction in response to media content, or a portion thereof, can be created for potential presentation to users who later access the media content, as discussed in more detail herein.

[0036] The configuration module 204 can manage contextual information relating to a reaction. In some embodiments, a time stamp of a reaction in relation to associated media content can be determined and logged. For example, if a reaction was created by a content provider in response to a portion of media content at a point or window of time during presentation of the media content, the reaction can be associated with a time stamp relating to the point or the window of time. As discussed in more detail herein, when a user later accesses the portion of the media content, a reaction can be provided to the user based on the time stamp.
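
One plausible way to log this contextual information is to store the time stamp alongside the reaction and query it against the playback position, as in the hypothetical sketch below.

```python
from dataclasses import dataclass


@dataclass
class TimedReaction:
    """A reaction plus the contextual data described in paragraph [0036]."""
    reaction_id: str
    provider_id: str
    media_id: str
    timestamp_s: float  # point in the media content the reaction responds to


def reactions_due(reactions, media_id, elapsed_s, window_s=1.0):
    """Return reactions whose time stamp matches the current playback position."""
    return [
        r for r in reactions
        if r.media_id == media_id and abs(r.timestamp_s - elapsed_s) <= window_s
    ]
```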

[0037] The configuration module 204 can allow a content provider to associate a created reaction with a concept reflected in a digital environment. A concept can include any item, element, theme, or other component reflected in or depicted by media content. In some embodiments, the configuration module 204 can prompt the content provider to select a concept to associate with the reaction. In some embodiments, the content provider can identify the selected concept by an appropriate user interaction in the digital environment. Access to the reaction can be provided to a user in the digital environment when the attention of the user is directed at the selected concept, as discussed in more detail herein.

[0038] The configuration module 204 can allow a content provider to specify access rights designating users who are permitted to view a reaction created by the content provider. In some embodiments, the configuration module 204 can prompt the content provider through an interface to specify the access rights. For example, the content provider can identify one or more users who are permitted access based on their identifications (e.g., user IDs, names, etc.). In another example, the content provider can identify users by their degree of connection to the content provider in a social networking system. For instance, the content provider can identify users in a social networking system who are within a selected number of degrees of connection from the content provider as having permission to view the reaction. In some embodiments, the content provider can identify users who do not have permission to view the reaction and all other users not so identified can have permission to view the reaction. Many variations are possible.
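
The access rules described here could be evaluated roughly as follows. The structure and names are assumptions, and the degrees-of-connection value is taken to be computed elsewhere (for example, from the social graph maintained in the data store 108).

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AccessRights:
    """Access rules a content provider might attach to a reaction (paragraph [0038])."""
    allowed_user_ids: set = field(default_factory=set)    # explicit allow list
    blocked_user_ids: set = field(default_factory=set)    # explicit deny list
    max_degrees_of_separation: Optional[int] = None       # e.g., friends-of-friends = 2


def may_view(rights: AccessRights, viewer_id: str, degrees_from_provider: int) -> bool:
    """Evaluate whether a viewer is permitted to access the reaction."""
    if viewer_id in rights.blocked_user_ids:
        return False
    if viewer_id in rights.allowed_user_ids:
        return True
    if rights.max_degrees_of_separation is not None:
        return degrees_from_provider <= rights.max_degrees_of_separation
    # If the provider only listed users to exclude, all other users are permitted.
    return not rights.allowed_user_ids
```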

[0039] The user interaction module 206 can receive detected information reflecting users and their interactions in a digital environment. The detected information can be provided in real time (or near real time) by computing devices, sensors, or other equipment that is capable of detecting and monitoring actions of users in the digital environment. For example, the detected information can include information regarding timing of media content presented to a user. As another example, the detected information can include information regarding an interaction directed by a user at a particular concept depicted in the digital environment. Such interaction can include, for example, a gaze gesture by the user directed at the concept as detected by, for example, sensors that can detect and monitor eye movement of the user.
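
The detected information could be delivered to the user interaction module as lightweight events; the fields below are an assumed minimum covering the two examples given (playback timing and gaze-directed attention).

```python
import time
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InteractionEvent:
    """Detected user activity in the digital environment (paragraph [0039])."""
    user_id: str
    media_id: str
    elapsed_s: float                      # timing of the media content being presented
    gazed_concept: Optional[str] = None   # concept the user's gaze is directed at, if any
    detected_at: float = field(default_factory=time.time)
```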

[0040] The presentation module 208 can selectively present reactions to users in a digital environment. In some embodiments, a reaction can be potentially provided to a user based on contextual information associated with the reaction. For example, if an elapsed time of presentation of media content to a user matches or coincides with a time stamp of a reaction in relation to the media content, the reaction can be provided to the user at the same time that relevant media content is being provided to the user. In other words, the user can access a portion of media content and, at the same time, can timely access a reaction created in response to the portion of the media content. In some embodiments, the provision of a reaction to a user can be a predetermined time before or a predetermined time after presentation of a portion of media content to which the reaction responds. In some embodiments, if an interaction of a user, such as a gaze gesture, in a digital environment is directed at a concept associated with a reaction, the presentation module 208 can determine that the reaction can be indicated and played back to the user. In this way, provision of the reaction can be relevant to the focus of the user on the concept as indicated by his or her interactions. If an interaction of a user is not directed at the concept, the presentation module 208 can determine that the reaction should not be presented to the user. In some embodiments, before provision of a reaction to a user, the presentation module 208 can check to see if the user can be provided with the reaction based on access rights. If the access rights permit the user to access the reaction, the reaction can be provided to the user. If the access rights do not permit the user to access the reaction, the reaction will not be provided to the user.
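
Putting the pieces together, the presentation decision could combine the time-stamp match, the concept-focus check, and the access-rights check along the following lines. The dictionary keys and the injected may_view callable are assumptions made to keep the sketch self-contained.

```python
def should_present(reaction, event, may_view, window_s=1.0):
    """Decide whether to surface a reaction to a user (paragraph [0040]): the
    playback position must match the reaction's time stamp, or the user must be
    gazing at the reaction's associated concept, and the access rights must
    permit the viewer."""
    if not may_view(reaction["rights"], event["user_id"]):
        return False
    timed = abs(reaction["timestamp_s"] - event["elapsed_s"]) <= window_s
    focused = (
        reaction.get("concept") is not None
        and reaction.get("concept") == event.get("gazed_concept")
    )
    return timed or focused
```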

[0041] The presentation module 208 can present a reaction in a variety of forms in a digital environment. In some embodiments, a reaction can be provided as an animated coin for presentation to a user. The coin can be an object having a substantially circular or other shape. The coin can include an image of a content provider of the reaction. The image can be a profile picture or other picture associated with the content provider. The coin and image therein can be animated to include motion that mirrors or follows body movements or movements of a particular body part (e.g., head) of a content provider during recording of expression constituting a reaction. The animated coin can be presented as an overlay in a digital environment. The animated coin can be presented in 2D or 3D based on its suitability for a type of the digital environment in which it will appear, as discussed herein. The coin can be animated for any type of movement, such as any type of translational and rotational motion. The animation of the coin also can include audio recorded as part of the expression of the content provider recorded to constitute the reaction. For example, if recorded expression of a content provider includes head turning by the content provider, the coin can be animated to include turning to mirror the head turning by the content provider. As another example, if recorded expression of a content provider includes jumping up and down by the content provider, the coin can be animated to include moving up and down to mirror the jumping by the content provider. As yet another example, if recorded expression of a content provider is audio information only, the coin can be animated to reflect signal patterns in the audio information. For instance, the coin can be animated to include moving (e.g., spinning, flipping, undulating, etc.) in synchronicity with points in the audio information exhibiting relatively high signal amplitudes. In some embodiments, animation of the coin can include replay of video, audio, or both recorded from a content provider.
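
For the audio-only case in particular, the coin's motion could be driven by normalized signal amplitude, as in this hypothetical sketch; the keyframe fields and the 45-degree spin cap are arbitrary choices, not values from the disclosure.

```python
import math


def coin_keyframes_from_audio(samples, sample_rate, fps=30, max_spin_deg=45.0):
    """Derive spin keyframes for an animated coin from an audio-only reaction
    (paragraph [0041]): louder moments produce larger motion."""
    frames = []
    step = max(1, int(sample_rate / fps))
    peak = max((abs(s) for s in samples), default=1.0) or 1.0
    for i in range(0, len(samples), step):
        window = samples[i:i + step]
        amplitude = max(abs(s) for s in window) / peak   # normalized 0..1
        t = i / sample_rate
        frames.append({
            "time_s": t,
            "spin_deg": amplitude * max_spin_deg,        # spin proportional to loudness
            "bob": 0.05 * math.sin(2 * math.pi * t),     # gentle up-and-down motion
        })
    return frames
```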

[0042] In some embodiments, a reaction can be provided as an avatar representing a content provider who created the reaction. The avatar can be presented as a ghost-like figure without depiction of physical features capable of identifying the content provider. For example, the avatar can be presented as a generic sketch of a head and torso of a human figure with generic facial features that do not depict the specific facial features of the content provider. As another example, the avatar can be presented as a generic sketch of a human figure displaying a head with generic facial features, body, arms, and legs. As yet another example, the avatar can be presented as a realistic depiction of the content provider (e.g., an image of the face and head of the content provider) that reflects actual physical characteristics of the content provider. The animated avatar can be presented as an overlay in a digital environment. The animated avatar can be presented in 2D or 3D based on its suitability for a type of the digital environment in which it will appear, as discussed herein. Like the coin, the avatar can be animated to include mirroring or following the body movements or movements of a particular body part (e.g., head) of the content provider during recording of an expression constituting a reaction. For example, if recorded expression of a content provider involved speaking and dancing, the avatar can be animated to speak and dance in a manner similar to the recorded expression of the content provider. As another example, if recorded expression of a content provider involved singing and gesturing, the avatar can be animated to sing and gesture in a manner similar to the recorded expression of the content provider. The avatar can be animated for any type of movement. Many variations are possible.
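
Retargeting the recorded movement onto an avatar could be sketched as a pass over the captured pose stream, with identifying facial detail optionally dropped in favor of a generic face. The pose keys shown are hypothetical.

```python
def avatar_keyframes_from_poses(recorded_poses, use_generic_face=True):
    """Mirror recorded body movement on an avatar rig (paragraph [0042])."""
    keyframes = []
    for pose in recorded_poses:  # e.g. {"time_s": 0.1, "head_yaw": 12.0, "face_smile": 0.8}
        frame = dict(pose)
        if use_generic_face:
            # Drop identifying facial detail; keep coarse head and body orientation.
            frame = {k: v for k, v in frame.items() if not k.startswith("face_")}
        keyframes.append(frame)
    return keyframes
```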

[0043] The presentation module 208 can select a form of a reaction for presentation to a user based on a variety of considerations. In some embodiments, the presentation module 208 can select a form of a reaction based on a type of digital environment presented to a user. For example, when a digital environment is a 2D environment, the presentation module 208 can select a reaction in the form of a coin. In this example, a reaction in the form of a coin in some cases may be better suited to a 2D environment. As another example, when a digital environment is a VR environment, the presentation module 208 can select a reaction in the form of an avatar. In this regard, an animated avatar reflecting dynamic movement in 3D may better optimize user experience in a VR environment. In some embodiments, the presentation module 208 can select a form of reaction based on availability of a form of reaction. For example, if expression of a content provider did not include body movements, or if no camera or other sensors were available to record body movements of the content provider, the presentation module 208 can determine that the reaction can be presented as a coin instead of an avatar. In some embodiments, a form of reaction can be based on a product or feature in which the reaction is to be presented. For example, if a reaction is to be provided in media content relating to stories in a news feed, the presentation module 208 can select a coin as a default form of the reaction. Many variations are possible.
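
A simple rule-based selection consistent with these considerations might look like the following; the environment labels and the news-feed default are assumptions drawn from the examples in this paragraph.

```python
def choose_reaction_form(environment_type, has_body_tracking, product_context=None):
    """Select a presentation form (paragraph [0043]): a coin for 2D environments
    or news-feed stories, an avatar for immersive environments when body-movement
    data is available."""
    if product_context == "news_feed_story":
        return "coin"                              # default form for feed stories
    if environment_type in ("vr", "ar", "mr") and has_body_tracking:
        return "avatar"                            # dynamic 3D movement suits immersion
    return "coin"                                  # fall back when only face/audio exists
```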

[0044] The presentation module 208 can provide access to reactions in various modes of presentation. In some embodiments, the presentation module 208 can implement a presentation mode in which reactions are automatically indicated for a user accessing media content. For example, as media content is presented to a user, reactions relevant to portions of the media content are automatically presented to the user as the user accesses the portions of the media content. For instance, assume that a first reaction is associated with a first scene in media content and a second reaction and a third reaction are associated with a second scene in the media content. In this instance, the media content can provide a digital environment, such as a VR environment. As a user accesses or views the media content, the first reaction can be automatically indicated to the user when the user accesses the first scene. The indication of the first reaction, which can be a coin or an avatar presented in the digital environment, can be a selectable overlay in the digital environment or the media content. After selection of the first reaction, the first reaction can be executed (e.g., played back) so that the animation associated with the first reaction is performed in the digital environment. After execution of the first reaction or after presentation of the first scene, the first reaction can disappear from the digital environment. As the user continues to view the media content, the second reaction and the third reaction can be automatically indicated to the user when the user accesses the second scene. Likewise, the indication of the reactions can be selectable overlays in the digital environment. After selection of one or both of the second reaction and the third reaction, the reactions can be executed (e.g., played back) so that the animations associated with the reactions are performed in the digital environment at the same time, at overlapping times, or at different times. Thereafter, the reactions can disappear. The foregoing description can be applied to indication and execution of any number of reactions. In some instances, the user can select a reaction to cause playback of the reaction while presentation of the media content to the user continues. In some instances, playback of a reaction can be automatically initiated when an elapsed time of presentation of associated media content matches a time stamp of the reaction.
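
The automatic-indication mode could be approximated by grouping reactions by scene and surfacing each group as its scene is entered. The overlay object and its show method are assumed stand-ins for whatever rendering layer the environment provides.

```python
from collections import defaultdict


def index_reactions_by_scene(reactions):
    """Group reactions by the scene they respond to (paragraph [0044])."""
    by_scene = defaultdict(list)
    for reaction in reactions:
        by_scene[reaction["scene_id"]].append(reaction)
    return by_scene


def on_scene_entered(scene_id, by_scene, overlay):
    """Indicate a scene's reactions as selectable overlays when the user reaches
    the scene; selecting one would play its animation back in the environment."""
    for reaction in by_scene.get(scene_id, []):
        overlay.show(reaction)  # assumed overlay API; selection triggers playback
```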

[0045] In some embodiments, the presentation module 208 can limit reactions presented to a user. In some embodiments, when a plurality of reactions for potential presentation to a user satisfies (e.g., is equal to or greater than) a threshold number of reactions, only the threshold number of reactions can be presented to a user. In some embodiments, reactions can be ranked and presented to a user in rank order. For example, a reaction that is created by a content provider having relatively higher affinity with a user to whom the reaction is potentially presented can be ranked higher than a reaction created by a content provider having relatively lower affinity. As another example, a reaction determined to relate to a concept having a relatively higher level of relevance or interest to a user can be ranked higher than a reaction not so determined.
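
Ranking and capping the candidate set could be expressed as a single scored sort; the affinity and relevance inputs are assumed to be precomputed scores keyed by provider and concept.

```python
def rank_reactions(reactions, affinity, relevance, limit=5):
    """Order candidate reactions (paragraph [0045]): higher provider-viewer
    affinity and higher concept relevance rank first; cap at a threshold count."""
    scored = sorted(
        reactions,
        key=lambda r: (affinity.get(r["provider_id"], 0.0)
                       + relevance.get(r.get("concept"), 0.0)),
        reverse=True,
    )
    return scored[:limit]
```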

[0046] In some embodiments, the presentation module 208 can allow creation of reactions that are layered or cascaded. In this regard, a content provider can create a first reaction. The first reaction can be accessed by a user. The user, in turn, can create a second reaction in response to the first reaction. Likewise, a third reaction can be created by the content provider or another content provider in response to the second reaction, and so on. The present technology can provide any number of layers of reactions to support communications among content providers and users.
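
Layered reactions amount to a reply tree; a parent reference on each reaction is one plausible representation, walked here depth-first.

```python
def reaction_thread(reactions, root_id):
    """Walk layered reactions (paragraph [0046]): each reaction may reference the
    reaction it responds to via a parent_id, forming a cascade of any depth."""
    children = {}
    for reaction in reactions:
        children.setdefault(reaction.get("parent_id"), []).append(reaction)

    def walk(parent_id, depth=0):
        for reaction in children.get(parent_id, []):
            yield depth, reaction
            yield from walk(reaction["reaction_id"], depth + 1)

    return list(walk(root_id))
```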

[0047] FIG. 3A illustrates an example first scenario, according to an embodiment of the present technology. As shown in FIG. 3A, an interface 300 presented through a computing device presents a digital environment to a user associated with the computing device. The digital environment can be provided through media content 302. As shown, the media content 302 is associated with a story in a news feed of the user that includes a 360 video. A reaction 304 is overlaid in the media content 302. In the example shown, the form of the reaction 304 is a coin. The coin includes an image of a content provider. The reaction 304 was previously created by the content provider in response to a portion of the media content 302. The reaction 304 can be associated with a time stamp in relation to the portion of the media content 302. The reaction 304 is indicated to the user (i.e., is presented to the user) because the user is permitted to experience the reaction 304 based on access rights previously specified by the content provider.

[0048] In some instances, playback of the reaction 304 can be initiated by a command applied by the user to the interface 300. In other instances, playback of the reaction 304 can be initiated automatically when the playback time of the media content 302 matches the time stamp of the reaction 304. Playback of the reaction 304 presents the animation of the coin to the user. The animation of the coin can exhibit motion that mirrors the expression of the content provider when the expression was recorded to create the reaction 304. In the example shown, the animation of the coin includes spinning and moving higher to reflect spinning and ascending motion of the content provider when the expression of the content provider was recorded. The animation of the coin also can include playback of sounds of the content provider that were included in the recorded expression.

……
……
……
