
Facebook Patent | Generating customized, personalized reactions to social media content



Publication Number: 20210037195

Publication Date: 2021-02-04

Applicant: Facebook

Abstract

The present disclosure is directed toward systems, computer-readable media, and methods for generating a personalized selfie reaction-element in connection with social media content. For example, the systems and methods described herein generate a personalized selfie reaction-element including a multi-media item and one or more elements and/or enhancements. The systems and methods described herein can then provide the personalized selfie reaction-element for use in connection with various types of social media content, including communication threads, ephemeral content, posts, and direct messages.

Claims

  1. A method for generating customized reaction elements to social media content, comprising: detecting, by a processor of a client-computing device, a selection of an option to create a personalized selfie reaction-element in connection with social media content; providing, from a camera of the client-computing device, a live camera viewfinder display via a graphical user interface; capturing, in response to a detected selection of a capture element, a multi-media item of a user via the camera of the client-computing device; detecting a selection of one or more augmented reality enhancements; generating the personalized selfie reaction-element by combining the multi-media recording of the user and the selected one or more augmented reality enhancements; and providing the personalized selfie reaction-element for association with the social media content.

  2. The method as recited in claim 1, further comprising detecting a face associated with the user within the multi-media item of the user.

  3. The method as recited in claim 2, wherein generating the personalized selfie reaction-element further comprises overlaying, based on the detected face within the multi-media item, the selected one or more augmented reality enhancements over the multi-media item of the user.

  4. The method as recited in claim 2, wherein generating the personalized selfie reaction-element further comprises altering, based on the detected face within the multi-media item, an appearance of one or more portions of the multi-media item of the user based on the selected one or more augmented reality enhancements.

  5. The method as recited in claim 1, wherein generating the personalized selfie reaction-element further comprises generating a multi-media recording of the user and automatically looping the multi-media recording of the user combined with the selected one or more augmented reality enhancements.

  6. The method as recited in claim 1, further comprising: detecting a selection of a color-gradient option associated with the multi-media item of the user; and wherein generating the personalized selfie reaction-element further comprises: converting the multi-media item to black and white; cropping a background out of the multi-media item of the user; and adding a color-gradient background into the multi-media item of the user.

  7. The method as recited in claim 1, further comprising: detecting a selection of a placement location associated with the social media content; and wherein providing the personalized selfie reaction-element for association with the social media content further comprises providing the placement location associated with the social media content.

  8. A system comprising: at least one camera; at least one processor; and at least one non-transitory computer-readable storage medium storing instructions thereon that, when executed by the at least one processor, cause the system to: detect a selection of an option to create a personalized selfie reaction-element in connection with social media content; provide, from the at least one camera, a live camera viewfinder display via a graphical user interface; capture, in response to a detected selection of a capture element, a multi-media item of a user via the at least one camera; detect a selection of one or more augmented reality enhancements; generate the personalized selfie reaction-element by combining the multi-media item of the user and the selected one or more augmented reality enhancements; and provide the personalized selfie reaction-element for association with the social media content.

  9. The system as recited in claim 8, further storing instructions thereon that, when executed by the at least one processor, cause the system to detect a face associated with the user within the multi-media item of the user.

  10. The system as recited in claim 9, further storing instructions thereon that, when executed by the at least one processor, cause the system to generate the personalized selfie reaction-element by overlaying, based on the detected face within the multi-media item, the selected one or more augmented reality enhancements over the multi-media item of the user.

  11. The system as recited in claim 9, further storing instructions thereon that, when executed by the at least one processor, cause the system to generate the personalized selfie reaction-element by altering, based on the detected face within the multi-media recording, an appearance of one or more portions of the multi-media item of the user based on the selected one or more augmented reality enhancements.

  12. The system as recited in claim 8, further storing instructions thereon that, when executed by the at least one processor, cause the system to generate the personalized selfie reaction-element by generating a multi-media recording of the user and automatically looping the multi-media recording of the user combined with the selected one or more augmented reality enhancements.

  13. The system as recited in claim 8, further storing instructions thereon that, when executed by the at least one processor, cause the system to: detect a selection of a color-gradient option associated with the multi-media item of the user; and generate the personalized selfie reaction-element by: converting the multi-media item to black and white; cropping a background out of the multi-media item of the user; and adding a color-gradient background into the multi-media item of the user.

  14. The system as recited in claim 8, further storing instructions thereon that, when executed by the at least one processor, cause the system to: detect a selection of a placement location associated with the social media content; and wherein providing the personalized selfie reaction-element for association with the social media content further comprises providing the placement location associated with the social media content.

  15. A non-transitory computer-readable medium storing instructions thereon that, when executed by at least one processor, cause a client-computing device to: detect a selection of an option to create a personalized selfie reaction-element in connection with social media content; provide, from a camera of the client-computing device, a live camera viewfinder display via a graphical user interface; capture, in response to a detected selection of a capture element, a multi-media item of a user via the camera of the client-computing device; detect a selection of one or more augmented reality enhancements; generate the personalized selfie reaction-element by combining the multi-media item of the user and the selected one or more augmented reality enhancements; and providing the personalized selfie reaction-element for association with the social media content.

  16. The non-transitory computer-readable medium as recited in claim 15, further storing instructions thereon that, when executed by the at least one processor, cause the client-computing device to detect a face associated with the user within the multi-media item of the user.

  17. The non-transitory computer-readable medium as recited in claim 16, wherein the instructions thereon that, when executed by the at least one processor, cause the client-computing device to generate the personalized selfie reaction-element by overlaying, based on the detected face within the multi-media item, the selected one or more augmented reality enhancements over the multi-media item of the user.

  18. The non-transitory computer-readable medium as recited in claim 16, wherein the instructions thereon that, when executed by the at least one processor, cause the client-computing device to generate the personalized selfie reaction-element by altering, based on the detected face within the multi-media recording, an appearance of one or more portions of the multi-media recording of the user based on the selected one or more augmented reality enhancements.

  19. The non-transitory computer-readable medium as recited in claim 15, further storing instructions thereon that, when executed by the at least one processor, cause the client-computing device to: detect a selection of a color-gradient option associated with the multi-media recording of the user; and generate the personalized selfie reaction-element by: converting the multi-media recording to black and white; cropping a background out of the multi-media item of the user; and adding a color-gradient background into the multi-media item of the user.

  20. The non-transitory computer-readable medium as recited in claim 15, further storing instructions thereon that, when executed by the at least one processor, cause the client-computing device to: detect a selection of a placement location associated with the social media content; and wherein providing the personalized selfie reaction-element for association with the social media content further comprises providing the placement location associated with the social media content.

Description

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of and priority to U.S. Provisional Patent Application No. 62/881,826, filed Aug. 1, 2019, the contents of which are hereby incorporated by reference in their entirety.

BACKGROUND

[0002] Recent years have seen significant improvements in digital communication. For example, conventional social media systems can enable users to generate, post, and interact with social media content. To illustrate, conventional social media systems can enable a user to access GIFs, stickers, and other digital elements in order to react to a co-user’s post or direct message.

[0003] Although conventional social media systems can enable users to generate, post, and interact with various types of social media content, such systems have a number of problems in relation to flexibility and accuracy. To illustrate, conventional social media systems can utilize pre-generated reaction elements (e.g., GIFs, stickers, emoticons) in social media communications to convey a reaction or response. Such pre-generated reaction elements, however, provide little flexibility. In particular, beyond changing colors, conventional pre-generated reaction elements are typically fixed and provide no opportunity to provide a personalized reaction.

[0004] In addition to being inflexible, conventional pre-generated reaction elements are often inaccurate. In particular, users are typically forced to choose one of a set of pre-generated reaction elements. In some cases, however, there may not be a pre-generated reaction element that accurately or fully captures how a given user wishes to react to social media content. Due to convenience and/or time constraints, the user may opt to use a pre-generated reaction element that is inaccurate.

[0005] These along with additional problems and issues exist with regard to conventional social media systems.

BRIEF SUMMARY

[0006] Embodiments of the present disclosure provide benefits and/or solve one or more of the foregoing or other problems in the art with systems, non-transitory computer-readable media, and methods for generating, within a social media application, customized selfie reaction-elements in connection with social media content. For example, in one or more embodiments, the systems, non-transitory computer-readable media, and methods described herein enable a user to configure and generate a personalized selfie reaction-element within a social media application during an ongoing chat thread and/or actively playing ephemeral content. The systems, computer-readable media, and methods described herein can also utilize facial detection and tracking techniques in applying selected augmented reality elements to the captured multi-media recording of a user in order to generate a customized, personalized selfie reaction-element. The systems, computer-readable media, and methods described herein can then provide the generated personalized selfie reaction-element for inclusion in a communication thread, to be incorporated into ephemeral content, or for other use in social media communications.

[0007] Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of such exemplary embodiments. The features and advantages of such embodiments may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features will become more fully apparent from the following description and appended claims, or may be learned by the practice of such exemplary embodiments as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] In order to describe the manner in which the above recited and other advantages and features can be obtained, a more particular description of the aspects of one or more embodiments briefly described above will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings. It should be noted that the figures are not drawn to scale, and that elements of similar structure or function are generally represented by like reference numerals for illustrative purposes throughout the figures. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting of scope, one or more embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0009] FIG. 1 illustrates an environmental diagram of a personalized reaction system in accordance with one or more embodiments;

[0010] FIGS. 2A-2F illustrate a series of graphical user interfaces that the personalized reaction system can provide in configuring and generating a personalized selfie reaction-element for use in connection with a communication thread in accordance with one or more embodiments;

[0011] FIGS. 3A-3E illustrate a series of graphical user interfaces that the personalized reaction system can provide in configuring and generating a personalized selfie reaction-element for use in connection with a user’s ephemeral content in accordance with one or more embodiments;

[0012] FIGS. 4A-4F illustrate a series of graphical user interfaces that the personalized reaction system can provide in configuring and generating a personalized selfie reaction-element for use in connection with a co-user’s ephemeral content in accordance with one or more embodiments;

[0013] FIGS. 5A-5D illustrate a series of graphical user interfaces that the personalized reaction system can provide in selecting augmented reality enhancements for use in generating a personalized selfie reaction-element in accordance with one or more embodiments;

[0014] FIG. 6 illustrates a flowchart of generating an augmented reality model of a user’s face for use in positioning an augmented reality enhancement in accordance with one or more embodiments;

[0015] FIG. 7 illustrates a detailed schematic diagram of the personalized reaction system in accordance with one or more embodiments;

[0016] FIG. 8 illustrates a flowchart of a series of acts for generating a personalized selfie reaction-element in accordance with one or more embodiments;

[0017] FIG. 9 illustrates a block diagram of an exemplary computing device in accordance with one or more embodiments;

[0018] FIG. 10 is an example network environment of a networking system in accordance with one or more embodiments; and

[0019] FIG. 11 illustrates a social graph in accordance with one or more embodiments.

DETAILED DESCRIPTION

[0020] One or more embodiments of the present disclosure include a personalized reaction system that generates a personalized selfie reaction-element for association with social media content in response to user input received via a social media application. For example, the personalized reaction system can provide various tools and options for configuring and generating customized, personalized selfie reaction-elements within a social media application that enable a user to quickly and easily add customized, personalized selfie reaction-elements to social media content. In one or more embodiments, the personalized reaction system determines and provides augmented reality (“AR”) elements that may be associated with or incorporated into a customized, personalized selfie reaction-element.

[0021] To illustrate, in one or more embodiments, a user may be accessing a social media application to engage in various types of social media communication with various co-users (e.g., “friends”). For example, the user may be participating in a chat thread, composing a post, viewing a co-user’s ephemeral content, or creating his or her own ephemeral content. Often, users will react to electronic communications, posts, and ephemeral content with rich communication objects such as GIFs, stickers, and emoticons in order to quickly communicate a thought or sentiment. As discussed above, however, conventional social media systems generally do not provide flexibility in this regard. As such, the user misses the opportunity to react to a social media communication in a genuine way utilizing rich communication objects.

[0022] To remedy this, the personalized reaction system provides a “selfie” option in connection with various types of social media communications (e.g., chat threads and ephemeral content) within a social media application. In one or more embodiments, and in response to a detected selection of the selfie option, the personalized reaction system accesses a camera of the user’s client-computing device in order to provide a live camera viewfinder display in connection with various options for configuring a customized, personalized selfie reaction-element. In at least one embodiment, the personalized reaction system provides the live camera viewfinder display and configuration options in a single graphical user interface overlaid on the currently active social media communication. In one or more embodiments, the personalized reaction system can capture a multi-media still or recording via the live camera viewfinder in response to detecting user input (e.g., a long press on a shutter button).
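The capture interaction described above can be sketched as a simple decision on how long the shutter element is held. The 0.5-second threshold and the function name below are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: a short tap on the capture element yields a still
# image, while a long press produces a multi-media recording.
LONG_PRESS_THRESHOLD = 0.5  # seconds; assumed cutoff, not from the patent


def classify_capture(press_duration: float) -> str:
    """Return the capture mode implied by how long the shutter was held."""
    if press_duration >= LONG_PRESS_THRESHOLD:
        return "recording"  # multi-media recording of the user
    return "still"          # multi-media still image
```

A tap of a tenth of a second would yield a still, while holding for over a second would start a recording.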

[0023] Moreover, in one or more embodiments, the personalized reaction system can provide one or more AR enhancements. For example, the personalized reaction system can determine a number of AR enhancements based on overall popularity, the user’s social media use history, the user’s social media profile information, or information associated with one or more of the user’s social media co-users. The personalized reaction system can then provide the AR enhancements as options for customizing a reaction element.
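The enhancement-selection logic above can be sketched as a weighted score over the signals the paragraph lists: overall popularity, the user's own usage history, and co-user activity. The weights and dictionary field names here are illustrative assumptions.

```python
# Hedged sketch of ranking AR enhancements by the signals described in
# the text. All field names and weights are hypothetical.
def rank_enhancements(enhancements, user_history, co_user_counts,
                      w_pop=1.0, w_hist=2.0, w_social=0.5):
    """Order AR enhancements by a blended relevance score.

    enhancements:   list of dicts with "id" and "global_uses" keys
    user_history:   {enhancement_id: times this user has used it}
    co_user_counts: {enhancement_id: times the user's co-users used it}
    """
    def score(e):
        return (w_pop * e["global_uses"]
                + w_hist * user_history.get(e["id"], 0)
                + w_social * co_user_counts.get(e["id"], 0))
    return sorted(enhancements, key=score, reverse=True)
```

Weighting personal history more heavily than global popularity (as sketched here) would surface the enhancements a given user actually reaches for.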

[0024] The personalized reaction system can then combine the captured selfie with one or more selected AR enhancements to generate a customized personalized reaction selfie. The customized personalized reaction selfie provides a personalized reaction by including a selfie of the user. Furthermore, the customized personalized reaction selfie provides various options for the user to customize or tailor the reaction selfie to provide a desired reaction, sentiment, or feeling via the AR enhancements.

[0025] Furthermore, the personalized reaction system can provide various additional options to stylize a customized personalized reaction selfie. For example, the personalized reaction system can utilize facial detection techniques to crop out a background of a captured multi-media recording of a user. Additionally, the personalized reaction system can add in additional background, such as a gradient color background.
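The stylization steps above can be illustrated per frame: convert the frame to black and white, keep only the pixels inside a subject mask, and composite a vertical color gradient behind the subject. This is a minimal NumPy sketch; the facial-detection step that produces the mask is assumed to happen elsewhere, and the gradient orientation is an assumption.

```python
import numpy as np


def stylize_frame(rgb, mask, top_color, bottom_color):
    """Sketch of the described stylization on one frame.

    rgb:  H x W x 3 uint8 frame
    mask: H x W bool array, True where the user's face/body is
          (assumed to come from a separate facial-detection step)
    """
    h, w, _ = rgb.shape
    # Convert the subject to black and white (simple channel mean).
    gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)
    # Build a vertical color gradient from top_color to bottom_color.
    t = np.linspace(0.0, 1.0, h).reshape(h, 1, 1)
    gradient = (1 - t) * np.array(top_color) + t * np.array(bottom_color)
    gradient = np.broadcast_to(gradient, (h, w, 3))
    # Keep the grayscale subject, replace the background with the gradient.
    out = np.where(mask[..., None], gray, gradient)
    return out.astype(np.uint8)
```

In a real system the mask would track the user across every frame of the recording, so the gradient background stays behind the subject throughout the loop.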

[0026] In one or more embodiments, the personalized reaction system can associate a generated customized personalized reaction selfie with social media content. For example, in one embodiment, the personalized reaction system can associate a generated customized personalized reaction selfie with social media content such as a chat thread electronic communication, a social media post, and/or ephemeral content associated with the user or one or more of the user’s co-users.

[0027] Accordingly, the personalized reaction system provides a number of advantages and benefits over conventional social media systems and methods. For example, by providing tools and functionality within a single social media application that enables a user to configure a customized personalized reaction selfie, the personalized reaction system improves efficiency relative to conventional social media systems. Specifically, the personalized reaction system efficiently utilizes computing resources by providing customized personalized reaction selfie functionality within a single social media application without requiring another application to create the personalized content.

[0028] Moreover, the personalized reaction system improves accuracy relative to conventional social media systems. For example, the personalized reaction system allows users to generate a personalized reaction selfie that accurately portrays a reaction, sentiment, or feeling the user desires to convey in response to social media content.

[0029] Additionally, the personalized reaction system improves flexibility relative to conventional social media systems. For example, in one or more embodiments, the personalized reaction system flexibly provides the ability to create personalized reaction selfies on the fly in response to social media content rather than using pre-generated reaction elements.

[0030] As illustrated by the foregoing discussion, the present disclosure utilizes a variety of terms to describe features and advantages of the personalized reaction system. For example, as used herein “social media content” refers to any type of content available via a networking system. In one or more embodiments, social media content can include, but is not limited to, electronic communications, posts, images, recordings, media players, commercial content, ephemeral content, and interactive display elements.

[0031] As used herein, “ephemeral content” refers to social media content that is available for viewing during a predetermined window of time. For example, ephemeral content can include edited or unedited images and recordings within a user’s ephemeral content collection available via the networking system. In at least one embodiment, the networking system retains newly added ephemeral content within the user’s ephemeral content collection for a predetermined amount of time (e.g., 24 hours) starting from when the ephemeral content was added. At the end of the predetermined amount of time, the networking system can automatically remove and delete the ephemeral content from the user’s ephemeral content collection.
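The retention rule in this definition can be sketched directly: an item stays in the collection only while it is younger than the predetermined window. The 24-hour window comes from the text; the data shapes and function name are assumptions.

```python
from datetime import datetime, timedelta

# Retention window named in the definition above (24 hours).
RETENTION = timedelta(hours=24)


def expire_ephemeral(collection, now):
    """Keep only items whose `added_at` timestamp is still inside the window.

    collection: list of dicts with an "added_at" datetime (assumed shape)
    """
    return [item for item in collection
            if now - item["added_at"] < RETENTION]
```

The networking system would run a check like this (or an equivalent scheduled deletion) so items added more than 24 hours ago are automatically removed.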

[0032] As used herein, a “multi-media item” refers to multi-media images and multi-media recordings for use in configuring and generating a customized, personalized selfie reaction-element. For example, a multi-media recording can be a recording captured via a camera of a client-computing device (e.g., a front-facing camera of a smart phone). Similarly, a multi-media image can be a still image captured via a camera of a client-computing device.

[0033] As used herein, a “personalized selfie reaction-element” refers to a display element that includes a selfie multi-media item. For example, a personalized selfie reaction-element can include a multi-media recording or image of a user. A “customized, personalized selfie reaction-element” can comprise a selfie multi-media item with one or more augmented reality (AR) enhancements. For example, a customized, personalized selfie reaction-element can comprise a multi-media recording or image of a user laughing with an overlay of augmented reality tears.

[0034] As used herein, an “augmented reality enhancement” or “AR enhancement” refers to a computer-generated display element that is incorporated into a multi-media item. For example, the personalized reaction system can incorporate an overlay-type augmented reality enhancement into a multi-media recording such that the augmented reality enhancement tracks a certain portion or area of a user’s face throughout the multi-media recording. Additionally, the personalized reaction system can incorporate an alteration-type augmented reality enhancement into a multi-media recording such that an area or portion of the user’s face is altered throughout the multi-media recording.
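The two enhancement types defined above can be contrasted in a small sketch. Face tracking is assumed to supply one bounding box `(x, y, w, h)` per frame; the anchor offset and the action tuples returned below are illustrative, not from the patent.

```python
# Hedged sketch: an overlay-type enhancement follows a point anchored to
# the tracked face box, while an alteration-type enhancement edits the
# pixels inside that box. All names and offsets are hypothetical.
def apply_enhancement(frame_boxes, kind):
    """Map per-frame face bounding boxes to enhancement actions."""
    actions = []
    for (x, y, w, h) in frame_boxes:
        if kind == "overlay":
            # Overlay art (e.g., AR tears) tracks a fixed point within the
            # box -- here, roughly one third of the way down the face.
            actions.append(("draw_at", x, y + h // 3))
        else:  # "alteration": re-render the region itself
            actions.append(("edit_region", x, y, w, h))
    return actions
```

Because the box is re-detected every frame, both enhancement types follow the user's face throughout the multi-media recording.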

[0035] FIG. 1 illustrates an example environment 100 for implementing the personalized reaction system 102. As illustrated in FIG. 1, the environment 100 includes client-computing devices 108a, 108b that implement networking system applications 110a, 110b. As further shown in FIG. 1, the environment 100 also includes a server 106 hosting a networking system 104 that includes the personalized reaction system 102. The networking system 104 can comprise a system that connects client-computing device users and allows exchange of data over a network. As also illustrated in FIG. 1, in one or more embodiments, the networking system 104 operates the personalized reaction system 102. In additional or alternative embodiments, the networking system applications 110a, 110b include part or all of the personalized reaction system 102 installed on the client-computing devices 108a, 108b, respectively.

[0036] The client-computing devices 108a, 108b and the server 106 communicate via a network 112, which may include one or more networks and may use one or more communication platforms or technologies suitable for transmitting data and/or communication signals. In one or more embodiments, the network 112 includes the Internet or World Wide Web. The network 112, however, can include various other types of networks that use various communication technologies and protocols, such as a corporate intranet, a virtual private network (“VPN”), a local area network (“LAN”), a wireless local network (“WLAN”), a cellular network, a wide area network (“WAN”), a metropolitan area network (“MAN”), or a combination of two or more such networks.

[0037] Although FIG. 1 illustrates a particular number and arrangement of client-computing devices, in additional embodiments the client-computing devices 108a, 108b may directly communicate with the networking system 104, bypassing the network 112. Further, in other embodiments, the environment 100 may include any number of client-computing devices. Additional details with respect to the client-computing devices 108a, 108b are discussed below with respect to FIG. 8.

[0038] In one or more embodiments, the client-computing devices 108a, 108b can be one of various types of computing devices. For example, each of the client-computing devices 108a, 108b may include a mobile device such as a mobile telephone, a smartphone, a PDA, a tablet, or a laptop. Additionally, or alternatively, the client-computing devices 108a, 108b may include a non-mobile device such as a desktop computer, a server, or another type of computing device. It will be understood that both client-computing devices 108a, 108b can include the same type of computing functionality. In other words, in a preferred embodiment, both the client-computing device 108a and the client-computing device 108b are mobile computing devices, such as smartphones. In at least one embodiment, the user of the client-computing device 108a and the user of the client-computing device 108b are associated or co-users (e.g., “friends”) via the networking system 104.

[0039] In one or more embodiments, each of the client-computing devices 108a, 108b include a networking system application 110a, 110b associated with the networking system 104. For example, the networking system application 110a, 110b enables the users of the client-computing devices 108a, 108b to view and interact with networking system content, and to submit social media content via the networking system 104. In at least one embodiment, social media content submitted to the networking system 104 from the client-computing device 108a can be viewed and interacted with at the client-computing device 108b, and vice versa. Furthermore, in at least one embodiment, as mentioned above, the networking system application 110a, 110b includes part or all of the personalized reaction system 102.

[0040] As shown in FIG. 1, and as mentioned above, the server 106 hosts the networking system 104. In one or more embodiments, the networking system 104 provides posts, electronic messages, ephemeral content, structured objects, and live video streams to one or more co-users (e.g., by way of a profile, a newsfeed, a communication thread, a timeline, a “wall,” a live video stream display, or any other type of graphical user interface presented via the networking system application 110a, 110b on the client-computing devices 108a, 108b, respectively). For example, one or more embodiments provide a user with a communication thread including one or more electronic messages exchanged between the users of the client-computing device 108a and the client-computing device 108b. In another example, one or more embodiments provide a user with a social networking system newsfeed containing posts from one or more co-users associated with the user (e.g., the user’s “friends”). In one or more embodiments, a post and/or electronic message can include one or more media communications (e.g., edited or unedited digital photographs and digital videos), such as described above.

[0041] The networking system 104 also enables the user to engage in all other types of networking system activity. For example, the networking system 104 enables a social networking system user to interact with communication threads, watch and/or create ephemeral content, click on posts and hyperlinks, compose and submit electronic messages and posts, interact with structured objects, watch live video streams, interact with media communications, and so forth.

[0042] As will be described in more detail below, the components of the personalized reaction system 102 can provide, alone and/or in combination with other components, one or more graphical user interfaces (“GUIs”). In particular, the networking system applications 110a, 110b on the client-computing devices 108a, 108b can display one or more GUIs generated by the personalized reaction system 102, described above. The networking system applications 110a, 110b enable the user of the client-computing device 108a and/or the user of the client-computing device 108b to interact with a collection of display elements within one or more GUIs for a variety of purposes. FIGS. 2A-4F and the description that follows illustrate various example embodiments of the GUIs that are used to describe the various features of the personalized reaction system 102.

[0043] For example, FIGS. 2A-2F illustrate features of the personalized reaction system 102 in connection with a communication thread hosted by the networking system 104 between the user of the client-computing device 108a and the client-computing device 108b. FIGS. 3A-3E illustrate features of the personalized reaction system 102 in connection with ephemeral content created on the client-computing device 108a utilizing the networking system application 110a. FIGS. 4A-4F illustrate features of the personalized reaction system 102 in connection with ephemeral content created on the client-computing device 108b and accessed on the client-computing device 108a.

[0044] As just mentioned, the personalized reaction system 102 provides features and functionality to a user in response to the user initializing the networking system application 110a on the client-computing device 108a. FIG. 2A illustrates the personalized reaction system 102 providing a communication thread GUI 204 on a touch screen display 202 of the client-computing device 108a. In one or more embodiments, the networking system application 110a (as described with reference to FIG. 1) organizes one or more electronic communications 208a, 208b, and 208c within a communication thread 206 displayed within the communication thread GUI 204. For example, the networking system application 110a organizes the electronic communications 208a-208c chronologically in the communication thread 206 in the order they were received and displays each electronic communication 208a, 208b, and 208c so as to indicate the sender of each communication. In at least one embodiment, the communication thread 206 includes the electronic communications 208a-208c exchanged between the user of the client-computing device 108a (e.g., the electronic communication 208c) and the user of the client-computing device 108b (e.g., the electronic communications 208a, 208b).
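The chronological ordering and sender labeling described above can be sketched in a few lines of Python; the `Message` fields and sender labels are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str         # e.g. "user_108a" or "user_108b" (illustrative labels)
    received_at: float  # receipt time, epoch seconds
    body: str

def build_thread(messages):
    """Order messages chronologically (oldest first), preserving sender labels
    so the thread can indicate who sent each communication."""
    return sorted(messages, key=lambda m: m.received_at)

thread = build_thread([
    Message("user_108b", 2.0, "hello"),
    Message("user_108a", 3.0, "hi!"),
    Message("user_108b", 1.0, "hey"),
])
```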

[0045] In one or more embodiments, the networking system 104 can provide various options for creating electronic communications (e.g., social media content) for inclusion in the communication thread 206. For example, in response to a detected user input via the communication thread GUI 204 (e.g., a swipe up, a selection of an option element adjacent to a text input box, etc.), the networking system 104 can provide an options overlay 210, as shown in FIG. 2B, overlaid on all or a portion of the communication thread GUI 204. As shown in FIG. 2B, the options overlay 210 can include the selectable selfie option 212a, along with other electronic communications selectable options (e.g., stickers, GIFS, emoji).

[0046] In response to a detected selection of the selfie option 212a, the personalized reaction system 102 can provide configuration options for the creation of a customized, personalized selfie reaction-element. For example, as shown in FIG. 2C, the personalized reaction system 102 can provide the selfie configuration overlay 214 on the communication thread GUI 204. In one or more embodiments, the selfie configuration overlay 214 can include a configuration portion and a saved stickers portion.

[0047] In at least one embodiment, as shown in FIG. 2C, the configuration portion includes: a live camera viewfinder display 218; augmented reality enhancement options 220a, 220b, 220c, and 220d; a background selection element 222, a timer element 224, and a capture element 226. Each of these will now be discussed in detail. For example, the live camera viewfinder display 218 includes a viewfinder feed from a camera of the client-computing device 108a. In one or more embodiments, the viewfinder feed may be from a front-facing camera of the client-computing device 108a or a rear-facing camera of the client-computing device 108a.

[0048] As discussed above, in one or more embodiments, the personalized reaction system 102 generates a personalized selfie reaction-element based on an underlying multi-media item and one or more additional effects and/or enhancements. In at least one embodiment, the personalized reaction system 102 can capture a multi-media item in response to a detected selection of the capture element 226. For example, the personalized reaction system 102 can capture a multi-media recording of the viewfinder feed within the live camera viewfinder display 218 in response to a detected press-and-hold touch gesture with the capture element 226. In one or more embodiments, the multi-media recording comprises a video. In alternative embodiments, the multi-media recording comprises a “super short”: a burst of images stitched together into a video that plays forward and backward repeatedly. Additionally, the personalized reaction system 102 can capture a multi-media image (e.g., a snapshot or frame) from the viewfinder feed within the live camera viewfinder display 218 in response to a detected tap touch gesture with the capture element 226. In at least one embodiment, the personalized reaction system 102 can toggle back and forth between a color view and a black-and-white view of the multi-media item in response to a detected selection of the display 218.
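The capture behavior described above (tap for a snapshot, press-and-hold for a recording, and the forward-backward “super short” loop) can be sketched as follows; the function names, gesture strings, and frame representation are hypothetical:

```python
def make_super_short(frames):
    """Stitch a burst of frames into one forward-then-backward cycle,
    in the spirit of the 'super short' recording described above."""
    if len(frames) < 2:
        return list(frames)
    # Append the frames in reverse, skipping the last frame to avoid
    # showing it twice at the turnaround point.
    return list(frames) + list(frames[-2::-1])

def capture(gesture, viewfinder_frames):
    """Dispatch a capture gesture: tap -> single snapshot of the latest
    viewfinder frame, press-and-hold -> multi-frame recording."""
    if gesture == "tap":
        return {"type": "image", "frames": [viewfinder_frames[-1]]}
    if gesture == "press_and_hold":
        return {"type": "video", "frames": list(viewfinder_frames)}
    raise ValueError(f"unsupported gesture: {gesture}")
```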

[0049] After capturing a multi-media item, the personalized reaction system 102 can replace the viewfinder feed in the live camera viewfinder display 218 with the captured multi-media item. In this configuration, the personalized reaction system 102 can provide a real-time display of the personalized selfie reaction-element as the user selects configuration options.

[0050] In one or more embodiments, the personalized reaction system 102 provides the augmented reality enhancement options 220a-220d for overlay or incorporation into a customized, personalized selfie reaction-element. For example, in response to a detected selection of the augmented reality enhancement option 220a, the personalized reaction system 102 adds no augmented reality enhancements to the selfie element within the display 218. In response to a detected selection of the augmented reality enhancement option 220b, the personalized reaction system 102 can add a “crying” augmented reality enhancement to the selfie element within the display 218 (e.g., AR tears). In response to a detected selection of the augmented reality enhancement option 220c, the personalized reaction system 102 can add AR “heart eyes” to the selfie element within the display 218. In response to a detected selection of the augmented reality enhancement option 220d, the personalized reaction system 102 can add an AR “celebration” augmented reality enhancement to the selfie element within the display 218.

[0051] In one or more embodiments, the personalized reaction system 102 can add additional effects to the selfie element within the display 218. For example, in response to a detected selection of the background selection element 222, the personalized reaction system 102 can add a solid or gradient colored background to the selfie elements within the display 218. To illustrate, in response to a detected selection of the background selection element 222, the personalized reaction system 102 can utilize facial/body detection techniques to identify a person (e.g., the user of the client-computing device 108a) within the display 218. Based on this identification, the personalized reaction system 102 can further segment out a background (e.g., the space surrounding the identified person) within the display 218 and replace the cropped background with a solid or gradient color. In at least one embodiment, the personalized reaction system 102 can provide an additional display of color and background options for selection in response to repeated selection of the background selection element 222.
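The background-replacement step can be sketched as a simple masking operation, assuming an upstream person-segmentation step has already produced a per-pixel mask (the segmentation model itself, and the pixel representation, are assumptions for illustration):

```python
def replace_background(image, person_mask, color=(0, 128, 255)):
    """Replace every non-person pixel with a solid background color.

    image: H x W grid (list of rows) of (r, g, b) tuples.
    person_mask: H x W grid of bools, True where a person was detected
    by some facial/body detection step (hypothetical upstream stage).
    """
    return [
        [pixel if is_person else color
         for pixel, is_person in zip(row, mask_row)]
        for row, mask_row in zip(image, person_mask)
    ]
```

A gradient background could be produced the same way by making `color` a function of the pixel's row and column rather than a constant.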

[0052] In one or more embodiments, the personalized reaction system 102 can enable the user to set a timer in association with the capture element 226. For example, in response to a detected selection of the timer element 224, the personalized reaction system 102 can capture a multi-media item after a predetermined amount of time (e.g., 3 seconds). The delay can allow the user to have both hands free to be in the captured multi-media item or otherwise pose for the capture of the multi-media item. Additionally, in at least one embodiment, in response to a detected selection of the timer element 224, the personalized reaction system 102 can provide further options with regard to the predetermined amount of time and the type of multi-media item captured (e.g., a multi-media recording or a multi-media image).

[0053] As shown by FIG. 2D, once the user has captured a multi-media item (and optionally applied one or more AR enhancements), the personalized reaction system 102 can replace the live camera viewfinder display 218 with the selfie preview 230. As further shown in FIG. 2D, the personalized reaction system 102 can enable further configuration of a customized, personalized selfie reaction-element. For example, in response to a detected selection of the retake element 236, the personalized reaction system 102 can display the capture and configuration elements (e.g., as shown in FIG. 2C) to reconfigure any component of the customized, personalized selfie reaction-element. After any additional reconfiguration of the customized, personalized selfie reaction-element, the personalized reaction system 102 can send the personalized selfie reaction-element to the networking system 104 in response to a detected selection of the send element 232.

[0054] For example, as shown in FIG. 2F, the personalized reaction system 102 can add the personalized selfie reaction-element to the communication thread 206. In particular, in response to a detected selection of the send element 232 (e.g., as in FIG. 2D), the personalized reaction system 102 can generate an electronic communication 208d including the personalized selfie reaction-element. The personalized reaction system 102 can further add the generated electronic communication 208d to the communication thread 206 including electronic communications 208a-208c between the user of the client-computing device 108a and the user of the client-computing device 108b. If the personalized selfie reaction-element includes any augmented reality enhancements or other effects (e.g., auto-looping multi-media recording, black-and-white coloring, gradient color background, etc.), the personalized reaction system 102 can generate the electronic communication 208d to further reflect those enhancements and effects.

[0055] Returning to FIG. 2D, after generating the personalized selfie reaction-element, the personalized reaction system 102 can provide an option to save the personalized selfie reaction-element. In particular, the personalized reaction system 102 can provide a save sticker option 235. As shown in FIG. 2D, the selfie configuration overlay 214 can include a saved sticker portion. Upon selection of the save sticker option 235, the personalized reaction system 102 can add the personalized selfie reaction-element to the saved sticker portion. In one or more embodiments, the personalized reaction system 102 provides one or more previously generated customized, personalized selfie reaction-elements 228a, 228b, 228c within the saved sticker portion of the selfie configuration overlay 214. In response to a detected user interaction with the selfie configuration overlay 214 (e.g., a swipe-up touch gesture), the personalized reaction system 102 can scroll the selfie configuration overlay 214 so as to display additional previously generated customized, personalized selfie reaction-elements within the selfie configuration overlay 214. For example, as shown in FIG. 2F, in response to a detected user interaction with the selfie configuration overlay 214, the personalized reaction system 102 can remove display elements in the configuration portion of the selfie configuration overlay 214 while adding an additional row of previously generated customized, personalized selfie reaction-elements.

[0056] As shown in FIGS. 2A-2F, the personalized reaction system 102 can provide personalized selfie reaction-element customization functionality in connection with electronic communications within a communication thread. In additional embodiments, the personalized reaction system 102 can provide this functionality in connection with other types of social media content. For example, as shown in FIGS. 3A-3E, the personalized reaction system 102 can provide this functionality in connection with ephemeral social media content. As discussed above, the networking system 104 enables and provides ephemeral content including multi-media recordings and multi-media images for a predetermined amount of time (e.g., 24 hours) as part of a user’s ephemeral content collection (e.g., the user’s “story”).

[0057] FIG. 3A illustrates an ephemeral content display 302 on the touch screen display 202 of the client-computing device 108a. For example, within the ephemeral content display 302, the networking system 104 can provide the ephemeral content within the ephemeral content collection of the user of the client-computing device 108a. In one or more embodiments, the personalized reaction system 102 can enable further configuration of the ephemeral content by providing the options overlay 210, as shown in FIG. 3B, and as discussed above with reference to FIG. 2B.

[0058] In response to a detected selection of the selfie option 212, the personalized reaction system 102 can provide the same selfie configuration options discussed above with regard to FIGS. 2C, 2D, and 2F. For example, as shown in FIG. 3C, in response to the detected selection of the selfie option 212, the personalized reaction system 102 can provide the selfie configuration overlay 214 on the ephemeral content display 302. Based on the detected selections of elements within the selfie configuration overlay 214, the personalized reaction system 102 can configure and generate a customized, personalized selfie reaction-element, as discussed above.

[0059] Additionally or alternatively, the personalized reaction system 102 can provide previously generated customized, personalized selfie reaction-elements for use in connection with ephemeral content. For example, as shown in FIG. 3D, in response to a detected selection of the personalized selfie reaction-element 228b, the personalized reaction system 102 can enable the send element 232 associated with that element.

[0060] As shown in FIG. 3E, in response to a detected selection of the send element 232, the personalized reaction system 102 can add the personalized selfie reaction-element 228b to the ephemeral content within the ephemeral content display 302. In one or more embodiments, the position of the personalized selfie reaction-element 228b is changeable. For example, in response to a touch gesture in connection with the personalized selfie reaction-element 228b (e.g., a press-and-slide touch gesture), the personalized reaction system 102 can move the personalized selfie reaction-element 228b in accordance with the detected touch gesture. Alternatively, the personalized reaction system 102 can automatically place the personalized selfie reaction-element 228b based on an analysis of the underlying ephemeral content. For example, the personalized reaction system 102 can automatically place the personalized selfie reaction-element 228b so as to avoid placement over a face, over an item of prominence, over an area of interest, etc.
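One plausible way to implement the automatic placement described above is to score each candidate position by how much it overlaps regions to avoid (detected faces, items of prominence, areas of interest) and pick the least-overlapping candidate; the box representation and scoring are illustrative assumptions, not the patent's stated method:

```python
def overlap_area(a, b):
    """Area of intersection of two axis-aligned boxes (x1, y1, x2, y2)."""
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, width) * max(0, height)

def place_sticker(candidates, avoid_regions):
    """Choose the candidate sticker box that overlaps the avoid regions
    (faces, prominent items, areas of interest) the least."""
    return min(
        candidates,
        key=lambda box: sum(overlap_area(box, region) for region in avoid_regions),
    )
```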

[0061] After adding the personalized selfie reaction-element 228b to the user’s ephemeral content, the personalized reaction system 102 can make the now-enhanced ephemeral content available via the networking system 104. For example, the personalized reaction system 102 can add the ephemeral content to the ephemeral content collection associated with the user of the client-computing device 108a. Other co-users associated with the user of the client-computing device 108a (e.g., the user’s “friends”) can view the enhanced ephemeral content within the user’s ephemeral content collection via the networking system 104 for as long as the enhanced ephemeral content is available.

[0062] In addition to enabling a user to configure and add customized, personalized selfie reaction-elements to their own ephemeral content, the personalized reaction system 102 also enables users to configure and add customized, personalized selfie reaction-elements to other users’ ephemeral content. For example, FIG. 4A illustrates a co-user ephemeral content display 402 on the touch screen display 202 of the client-computing device 108a. In one or more embodiments, the networking system 104 enables users to view the ephemeral content collections of co-users (e.g., “friends”) via the networking system application (e.g., the networking system application 110a). In at least one embodiment, the networking system 104 provides the co-user’s ephemeral content collection by automatically playing or displaying multi-media items in the co-user ephemeral content display 402.

[0063] In one or more embodiments, the networking system 104 enables a user to comment, reply, or otherwise react to a co-user’s ephemeral content. For example, in response to a detected selection of the text box 404, the networking system application 110a can provide a touch screen keyboard 406, as shown in FIG. 4B. The networking system 104 can accept text input via the touch screen keyboard 406 to generate an electronic communication between the users of the client-computing devices 108a and 108b. Furthermore, as shown in FIG. 4B, the personalized reaction system 102 can provide one or more quick reply elements 408.

[0064] As discussed above with regard to FIGS. 2B and 3B, in response to detected user input (e.g., a swipe-up touch gesture), the personalized reaction system 102 can provide the options overlay 210 on top of the co-user ephemeral content display 402, as shown in FIG. 4C. In response to a detected selection of the selfie option 212, the personalized reaction system 102 can provide the same selfie configuration options discussed above. For example, as shown in FIG. 4D, in response to the detected selection of the selfie option 212, the personalized reaction system 102 can provide the selfie configuration overlay 214 on the co-user ephemeral content display 402. Based on the detected selections of elements within the selfie configuration overlay 214, the personalized reaction system 102 can configure and generate a customized, personalized selfie reaction-element, as discussed above.

[0065] Additionally or alternatively, the personalized reaction system 102 can provide previously generated customized, personalized selfie reaction-elements for use in connection with ephemeral content. For example, as shown in FIG. 4E, in response to a detected selection of the personalized selfie reaction-element 228b, the personalized reaction system 102 can enable the send element 232 associated with that element.

[0066] As shown in FIG. 4F, in response to a detected selection of the send element 232, the personalized reaction system 102 can associate the personalized selfie reaction-element 228b with the co-user’s ephemeral content (e.g., provided via the co-user ephemeral content display 402) by generating or adding to a communication thread 206 between the user of the client-computing device 108a and the co-user associated with the co-user ephemeral content. For example, in response to the detected selection of the send element 232, the personalized reaction system 102 can generate an electronic communication 208e including a representative frame from the co-user’s ephemeral content. The personalized reaction system 102 can then generate an electronic communication 208f including the personalized selfie reaction-element 228b. As discussed above, if the personalized selfie reaction-element 228b includes any augmented reality enhancements or other effects (e.g., auto-looping multi-media recording, black-and-white coloring, gradient color background, etc.), the personalized reaction system 102 can generate the electronic communication 208f to further reflect those enhancements and effects.

[0067] As discussed above, the personalized reaction system 102 can provide and incorporate various augmented reality enhancements in connection with customized, personalized selfie reaction-elements. FIGS. 5A-5D illustrate different types of augmented reality enhancements utilized by the personalized reaction system 102. For example, FIGS. 5A and 5B illustrate an overlay-type augmented reality enhancement. FIGS. 5C and 5D illustrate an alteration-type augmented reality enhancement.

[0068] FIG. 5A shows a captured multi-media recording 502a of a user of the client-computing device 108a. After a detected user-selection of an overlay-type augmented reality enhancement 506 (e.g., the “100” augmented reality enhancement) as described above, the personalized reaction system 102 can overlay the selected augmented reality enhancement 506 on the multi-media recording 502a, as shown in FIG. 5B. In one or more embodiments, the personalized reaction system 102 can automatically place the selected augmented reality enhancement 506. For example, the personalized reaction system 102 can perform facial recognition to appropriately place the augmented reality enhancement 506 (e.g., with the zeroes of the “100” over the user’s eyes).

[0069] FIG. 5C shows a captured multi-media recording 502b of a user of the client-computing device 108a. After a detected user-selection of an alteration-type augmented reality enhancement 504 (e.g., the “big lips” augmented reality enhancement) as described above, the personalized reaction system 102 can incorporate the selected augmented reality enhancement 504, as shown in FIG. 5D. In one or more embodiments, the personalized reaction system 102 incorporates the selected augmented reality enhancement 504 by performing facial recognition to identify the relevant portions of the user’s face (e.g., the user’s lips). The personalized reaction system 102 can further utilize image editing functionality to alter an appearance of the relevant portions of the user’s face (e.g., enlarge the user’s lips).

[0070] As just discussed, the personalized reaction system 102 can incorporate selected augmented reality enhancements into a multi-media item to generate a personalized selfie reaction-element. FIG. 6 provides additional information regarding the technical process utilized by the personalized reaction system 102 to incorporate augmented reality enhancements, in one or more embodiments. For example, during a process of incorporating an augmented reality enhancement into a multi-media item, in one or more embodiments, the personalized reaction system 102 generates a mesh model for a face. The personalized reaction system 102 can generate the mesh model for the face in a variety of ways. As an illustrative example shown in FIG. 6, the personalized reaction system 102 can generate a feature map 602 (e.g., a two-dimensional mapping) for a face including reference features 604 positioned about a face of a user shown within a multi-media item. In one or more embodiments, the personalized reaction system 102 analyzes one or more video frames to generate the feature map 602 including detected reference features 604.

[0071] As shown in FIG. 6, the feature map 602 includes detected edges of facial features including, for example, eyes, eyebrows, a nose, lips, and other detected features of the face. In addition to generally mapping detected objects, the personalized reaction system 102 can further map contours, wrinkles, and other more detailed features of the detected face. In one or more embodiments, the personalized reaction system 102 can map coloring features and other appearance-based features shown within one or more images of the face.

[0072] While FIG. 6 shows an example in which a single image of a face is mapped, the personalized reaction system 102 can further refine identified locations of the reference features 604 by analyzing multiple video frames (e.g., consecutive video frames) of the captured camera viewfinder stream or multi-media item. As an example, the personalized reaction system 102 can analyze an initial frame of a camera stream (e.g., a key frame) and further compute locations of the reference features 604 at previous or subsequent video frames. The personalized reaction system 102 can further apply weighting factors based on detected movement between frames and estimation of optical flow between video frames. In one or more embodiments, the personalized reaction system 102 utilizes the following algorithms to refine determined locations of reference features 604:

$$x_i^t = \lambda_i\, x_{f,i}^t + (1 - \lambda_i)\, x_{o,i}^t \qquad \text{for } 1 \le i \le m$$

$$x_o^t = x^T + \sum_{T \le i < t} w^i$$

where $t$ is the time of a video frame, $0 \le \lambda_i \le 1$ is a weighting factor, $x_{f,i}^t$ is the feature position obtained (detected) at time $t$, and $x_{o,i}^t$ is the estimated feature location. Further, $x^T$ denotes the feature position in a key frame captured at time $T$, and $w^i$ is the forward optical flow vector from frame $i$ to frame $i+1$ used to advance the estimate $x_o^t$. In one or more embodiments, the personalized reaction system 102 utilizes the optical flow-based feature correction process described in Reconstructing Detailed Dynamic Face Geometry From Monocular Video by Garrido et al., which is incorporated herein by reference in its entirety.
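A minimal sketch of this refinement for a single feature: the flow-based estimate accumulates optical flow vectors from the key-frame position, and the final position blends it with the detected position using the weighting factor (pure Python; the data shapes are illustrative):

```python
def flow_estimate(key_position, flow_vectors):
    """Estimate a feature location (x_o at time t) by advancing the key-frame
    position x^T along the accumulated forward optical flow vectors w^i."""
    x, y = key_position
    for dx, dy in flow_vectors:
        x, y = x + dx, y + dy
    return (x, y)

def refine_feature(lam, detected, key_position, flow_vectors):
    """Blend the detected position x_f with the flow-based estimate x_o:
    x = lam * x_f + (1 - lam) * x_o, with 0 <= lam <= 1."""
    ox, oy = flow_estimate(key_position, flow_vectors)
    fx, fy = detected
    return (lam * fx + (1 - lam) * ox, lam * fy + (1 - lam) * oy)
```

With `lam = 1.0` the refined position is the raw detection; smaller values of `lam` lean more on the optical-flow estimate, smoothing jitter between frames.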

[0073] In addition to generating and refining the feature map 602, the personalized reaction system 102 further utilizes a default mesh model 606 in conjunction with the feature map 602 to generate a personalized mesh model 610 for a detected face in a multi-media item. In particular, in one or more embodiments, the personalized reaction system 102 identifies a default mesh model 606 including a three-dimensional model of a generic face that includes a number of vertices that define various points on the face. The default mesh model 606 can include any number of vertices and gridlines depending on computing capabilities of a client device. In addition, the personalized reaction system 102 can select a particular default mesh model 606 based on a detected position or angle of the detected face to further improve upon the accuracy of generating the personalized mesh model 610.

[0074] As shown in FIG. 6, the personalized reaction system 102 can implement a face mesh generator 608 to generate a personalized mesh model 610 based on the default mesh model 606 and including grid lines and vertices that reflect locations of the reference features 604 of the feature map 602. In particular, in one or more embodiments, the personalized reaction system 102 generates the personalized mesh model 610 by manipulating vertices of the default mesh model 606 based on locations of the reference features 604. In one or more embodiments, the personalized reaction system 102 refines the personalized mesh model 610 over time to reflect refined positions of the reference features 604 and/or to reflect more accurate or efficient modeling methods utilized over time to generate the personalized mesh model 610.
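The vertex manipulation described above can be sketched as moving each corresponded default-mesh vertex to its detected 2D feature location while keeping the default depth; the correspondence table and data shapes are assumptions made for illustration:

```python
def personalize_mesh(default_vertices, feature_map, correspondences):
    """Generate a personalized mesh by displacing default-mesh vertices
    toward detected reference-feature locations.

    default_vertices: {vertex_id: (x, y, z)} from a generic face mesh.
    feature_map: {feature_name: (x, y)} of detected 2D reference features.
    correspondences: {vertex_id: feature_name} linking mesh vertices to
    features (this mapping is a hypothetical input, typically fixed per
    mesh topology). Vertices without a detected feature keep their
    default positions.
    """
    personalized = dict(default_vertices)
    for vertex_id, feature_name in correspondences.items():
        if feature_name in feature_map:
            x, y = feature_map[feature_name]
            _, _, z = default_vertices[vertex_id]  # retain default depth
            personalized[vertex_id] = (x, y, z)
    return personalized
```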

[0075] As shown in FIG. 6, the personalized reaction system 102 can generate a single personalized mesh model 610 including a three-dimensional mesh that reflects an entire face, portion of a face, and/or larger profile including the neck and torso of an individual. In one or more embodiments, the personalized reaction system 102 generates multiple personalized mesh models for different angles of a face or profile. For example, the personalized reaction system 102 can generate and utilize a personalized mesh model 610 based on detecting a face looking at the camera in the multi-media item. The personalized reaction system 102 can similarly generate and utilize a different personalized mesh model based on different angles of an individual looking down, up, to the side, or based on a variety of angles of the face to improve accuracy of providing a digital presentation.

[0076] Accordingly, in at least one embodiment, the personalized reaction system 102 can render a personalized selfie reaction-element by positioning a selected augmented reality enhancement on a personalized mesh model 610. For example, the personalized reaction system 102 can render the selected augmented reality enhancement at a position on the user’s detected face within the multi-media item in accordance with vertices of the selected augmented reality enhancement (e.g., indicated by metadata associated with the selected augmented reality enhancement) and corresponding reference features 604 of the personalized mesh model 610. As the user moves within the multi-media item (e.g., a multi-media recording), the personalized reaction system 102 can update a position of the augmented reality enhancement to correspond to changing locations of the reference features 604, thereby providing a realistic representation of the augmented reality enhancement in connection with the user.
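Updating the enhancement's position as the reference features move can be sketched as re-anchoring the overlay to a tracked feature on each frame; the per-frame feature dictionaries and the offset metadata are illustrative assumptions:

```python
def track_overlay(anchor_feature, frames, offset=(0, -20)):
    """Re-position an AR overlay each frame so it follows a reference feature.

    anchor_feature: name of the feature the overlay is anchored to.
    frames: one {feature_name: (x, y)} dict per video frame.
    offset: overlay displacement relative to the anchor, standing in for
    the vertex metadata associated with the enhancement (assumed here).
    Returns one overlay position per frame, or None for frames where the
    anchor feature was not detected.
    """
    positions = []
    for features in frames:
        point = features.get(anchor_feature)
        if point is None:
            positions.append(None)  # face/feature lost this frame
        else:
            positions.append((point[0] + offset[0], point[1] + offset[1]))
    return positions
```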

[0077] FIG. 7 is a schematic diagram illustrating an example embodiment of the personalized reaction system 102. As previously shown in FIG. 1, the personalized reaction system 102 is implemented by a networking system 104 on the server(s) 106, is communicatively connected to one or more client-computing devices 108 (e.g., the client-computing devices 108a, 108b as shown in FIG. 1), and includes various components for performing the processes and features described herein. As shown in FIG. 7, the client-computing device 108 includes the networking system application 110, a display manager 702, a user input detector 704, and data storage 706. Also, as shown in FIG. 7, the server(s) 106 hosts the networking system 104 that includes the personalized reaction system 102 including a communication manager 708, an AR enhancement manager 710, a customized personalized reaction manager 712, and a data storage 714 including personalized reaction data 716. The networking system 104 also includes a social graph 718 including node information 720 and edge information 722. In some embodiments, as indicated by the dashed line, the networking system application 110 can include a portion or all of the personalized reaction system 102.

[0078] In at least one embodiment, the personalized reaction system 102 accesses the networking system 104 in order to identify and analyze social networking system user data. Accordingly, as shown in FIG. 7, the networking system 104 includes the social graph 718 for representing a plurality of users, actions, and concepts. For example, in one or more embodiments, the social graph 718 is accessible by the networking system 104. In one or more embodiments, the social graph 718 includes node information 720 and edge information 722. Node information 720 of the social graph 718 stores information including, for example, nodes for users and nodes for repositories. Edge information 722 of the social graph 718 stores information including relationships between nodes and/or actions occurring within the networking system 104. Further details regarding the networking system 104, the social graph 718, edges, and nodes are presented below with respect to FIG. 10.
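A minimal node/edge store in the spirit of node information 720 and edge information 722 might look like the following; the class, method names, and fields are illustrative, not the actual implementation:

```python
class SocialGraph:
    """Toy social graph: nodes carry attributes (users, repositories, ...);
    edges record relationships or actions between nodes."""

    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict (node information)
        self.edges = []   # (src, dst, relationship) triples (edge information)

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, dst, relationship):
        self.edges.append((src, dst, relationship))

    def neighbors(self, node_id, relationship=None):
        """Nodes reachable from node_id, optionally filtered by relationship."""
        return [dst for src, dst, rel in self.edges
                if src == node_id and (relationship is None or rel == relationship)]

g = SocialGraph()
g.add_node("u1", kind="user")
g.add_node("u2", kind="user")
g.add_edge("u1", "u2", "friend")
```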

[0079] Each of the components 102, 110, and 702-706 of the client-computing device 108, and the components 102 and 708-722 of the networking system 104, can be implemented using a computing device including at least one processor executing instructions that cause the personalized reaction system 102 to perform the processes described herein. In some embodiments, the components of the personalized reaction system 102 can be implemented by the server(s) 106, or across multiple server devices. Alternatively, a combination of one or more server devices and one or more client devices can implement the components of the personalized reaction system 102. Additionally, the components of the personalized reaction system 102 can comprise a combination of computer-executable instructions and hardware.

[0080] In one or more embodiments, the networking system application 110 is a native application installed on the client-computing device 108. For example, the networking system application 110 can be a mobile application that installs and runs on a mobile device, such as a smart phone or tablet computer. Alternatively, the networking system application 110 can be a desktop application, widget, or other form of native computer program. Furthermore, the networking system application 110 may be a remote application accessed by the client-computing device 108. For example, the networking system application 110 may be a web application executed within a web browser of the client-computing device 108.

[0081] As mentioned above, and as shown in FIG. 7, the client-computing device 108 includes the networking system application 110. In one or more embodiments, the networking system application 110 enables the user of the client-computing device 108 to interact with one or more features of the networking system 104. Further, in one or more embodiments, the networking system application 110 enables the user of the client-computing device 108 to interact with one or more features of the personalized reaction system 102. For example, the networking system application 110 enables the user of the client-computing device 108 to scroll through a newsfeed, read posts, compose electronic messages, capture digital photographs and videos, configure and view ephemeral content, and otherwise interact with the networking system 104.

[0082] Additionally, in one or more embodiments, the networking system application 110 also collects contextual information from the client-computing device 108 and provides the collected information to the networking system 104. For example, in one or more embodiments, the networking system application 110 accesses system files, application usage files, and other system information to identify GPS data (e.g., global positioning satellite data), camera viewfinder data, gyroscopic data, application usage data, and a networking system unique identifier associated with the user of the client-computing device 108. The networking system application 110 then provides this data to the networking system 104 for use in providing augmented reality enhancements and determining augmented reality enhancement placement.
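Paragraph [0082] lists the contextual signals the networking system application 110 gathers before sending them to the networking system 104. The patent does not define a payload format; a hypothetical sketch of assembling such a payload (field names are illustrative assumptions, not from the patent) could be:

```python
def collect_context(device_state: dict) -> dict:
    """Illustrative: bundle the contextual signals named in [0082]
    into a single payload for upload to the networking system."""
    return {
        "gps": device_state.get("gps"),                    # global positioning data
        "gyroscope": device_state.get("gyroscope"),        # device orientation
        "app_usage": device_state.get("app_usage", {}),    # application usage data
        "user_id": device_state.get("networking_system_id"),  # unique identifier
    }


payload = collect_context({
    "gps": (37.48, -122.15),
    "gyroscope": {"pitch": 0.1, "roll": 0.0},
    "networking_system_id": "user-123",
})
```

The networking system could then use fields like `gps` and `gyroscope` when selecting and placing augmented reality enhancements, as the paragraph describes.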

[0083] Also shown in FIG. 7, the client-computing device 108 includes the display manager 702. In one or more embodiments, the display manager 702 generates, provides, manages, and/or controls one or more graphical user interfaces that allow a user to interact with features of the personalized reaction system 102. For example, the display manager 702 generates a graphical user interface (“GUI”) that includes the selfie configuration overlay 214 including various selectable controls. In at least one embodiment, the display manager 702 further overlays one or more selected enhancements and effects on a captured multi-media item in order to provide a preview of how a personalized selfie reaction-element might look. The display manager 702 can additionally generate other GUIs that assist a user in creating social media content and sending the social media content to others.

[0084] More specifically, the display manager 702 facilitates the display of a graphical user interface. For example, the display manager 702 may compose the graphical user interface of a plurality of graphical components, objects, and/or elements that allow a user to engage with features of the personalized reaction system 102. More particularly, the display manager 702 may direct a client-computing device to display a group of graphical components, objects, and/or elements that enable a user to interact with various features of the personalized reaction system 102.

[0085] In addition, the display manager 702 directs a client-computing device to display one or more graphical objects, controls, or elements that facilitate user input for interacting with various features of the personalized reaction system 102. To illustrate, the display manager 702 provides a graphical user interface that allows a user to configure a customized, personalized selfie reaction-element. The display manager 702 also facilitates the input of text or other data for the purpose of interacting with one or more features of the personalized reaction system 102. For example, the display manager 702 provides a GUI that functions in connection with a touch screen. A user can interact with the touch screen using one or more touch gestures to select augmented reality enhancements, capture multi-media items, input text, manipulate displays, and so forth.

[0086] Furthermore, the display manager 702 is capable of transitioning between two or more GUIs. For example, in one embodiment, the display manager 702 provides a communication thread GUI (e.g., the communication thread GUI 204). Later, in response to detected input from the user, the display manager 702 transitions to or overlays a second GUI that includes the selfie configuration overlay 214.

[0087] As shown by the dotted-lines in FIG. 7, optionally, the networking system application 110 includes the personalized reaction system 102. For example, in one or more embodiments, the networking system application 110 implemented by the client-computing device 108 comprises the communication manager 708, the AR enhancement manager 710, the customized personalized reaction manager 712, and optionally the data storage 714 and personalized reaction data 716. Thus, in one or more embodiments, the methods described herein above can be implemented locally by the client-computing device 108. Alternatively, as described below, the server(s) 106 can implement the personalized reaction system 102.

[0088] As further illustrated in FIG. 7, the personalized reaction system 102 includes the user input detector 704. In one or more embodiments, the user input detector 704 detects, receives, and/or facilitates user input. In some examples, the user input detector 704 detects one or more user interactions with respect to a user interface or GUI. As referred to herein, a “user interaction” means a single interaction, or combination of interactions, received from a user by way of one or more input devices. For example, the user input detector 704 detects a user interaction from a keyboard, mouse, touch pad, touch screen, and/or any other input device. In the event the client-computing device includes a touch screen, the user input detector 704 detects one or more touch gestures (e.g., swipe gestures, tap gestures, pinch gestures, reverse pinch gestures) from a user that form a user interaction. In some examples, a user can provide the touch gestures in relation to and/or directed at one or more graphical objects or graphical elements of a user interface or GUI.
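The user input detector 704 is described functionally, not structurally. One common way to realize such a component is a dispatch table mapping gesture types to handlers; the sketch below is an assumption about how that could be wired, with all names hypothetical:

```python
class UserInputDetector:
    """Illustrative sketch: route detected touch gestures
    (tap, swipe, pinch, ...) to registered handlers."""

    def __init__(self):
        self._handlers = {}

    def on(self, gesture: str, handler):
        """Register a callback for a gesture type."""
        self._handlers[gesture] = handler

    def detect(self, gesture: str, **event):
        """Invoke the handler for a detected gesture, if any."""
        handler = self._handlers.get(gesture)
        return handler(**event) if handler else None


detector = UserInputDetector()
detector.on("tap", lambda x, y: ("select", x, y))
result = detector.detect("tap", x=3, y=4)
```

Other components of the personalized reaction system 102 would register handlers with the detector and react when it reports an interaction, matching the description in [0090].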

[0089] The user input detector 704 may additionally, or alternatively, receive data representative of a user interaction. For example, the user input detector 704 may receive one or more user configurable parameters from a user, one or more commands from the user, and/or other suitable user input. The user input detector 704 may receive input data from one or more components of the personalized reaction system 102 or from one or more remote locations.

[0090] The personalized reaction system 102 performs one or more functions in response to the user input detector 704 detecting user input and/or receiving other data. Generally, a user can control, navigate within, and otherwise use the personalized reaction system 102 by providing one or more user inputs that the user input detector 704 can detect.

[0091] As illustrated in FIG. 7, the client-computing device 108 also includes the data storage 706. The data storage 706 can include social media content, multi-media items, augmented reality enhancements, and other electronic communication data. In one or more embodiments, social media content includes social media information, such as described herein.

[0092] As shown in FIG. 7, and as mentioned above, the server(s) 106 hosts the networking system 104. In one or more embodiments, the networking system 104 provides communication threads, live video broadcasts, digital media items, networking system posts, electronic messages, and other networking system features to one or more networking system users (e.g., by way of a newsfeed, a communication thread, an ephemeral content collection, a timeline, a profile, a “wall,” a live video broadcast display). Additionally, the networking system 104 includes the social graph 718, as described above. As further illustrated in FIG. 7, and as mentioned above, the networking system 104 includes the personalized reaction system 102 including a communication manager 708, an AR enhancement manager 710, a customized personalized reaction manager 712, and a data storage 714 storing personalized reaction data 716.

[0093] As just mentioned, the personalized reaction system 102 includes a communication manager 708. In one or more embodiments, the communication manager 708 manages all communication between client-computing devices and the personalized reaction system 102 via the networking system 104. For example, the communication manager 708 can send and receive social media content such as, but not limited to, electronic communications, ephemeral content, posts, interactions, and other social networking activity.

[0094] As mentioned above, the personalized reaction system 102 includes an AR enhancement manager 710. In one or more embodiments, the AR enhancement manager 710 manages AR enhancements for use in personalized selfie reaction-element creation. For example, the AR enhancement manager 710 can store AR enhancements in various ways. To illustrate, the AR enhancement manager 710 can store AR enhancements (e.g., in the data storage 714) based on title (e.g., “crying eyes,” “celebration,” “big lips,” etc.), based on type (e.g., overlay or altering), or based on metadata associated with each AR enhancement.

[0095] In one or more embodiments, the AR enhancement manager 710 also provides AR enhancements in various ways. For example, in response to determining that a personalized selfie reaction-element configuration is initiated on the client-computing device 108, the AR enhancement manager 710 can provide a threshold number of AR enhancements to the client-computing device 108. To illustrate, in one embodiment, the AR enhancement manager 710 can identify and provide a top threshold number of popular AR enhancements from across the networking system 104. For example, the AR enhancement manager 710 can analyze use information associated with each available AR enhancement to identify AR enhancements that are used most frequently across networking system users in connection with social media content (e.g., in selfie elements, electronic communications, ephemeral content). In some embodiments, the AR enhancement manager 710 can determine the popularity of AR enhancements across geographic areas, demographic groups, social media content types (e.g., ephemeral content, communication threads, posts), or networking system groups. The AR enhancement manager 710 can then provide a predetermined number (e.g., five) of the most popular AR enhancements to the client-computing device 108.
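The popularity ranking in [0095] amounts to counting usage events per enhancement and returning the top N. The patent does not give an algorithm; a straightforward sketch under that reading (function name and event format are assumptions) is:

```python
from collections import Counter


def top_enhancements(usage_events: list[str], n: int = 5) -> list[str]:
    """Illustrative: rank AR enhancements by how often they appear
    in usage events and return the n most popular, as in [0095]."""
    counts = Counter(usage_events)
    return [title for title, _ in counts.most_common(n)]


events = ["crying eyes"] * 3 + ["celebration"] * 2 + ["big lips"]
popular = top_enhancements(events, n=2)
```

To scope popularity by geography or demographic as the paragraph suggests, the events would simply be filtered to the relevant segment before counting.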
