Facebook Patent | Systems And Methods For Providing Demographic Analysis For Media Content Based On User View Or Activity
Publication Number: 20180300747
Publication Date: 2018-10-18
Applicants: Facebook
Abstract
Systems, methods, and non-transitory computer-readable media can present a plurality of content items in a virtual reality content item. Tracking data associated with a plurality of users that access the virtual reality content item can be obtained. An analysis associated with the plurality of content items based on the tracking data can be provided, wherein the analysis indicates one or more attributes associated with the plurality of users.
FIELD OF THE INVENTION
[0001] The present technology relates to the field of content provision. More particularly, the present technology relates to techniques for analyzing user view and/or activity associated with content presented through computing devices.
BACKGROUND
[0002] Today, people often utilize computing devices (or systems) for a wide variety of purposes. Users can operate their computing devices to, for example, interact with one another, create content, share content, and access information. Under conventional approaches, content items (e.g., images, videos, audio files, etc.) can be made available through a content sharing platform. Users can operate their computing devices to access the content items through the platform. Typically, the content items can be provided, or uploaded, by various entities including, for example, content publishers and also users of the content sharing platform. In some instances, the content items can be categorized and/or curated.
SUMMARY
[0003] Various embodiments of the present disclosure can include systems, methods, and non-transitory computer readable media configured to present a plurality of content items in a virtual reality content item. Tracking data associated with a plurality of users that access the virtual reality content item can be obtained. An analysis associated with the plurality of content items based on the tracking data can be provided, wherein the analysis indicates one or more attributes associated with the plurality of users.
[0004] In some embodiments, the plurality of content items is included in the virtual reality content item for comparison in an A/B test.
[0005] In other embodiments, the computing system is controlled by a first entity and the A/B test is requested by a second entity.
[0006] In certain embodiments, the plurality of content items is presented at respective locations in the virtual reality content item suggested by the computing system.
[0007] In an embodiment, the plurality of content items includes a first content item and a second content item that is different from the first content item, and the first content item is provided at a first location in the virtual reality content item and the second content item is provided at a second location in the virtual reality content item.
[0008] In some embodiments, the plurality of content items includes a first content item and a second content item that are the same, and the first content item is provided at a first location in the virtual reality content item and the second content item is provided at a second location in the virtual reality content item.
[0009] In certain embodiments, the tracking data is obtained based on positions of respective viewports associated with the plurality of users.
[0010] In an embodiment, the one or more attributes associated with the plurality of users include one or more of: an age, an age range, a gender, a geographical region, or an interest.
[0011] In some embodiments, one or more suggested locations in the virtual reality content item for providing an advertisement can be provided, based on the analysis of the plurality of content items.
[0012] In certain embodiments, an advertisement to provide in the virtual reality content item can be selected based on the analysis of the plurality of content items.
[0013] It should be appreciated that many other features, applications, embodiments, and/or variations of the disclosed technology will be apparent from the accompanying drawings and from the following detailed description. Additional and/or alternative implementations of the structures, systems, non-transitory computer readable media, and methods described herein can be employed without departing from the principles of the disclosed technology.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 illustrates an example system including an example media content user analysis module configured to analyze user view tracking data associated with virtual reality content items, according to an embodiment of the present disclosure.
[0015] FIG. 2 illustrates an example of an A/B testing analysis module configured to provide A/B testing and analysis of user attributes, according to an embodiment of the present disclosure.
[0016] FIGS. 3A-E illustrate examples of streaming a virtual reality content item and performing A/B testing, according to an embodiment of the present disclosure.
[0017] FIG. 4 illustrates an example first method for analyzing user view tracking data associated with virtual reality content items, according to an embodiment of the present disclosure.
[0018] FIG. 5 illustrates an example second method for analyzing user view tracking data associated with virtual reality content items, according to an embodiment of the present disclosure.
[0019] FIG. 6 illustrates a network diagram of an example system including an example social networking system that can be utilized in various scenarios, according to an embodiment of the present disclosure.
[0020] FIG. 7 illustrates an example of a computer system or computing device that can be utilized in various scenarios, according to an embodiment of the present disclosure.
[0021] The figures depict various embodiments of the disclosed technology for purposes of illustration only, wherein the figures use like reference numerals to identify like elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the figures can be employed without departing from the principles of the disclosed technology described herein.
DETAILED DESCRIPTION
Providing Demographic Analysis for Media Content Based on User View or Activity
[0022] People use computing devices (or systems) for a wide variety of purposes. As mentioned, under conventional approaches, a user can utilize a computing device to share content items (e.g., documents, images, videos, audio, etc.) with other users. Such content items can be made available through a content sharing platform. Users can operate their computing devices to access the content items through the platform. Typically, the content items can be provided, or uploaded, by various entities including, for example, content publishers and also users of the content sharing platform.
[0023] In some instances, a user can access virtual reality content through a content provider. Such virtual reality content can be presented, for example, in a viewport that is accessible through a computing device (e.g., a virtual reality device, headset, or any computing device capable of presenting virtual reality content). In general, a virtual reality content item (or immersive video) corresponds to any virtual reality media that encompasses (or surrounds) a viewer (or user). Some examples of virtual reality content items include spherical videos, half sphere videos (e.g., 180 degree videos), arbitrary partial spheres, 225 degree videos, and 3D 360 videos. Such virtual reality content items need not be limited to videos formatted using a spherical shape; the disclosed techniques may also be applied to immersive videos formatted using other shapes including, for example, cubes, pyramids, and other shape representations of a recorded three-dimensional world. In some embodiments, a virtual reality content item can be created by stitching together various video streams (or feeds) that were captured by cameras that are placed at particular locations and/or positions to capture a view of the scene (e.g., 180 degree view, 225 degree view, 360 degree view, etc.). Once stitched together, a user can access, or present (e.g., playback), the virtual reality content item. Generally, while accessing the virtual reality content item, the user can zoom and change the direction (e.g., pitch, yaw, roll) of the viewport to access different portions of the scene in the virtual reality content item. The direction of the viewport can be used to determine which stream of the virtual reality content item is presented.
[0024] A viewport can indicate what a user is viewing within a virtual reality content item and can imply an interest in what the user is viewing. Conventional approaches specifically arising in the realm of computer technology may keep track of what users are viewing within virtual reality content items and provide some type of analysis. However, conventional approaches may have limited demographic information associated with users and may not provide a detailed analysis relating to what users are viewing within virtual reality content items and various attributes associated with the users. An improved approach rooted in computer technology can overcome the foregoing and other disadvantages associated with conventional approaches specifically arising in the realm of computer technology. Based on computer technology, the disclosed technology can provide an analysis of what users view within virtual reality content items based on various attributes associated with users. For example, virtual reality content items can be presented through a social networking system, which can have access to various types of demographic information associated with users who view the virtual reality content items. In some embodiments, the disclosed technology can perform A/B testing of various content items within a virtual reality content item. As used herein, a reference to a content item within a virtual reality content item, or the like, relates to presentation of the content item within an immersive experience, a multidimensional (e.g., 3D) scene, or any other type of engaging environment presented by the virtual reality content item. Examples of content items can include any type of content that can be presented within a virtual reality content item, such as objects, images, videos, etc. For example, two different content items can be presented at different locations within a virtual reality content item in order to determine which content item users prefer. In some embodiments, the different locations may be within a certain proximity (e.g., a threshold distance). As another example, the same content item can be presented at different locations within a virtual reality content item in order to determine which location users are more likely to view. Many variations are possible. The disclosed technology can provide an analysis of user attributes and/or other information for content items included in the A/B testing based on view tracking data and/or heat map data from the A/B testing. In certain embodiments, the disclosed technology can provide an analysis of user attributes for a virtual reality content item without performing A/B testing. For example, the disclosed technology can obtain view tracking data and/or heat map data for a virtual reality content item and provide an analysis of user attributes based on what users viewed within the virtual reality content item. An analysis of user attributes for virtual reality content items can be used to determine content items to present to users within virtual reality content items or placement of content items within virtual reality content items. For example, an advertisement can be targeted to a particular demographic group of users and can be presented at specific locations within a virtual reality content item. In this manner, the disclosed technology can provide helpful insights relating to virtual reality content items based on demographic information. Details relating to the disclosed technology are provided below.
[0025] FIG. 1 illustrates an example system 100 including an example media content user analysis module 102 configured to analyze user view tracking data associated with virtual reality content items, according to an embodiment of the present disclosure. As shown in the example of FIG. 1, the media content user analysis module 102 can include a content module 104, a streaming module 106, and an A/B testing analysis module 108. In some instances, the example system 100 can include at least one data store 112. A client module 114 can interact with the media content user analysis module 102 over one or more networks 150 (e.g., the Internet, a local area network, etc.). The client module 114 can be implemented in a software application running on a computing device (e.g., a virtual reality device, headset, or any computing device capable of presenting virtual reality content). In various embodiments, the network 150 can be any wired or wireless computer network through which devices can exchange data. For example, the network 150 can be a personal area network, a local area network, or a wide area network, to name some examples. The components (e.g., modules, elements, etc.) shown in this figure and all figures herein are exemplary only, and other implementations may include additional, fewer, integrated, or different components. Some components may not be shown so as not to obscure relevant details. The disclosed technology is explained in connection with virtual reality content items for illustrative purposes, but the disclosed technology can apply to any type of media content.
[0026] In some embodiments, the media content user analysis module 102 can be implemented, in part or in whole, as software, hardware, or any combination thereof. In general, a module, as discussed herein, can be associated with software, hardware, or any combination thereof. In some implementations, one or more functions, tasks, and/or operations of modules can be carried out or performed by software routines, software processes, hardware, and/or any combination thereof. In some cases, the media content user analysis module 102 can be implemented, in part or in whole, as software running on one or more computing devices or systems, such as on a user computing device or client computing system. For example, the media content user analysis module 102, or at least a portion thereof, can be implemented as or within an application (e.g., app), a program, or an applet, etc., running on a user computing device or a client computing system, such as the user device 610 of FIG. 6. Further, the media content user analysis module 102, or at least a portion thereof, can be implemented using one or more computing devices or systems that include one or more servers, such as network servers or cloud servers. In some instances, the media content user analysis module 102 can, in part or in whole, be implemented within or configured to operate in conjunction with a social networking system (or service), such as the social networking system 630 of FIG. 6. It should be understood that there can be many variations or other possibilities.
[0027] In some embodiments, the media content user analysis module 102 can be configured to communicate and/or operate with the at least one data store 112 in the example system 100. In various embodiments, the at least one data store 112 can store data relevant to the function and operation of the media content user analysis module 102. For example, such data can include information relating to virtual reality content items, content items included in virtual reality content items, A/B testing, demographic information associated with users, advertisements, etc. In some implementations, the at least one data store 112 can store information associated with the social networking system (e.g., the social networking system 630 of FIG. 6). The information associated with the social networking system can include data about users, social connections, social interactions, locations, geo-fenced areas, maps, places, events, pages, groups, posts, communications, content, feeds, account settings, privacy settings, a social graph, and various other types of data. In some implementations, the at least one data store 112 can store information associated with users, such as user identifiers, user information, profile information, user specified settings, content produced or posted by users, and various other types of user data. It should be appreciated that there can be many variations or other possibilities.
[0028] In various embodiments, the content module 104 can provide access to various types of media content (e.g., virtual reality content items, immersive videos, etc.) to be presented through a viewport. This viewport may be provided through a display of a computing device (e.g., a virtual reality computing device) in which the client module 114 is implemented, for example. In some instances, the computing device may be running a software application (e.g., social networking application) that is configured to present media content. Some examples of virtual reality content can include videos composed using monoscopic 360 degree views or videos composed using stereoscopic 180 degree views. In various embodiments, virtual reality content items can capture views (e.g., 180 degree views, 225 degree views, 360 degree views, etc.) of one or more scenes over some duration of time. Such scenes may be captured from the real world and/or be computer generated. Further, a virtual reality content item can be created by stitching together various video streams (or feeds) that were captured by cameras that are placed at particular locations and/or positions to capture a view of the scene. Such streams may be pre-determined for various directions, e.g., angles (e.g., 0 degree, 30 degrees, 60 degrees, etc.), accessible in a virtual reality content item. Once stitched together, a user can access, or present, the virtual reality content item to view a portion of the virtual reality content item along some direction (or angle). Generally, the portion of the virtual reality content item (e.g., stream) shown to the user can be determined based on the location and direction of the user’s viewport in three-dimensional space. In some embodiments, a virtual reality content item (e.g., stream, immersive video, spherical video, etc.) may be composed using multiple media content items. For example, a media content item may be composed using a first media content item (e.g., a first live broadcast) and a second media content item (e.g., a second live broadcast).
[0029] In one example, the computing device in which the client module 114 is implemented can request presentation of a virtual reality content item (e.g., spherical video). In this example, the streaming module 106 can provide one or more streams of the virtual reality content item to be presented through the computing device. The stream(s) provided will typically correspond to a direction of the viewport in the virtual reality content item being accessed. As presentation of the virtual reality content item progresses, the client module 114 can continually provide the media content user analysis module 102 with information describing the direction at which the viewport is facing. The streaming module 106 can use this information to determine which stream to provide the client module 114. For example, while accessing the virtual reality content item, the client module 114 can notify the media content user analysis module 102 that the viewport is facing a first direction. Based on this information, the streaming module 106 can provide the client module 114 with a first stream of the virtual reality content item that corresponds to the first direction.
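By way of illustration only, the following Python sketch shows one way a streaming component could map a reported viewport direction to one of several pre-encoded directional streams, as in the example above. The stream names, the set of encoded angles, and the nearest-angle selection rule are hypothetical assumptions, not details of the disclosed system.

```python
# Hypothetical sketch: selecting a pre-encoded stream from a reported
# viewport yaw angle. Stream angles and names are illustrative only.
from typing import List, Tuple

STREAMS: List[Tuple[float, str]] = [
    (0.0, "stream_0"), (30.0, "stream_30"),
    (60.0, "stream_60"), (90.0, "stream_90"),
    # ... one entry per pre-encoded direction
]

def select_stream(viewport_yaw_degrees: float) -> str:
    """Return the stream whose encoded direction is closest to the viewport yaw."""
    yaw = viewport_yaw_degrees % 360.0

    def angular_distance(a: float, b: float) -> float:
        # Difference between two angles, wrapped into [0, 180].
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    _, name = min(STREAMS, key=lambda s: angular_distance(s[0], yaw))
    return name

print(select_stream(42.0))  # -> "stream_30" (closest pre-encoded direction)
```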
[0030] In some embodiments, the A/B testing analysis module 108 can provide A/B testing and analysis of user attributes associated with virtual reality content items. A virtual reality content item including content items can be included in an A/B test, and view tracking data and/or heat map data can be generated for users who view the virtual reality content item. An analysis of user attributes and/or other information can be performed based on the view tracking data and/or the heat map data. User attribute analysis information can be provided to an entity associated with the A/B test. More details describing the A/B testing analysis module 108 will be provided below in reference to FIG. 2.
[0031] FIG. 2 illustrates an example of an A/B testing analysis module 202 configured to provide A/B testing and analysis of user attributes, according to an embodiment of the present disclosure. In some embodiments, the A/B testing analysis module 108 of FIG. 1 can be implemented with the A/B testing analysis module 202. As shown in the example of FIG. 2, the A/B testing analysis module 202 can include an A/B test module 204, a view tracking data module 206, a heat map data module 208, a demographic analysis module 210, and an advertisement module 212.
[0032] The A/B test module 204 can perform A/B testing associated with content items included in a virtual reality content item. In some embodiments, focus of users on particular objects or locations presented in a virtual reality content item can be determined. Such determinations can be made by tracking interactions of a user (e.g., eye gaze or other physical gestures) with a user interface (e.g., viewport) through which the virtual reality content item is presented. In some embodiments, virtual reality content items can provide an environment for performing A/B testing. For example, entities such as businesses can request A/B testing for particular content items. Content items can include any type of content that can be included within virtual reality content items, such as objects, images, videos, etc. An A/B test can compare user preferences between two content items or among multiple content items. For example, two content items can be included within a virtual reality content item, and view tracking data can be obtained from various users in order to analyze whether users are interested in neither content item, one or the other content item, or both.
[0033] Various configurations and/or scenarios can be used in A/B testing. In one example, different content items can be provided in different locations within a virtual reality content item in order to determine which content item(s) users prefer. For instance, an area on the left in a scene presented by the virtual reality content item can present a first content item (e.g., a first object) and an area on the right can present a second content item (e.g., a second object). View tracking data can be obtained for users in order to determine whether users viewed the first content item, the second content item, neither, or both. In another example, the same content item can be provided in different locations within a virtual reality content item in order to determine which location(s) users prefer. For instance, users may be more likely to view a first location within the virtual reality content item over a second location within the virtual reality content item. In a further example, variations of a content item can be provided in different locations within a virtual reality content item in order to determine which variation(s) users prefer. For instance, in developing a marketing strategy, a company may want to find out which version(s) of a mascot or other identifier for the company are more appealing to users. In this instance, different versions of a mascot can be presented in order to determine which version(s) users are more likely to view. Many variations are possible.
[0034] The A/B test module 204 can specify various settings associated with A/B tests. For example, the A/B test module 204 can specify content items to include in a virtual reality content item for an A/B test and/or locations for presenting the content items within the virtual reality content item. Content items and/or locations for presenting content items can be selected by an entity requesting A/B testing through the media content user analysis module 102 or a system performing A/B testing through the media content user analysis module 102. For example, a company requesting A/B testing can determine locations for presenting content items. As another example, a system performing A/B testing can automatically determine locations for presenting content items or suggest potential locations for presenting content items. In some embodiments, potential locations can be suggested based on user attribute analysis information from other virtual reality content items that are similar to the virtual reality content item in an A/B test. An analysis of user attributes may have been performed for other virtual reality content items, and the user attribute analysis for the other virtual reality content items can be used to suggest potential locations. In other embodiments, potential locations can be suggested based on a visual analysis of content of the virtual reality content item. For instance, the visual analysis can be performed on image data of the virtual reality content item in order to identify objects presented within the virtual reality content item. A content item can be provided anywhere within a virtual reality content item, and there can be numerous options in terms of what is presented by the content items or where the content items are placed. Accordingly, various aspects associated with content items can be tested. A virtual reality content item for performing A/B testing can be provided by an entity requesting A/B testing, a system providing A/B testing, etc. For example, an entity requesting A/B testing can upload a virtual reality content item and designate various areas within the virtual reality content item to present various content items, such as a first area to present a first content item, a second area to present a second content item, etc. As another example, a system performing A/B testing can provide a virtual reality content item in which content items as well as their locations therein can be provided and designated by an entity requesting A/B testing. In some embodiments, a virtual reality content item for an A/B test can be provided to a target audience. The target audience can be determined based on various criteria as appropriate. For example, the target audience can be determined based on user attributes, attributes associated with the virtual reality content item, attributes associated with content items included in the virtual reality content item, etc.
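For illustration, the configuration below sketches how an A/B test of the kind described above might be specified: two content items placed at different locations within a virtual reality content item, together with optional target-audience criteria. All field names and values are hypothetical, not the disclosed system's actual schema.

```python
# Illustrative A/B test configuration; field names and values are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PlacedContentItem:
    item_id: str
    yaw_degrees: float    # horizontal placement within the scene
    pitch_degrees: float  # vertical placement within the scene

@dataclass
class ABTestConfig:
    vr_content_id: str
    variants: List[PlacedContentItem]
    target_audience: Dict[str, str] = field(default_factory=dict)

config = ABTestConfig(
    vr_content_id="vr_item_123",
    variants=[
        # Two versions of a content item, placed left and right of center.
        PlacedContentItem("mascot_v1", yaw_degrees=-45.0, pitch_degrees=0.0),
        PlacedContentItem("mascot_v2", yaw_degrees=45.0, pitch_degrees=0.0),
    ],
    target_audience={"age_range": "18-24", "region": "US"},
)
```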
[0035] A user’s interest in content items in a virtual reality content item can be determined based on view tracking data and/or heat map data for the user in connection with the virtual reality content item. View tracking data and heat map data for users can be obtained by the view tracking data module 206 and the heat map data module 208 as explained below. A user’s interest in a content item can be determined based on various factors. For example, a user’s interest in a content item can be indicated by whether the user looked at the content item, how long the user looked at the content item, whether the user interacted with the content item, etc. In some cases, a duration of how long a user views a certain content item may indicate a degree of the user’s interest in the content item. For example, the longer a user looks at a content item, the more likely the user is interested in the content item. A user’s interaction with a certain content item (e.g., selection, like, comment, etc.) can also indicate the user’s interest in the content item. By interacting with the content item, the user can convey that the user is strongly interested in the content item.
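As a minimal sketch of the factors described above, the following example combines view duration and interaction count into a single interest score; the weights are arbitrary assumptions rather than values taken from this disclosure.

```python
# Minimal interest score from view duration and interactions;
# the weights are arbitrary assumptions.
def interest_score(view_seconds: float, interactions: int) -> float:
    """Longer views and explicit interactions (selections, likes, comments)
    both raise the inferred interest in a content item."""
    DURATION_WEIGHT = 1.0      # assumed weight per second viewed
    INTERACTION_WEIGHT = 10.0  # interactions signal strong interest
    return DURATION_WEIGHT * view_seconds + INTERACTION_WEIGHT * interactions

print(interest_score(view_seconds=4.5, interactions=1))  # -> 14.5
```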
[0036] In some embodiments, the view tracking data module 206 can be configured to obtain respective view tracking data for a virtual reality content item including content items for A/B testing. For example, view tracking data for a given virtual reality content item may be collected for each user (or viewer) that has accessed the virtual reality content item. The view tracking data for a user may identify regions that were accessed through the user’s viewport during presentation of the virtual reality content item. Such view tracking data may be collected for each frame corresponding to the virtual reality content item. In some embodiments, a user’s view tracking data for a virtual reality content item can be determined based on changes to the user’s viewport during presentation of the virtual reality content item. Such changes to the viewport may be measured using various approaches that can be used either alone or in combination. For example, changes to the viewport may be measured using sensor data (e.g., gyroscope data, inertial measurement unit data, etc.) that describes movement of the computing device being used to present the virtual reality content item. In another example, changes to the viewport can be measured using gesture data describing the types of gestures (e.g., panning, zooming, etc.) that were performed during presentation of the virtual reality content item. Some other examples for measuring changes to the viewport include using input device data that describes input operations (e.g., mouse movement, dragging, etc.) performed during presentation of the virtual reality content item, headset movement data that describes changes in the viewport direction during presentation of the virtual reality content item, and eye tracking data collected during presentation of the virtual reality content item, to name some examples.
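The following sketch illustrates one possible shape for such per-frame view tracking data, assuming viewport direction samples (yaw, pitch, zoom) reported as presentation progresses; the record layout is hypothetical and not part of this disclosure.

```python
# Hypothetical per-frame view tracking record; field names are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ViewportSample:
    frame_index: int
    yaw: float    # viewport direction, degrees
    pitch: float
    zoom: float

def track_view(samples: List[ViewportSample]) -> Dict[int, Tuple[float, float, float]]:
    """Collapse raw viewport samples into per-frame tracking data,
    keeping the last sample reported for each frame."""
    per_frame: Dict[int, Tuple[float, float, float]] = {}
    for s in samples:
        per_frame[s.frame_index] = (s.yaw, s.pitch, s.zoom)
    return per_frame

samples = [ViewportSample(0, 10.0, 0.0, 1.0), ViewportSample(0, 12.0, 1.0, 1.0)]
print(track_view(samples))  # -> {0: (12.0, 1.0, 1.0)}
```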
[0037] In some embodiments, the heat map data module 208 can be configured to generate (or obtain) heat maps for users viewing virtual reality content items including content items for A/B testing. In some embodiments, heat maps for a given virtual reality content item may be generated based on view tracking data for the virtual reality content item. As mentioned, the view tracking data module 206 can obtain respective view tracking data for users that viewed a virtual reality content item. Each user’s view tracking data can indicate which regions of a given frame (or set of frames) were accessed using a user’s viewport during presentation of a virtual reality content item. That is, for any given frame in the virtual reality content item, the heat map data module 208 can generate (or obtain) user-specific heat maps that graphically represent regions in the frame that were of interest to a given user. In some embodiments, heat maps can be generated for a set of frames that correspond to some interval of time. For example, a respective heat map can be generated for every second of the virtual reality content item. In some embodiments, user-specific heat maps for a given virtual reality content item can be combined to generate aggregated heat maps that represent aggregated regions of interest in frames corresponding to the virtual reality content item. Thus, for example, the respective user-specific heat maps can be aggregated on a frame-by-frame basis so that each frame of the virtual reality content item is associated with its own aggregated heat map that identifies the regions of interest in the frame. These regions of interest can correspond to various points of interest that appear in frames and were determined to be of interest to some, or all, of the users that viewed the virtual reality content item. In some embodiments, these regions of interest can correspond to various points of interest that appear in frames and were determined to be of interest to users sharing one or more common characteristics with the user who is to view the virtual reality content item. In various embodiments, heat map data, aggregated or otherwise, need not be actual heat maps that are represented graphically but may instead be some representation of view tracking data. For example, in some embodiments, the heat map data may identify clusters of view activity within individual frames of virtual reality content items. In some embodiments, the clusters of view activity that are identified from heat map data can be used independently to provide user attribute analysis associated with content items included in a virtual reality content item.
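As an illustration of the frame-by-frame aggregation described above, the sketch below builds each user's heat map for a frame over a coarse angular grid and sums the per-user maps into an aggregated map. The grid resolution is an arbitrary assumption.

```python
# Sketch of aggregating user-specific heat maps for one frame.
import numpy as np

GRID_H, GRID_W = 18, 36  # e.g., 10-degree cells over a 180x360 degree sphere

def user_heat_map(viewed_cells):
    """Build one user's heat map for a frame from the grid cells the
    user's viewport covered."""
    heat = np.zeros((GRID_H, GRID_W))
    for row, col in viewed_cells:
        heat[row, col] += 1.0
    return heat

def aggregate(heat_maps):
    """Sum per-user heat maps for the same frame into an aggregated map."""
    return np.sum(heat_maps, axis=0)

frame_maps = [user_heat_map([(9, 18)]), user_heat_map([(9, 18), (9, 19)])]
combined = aggregate(frame_maps)
print(combined[9, 18])  # -> 2.0: two users viewed this region of the frame
```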
[0038] The demographic analysis module 210 can provide a user attribute analysis associated with content items included in a virtual reality content item for A/B testing. A user attribute analysis can be performed based on view tracking data and/or heat map data for a virtual reality content item. The demographic analysis module 210 can analyze view tracking data and/or heat map data for individual users that viewed a virtual reality content item to determine attributes associated with the users. User attributes can include any attribute or any type of attribute relating to users. Examples of user attributes can include an age, an age range, a gender, a geographical region (e.g., a country, a state, a county, a city, etc.), an interest, a household size, etc. Many variations are possible. Information associated with a user attribute analysis of content items tested in an A/B test can be provided to an entity associated with the A/B test. In some embodiments, user attribute analysis information can be provided as a report to an entity associated with the A/B test. User attribute analysis information can include one or more user attributes and corresponding statistics for each content item tested in an A/B test. For example, for each content item, a list of top user attributes and percentages of users having these attributes can be provided. If a user attribute can be associated with multiple possible values (e.g., an age range, a geographical region, etc.), statistics can be further broken down by the multiple possible values. For example, if a relevant user attribute is age range, user attribute analysis information can provide percentages of users for a first age range, a second age range, etc. Similarly, if a relevant user attribute is geographical region, user attribute analysis information can provide percentages of users for a first geographical region, a second geographical region, etc. In some embodiments, when a virtual reality content item includes multiple frames or other content sequences, user attribute analysis information can be provided for each frame of the virtual reality content item. For example, for each frame, a list of top user attributes and corresponding percentages of users can be provided for a content item that appears in the frame. All examples herein are provided for illustrative purposes, and there can be many variations and other possibilities.
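For illustration, the sketch below computes the kind of per-attribute percentage breakdown described above for the viewers of one tested content item; the input format is an assumption, not the patent's data model.

```python
# Percentage of viewers falling into each value of one user attribute.
from collections import Counter

def attribute_breakdown(viewers, attribute):
    """viewers: list of per-user attribute dicts, e.g. {"age_range": "18-24"}.
    Returns {value: percentage of viewers} for the given attribute."""
    counts = Counter(v.get(attribute, "unknown") for v in viewers)
    total = len(viewers)
    return {value: 100.0 * n / total for value, n in counts.items()}

viewers = [{"age_range": "18-24"}, {"age_range": "18-24"}, {"age_range": "25-34"}]
print(attribute_breakdown(viewers, "age_range"))
# -> {'18-24': 66.66..., '25-34': 33.33...}
```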
[0039] In some embodiments, the demographic analysis module 210 may not provide user attribute analysis information if a number of users associated with an A/B test does not satisfy a threshold value. In some cases, a number of users that looked at or interacted with a content item may not be sufficient to protect privacy of the users and/or to provide meaningful demographic information. For example, in certain circumstances, it may not be possible to provide a user attribute analysis without disclosing identifying information associated with individual users. In this example, providing one or more user attributes could allow the individual user to be identified. As another example, the number of users may be too low to provide a statistically meaningful sample. In such cases, user attribute analysis information is not provided. Instead, a total number of users that looked at the content item can be provided without indicating demographic information, such as specific user attributes. Information relating to the number of users who looked at a content item alone, without associated information relating to user attributes, can still provide helpful information because comparisons can be made with a number of users who looked at one or more other content items included in an A/B test.
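A minimal sketch of this threshold gate, assuming a hypothetical minimum-user threshold, might look as follows: when too few users looked at a content item, only the total viewer count is reported and no demographic breakdown is released.

```python
# Threshold gate: release attribute statistics only when enough users viewed
# the item. The threshold value is an arbitrary assumption.
from collections import Counter

MIN_USERS_FOR_DEMOGRAPHICS = 100  # assumed threshold, not from the patent

def report_for_item(viewers, attribute):
    """viewers: list of per-user attribute dicts for users who viewed the item."""
    if len(viewers) < MIN_USERS_FOR_DEMOGRAPHICS:
        # Too few users: expose only the total count to protect privacy
        # and avoid statistically weak samples.
        return {"viewer_count": len(viewers)}
    counts = Counter(v.get(attribute, "unknown") for v in viewers)
    breakdown = {k: 100.0 * n / len(viewers) for k, n in counts.items()}
    return {"viewer_count": len(viewers), "breakdown": breakdown}

print(report_for_item([{"age_range": "18-24"}] * 3, "age_range"))
# -> {'viewer_count': 3}  (below threshold, so no demographics)
```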
[0040] In some embodiments, the demographic analysis module 210 can provide a user attribute analysis for a virtual reality content item without performing A/B testing. For example, the demographic analysis module 210 can perform a user attribute analysis for a virtual reality content item based on view tracking data and/or heat map data for the virtual reality content item. The demographic analysis module 210 can analyze view tracking data and/or heat map data for users that viewed the virtual reality content item to determine user attributes associated with the virtual reality content item. Similar to user attribute analysis information from A/B testing, user attribute analysis information for a virtual reality content item without A/B testing can include one or more user attributes and corresponding statistics for objects or other content items included in the virtual reality content item. For example, for an object, a list of top user attributes and percentages of users having these attributes can be provided. Similar to A/B testing, user attribute analysis information may not be provided if a number of users who viewed the virtual reality content item does not satisfy a threshold value in order to protect user privacy and/or avoid providing demographic information that is not based on a representative sample.
[0041] The advertisement module 212 can provide advertisements for users within virtual reality content items based on a user attribute analysis associated with A/B testing. One or more advertisements can be provided to users within a virtual reality content item. An advertisement can be selected or provided based on user attribute analysis information from A/B testing. For example, two advertisements can be tested in an A/B test, and user attribute analysis information from the A/B test can indicate that a group of users having a first set of user attributes preferred a first advertisement and a second group of users having a second set of user attributes preferred a second advertisement. Then, the first advertisement can be provided to users having the first set of user attributes (or similar user attributes), and the second advertisement can be provided to users having the second set of user attributes (or similar user attributes). Because advertisements can be targeted to users who are more likely to respond favorably to the advertisements, a rate of conversion can be increased. In another example, two locations within a virtual reality content item for providing an advertisement can be tested in an A/B test, and user attribute analysis information from the A/B test can indicate that a first group of users having a first set of user attributes preferred looking at a first location and a second group of users having a second set of user attributes preferred looking at a second location. Then, the advertisement can be provided to users having the first set of user attributes (or similar user attributes) at the first location, and the advertisement can be provided to users having the second set of user attributes (or similar user attributes) at the second location. As explained above, in some embodiments, user attribute analysis information can be provided for a virtual reality content item without performing A/B testing. In these embodiments, advertisements can also be selected or provided based on user attribute analysis information as described above. For example, the user attribute analysis information can indicate that a group of users having a set of user attributes preferred looking at a particular object, and an advertisement associated with the particular object can be provided to users having the same or similar set of attributes. In certain embodiments, the advertisement module 212 can suggest locations within a virtual reality content item for providing an advertisement based on user attribute analysis information. One or more suggested locations can be provided on a frame-by-frame basis. For example, one or more suggested locations can be provided for each frame of the virtual reality content item. Suggested locations may stay constant from frame to frame or can vary from frame to frame. All examples herein are provided for illustrative purposes, and there can be many variations and other possibilities.
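By way of illustration, the sketch below matches an advertisement to a user by comparing the user's attributes against the attribute groups that preferred each advertisement in an A/B test; the matching rule (a count of shared attribute values) is a simplifying assumption, not the disclosed selection logic.

```python
# Hypothetical ad selection from A/B-test attribute groups.
def select_advertisement(user_attrs, ad_audience_profiles):
    """ad_audience_profiles: {ad_id: attribute dict of the group that
    preferred that ad}. Returns the ad whose preferring group most
    closely matches this user's attributes."""
    def overlap(profile):
        # Number of attribute values the user shares with the group.
        return sum(1 for k, v in profile.items() if user_attrs.get(k) == v)
    return max(ad_audience_profiles, key=lambda ad: overlap(ad_audience_profiles[ad]))

ads = {
    "ad_a": {"age_range": "18-24", "interest": "gaming"},
    "ad_b": {"age_range": "35-44", "interest": "travel"},
}
print(select_advertisement({"age_range": "18-24", "interest": "gaming"}, ads))
# -> "ad_a"
```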
[0042] FIGS. 3A-E illustrate examples of streaming a virtual reality content item and performing A/B testing, according to an embodiment of the present disclosure. FIG. 3A illustrates an example 300 of a viewport 304 displaying a portion of a video stream 306 of a spherical video. The viewport 304 is shown in the diagram of FIG. 3A as being positioned within a representation 302 of a spherical video to facilitate understanding of the various embodiments described herein. In some embodiments, a spherical video captures a 360 degree view of a scene (e.g., a three-dimensional scene). The spherical video can be created by stitching together various video streams, or feeds, that were captured by cameras positioned at particular locations and/or positions to capture a 360 degree view of the scene. FIGS. 3A-E refer to spherical videos as just one example application of the various technology described herein. Depending on the implementation, such technology can be applied to other types of videos apart from spherical videos.
[0043] Once stitched together, a user can access, or present, the spherical video through a viewport 304 to view a portion of the spherical video at some angle. The viewport 304 may be accessed through a software application (e.g., video player software) running on a computing device. The stitched spherical video can be projected as a sphere, as illustrated by the representation 302. Generally, while accessing the spherical video, the user can change the direction (e.g., pitch, yaw, roll) of the viewport 304 to access another portion of the scene captured by the spherical video. FIG. 3B illustrates an example 350 in which the direction of the viewport 354 has changed in an upward direction (as compared to viewport 304). As a result, the video stream 356 of the spherical video being accessed through the viewport 354 has been updated (e.g., as compared to video stream 306) to show the portion of the spherical video that corresponds to the updated viewport direction.
[0044] The direction of the viewport 304 may be changed in various ways depending on the implementation. For example, while accessing the spherical video, the user may change the direction of the viewport 304 using a mouse or similar device or through a gesture recognized by the computing device. As the direction changes, the viewport 304 can be provided a stream corresponding to that direction, for example, from a content provider system. In another example, while accessing the spherical video through a display screen of a mobile device, the user may change the direction of the viewport 304 by changing the direction (e.g., pitch, yaw, roll) of the mobile device as determined, for example, using gyroscopes, accelerometers, touch sensors, and/or inertial measurement units in the mobile device. Further, if accessing the spherical video through a virtual reality head mounted display, the user may change the direction of the viewport 304 by changing the direction of the user’s head (e.g., pitch, yaw, roll). Naturally, other approaches may be utilized for navigating presentation of a spherical video including, for example, touch screen or other suitable gestures.
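As a minimal illustration of such navigation, the sketch below updates a viewport direction from rotation deltas (e.g., derived from gyroscope or head-tracking data), wrapping yaw around the sphere and clamping pitch at the poles; the axis conventions and limits are assumptions.

```python
# Minimal viewport direction update from device rotation deltas;
# axis conventions and clamping limits are assumptions.
def update_viewport(yaw, pitch, d_yaw, d_pitch):
    """Apply a rotation delta and keep the angles in a usable range."""
    yaw = (yaw + d_yaw) % 360.0                     # wrap horizontally
    pitch = max(-90.0, min(90.0, pitch + d_pitch))  # clamp at the poles
    return yaw, pitch

print(update_viewport(350.0, 80.0, 20.0, 15.0))  # -> (10.0, 90.0)
```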
……
……
……