Sony Patent | Information Processing Device, Information Processing Method, And Program

Patent: Information Processing Device, Information Processing Method, And Program

Publication Number: 20200135150

Publication Date: 20200430

Applicants: Sony

Abstract

An information processing device, an information processing method, and a program capable of dynamically changing visibility of a user’s view are proposed. An information processing device including: a position of interest estimation unit configured to estimate a position of interest of a user; and a visibility control unit configured to perform visibility control to gradually reduce visibility of a second view opposite to a first view of the user corresponding to the position of interest such that the visibility of the second view of the user becomes lower than visibility of the first view.

TECHNICAL FIELD

[0001] The present disclosure relates to an information processing device, an information processing method, and a program.

BACKGROUND ART

[0002] Conventionally, various techniques related to virtual reality (VR) and augmented reality (AR) have been developed. With VR, a user can watch, for example, a video of a three-dimensional virtual space generated by a computer with highly realistic feeling. Furthermore, with AR, various types of information (for example, a virtual object and the like) can be presented to a user in association with a position of the user in a real space.

[0003] Furthermore, various techniques to control display in accordance with a detection result of a user’s sight line have also been proposed. For example, Patent Document 1 described below describes a technique to display a display object in an area determined to have high detection accuracy of a sight line on a display screen.

CITATION LIST

Patent Document

[0004] Patent Document 1: Japanese Patent Application Laid-Open No. 2015-152938

SUMMARY OF THE INVENTION

Problems to be Solved by the Invention

[0005] As described above, in the technique described in Patent Document 1, control according to the detection accuracy of a sight line is performed. Meanwhile, there is still room for improvement in dynamically changing visibility of a user’s view.

[0006] Therefore, the present disclosure proposes a novel and improved information processing device, information processing method, and program that can dynamically change visibility of a user’s view.

Solutions to Problems

[0007] The present disclosure provides an information processing device including: a position of interest estimation unit configured to estimate a position of interest of a user; and a visibility control unit configured to perform visibility control to gradually reduce visibility of a second view opposite to a first view of the user corresponding to the position of interest such that the visibility of the second view of the user becomes lower than visibility of the first view.

[0008] Furthermore, the present disclosure provides an information processing method including: estimating a position of interest of a user; and performing, by a processor, visibility control to gradually reduce visibility of a second view opposite to a first view of the user corresponding to the position of interest such that the visibility of the second view of the user becomes lower than visibility of the first view.

[0009] Furthermore, the present disclosure provides a program for causing a computer to function as: a position of interest estimation unit configured to estimate a position of interest of a user; and a visibility control unit configured to perform visibility control to gradually reduce visibility of a second view opposite to a first view of the user corresponding to the position of interest such that the visibility of the second view of the user becomes lower than visibility of the first view.

Effects of the Invention

[0010] As described above, the present disclosure can improve user experience by dynamically changing the visibility of the user’s view. Note that advantageous effects described here are not necessarily restrictive, and any of the effects described in the present disclosure may be applied.

BRIEF DESCRIPTION OF DRAWINGS

[0011] FIG. 1 is an explanatory diagram showing an exemplary configuration of an information processing system according to an embodiment of the present disclosure.

[0012] FIG. 2 is a diagram showing an example of a captured image of an eye when a user is looking forward and an exemplary diagram showing a relationship between a view of the user and a collision range of a sight line.

[0013] FIG. 3A is an exemplary diagram showing a relationship between a true collision range in the view of the user, a detection error range of the collision range, and a size of a virtual object in a situation shown in FIG. 2.

[0014] FIG. 3B is a diagram showing an example of a positional relationship between the true collision range in the view of the user, the detection error range of the collision range, and the virtual object in the situation shown in FIG. 2.

[0015] FIG. 4 is a diagram showing an example of the captured image of the eye when the user is looking at a peripheral portion of the view, and an exemplary diagram showing the relationship between the view of the user and the collision range of the sight line.

[0016] FIG. 5A is a diagram showing an example of a positional relationship between the true collision range in the view of the user, the detection error range of the collision range, and the virtual object in a situation shown in FIG. 4.

[0017] FIG. 5B is a diagram showing an example of a positional relationship between the true collision range in the view of the user, the detection error range of the collision range, and the virtual object in the situation shown in FIG. 4.

[0018] FIG. 6 is a diagram showing an example of a relationship between the view of the user and the collision range of the sight line in a case where a scan range is expanded in the situation shown in FIG. 4.

[0019] FIG. 7 is a diagram showing an example of a positional relationship between the true collision range in the view of the user, the detection error range of the collision range, and the virtual object in a situation shown in FIG. 6.

[0020] FIG. 8 is a functional block diagram showing an exemplary configuration of a head mounted display (HMD) 10 according to the embodiment.

[0021] FIG. 9A is a view showing a modified example of a display mode of a display range corresponding to a second view of the user while a video of VR content is displayed on the HMD 10.

[0022] FIG. 9B is a view showing a modified example of the display mode of the display range corresponding to the second view of the user while the video of VR content is displayed on the HMD 10.

[0023] FIG. 9C is a view showing a modified example of the display mode of the display range corresponding to the second view of the user while the video of VR content is displayed on the HMD 10.

[0024] FIG. 10A is a view showing an example of the captured image of the eye captured when (or immediately before or after) the video shown in FIG. 9A is displayed.

[0025] FIG. 10B is a view showing an example of the captured image of the eye captured when (or immediately before or after) the video shown in FIG. 9B is displayed.

[0026] FIG. 10C is a view showing an example of the captured image of the eye captured when (or immediately before or after) the video shown in FIG. 9C is displayed.

[0027] FIG. 11 is a flowchart showing part of a processing flow according to the embodiment.

[0028] FIG. 12 is a flowchart showing part of the processing flow according to the embodiment.

[0029] FIG. 13 is an explanatory diagram showing an exemplary hardware configuration of the HMD 10 according to the embodiment.

MODE FOR CARRYING OUT THE INVENTION

[0030] A preferred embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings. Note that in the present specification and the drawings, components having substantially the same functional configuration are denoted with the same reference symbol, and redundant description thereof will be omitted.

[0031] Furthermore, in the present specification and the drawings, a plurality of components having substantially the same functional configuration is distinguished by assigning a different letter of the alphabet after the same reference symbol in some cases. For example, a plurality of components having substantially the same functional configuration is distinguished like an HMD 10a and an HMD 10b as necessary. However, in a case where it is unnecessary to particularly distinguish each of the plurality of components having substantially the same functional configuration, only the same reference symbol is assigned. For example, in a case where it is unnecessary to particularly distinguish the HMD 10a and the HMD 10b, the components are referred to as just an HMD 10.

[0032] Furthermore, the “mode for carrying out the invention” will be described in order of items shown below.

[0033] 1. Configuration of information processing system

[0034] 2. Detailed description of embodiment

[0035] 3. Hardware configuration

[0036] 4. Modifications

1. CONFIGURATION OF INFORMATION PROCESSING SYSTEM

[0037] First, an exemplary configuration of an information processing system according to an embodiment of the present disclosure will be described with reference to FIG. 1. As shown in FIG. 1, the information processing system according to the present embodiment includes an HMD 10, a server 20, and a communication network 22.

[0038] <1-1. HMD 10>

[0039] The HMD 10 is one example of an information processing device in the present disclosure. The HMD 10 is a head-mounted device, and can display various types of content (for example, VR content, AR content, and the like).

[0040] The HMD 10 may be a non-transmissive (shielded) HMD or a transmissive HMD. In the latter case, the HMD 10 may be, for example, an optical see-through HMD having a light control unit (for example, light control device), or may be a video see-through HMD. Note that various forms, such as a chromic element and a liquid-crystal shutter, may be employed as the light control unit. In other words, a configuration (such as a device) capable of dynamically changing transmittance can be appropriately employed as the light control unit.

[0041] The HMD 10 can include a cover portion that covers both eyes (or one eye) of a user. For example, the cover portion includes a display unit 124 as described later. Alternatively, the cover portion includes a see-through display and a light control unit 126 as described later.

[0042] {1-1-1. Display Unit 124}

[0043] Here, the display unit 124 displays a video in response to control by an output control unit 106 as described later. The display unit 124 can have a configuration as a transmissive display device. In this case, the display unit 124 projects a video by using at least some area of each of a right-eye lens and a left-eye lens (or goggle lens) included in the HMD 10 as a projection plane. Note that the left-eye lens and the right-eye lens (or goggle lens) can be formed by using, for example, a transparent material such as resin or glass.

[0044] Alternatively, the display unit 124 may have a configuration as a non-transmissive display device. For example, the display unit 124 can include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, or the like. Note that in a case where the HMD 10 has a configuration as the video see-through HMD, a camera included in the HMD 10 (the sensor unit 122 described later) can capture a video of the scene in front of the user, and the captured video can be sequentially displayed on the display unit 124. This allows the user to see the forward scene through the video.

[0045] <1-2. Server 20>

[0046] The server 20 is an apparatus that manages various information items. For example, the server 20 stores various types of content such as VR content or AR content.

[0047] The server 20 can communicate with other devices via the communication network 22. For example, in a case where an acquisition request for content is received from another device (for example, the HMD 10 or the like), the server 20 transmits the content indicated by the acquisition request to the other device.

[0048] Note that the server 20 can also perform various types of control on other devices (for example, HMD 10 or the like) via the communication network 22. For example, the server 20 may perform display control, voice output control, and the like on the HMD 10.

[0049] <1-3. Communication Network 22>

[0050] The communication network 22 is a wired or wireless transmission path of information transmitted from a device connected to the communication network 22. For example, the communication network 22 may include a telephone line network, the Internet, a public line network such as a satellite communication network, various local area networks (LANs) including Ethernet (registered trademark), a wide area network (WAN), and the like. Furthermore, the communication network 22 may include a dedicated line network such as an Internet protocol-virtual private network (IP-VPN).

[0051] <1-4. Summary of Issues>

[0052] The configuration of the information processing system according to the present embodiment has been described above. Meanwhile, according to a known sight line detection technique, detection accuracy in a central portion of the user’s view is usually high, whereas the detection accuracy in a peripheral portion of the user’s view is low. Therefore, for example, in content that displays one or more virtual objects and allows interaction (such as selection or operation) on the virtual objects on the basis of sight line detection, it is difficult for the user to select virtual objects positioned in the peripheral portion of the user’s view. Note that in the present embodiment, the view can mean an image (view) that substantially fills a user’s visual field according to the content displayed on the HMD 10 (such as VR content or AR content).

[0053] {1-4-1. In a Case where the User is Looking at the Central Portion of the View}

[0054] Here, details described above will be described in more detail with reference to FIGS. 2 to 7. FIG. 2 is a diagram showing an example of a captured image of an eye when the user is looking forward (captured image 30) and an example of a relationship between a view 40 of the user and a collision range 46 of a sight line. Note that in the example shown in FIG. 2, detection accuracy of a sight line is high in a central portion 42 in the view 40 of the user, whereas the detection accuracy of a sight line is low in a peripheral portion 44 in the view 40. In the example shown in FIG. 2, since the collision range 46 is positioned in the central portion 42, the detection accuracy of the sight line is high.

[0055] Furthermore, FIGS. 3A and 3B are diagrams each showing an example of a positional relationship between the true collision range 46 in the view of the user, a detection error range 48 of the collision range, and a virtual object 50 in the situation shown in FIG. 2. Here, the true collision range 46 indicates the range the user is actually looking at in the view. The detection error range 48 indicates the extent of the range that may be detected as the collision range (due to detection error) for a given position of the true collision range 46. As shown in FIGS. 3A and 3B, in the situation shown in FIG. 2 (that is, the situation where the user is looking forward), the difference between the detection error range 48 and the true collision range 46 is sufficiently small, so it is unlikely that the collision range is falsely detected. For example, in the example shown in FIG. 3B, the HMD 10 can correctly identify the virtual object 50a as the virtual object intended by the user from among the two virtual objects 50.

[0056] {1-4-2. In a Case where the User is Looking at the Peripheral Portion of the View}

[0057] Meanwhile, FIG. 4 is a diagram showing an example of the captured image of the eye (captured image 30) when the user is looking at the peripheral portion of the view (portion corresponding to the right direction in FIG. 4) and an example of a relationship between the view 40 of the user and the collision range 46 of a sight line. In the example shown in FIG. 4, since the collision range 46 is positioned in the peripheral portion 44 of the view 40, the detection accuracy of a sight line is low.

[0058] Furthermore, FIGS. 5A and 5B are diagrams each showing an example of a positional relationship between the true collision range 46 in the view of the user, the detection error range 48 of the collision range, and the virtual object 50 in a situation shown in FIG. 4. As shown in FIGS. 5A and 5B, in the situation shown in FIG. 4, since the detection accuracy of a sight line is low, the difference between the detection error range 48 and the true collision range 46 is very large.

[0059] In the example shown in FIG. 5A, a distance between one end of the detection error range 48 (right end shown in FIG. 5A) and the virtual object 50 is larger than a width of the true collision range 46. For this reason, even if the user tries to select the virtual object 50, the HMD 10 may not select the virtual object 50 by falsely detecting the sight line of the user. In the example shown in FIG. 5B, the true collision range 46 is positioned on the virtual object 50a, but one end of the detection error range 48 is positioned on another virtual object 50b (adjacent to the virtual object 50a). For this reason, even if the user tries to select the virtual object 50a, the HMD 10 may falsely select another virtual object 50b by falsely detecting the sight line of the user. As described above, in a situation where the user is looking at the peripheral portion of the view, there is a problem that the virtual object 50a the user is looking at is not selected, or another virtual object 50b the user is not looking at is selected.

[0060] {1-4-3. In a Case where Scan Range is Expanded}

[0061] Note that as a method of solving the above problem, for example, as shown in FIG. 6, a method of expanding the scan range can be considered. However, with this method, resolution is lowered even in the central portion of the view, and thus a virtual object 50 the user does not intend may be selected even in a case where the user is looking at the central portion of the view.

[0062] Here, details described above will be described in more detail with reference to FIGS. 6 and 7. FIG. 6 is a diagram showing the captured image 30 of the eye when the user is looking in the same direction as in the example shown in FIG. 4, and an example of a relationship between the view 40 of the user and the collision range 46 of a sight line in a case where the scan range is expanded. Furthermore, FIG. 7 is a diagram showing an example of a positional relationship between the collision range 46 in a case where the scan range is expanded, the detection error range 48 of the collision range, and the virtual object 50 in a situation shown in FIG. 6.

[0063] In the example shown in FIG. 7, the collision range 46 in a case where the scan range is expanded is positioned across two virtual objects 50. Therefore, even if the user intends to select the virtual object 50a, the HMD 10 may select neither of the two virtual objects 50, or may falsely select the virtual object 50b the user does not intend.

[0064] Therefore, it is preferably possible to accurately identify the virtual object intended by the user without reducing resolution in the central portion of the user’s view.

[0065] Therefore, focusing on the above circumstances, the HMD 10 according to the present embodiment has been created. The HMD 10 according to the present embodiment can estimate the position of interest of the user and then perform visibility control to gradually reduce visibility of a second view opposite to a first view of the user corresponding to the position of interest such that the visibility of the second view of the user becomes lower than visibility of the first view. This allows the visibility of the user’s view to be dynamically changed adaptively to the user’s position of interest. Generally, when the user notices the existence of an object of interest, the user tends to closely observe the object. Therefore, by gradually reducing the visibility of the second view, it can be expected that head movement is induced (the user involuntarily turns the head) such that the first view (that is, the direction of the position of interest) comes to be positioned in front of the user. Note that the visibility of the view mentioned in the present specification may be interpreted as the viewability of the view.

[0066] Here, the position of interest of the user may be a position in which the user is estimated to be interested within a real space where the user is positioned, or when VR content is displayed on the HMD 10, the position of interest of the user may be a position in which the user is estimated to be interested within a virtual space corresponding to the VR content.

[0067] Furthermore, the second view may be positioned 180 degrees opposite to the first view, or may be positioned off the first view by a predetermined angle other than 180 degrees. For example, the second view may be an area 180 degrees opposite to an area corresponding to the first view in the display unit 124 with respect to the center of the display range of the display unit 124.
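
As a concrete illustration of the point-symmetric interpretation above, the following sketch computes the second-view region as the mirror image of the first-view position through the display center. The function names, the disc-shaped region model, and the buffer size are assumptions made for illustration only, not details taken from the patent.

```python
import numpy as np

def second_view_center(first_view_center, display_center):
    """Return the point 180 degrees opposite the first view with respect to
    the display center (point reflection through the center)."""
    p = np.asarray(first_view_center, dtype=float)
    c = np.asarray(display_center, dtype=float)
    return 2.0 * c - p

def second_view_mask(shape, first_view_center, radius):
    """Boolean mask of display pixels belonging to the second view,
    modeled here (as a simplification) as a disc around the mirrored center."""
    h, w = shape
    cy, cx = second_view_center(first_view_center, ((h - 1) / 2, (w - 1) / 2))
    ys, xs = np.mgrid[0:h, 0:w]
    return (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2

# Example: first view in the upper-right of a hypothetical 1080x1200 eye buffer.
mask = second_view_mask((1080, 1200), (200, 900), radius=300)
```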

2. DETAILED DESCRIPTION OF EMBODIMENT

[0068] <2-1. Configuration>

[0069] Next, the configuration according to the present embodiment will be described in detail. FIG. 8 is a functional block diagram showing an exemplary configuration of the HMD 10 according to the present embodiment. As shown in FIG. 8, the HMD 10 includes a control unit 100, a communication unit 120, the sensor unit 122, the display unit 124, the light control unit 126, a voice output unit 128, and a storage unit 130.

[0070] {2-1-1. Sensor Unit 122}

[0071] The sensor unit 122 can include, for example, a camera (image sensor), a microphone, an acceleration sensor, a gyroscope, a geomagnetic sensor, and/or a global positioning system (GPS) receiver.

[0072] For example, the sensor unit 122 senses a position, posture (such as direction and inclination), and acceleration of the HMD 10 in a real space. Furthermore, the sensor unit 122 captures an image of the eye of the user wearing the HMD 10. Furthermore, the sensor unit 122 further captures a video of an external world (for example, forward of the HMD 10) or collects sound of the external world.

[0073] {2-1-2. Control Unit 100}

[0074] The control unit 100 can include, for example, a processing circuit such as a central processing unit (CPU) 150 as described later. The control unit 100 comprehensively controls the operation of the HMD 10. Furthermore, as shown in FIG. 8, the control unit 100 includes a sight line recognition unit 102, a position of interest estimation unit 104, and the output control unit 106.

[0075] {2-1-3. Sight Line Recognition Unit 102}

[0076] The sight line recognition unit 102 detects (or recognizes) a sight line direction of the user wearing the HMD 10 on the basis of the captured image of the user’s eye captured by the sensor unit 122 (camera). For example, a plurality of (for example, four) infrared light emitting diodes (LEDs) that emit light toward the eye of the user wearing the HMD 10 can be installed in the HMD 10. In this case, the sight line recognition unit 102 can first identify the position of the iris in the user’s eye on the basis of the captured image of the user’s eye. Next, the sight line recognition unit 102 can analyze, on the basis of the captured image of the eye, the position at which the light emitted from each of the plurality of LEDs is reflected by the eye (eyeball) (reflection position 302 in the example shown in FIG. 2) and the direction of the reflection by the eye. Then, the sight line recognition unit 102 can identify the sight line direction of the user on the basis of the identification result of the position of the iris and the identification result of the reflection of each light by the eye.
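
As a rough, heavily simplified sketch of this kind of pupil-center/corneal-reflection processing (not Sony's actual method), the offset between the detected iris center and the mean glint position can be mapped to gaze angles through precomputed calibration gains. The gain values, coordinates, and function names below are illustrative assumptions.

```python
import numpy as np

def gaze_direction(pupil_center, glint_centers, gain_deg_per_px=(0.08, 0.08)):
    """Very simplified pupil-center/corneal-reflection (PCCR) estimate.

    pupil_center    : (x, y) of the iris/pupil center in the eye image.
    glint_centers   : list of (x, y) reflections of the IR LEDs on the cornea.
    gain_deg_per_px : per-axis calibration gains obtained beforehand (assumed).
    Returns (yaw_deg, pitch_deg) of the sight line relative to straight ahead.
    """
    pupil = np.asarray(pupil_center, dtype=float)
    glints = np.asarray(glint_centers, dtype=float)
    offset = pupil - glints.mean(axis=0)      # pupil-glint vector
    yaw = offset[0] * gain_deg_per_px[0]      # horizontal rotation
    pitch = -offset[1] * gain_deg_per_px[1]   # image y grows downward
    return yaw, pitch

# Example with four LED glints roughly surrounding the pupil (looking straight ahead).
print(gaze_direction((310, 242), [(296, 230), (324, 230), (296, 254), (324, 254)]))
```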

[0077] {2-1-4. Position of Interest Estimation Unit 104}

[0078] (2-1-4-1. Estimation Example 1)

[0079] The position of interest estimation unit 104 estimates the position of interest of the user. For example, the position of interest estimation unit 104 estimates the position of interest of the user on the basis of information input by the user. As one example, the position of interest estimation unit 104 estimates the position of an object identified on the basis of the sight line direction detected by the sight line recognition unit 102 as the position of interest of the user. For example, the position of interest estimation unit 104 estimates the position of interest of the user on the basis of a stay degree of the sight line detected by the sight line recognition unit 102 and the object positioned on the sight line identified from the detected sight line direction. In more detail, the position of interest estimation unit 104 first identifies a length of time during which the detected sight line direction stays (for example, time during which a change amount in the sight line direction is within a predetermined threshold), then determines the stay degree of the sight line in accordance with the identified length of time. For example, the position of interest estimation unit 104 determines that the stay degree of the sight line increases as the identified length of time increases. Then, only in a case where the stay degree of the sight line is equal to or greater than a predetermined threshold, the position of interest estimation unit 104 estimates the position of the object positioned on the sight line as the position of interest of the user. Alternatively, the position of interest estimation unit 104 may estimate the position of the object positioned near the sight line of the user as the position of interest of the user in accordance with accuracy of sight line recognition by the sight line recognition unit 102. In other words, the position of the object identified on the basis of the sight line direction of the user detected by the sight line recognition unit 102 can be estimated as the position of interest of the user. Here, the object may be a real object or a virtual object.
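
A minimal sketch of how such a stay degree might be accumulated and thresholded is shown below, assuming per-frame gaze directions and a known object under the gaze. The angle and dwell thresholds are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

class DwellEstimator:
    """Illustrative dwell ("stay degree") tracker: the gaze is considered to stay
    while successive gaze directions change by less than angle_thresh_deg, and an
    object is committed as the position of interest once the stay time reaches
    dwell_thresh_s.  Both thresholds are assumed values for this sketch."""

    def __init__(self, angle_thresh_deg=2.0, dwell_thresh_s=0.8):
        self.angle_thresh = np.radians(angle_thresh_deg)
        self.dwell_thresh = dwell_thresh_s
        self.prev_dir = None
        self.stay_time = 0.0

    def update(self, gaze_dir, dt, object_under_gaze):
        """Feed one gaze sample; returns the object once the dwell threshold is met."""
        d = np.asarray(gaze_dir, dtype=float)
        d /= np.linalg.norm(d)
        if self.prev_dir is not None:
            angle = np.arccos(np.clip(np.dot(d, self.prev_dir), -1.0, 1.0))
            self.stay_time = self.stay_time + dt if angle < self.angle_thresh else 0.0
        self.prev_dir = d
        if self.stay_time >= self.dwell_thresh and object_under_gaze is not None:
            return object_under_gaze  # estimated position of interest
        return None
```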

[0080] For example, in a case where a video of VR content or AR content is displayed on the display unit 124, from among one or more virtual objects included in the video, the position of interest estimation unit 104 estimates the display position of the virtual object displayed in the collision range identified from the detected sight line direction (for example, a virtual object that can be interacted with) as the position of interest of the user. Alternatively, for example, in a case where the user is using AR content and the HMD 10 is a transmissive HMD, the position of interest estimation unit 104 may estimate the position of the real object positioned on the detected sight line direction (in the real space in which the user is positioned) as the position of interest of the user.
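
One plausible way to identify the virtual object falling within the collision range on the sight line is an angular-tolerance test against each object's center, as in the sketch below. The tolerance, the object representation, and the function names are assumptions made for illustration, not the patent's own method.

```python
import numpy as np

def object_in_collision_range(eye_pos, gaze_dir, objects, range_deg=3.0):
    """Return the nearest object whose center lies within range_deg of the sight
    line; `objects` is a list of (name, center) pairs in world coordinates."""
    o = np.asarray(eye_pos, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    best, best_dist = None, np.inf
    for name, center in objects:
        v = np.asarray(center, dtype=float) - o
        dist = np.linalg.norm(v)
        if dist == 0.0:
            continue
        angle = np.degrees(np.arccos(np.clip(np.dot(v / dist, d), -1.0, 1.0)))
        if angle <= range_deg and dist < best_dist:
            best, best_dist = name, dist
    return best

# Example: two virtual objects, the gaze pointing roughly at the first one.
objs = [("50a", (0.1, 0.0, 2.0)), ("50b", (0.6, 0.0, 2.0))]
print(object_in_collision_range((0, 0, 0), (0.05, 0.0, 1.0), objs))  # -> "50a"
```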

[0081] (2-1-4-2. Estimation Example 2)

[0082] Alternatively, the position of interest estimation unit 104 can also estimate the position of interest of the user on the basis of information obtained from sources other than the user. For example, in a case where a sound related to the user is generated, the position of interest estimation unit 104 may estimate the position corresponding to the generation source of the sound as the position of interest of the user. Note that, although details will be described later, when the visibility control unit 108 described later performs the “visibility control to reduce the visibility of the second view” in this case, the user can be guided to closely observe the direction corresponding to the generation source of the sound (that is, the first view). In particular, in VR content, a sound tends to be heard less accurately than in a real space and the user is less likely to notice the generated sound, and therefore the effect of the guidance by the visibility control unit 108 can be larger.

[0083] Here, the sound related to the user may be a predetermined voice output in VR content or AR content the user is using (for example, a voice registered in advance to draw user’s attention (for example, an utterance of a virtual object (such as a character)), a warning sound, and the like). In this case, the position of interest estimation unit 104 may estimate, for example, the display position of the virtual object that is associated with the voice and displayed on the display unit 124 as the position of interest of the user. Alternatively, the position of interest estimation unit 104 may estimate the position of the virtual object associated with the voice in the virtual space corresponding to the VR content as the position of interest of the user.

[0084] Alternatively, the sound related to the user may be a sound related to the user that is emitted within the real space where the user is positioned. For example, the sound related to the user may be another person’s utterance to the user, an alert, an advertisement, music, or the like in a facility where the user is positioned or outdoors, or a cry of an animal positioned near the user. Alternatively, the sound related to the user may be a sound emitted from a device owned by the user (for example, a telephone such as a smartphone, a tablet terminal, or a clock). In these cases, the position of interest estimation unit 104 may, for example, identify a direction in which the sound comes on the basis of a sound collection result by (a microphone included in) the sensor unit 122, and then estimate, as the position of interest of the user, the position of the real object that has emitted the sound (within the real space), the position being identified on the basis of the direction in which the sound comes.
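
For the real-space case, one conventional way to identify the direction in which a sound comes from a two-microphone collection result is a cross-correlation time-difference-of-arrival estimate, sketched below under a far-field assumption. The microphone spacing, sampling rate, and sign convention are illustrative assumptions, not details of the patent.

```python
import numpy as np

def sound_direction_deg(sig_left, sig_right, fs, mic_distance_m=0.15, c=343.0):
    """Estimate the horizontal direction of arrival from the time difference
    between two microphones via cross-correlation.  Positive return values mean
    the source is toward the left microphone (far-field, assumed geometry)."""
    corr = np.correlate(sig_right, sig_left, mode="full")
    lag = np.argmax(corr) - (len(sig_left) - 1)   # samples right lags left
    tau = lag / float(fs)                         # time difference of arrival
    sin_theta = np.clip(c * tau / mic_distance_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Example: a click arriving at the left mic 2 samples later than at the right,
# i.e. the source is toward the right; prints roughly -5.5 degrees.
fs = 48000
click = np.zeros(256)
click[100] = 1.0
left = np.roll(click, 2)
print(sound_direction_deg(left, click, fs))
```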

[0085] (2-1-4-3. Estimation Example 3)

[0086] Alternatively, in the real space where the user is positioned, the position of interest estimation unit 104 can also estimate the position of a real object in which the user is estimated to be interested as the position of interest of the user. Alternatively, when the user is using VR content, in the virtual space corresponding to the VR content, the position of interest estimation unit 104 may estimate the position of a virtual object in which the user is estimated to be interested as the position of interest of the user.

[0087] For example, the user’s preference information and the user’s action history (for example, a browsing history of web sites, a posting history on social networking services (SNS), a purchasing history of goods, or the like) can be stored in the storage unit 130. In this case, for example, in a case where a video of VR content is displayed on the display unit 124, the position of interest estimation unit 104 can first determine, one after another for each of one or more virtual objects included in the video, whether or not the degree of interest of the user in the virtual object is equal to or greater than a predetermined threshold, on the basis of the user’s preference information and action history. Then, in a case where it is determined that at least one virtual object has a degree of interest equal to or greater than the predetermined threshold, the position of interest estimation unit 104 can estimate the display position of one of those virtual objects (for example, the virtual object with the highest degree of interest) (or the position of the virtual object in the virtual space corresponding to the VR content) as the position of interest of the user.

[0088] Alternatively, for example, in a case where the user is using AR content and the HMD 10 is a transmissive HMD, the position of interest estimation unit 104 may determine, one after another for each of one or more real objects positioned around the user, whether or not the degree of interest of the user in the real object is equal to or greater than the predetermined threshold, on the basis of the user’s preference information and action history. Then, in a case where a real object with a degree of interest equal to or greater than the predetermined threshold exists, the position of interest estimation unit 104 may estimate the position in the real space of one of those real objects (for example, the real object with the highest degree of interest) as the position of interest of the user.
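
A toy sketch of such a degree-of-interest determination is given below, where each object carries content tags that are scored against preference weights and the action history. The scoring rule, bonus, and threshold are purely illustrative assumptions, not the disclosure's method.

```python
def degree_of_interest(obj_tags, preference_weights, action_history_tags):
    """Toy interest score: sum of preference weights for the object's tags,
    plus a small bonus for tags that also appear in the action history."""
    score = sum(preference_weights.get(t, 0.0) for t in obj_tags)
    score += 0.5 * sum(1.0 for t in obj_tags if t in action_history_tags)
    return score

def most_interesting(objects, preference_weights, action_history_tags, threshold=1.0):
    """Return (name, position) of the highest-scoring object at or above the
    threshold, or None if no object qualifies."""
    scored = [(degree_of_interest(tags, preference_weights, action_history_tags), name, pos)
              for name, pos, tags in objects]
    scored = [s for s in scored if s[0] >= threshold]
    return max(scored)[1:] if scored else None

# Example: two hypothetical virtual objects with content tags.
objects = [("poster", (1.0, 0.2, 3.0), ["music", "live"]),
           ("vending_machine", (-2.0, 0.0, 4.0), ["drink"])]
prefs = {"music": 0.9, "drink": 0.2}
history = {"live"}
print(most_interesting(objects, prefs, history))  # -> ('poster', (1.0, 0.2, 3.0))
```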

[0089] {2-1-5. Output Control Unit 106}

[0090] The output control unit 106 controls output of various signals. For example, when VR content or AR content is activated, the output control unit 106 causes the display unit 124 to display a video of the VR content or the AR content, and causes the voice output unit 128 to output a voice of the VR content or the AR content.

[0091] Furthermore, the output control unit 106 includes the visibility control unit 108.

[0092] {2-1-6. Visibility Control Unit 108}

[0093] (2-1-6-1. Example of Control to Reduce Visibility)

[0094] The visibility control unit 108 performs visibility control to change the visibility of the user’s view on the basis of an estimation result by the position of interest estimation unit 104. For example, the visibility control unit 108 performs the visibility control to gradually reduce the visibility of the second view such that the visibility of the second view of the user different from the first view of the user corresponding to the position of interest estimated by the position of interest estimation unit 104 becomes lower than the visibility of the first view. As one example, in the visibility control, the visibility control unit 108 gradually reduces the visibility from a position farthest from the first view in the second view toward a position closest to the first view (in the second view). For example, first, the visibility control unit 108 makes the visibility of the position farthest from the first view in the second view lower than the visibility of the first view. Then, the visibility control unit 108 gradually expands an area where the visibility is lower than the visibility of the first view from the position farthest from the first view in the second view toward the position closest to the first view (in the second view).
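
As a hedged sketch of this gradual expansion, the following code produces per-column visibility factors in which the dimmed region starts at the display edge farthest from the first view and grows toward it as a progress parameter increases. The binary dimming, the floor value, and the horizontal-only treatment are simplifying assumptions made for illustration.

```python
import numpy as np

def visibility_mask(width, first_view_x, progress, min_visibility=0.2):
    """Per-column visibility factors (1.0 = fully visible) for a display of
    `width` pixels.  The low-visibility region starts at the edge farthest from
    the first view (horizontal position first_view_x) and expands toward it as
    `progress` goes from 0.0 to 1.0.  The floor value is an assumed constant."""
    x = np.arange(width, dtype=float)
    # Normalized distance of each column from the first view (0 = near, 1 = far).
    dist = np.abs(x - first_view_x)
    dist /= dist.max()
    mask = np.ones(width)
    # Columns farther than (1 - progress) have already been dimmed.
    mask[dist >= 1.0 - progress] = min_visibility
    return mask

# Example: 40% of the way through the transition, first view toward the left edge.
frame_gain = visibility_mask(1200, first_view_x=150, progress=0.4)
# Multiplying the rendered frame by frame_gain[np.newaxis, :, np.newaxis] darkens
# the columns belonging to the (expanding) low-visibility part of the second view.
```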

[0095] Note that the visibility control unit 108 can start the “visibility control to reduce the visibility of the second view” on the basis of a determination result of head movement of the user according to a result of the sensing by the sensor unit 122. For example, when it is determined that the user’s head is stationary, the visibility control unit 108 starts the visibility control to reduce the visibility of the second view. On the other hand, while it is determined that the user’s head is moving, the visibility control unit 108 does not start the visibility control to reduce the visibility of the second view.
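
A minimal sketch of such gating, assuming gyroscope angular-velocity samples from the sensor unit, might look as follows; the stillness threshold and hold time are illustrative assumptions.

```python
import numpy as np

class HeadStationaryGate:
    """Permit the visibility reduction only after the head has been (nearly)
    still for hold_s seconds; withhold it while the head is moving."""

    def __init__(self, omega_thresh_dps=5.0, hold_s=0.5):
        self.omega_thresh = omega_thresh_dps  # assumed stillness threshold (deg/s)
        self.hold_s = hold_s                  # assumed required stillness duration
        self.still_time = 0.0

    def update(self, gyro_dps, dt):
        """gyro_dps: (wx, wy, wz) head angular velocity in degrees per second.
        Returns True when it is permissible to start reducing visibility."""
        if np.linalg.norm(gyro_dps) < self.omega_thresh:
            self.still_time += dt
        else:
            self.still_time = 0.0
        return self.still_time >= self.hold_s
```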
