
Sony Patent | Image Processing Device, Image Processing Method, And Image System

Patent: Image Processing Device, Image Processing Method, And Image System

Publication Number: 20190384382

Publication Date: 20191219

Applicants: Sony

Abstract

There is provided an image processing device, an image processing method, and an image system capable of reducing discomfort of a user, such as “VR sickness.” The image processing device includes: an image generation unit configured to generate an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and generate a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar. The present technique is applicable, for example, to an image processing device etc. for generating an image to be displayed on a head mounted display.

TECHNICAL FIELD

[0001] The present technique relates to an image processing device, an image processing method, and an image system. In particular, the present technique relates to an image processing device, an image processing method, and an image system capable of reducing discomfort of a user, such as “VR sickness.”

BACKGROUND ART

[0002] In recent years, head-mounted displays and head-up displays have become capable of presenting images with a wide viewing angle. By tracking the posture of the user's body or head and changing the direction and posture of the displayed image to follow it naturally, more realistic first-person viewpoint experiences are becoming feasible. The first-person viewpoint, which is particularly known as the first-person shooter (FPS) viewpoint in the video game world, is becoming popular.

[0003] On the other hand, it is known that some users may experience discomfort called “VR sickness” or “video sickness” when experiencing virtual reality (VR) using a head mounted display or a head up display. This discomfort is said to occur when there is a gap between the sensation the user feels or predicts from the experience and the actual sensory input (e.g., visual stimulation).

[0004] Techniques for reducing the discomfort of this “VR sickness” or “video sickness” have been proposed (see, for example, PTL 1).

CITATION LIST

Patent Literature

[PTL 1]

[0005] JP 2016-31439A.

SUMMARY

Technical Problem

[0006] However, measures against such discomfort of “VR sickness” etc. are further required.

[0007] The present technique has been made in view of such a situation, and is intended to be able to reduce the discomfort of the user such as “VR sickness.”

Solution to Problem

[0008] An image processing device according to one aspect of the present technique includes an image generation unit configured to generate an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and generate a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar.

[0009] An image processing method according to one aspect of the present technique includes the step of: by an image processing device, generating an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user and generating a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar.

[0010] An image system according to one aspect of the present technique includes an image generation unit configured to generate an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and generate a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar; and a display unit configured to display the first presentation image and the second presentation image.

[0011] In one aspect of the present technique, an avatar viewpoint image is generated according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and a second presentation image different from the avatar viewpoint image is generated in a case in which a posture difference occurs between the actual posture of the user based on a result of detecting motion of the user and a posture of the avatar.

[0012] Note that the image processing device according to one aspect of the present technique may be achieved by causing a computer to execute a program.

[0013] In addition, in order to achieve the image processing device according to one aspect of the present technique, the program to be executed by the computer may be provided by transmitting via a transmission medium or recording on a recording medium.

[0014] The image processing device may be an independent device or an internal block constituting one device.

Advantageous Effect of Invention

[0015] According to one aspect of the present technique, the discomfort of the user such as “VR sickness” may be reduced.

[0016] In addition, the effect described here is not necessarily limited, and may be any effect described in the present disclosure.

BRIEF DESCRIPTION OF DRAWINGS

[0017] FIG. 1 illustrates an exemplary configuration of an embodiment of an image system to which the present technique is applied.

[0018] FIG. 2 illustrates an example of a content image of a boxing game.

[0019] FIG. 3 illustrates a block diagram representing a detailed exemplary configuration of the image system of FIG. 1.

[0020] FIG. 4 illustrates a figure explaining a coordinate system of a virtual world.

[0021] FIG. 5 illustrates a figure explaining three postures including a user posture, an avatar posture, and a presentation image posture.

[0022] FIG. 6 illustrates a flowchart explaining a presentation image generation process.

[0023] FIG. 7 illustrates a figure explaining an example of contents in which a plurality of users shares a virtual world.

[0024] FIG. 8 illustrates a block diagram representing an exemplary configuration of an embodiment of a computer to which the present technique is applied.

DESCRIPTION OF EMBODIMENTS

[0025] Hereinafter, a mode for carrying out the present technique (hereinafter, referred to as an embodiment) will be described.

[0026] FIG. 1 illustrates an exemplary configuration of an embodiment of an image system to which the present technique is applied.

[0027] The image system 1 of FIG. 1 includes a head mounted display 11 (hereinafter, referred to as an HMD 11), an image processing device 12, and a controller 13.

[0028] The HMD 11 displays an image in the virtual world generated by the image processing device 12 to present it to a user. The image processing device 12 generates an image to be displayed on the HMD 11 to provide it to the HMD 11. The controller 13 includes a plurality of handling buttons (not illustrated), receiving handling of the user to provide the image processing device 12 with a predetermined instruction in response to the handling of the user.

[0029] The HMD 11 and the image processing device 12, and the image processing device 12 and the controller 13, are connected to each other by, for example, a wired cable such as HDMI (High Definition Multimedia Interface) or MHL (Mobile High-definition Link), or by wireless communication such as Wi-Fi (Wireless Fidelity), wireless HD, or Miracast.

[0030] In the present embodiment, for example, the image processing device 12 causes an image as illustrated in FIG. 2 to be displayed. The image in FIG. 2 illustrates an image of a first-person viewpoint boxing game (content) in which the user wearing the HMD 11 is a player. The image displayed on the HMD 11 may be a 2D image, or it may be a 3D image that enables three-dimensional visual recognition by displaying a right-eye image presented to the user's right eye and a left-eye image presented to the user's left eye.

[0031] In the image system 1 configured as described above, the image processing device 12 generates an image and causes the HMD 11 to display the image so as to reduce discomfort such as “VR sickness” in a state in which the user visually recognizes the image displayed on the HMD 11 and experiences the virtual world.

[0032] In addition, the HMD 11, the image processing device 12, and the controller 13 may be combined as needed, and the image system 1 may be configured by one or two devices.

[0033] FIG. 3 illustrates a block diagram representing a detailed exemplary configuration of the image system 1.

[0034] The HMD 11 includes a sensor 41 and a display unit 42.

[0035] The sensor 41 includes, for example, a combination of one or more sensor elements such as a gyro sensor, an acceleration sensor, and a geomagnetic sensor, to detect the position and direction of the head of the user. In the present embodiment, the sensor 41 is assumed to include a sensor capable of detecting a total of nine axes of a three-axis gyro sensor, a three-axis acceleration sensor, and a three-axis geomagnetic sensor. The sensor 41 provides the detected result to the image processing device 12.

[0036] The display unit 42 includes, for example, an organic EL (Electro-Luminescence) element, a liquid crystal display, etc., and displays a predetermined image on the basis of an image signal provided from the image processing device 12.

[0037] In addition, the HMD 11 may have a speaker that outputs sound, a microphone that acquires the sound of the user, etc.

[0038] The controller 13 includes a handling unit 81 including a plurality of handling buttons, receiving the handling of the user to provide the image processing device 12 with a predetermined instruction in response to the handling of the user.

[0039] The image processing device 12 includes at least a storage unit 61 and an image generation unit 62. The image generation unit 62 includes an avatar operation control unit 71, a user posture detection unit 72, a posture difference calculation unit 73, and a presentation image generation unit 74.

[0040] The storage unit 61 includes, for example, a hard disk, a non-volatile memory, etc., and stores programs for controlling the operation of the image processing device 12 and contents (programs) for displaying an image on the HMD 11, etc.

[0041] The avatar operation control unit 71 controls the operation (posture) of an avatar corresponding to the user in the image (content image) of the virtual world displayed on the display unit 42. The control of the operation (posture) of the avatar also takes into consideration the real posture of the user (the actual posture of the user) detected by the user posture detection unit 72. The operation of the avatar changes, for example, in the content image of the virtual world displayed on the display unit 42, depending on the operation selected by the user, the action performed by the user, etc.

[0042] In a case in which an avatar other than the player appears in the image displayed on the display unit 42, such as the opponent in the boxing game illustrated in FIG. 2, the avatar operation control unit 71 also controls, for example, the operation (posture) of the avatar of that other user. In the following, in order to distinguish the avatar of the user from the avatar of another user, the avatar of another user is referred to as another avatar, and simply referring to an avatar means the user's own avatar.

[0043] The user posture detection unit 72 detects the actual posture of the user on the basis of the detection result of the position and direction of the head of the user provided from the sensor 41.

[0044] The posture difference calculation unit 73 calculates a posture difference between the posture of the avatar determined by the avatar operation control unit 71 and the actual posture of the user detected by the user posture detection unit 72.

[0045] The presentation image generation unit 74 generates a presentation image to be presented to the user, which is an image of the content acquired from the storage unit 61, and provides the presentation image to the display unit 42. The presentation image is generated on the basis of the operation of the user's own avatar and another avatar determined by the avatar operation control unit 71, the actual posture of the user determined by the user posture detection unit 72, and the posture difference between the posture of the avatar and the actual posture of the user determined by the posture difference calculation unit 73. In a normal scene, the presentation image generation unit 74 aligns the posture of the avatar with the actual posture of the user, and generates a presentation image (first presentation image) such that an image according to the viewpoint of the avatar is presented to the user. Under a predetermined condition, in order to reduce the discomfort of the user such as “VR sickness,” the presentation image generation unit 74 generates an image different from the image of the avatar viewpoint as a presentation image (second presentation image) and causes the display unit 42 to display the image.

<Description of the Presentation Image>

[0046] The presentation image generated by the presentation image generation unit 74 will be further described with reference to FIGS. 4 and 5.

[0047] First, FIG. 4 describes the virtual world in the content displayed on the display unit 42 and a coordinate system that expresses the posture of the avatar on the virtual world.

[0048] Assuming that the virtual world in the content displayed on the display unit 42 is a three-dimensional space including the X axis, Y axis, and Z axis as illustrated in A of FIG. 4, the posture of the avatar may be expressed by five parameters (x, y, z, .theta., .phi.), which include a three-dimensional position (x, y, z) and a two-dimensional direction (.theta., .phi.).

[0049] Here, the three-dimensional position (x, y, z) corresponds to the position of the head of the avatar in the virtual world, and the two-dimensional direction (.theta., .phi.) corresponds to the gaze direction (head direction) of the avatar in the virtual world. As illustrated in B of FIG. 4, the direction .theta. is a so-called azimuth angle, which is the angle formed by the gaze direction with respect to a predetermined reference axis (the Z axis in the example of B of FIG. 4) on the XZ plane. As illustrated in C of FIG. 4, the direction .phi. is a so-called elevation angle, which is the angle in the Y-axis direction formed by the gaze direction with respect to the XZ plane. Therefore, the posture in the present specification includes not only the head position in the virtual world but also the gaze direction.

[0050] In the virtual world of the content displayed on the display unit 42, three postures of the user posture pos.sub.u, the avatar posture pos.sub.a, and the presentation image posture pos.sub.v are defined as follows.

user posture pos.sub.u=(x.sub.u, y.sub.u, z.sub.u, .theta..sub.u, .phi..sub.u)

avatar posture pos.sub.a=(x.sub.a, y.sub.a, z.sub.a, .theta..sub.a, .phi..sub.a)

presentation image posture pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v)

[0051] The user posture pos.sub.u is a projection of the actual posture of the user determined by the detection result provided from the sensor 41 on the virtual world. The avatar posture pos.sub.a is a posture of the avatar expressed by coordinates on the virtual world. The presentation image posture pos.sub.v is a virtual posture corresponding to the presentation image displayed on the display unit 42.
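
For illustration only, and not as part of the patent text, the five-parameter posture and the three postures defined above could be represented as in the following Python sketch; the class and variable names are hypothetical.

from dataclasses import dataclass

@dataclass
class Posture:
    # Head position in the virtual-world coordinate system (X, Y, and Z axes of FIG. 4).
    x: float
    y: float
    z: float
    # Gaze (head) direction: azimuth angle theta on the XZ plane and elevation angle phi.
    theta: float
    phi: float

# The three postures handled by the presentation image generation unit 74.
pos_u = Posture(0.0, 1.6, 0.0, 0.0, 0.0)  # user posture: projection of the sensed real posture
pos_a = Posture(0.0, 1.6, 0.0, 0.0, 0.0)  # avatar posture: controlled by the content
pos_v = Posture(0.0, 1.6, 0.0, 0.0, 0.0)  # presentation image posture: the view actually rendered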

[0052] FIG. 5 specifically describes the three postures of the user posture pos.sub.u, the avatar posture pos.sub.a, and the presentation image posture pos.sub.v.

[0053] These three postures are described using an example of the content of the boxing game illustrated in FIG. 2.

[0054] For example, since the posture of the avatar is controlled according to the actual posture of the user detected by the user posture detection unit 72 in a normal scene, the avatar posture pos.sub.a and the user posture pos.sub.u are the same. That is, avatar posture pos.sub.a=user posture pos.sub.u. Further, since the presentation image displayed on the display unit 42 is an image of the virtual world from the viewpoint of the avatar, presentation image posture pos.sub.v=avatar posture pos.sub.a.

[0055] Therefore, in the normal scene where the special process to reduce “VR sickness” is not performed, presentation image posture pos.sub.v=avatar posture pos.sub.a=user posture pos.sub.u.

[0056] Next, an example is described assuming a scene where the avatar is knocked down onto the canvas (falls to the ground) by the opponent's punch.

[0057] Assuming that the actual user does not fall to the ground but remains standing as in a posture 101 of FIG. 5, the user posture pos.sub.u=(x.sub.u, y.sub.u, z.sub.u, .theta..sub.u, .phi..sub.u) calculated by the user posture detection unit 72 corresponds to the posture 101.

[0058] By contrast, the avatar posture pos.sub.a=(x.sub.a, y.sub.a, z.sub.a, .theta..sub.a, .phi..sub.a) controlled by the avatar operation control unit 71 has a posture 102 in which the avatar falls to the ground as illustrated in FIG. 5.

[0059] In a normal scene, as described above, the image of the virtual world from the viewpoint of the avatar is used as the presentation image. However, if the presentation image changes greatly in accordance with a rapid movement of the avatar, the user may experience “VR sickness.”

[0060] Therefore, the presentation image generation unit 74 generates, as a presentation image, not the image corresponding to the avatar posture pos.sub.a (the posture 102 in FIG. 5) but, for example, an image 111 corresponding to a posture 103 close to the actual posture of the user (the posture 101 in FIG. 5), and causes the display unit 42 to display the image 111. The image 111 as the presentation image is an image whose view change is more gradual than that of the image corresponding to the avatar posture pos.sub.a. The posture 103 corresponding to the image 111 to be the presentation image is the presentation image posture pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v).

[0061] In the scene of the example of FIG. 5 subjected to the process for reducing VR sickness, the presentation image posture pos.sub.v is different from both the avatar posture pos.sub.a and the user posture pos.sub.u, but it is sufficient that the presentation image be an image different from the avatar viewpoint image that would be the presentation image in the normal scene. For example, the presentation image may be an image corresponding to the actual posture of the user.

[0062] Next, a presentation image generation process by the image processing device 12 is described with reference to the flowchart of FIG. 6. This process is started, for example, when handling is performed to cause the display unit 42 of the HMD 11 to display a predetermined content image.

[0063] First, in step S1, the avatar operation control unit 71 determines the operation of the avatar (avatar posture pos.sub.a) in the virtual world displayed on the display unit 42. In a case in which another avatar other than the player himself appears in the virtual world, the avatar operation control unit 71 also determines the operation (posture) of another avatar. After step S1, determination of the operation of the avatar is always executed according to the content.

[0064] In step S2, the user posture detection unit 72 detects the actual posture pos.sub.u of the user on the basis of the sensor detection result indicating the position and direction of the head of the user provided from the sensor 41. Thereafter, the detection of the actual posture pos.sub.u of the user based on the sensor detection result is also constantly executed. The detected actual posture pos.sub.u of the user is provided to the avatar operation control unit 71, and the operation (posture) of the avatar is matched to the operation of the user.

[0065] In step S3, the presentation image generation unit 74 generates an avatar viewpoint image, that is, an image from the viewpoint of the avatar, as a presentation image, and provides the generated avatar viewpoint image to the display unit 42 of the HMD 11 for display. The presentation image posture pos.sub.v here is equal to the avatar posture pos.sub.a and the user posture pos.sub.u (presentation image posture pos.sub.v=avatar posture pos.sub.a=user posture pos.sub.u).

[0066] In step S4, the posture difference calculation unit 73 calculates a posture difference between the avatar posture pos.sub.a determined by the avatar operation control unit 71 and the actual posture pos.sub.u of the user detected by the user posture detection unit 72, and provides the calculated posture difference to the presentation image generation unit 74.

[0067] In step S5, the presentation image generation unit 74 determines whether the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is equal to or greater than a predetermined first range.

[0068] Here, as the first range in step S5, a threshold thres.sub..theta. for the azimuth angle .theta. and a threshold thres.sub..phi. for the elevation angle .phi. are set. When either the condition of the following equation (1) or equation (2) is satisfied, the presentation image generation unit 74 may determine that the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is equal to or greater than the first range. In the equation (1) and the equation (2), |d| represents the absolute value of d.

|.theta..sub.a-.theta..sub.u|>thres.sub..theta. (1)

|.phi..sub.a-.phi..sub.u|>thres.sub..phi. (2)

[0069] In the equation (1), thres.sub..theta. is the threshold of the azimuth angle .theta., and thres.sub..phi. in the equation (2) is the threshold of the elevation angle .phi.. The threshold thres.sub..theta. and the threshold thres.sub..phi. may be set to any value, and may be set to 30 degrees, for example. This value of 30 degrees reflects the fact that an average human moves the gaze by moving only the eyes when looking at an object within 30 degrees of the front, whereas the average human naturally moves the head when looking at an object in a direction more than 30 degrees off the front.

[0070] In addition, in a case in which stricter determination conditions are used to prevent sickness, the threshold thres.sub..theta. and the threshold thres.sub..phi. may be set to one degree. This is because the range of the central retinal area called the fovea, which is used to focus on the details of objects, is said to be approximately one degree in an average human. The threshold thres.sub..theta. and the threshold thres.sub..phi. may be changed to values desired by the user using a setting screen, etc. Changing these threshold values makes it possible to control the degree of sickness prevention.

[0071] The above-described determination conditions mainly determine that the head direction has moved. In a case in which a condition that the avatar may be greatly moved with its entire body is also taken into consideration, the condition of the following equation (3) can be added to the conditions of equation (1) and equation (2). That is, when any one of the equations (1) to (3) is satisfied, the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is determined to be equal to or greater than the first range.

.DELTA.x.sub.au>thres.sub.x (3)

[0072] In the equation (3), .DELTA.x.sub.au represents a difference between the head position (x.sub.u, y.sub.u, z.sub.u) of the actual posture pos.sub.u of the user and the head position (x.sub.a, y.sub.a, z.sub.a) of the avatar posture pos.sub.a. The two-norm (Euclidean norm) of the equation (4) or the one-norm of the equation (5) may be adopted as .DELTA.x.sub.au. In the equation (3), thres.sub.x represents the threshold of .DELTA.x.sub.au.

[Math. 1]

.DELTA.x.sub.au=√((x.sub.u-x.sub.a).sup.2+(y.sub.u-y.sub.a).sup.2+(z.sub.u-z.sub.a).sup.2) (4)

.DELTA.x.sub.au=|x.sub.u-x.sub.a|+|y.sub.u-y.sub.a|+|z.sub.u-z.sub.a| (5)
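
As a concrete reading of the determination of step S5, the following Python sketch (reusing the hypothetical Posture class above) checks equations (1) to (3), using the 30-degree angular thresholds mentioned in paragraph [0069]; the head-position threshold thres_x and all names are assumptions for illustration, not values given in the patent.

import math

def exceeds_first_range(pos_a, pos_u,
                        thres_theta=math.radians(30.0),
                        thres_phi=math.radians(30.0),
                        thres_x=0.3):
    """Return True if the avatar/user posture difference is equal to or greater
    than the first range, per equations (1) to (3)."""
    # Equation (1): azimuth-angle difference.
    if abs(pos_a.theta - pos_u.theta) > thres_theta:
        return True
    # Equation (2): elevation-angle difference.
    if abs(pos_a.phi - pos_u.phi) > thres_phi:
        return True
    # Equation (3), with the two-norm of equation (4) as the head-position difference.
    dx_au = math.sqrt((pos_u.x - pos_a.x) ** 2
                      + (pos_u.y - pos_a.y) ** 2
                      + (pos_u.z - pos_a.z) ** 2)
    # Equation (5) would instead use the one-norm:
    # dx_au = abs(pos_u.x - pos_a.x) + abs(pos_u.y - pos_a.y) + abs(pos_u.z - pos_a.z)
    return dx_au > thres_x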

[0073] A moving speed of the head position of the avatar or an angular velocity of the gaze direction may also be used as the determination condition in step S5. Specifically, assuming that the angular velocities of the gaze direction (.theta..sub.a, .phi..sub.a) of the avatar are .DELTA..theta..sub.a and .DELTA..phi..sub.a, and the time change amount (moving velocity) of the head position (x.sub.a, y.sub.a, z.sub.a) of the avatar is v.sub.a, the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user may be determined to be equal to or greater than the first range when any one of the equations (6) to (8) is satisfied.

|.DELTA..theta..sub.a|>thres.sub..DELTA..theta. (6)

|.DELTA..phi..sub.a|>thres.sub..DELTA..phi. (7)

v.sub.a>thres.sub.v (8)

[0074] Furthermore, in step S5, the determination may be configured such that the posture difference is regarded as equal to or greater than the first range only when the state in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is equal to or greater than the first range occurs a predetermined number of times or more, or continues for a predetermined period of time, rather than occurring only once.
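
The velocity-based conditions of equations (6) to (8) and the persistence condition of paragraph [0074] could be sketched as follows; the threshold values and the frame-based counting scheme are assumptions for illustration, not values given in the patent.

def exceeds_velocity_range(d_theta_a, d_phi_a, v_a,
                           thres_d_theta=0.5, thres_d_phi=0.5, thres_v=2.0):
    """Equations (6) to (8): angular velocity of the avatar gaze direction
    (d_theta_a, d_phi_a) or moving speed of the avatar head position (v_a)."""
    return (abs(d_theta_a) > thres_d_theta
            or abs(d_phi_a) > thres_d_phi
            or v_a > thres_v)

class PersistenceGate:
    """Regard the posture difference as equal to or greater than the first range
    only after it has been observed for a required number of consecutive frames
    (paragraph [0074])."""

    def __init__(self, required_frames=5):
        self.required_frames = required_frames
        self.count = 0

    def update(self, exceeded_this_frame):
        self.count = self.count + 1 if exceeded_this_frame else 0
        return self.count >= self.required_frames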

[0075] In a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is determined not to be equal to or greater than the first range in step S5, the process returns to step S3, and the process of steps S3 to S5 described above is repeated. Accordingly, in a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is smaller than the first range, the avatar viewpoint image from the viewpoint of the avatar is displayed as the presentation image, resulting in presentation image posture pos.sub.v=avatar posture pos.sub.a.

[0076] Conversely, in a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is determined to be equal to or greater than the first range in step S5, the process proceeds to step S6. Then the presentation image generation unit 74 generates a reduction processing image for reducing discomfort such as “VR sickness” as the presentation image, and provides the generated image to the display unit 42 for display.

[0077] The presentation image generation unit 74 generates, for example, an image corresponding to the user posture pos.sub.u as the reduction processing image and causes the display unit 42 to display the generated image. In this case, the presentation image posture pos.sub.v is the user posture pos.sub.u.

pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v)=(x.sub.u, y.sub.u, z.sub.u, .theta..sub.u, .phi..sub.u)

[0078] Alternatively, the presentation image generation unit 74 generates, for example, an image in which the head position of the presentation image posture pos.sub.v is the head position of the avatar posture pos.sub.a and the gaze direction of the presentation image posture pos.sub.v is the gaze direction of the user posture pos.sub.u, as the reduction processing image, and causes the display unit 42 to display the generated image. That is, the presentation image posture pos.sub.v is as follows.

pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v)=(x.sub.a, y.sub.a, z.sub.a, .theta..sub.u, .phi..sub.u)

[0079] Alternatively, the presentation image generation unit 74 generates, for example, an image in which the head position and the azimuth angle .theta. of the presentation image posture pos.sub.v are taken from the avatar posture pos.sub.a and the elevation angle .phi. of the presentation image posture pos.sub.v is taken from the user posture pos.sub.u, as the reduction processing image, and causes the display unit 42 to display the generated image. That is, the presentation image posture pos.sub.v is as follows.

pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v)=(x.sub.a, y.sub.a, z.sub.a, .theta..sub.a, .phi..sub.u)

[0080] This equation means that, since the user is more likely to get sick when the field of view changes unintentionally in the vertical direction, the gaze direction of the user is adopted only in that direction.

[0081] In contrast, for example, an image in which the head position and the elevation angle .phi. of the presentation image posture pos.sub.v are taken from the avatar posture pos.sub.a and the azimuth angle .theta. of the presentation image posture pos.sub.v is taken from the user posture pos.sub.u may be generated and displayed as the reduction processing image. That is, the presentation image posture pos.sub.v is as follows.

pos.sub.v=(x.sub.v, y.sub.v, z.sub.v, .theta..sub.v, .phi..sub.v)=(x.sub.a, y.sub.a, z.sub.a, .theta..sub.u, .phi..sub.a)

[0082] This means that the gaze direction of the user is adopted only for the azimuth angle .theta..
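
The variants of the reduction processing image posture in paragraphs [0077] to [0082] differ only in which components of pos.sub.v are taken from the avatar posture and which from the user posture. A hypothetical helper, reusing the Posture class sketched earlier, could express them as follows; the mode names are illustrative.

def reduction_posture(pos_a, pos_u, mode="user"):
    """Compose the presentation image posture pos_v for the reduction processing image.

    mode:
      "user"           -> pos_v = pos_u                             (paragraph [0077])
      "user_gaze"      -> head position from avatar, gaze from user (paragraph [0078])
      "user_elevation" -> only the elevation angle phi from user    (paragraphs [0079]/[0080])
      "user_azimuth"   -> only the azimuth angle theta from user    (paragraphs [0081]/[0082])
    """
    if mode == "user":
        return Posture(pos_u.x, pos_u.y, pos_u.z, pos_u.theta, pos_u.phi)
    if mode == "user_gaze":
        return Posture(pos_a.x, pos_a.y, pos_a.z, pos_u.theta, pos_u.phi)
    if mode == "user_elevation":
        return Posture(pos_a.x, pos_a.y, pos_a.z, pos_a.theta, pos_u.phi)
    if mode == "user_azimuth":
        return Posture(pos_a.x, pos_a.y, pos_a.z, pos_u.theta, pos_a.phi)
    raise ValueError("unknown mode: " + mode)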

[0083] Then, in step S7, the presentation image generation unit 74 determines whether or not the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is within a predetermined second range.

[0084] In step S7, in a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is not yet within the second range, that is, the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is determined to be greater than the second range, the process returns to step S6, and the above-described process is repeated. That is, the reduction processing image is continuously generated as the presentation image and displayed on the display unit 42.

[0085] Conversely, in a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is determined to be within the second range in step S7, the process proceeds to step S8, and the presentation image generation unit 74 generates the reduction processing image so as to gradually approach the avatar viewpoint image (an image corresponding to the avatar posture pos.sub.a), then causes the display unit 42 to display the generated image. After the process of step S8, presentation image posture pos.sub.v=avatar posture pos.sub.a=user posture pos.sub.u.

[0086] For control of the presentation image posture pos.sub.v that brings the presentation image (reduction processing image) continuously close to the avatar viewpoint image, which is the process of step S8, for example, calculations according to simple linear motion, curvilinear motion such as a Bezier curve, or motion within the drive range allowed by geometric constraints based on a body model may be used.
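
As one possible realization of the simple linear motion mentioned above, the presentation image posture could be moved a fixed fraction toward the avatar posture each frame, as in the following sketch; the rate parameter is an assumption, and angle wrap-around is ignored for brevity. The Bezier or body-model-constrained variants would replace this interpolation.

def approach_avatar(pos_v, pos_a, rate=0.1):
    """Step S8: move the presentation image posture a fraction of the remaining
    distance toward the avatar posture each frame (simple linear motion)."""
    def lerp(a, b):
        return a + (b - a) * rate
    return Posture(lerp(pos_v.x, pos_a.x),
                   lerp(pos_v.y, pos_a.y),
                   lerp(pos_v.z, pos_a.z),
                   lerp(pos_v.theta, pos_a.theta),
                   lerp(pos_v.phi, pos_a.phi))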

[0087] After step S8, the process returns to step S3, and the above-described steps S3 to S8 are executed again. The values of the first range and the second range may be the same value or different values.

[0088] The above process is executed as a presentation image generation process.

[0089] As described above, in the presentation image generation process by the image system 1, in a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is smaller than the predetermined threshold (the first range), the presentation image generation unit 74 generates the avatar viewpoint image according to the viewpoint of the avatar (an image corresponding to the avatar posture pos.sub.a) as the first presentation image, and causes the display unit 42 to display the generated image. In a case in which the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user is equal to or greater than the predetermined threshold (the first range), that is, in a case in which a posture difference above a certain level occurs, the presentation image generation unit 74 generates the reduction processing image different from the avatar viewpoint image as the second presentation image and causes the display unit 42 to display the generated image. This enables reduction of the discomfort of the user such as “VR sickness.”
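
Tying the steps of FIG. 6 together, a per-frame sketch of the decision logic could look as follows; it reuses the hypothetical helpers above, treats the first and second ranges as identical for brevity (paragraph [0087] notes they may be the same or different values), and is not the patent's reference implementation.

def close_enough(p, q, eps=1e-3):
    """Convergence test used to decide when step S8 may hand back to step S3."""
    return (abs(p.x - q.x) < eps and abs(p.y - q.y) < eps and abs(p.z - q.z) < eps
            and abs(p.theta - q.theta) < eps and abs(p.phi - q.phi) < eps)

def presentation_step(pos_a, pos_u, pos_v, in_reduction):
    """One frame of steps S3 to S8. Returns (new pos_v, new in_reduction flag)."""
    exceeded = exceeds_first_range(pos_a, pos_u)      # steps S5 / S7
    if not in_reduction:
        if exceeded:
            # Step S6: switch to a reduction processing image.
            return reduction_posture(pos_a, pos_u, mode="user_gaze"), True
        # Step S3: normal scene, avatar viewpoint image (pos_v = pos_a).
        return pos_a, False
    if exceeded:
        # Step S7: posture difference still too large, keep the reduction image.
        return reduction_posture(pos_a, pos_u, mode="user_gaze"), True
    # Step S8: gradually return toward the avatar viewpoint image.
    pos_v = approach_avatar(pos_v, pos_a)
    return pos_v, not close_enough(pos_v, pos_a)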

[0090] Note that, in the generation of the reduction processing image in step S6 described above, the reduction processing image is generated and displayed using the content image of the virtual world. Alternatively, an image different from the content image being displayed, for example, a black image (blackout screen) in which the entire screen is simply black, or an alert screen explaining that the screen is being changed in order to prevent “sickness,” may be displayed.

[0091] In this case, in the process executed in step S8 to bring the image continuously close to the avatar viewpoint image, the image may be returned to the avatar viewpoint image by using an alpha blending process, etc. in which alpha blending of the black image and the avatar viewpoint image is performed.

[0092] Assuming that the avatar viewpoint image is I.sub.a and the black image is I.sub.b, the reduction processing image I.sub.u in step S8 is expressed by the following equation (9) using a blending ratio .alpha. (0≤.alpha.≤1).

I.sub.u=.alpha.I.sub.a+(1-.alpha.)I.sub.b (9)

[0093] By changing the blending ratio .alpha. in the equation (9) continuously from 0 to 1 over time, the black image may be controlled so as to gradually return to the original avatar viewpoint image.
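
Equation (9) is a standard alpha blend; a sketch using NumPy is shown below, where the image shapes and the 30-frame ramp are assumptions for illustration.

import numpy as np

def blend_back(avatar_view, black_image, alpha):
    """Equation (9): I_u = alpha * I_a + (1 - alpha) * I_b, with 0 <= alpha <= 1."""
    return alpha * avatar_view + (1.0 - alpha) * black_image

# Ramping alpha from 0 to 1 over, for example, 30 frames gradually returns the
# display from the blackout screen to the original avatar viewpoint image.
I_a = np.zeros((1080, 1920, 3), dtype=np.float32)  # avatar viewpoint image (placeholder)
I_b = np.zeros_like(I_a)                           # black image
frames = [blend_back(I_a, I_b, a) for a in np.linspace(0.0, 1.0, 30)]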

[0094] Although the above-described example is an example where one user experiences the virtual world, there is a case in which the same virtual world may be shared and experienced by a plurality of the users.

[0095] For example, as illustrated in FIG. 7, assume a scene in which the avatar 131B of the other user 121B is displayed in the virtual world (presentation image) displayed on the HMD 11 of one user 121A, while the avatar 131A of the one user 121A is displayed in the virtual world (presentation image) displayed on the HMD 11 of the other user 121B.

[0096] The avatar 131A of the user 121A operates on the virtual world on the basis of the sensor detection result of the user 121A. The avatar 131B of the user 121B operates on the virtual world on the basis of the sensor detection result of the user 121B.

[0097] Therefore, the avatars 131 (131A and 131B) basically perform the same motion as the operation of the users 121 (121A and 121B), but an operation that does not match the operation of the user 121 may also be performed depending on the design of the content.

[0098] For example, a scene where the user 121A causes the avatar 131A of the user 121A to bow a greeting is described below.

[0099] Although the actual user 121A could physically perform a bow, the user 121B cannot see the actual appearance of the user 121A, so it is the avatar 131A that the user 121B looks at in the virtual world that needs to be caused to perform the bow operation. Therefore, the user 121A issues an operation instruction for causing the avatar 131A to bow by handling the controller 13, by voice input, etc.

[0100] In such a scene, the situation described above in which the user posture pos.sub.u and the avatar posture pos.sub.a differ occurs, so the presentation image generation process described with reference to FIG. 6 may be similarly applied.

[0101] That is, when the user 121A instructs the bowing operation with the handling buttons, etc., a presentation image in which the avatar 131A of the user 121A performs the bowing operation is displayed on the display unit 42 of the HMD 11 worn by the user 121B. When the avatar 131A of the user 121A performs the bowing operation, the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user 121A becomes equal to or greater than the first range. Then, as the process of step S6 in FIG. 6, the presentation image generation unit 74 generates the reduction processing image and causes the display unit 42 of the HMD 11 worn by the user 121A to display the reduction processing image. In other words, if the avatar viewpoint image according to the viewpoint of the avatar 131A performing the bow operation were displayed on the display unit 42 of the HMD 11 worn by the user 121A, the view change would be large and could cause “sickness,” so the image is switched to the reduction processing image.

[0102] Then, when the bowing operation is finished and the posture difference between the avatar posture pos.sub.a and the actual posture pos.sub.u of the user 121A comes within the second range, the avatar posture pos.sub.a of the avatar 131A of the presentation image is controlled so as to gradually approach the actual posture pos.sub.u of the user 121A as the process of step S8 in FIG. 6.

[0103] In a case in which the operation that the user 121 causes the user's own avatar 131 to perform is a selectable operation command, whether or not to present the reduction processing image may be determined in advance for each operation command, and a presentation image prepared in advance may be displayed in response to the instructed operation command. Thus, by displaying an image different from the viewpoint image of the avatar in response to the operation command in a case in which a posture difference occurs between the actual posture pos.sub.u of the user based on the result of detecting the motion of the user and the avatar posture pos.sub.a, VR sickness may be suppressed.
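
One way to realize this per-command decision, purely as an illustration, is a lookup table populated when the content is authored; the command names and flag values below are hypothetical.

# Whether issuing each operation command should switch the issuing user's own view
# to a reduction processing (or pre-prepared) image while the avatar animates.
REDUCTION_IMAGE_PER_COMMAND = {
    "bow": True,        # large head motion of the avatar, so show the reduction image
    "wave_hand": False, # the head barely moves, so keep the avatar viewpoint image
    "sit_down": True,
}

def should_show_reduction_image(command):
    # Default to the safe side (show the reduction image) for unknown commands.
    return REDUCTION_IMAGE_PER_COMMAND.get(command, True)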

[0104] The above-described series of processes may be executed by hardware or software. In a case in which the series of processes are executed by software, a program that configures the software is installed on a computer. Here, the computer includes, for example, a microcomputer incorporated in dedicated hardware, and a general-purpose personal computer capable of executing various functions by installing various programs.

[0105] FIG. 8 illustrates a block diagram representing an exemplary configuration of a hardware configuration of a computer that executes the above-described series of processes by a program.

[0106] In the computer, a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, and a RAM (Random Access Memory) 203 are mutually connected by a bus 204.

[0107] Further, an input/output interface 205 is connected to the bus 204. An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input/output interface 205.

[0108] The input unit 206 includes a keyboard, a mouse, a microphone, etc. The output unit 207 includes a display, a speaker, etc. The storage unit 208 includes a hard disk, a non-volatile memory, etc. The communication unit 209 includes a network interface etc. The drive 210 drives a removable recording medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.

[0109] In the computer configured as described above, for example, the CPU 201 loads the program stored in the storage unit 208 into the RAM 203 via the input/output interface 205 and the bus 204, and then executes the program, so that the series of processes described above are executed.

[0110] In the computer, the program may be installed in the storage unit 208 via the input/output interface 205 by attaching the removable recording medium 211 to the drive 210. In addition, the program may be received by the communication unit 209 via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting, and then may be installed in the storage unit 208. In addition, the program may be installed in advance in the ROM 202 or the storage unit 208.

[0111] In addition, the program executed by the computer may be a program that performs processing in chronological order according to the order described in this specification. The program may also be a program that performs processing in parallel, or at necessary timing such as when a call is made.

[0112] The steps described in the flowchart may, of course, be performed in chronological order according to the described order, but they need not necessarily be processed in chronological order; they may also be executed in parallel, or at necessary timing such as when a call is made.

[0113] In this specification, system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing or not. Therefore, a plurality of devices housed in separate housings and connected via a network, and one device housing a plurality of modules in one housing are all systems.

[0114] The embodiments of the present technique are not limited to the above-described embodiments, and various modifications may be made without departing from the scope of the present technique.

[0115] Although the head mounted display (HMD) has been described as an example of the display device that displays the presentation image presented to the user in the embodiment mentioned above, a head up display (HUD), a dome-shaped (hemispherical) display, etc. may also be used as the display device of the image system 1. Any display device that displays video so as to cover the field of view of the user may be used.

[0116] For example, a form in which some of the above-described embodiments are combined as appropriate may be employed.

[0117] The present technique may have a cloud computing configuration in which one function is shared and processed in cooperation with one another by a plurality of devices via a network.

[0118] Further, each step described in the above-described flowchart may be executed by one device or may be executed in a shared manner by a plurality of devices.

[0119] Furthermore, in a case in which a plurality of processes are included in one step, the plurality of processes included in the one step may be executed in a shared manner by a plurality of devices in addition to being executed by one device.

[0120] The effects described in the present specification are merely examples and are not limited, and effects other than those described in the present specification may be obtained.

[0121] In addition, the present technique may also have the following configurations.

[0122] (1) An image processing device including:

[0123] an image generation unit configured to generate an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and generate a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar.

[0124] (2) The image processing device according to (1), in which

[0125] the posture is defined by a head position and a gaze direction, and

[0126] the image generation unit generates the second presentation image different from the avatar viewpoint image in a case in which a difference in azimuth angle or elevation angle between the actual posture of the user and the posture of the avatar is equal to or greater than a first threshold.

[0127] (3) The image processing device according to (1), in which

[0128] the posture is defined by a head position and a gaze direction, and

[0129] the image generation unit generates the second presentation image different from the avatar viewpoint image in a case in which a difference in azimuth angle or elevation angle between the actual posture of the user and the posture of the avatar is equal to or greater than a first threshold, or in a case in which a difference in a head position between the actual posture of the user and the posture of the avatar is equal to or greater than a first threshold.

[0130] (4) The image processing device according to any one of the above (1) to (3), in which

[0131] the second presentation image includes an image corresponding to a posture of the user.

[0132] (5) The image processing device according to any one of the above (1) to (3), in which

[0133] the second presentation image includes an image corresponding to a head position of the posture of the avatar and a gaze direction of a posture of the user.

[0134] (6) The image processing device according to any one of the above (1) to (3), in which

[0135] the second presentation image includes an image corresponding to a head position and an azimuth angle of the posture of the avatar, and an elevation angle of a posture of the user.

[0136] (7) The image processing device according to any one of the above (1) to (3), in which

[0137] the second presentation image includes an image corresponding to a head position and an elevation angle of the posture of the avatar, and an azimuth angle of a posture of the user.

[0138] (8) The image processing device according to any one of the above (1) to (7), in which

[0139] the image generation unit generates the second presentation image different from the avatar viewpoint image when a case in which the posture difference between the actual posture of the user and the posture of the avatar is equal to or greater than a first threshold occurs for a predetermined number of times or continuously for a predetermined period of time.

[0140] (9) The image processing device according to any one of the above (1) to (8), in which

[0141] the image generation unit further generates the second presentation image so as to gradually approach the avatar viewpoint image in a case in which the posture difference between the actual posture of the user and the posture of the avatar is within a second threshold.

[0142] (10) The image processing device according to any one of the above (1) to (9), further including:

[0143] a handling unit configured to instruct an operation of the avatar corresponding to the user,

[0144] in which the image generation unit generates the second presentation image different from the avatar viewpoint image in a case in which the posture difference occurs between the actual posture of the user and the posture of the avatar by operating the avatar in response to an instruction from the handling unit.

[0145] (11) An image processing method including the step of:

[0146] by an image processing device,

[0147] generating an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user and generating a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar.

[0148] (12) An image system including:

[0149] an image generation unit configured to generate an avatar viewpoint image according to a viewpoint of an avatar corresponding to a user in a virtual world as a first presentation image to be presented to the user, and generate a second presentation image different from the avatar viewpoint image in a case in which a posture difference occurs between an actual posture of the user based on a result of detecting motion of the user and a posture of the avatar; and

[0150] a display unit configured to display the first presentation image and the second presentation image.

REFERENCE SIGNS LIST

[0151] 1 Image system, 11 HMD, 12 image processing device, 13 Controller, 41 Sensor, 42 Display unit, 62 Image generation unit, 71 Avatar operation control unit, 72 User posture detection unit, 73 Posture difference calculation unit, 74 Presentation image generation unit, 81 Handling unit, 201 CPU, 202 ROM, 203 RAM, 206 Input unit, 207 Output unit, 208 Storage unit, 209 Communication unit, 210 Drive
