Sony Patent | Information processing apparatus, information processing method, and program

Publication Number: 20210092466

Publication Date: 2021-03-25

Applicant: Sony

Assignee: Sony Corporation

Abstract

There is provided an information processing apparatus, an information processing method, and a program that enable improvement of user convenience, the information processing apparatus including a display control part that controls a display device to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to the second video viewable from a second viewpoint different from the first viewpoint. The present technology can be applied to, for example, a device that displays a video on a head mounted display.

Claims

  1. An information processing apparatus comprising a display control part that controls a display device to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to the second video viewable from a second viewpoint different from the first viewpoint.

  2. The information processing apparatus according to claim 1, wherein the transition image includes an image obtained by simplifying a video corresponding to transition of a viewpoint from the first viewpoint to the second viewpoint and emphasizing features of the video.

  3. The information processing apparatus according to claim 2, wherein, in the transition image, a first transition image to be displayed at a start of switching includes an image obtained by simplifying the first video and emphasizing features of the first video, and a second transition image to be displayed at an end of switching includes an image obtained by simplifying the second video and emphasizing features of the second video.

  4. The information processing apparatus according to claim 3, wherein the transition image is a computer graphics (CG) video.

  5. The information processing apparatus according to claim 4, wherein the CG video is a video represented by wire frame.

  6. The information processing apparatus according to claim 5, wherein the first transition image includes an image in which a target object included in the first video is represented only by an outline, and the second transition image includes an image in which a target object included in the second video is represented only by an outline.

  7. The information processing apparatus according to claim 1, wherein the information amount is determined by image information including at least one of a color gradation or a resolution of an image.

  8. The information processing apparatus according to claim 7, wherein the transition image includes, as the background image, an image represented by a predetermined single color, or an image obtained by reducing a resolution of the first video or the second video.

  9. The information processing apparatus according to claim 2, wherein the transition image includes an image according to a change in a convergence angle of both eyes of a user.

  10. The information processing apparatus according to claim 1, wherein switching is made from the first video to the second video on a basis of a user operation or a switching time on a reproduction time axis of the first video.

  11. The information processing apparatus according to claim 1, wherein the display control part controls display of a reduced image obtained by reducing a target object included in the first video or the second video.

  12. The information processing apparatus according to claim 11, wherein the reduced image is a CG video.

  13. The information processing apparatus according to claim 12, wherein the display control part, when switching from the first video or the second video to the reduced image, brings a position of the target object included in the first video or the second video closer to a viewpoint direction of the user according to a change in a display scale of the target object.

  14. The information processing apparatus according to claim 13, wherein the display control part includes, as the reduced image, a CG video according to a motion of a person included in the first video or the second video.

  15. The information processing apparatus according to claim 13, wherein the display control part includes, as the reduced image, a CG video according to an arrangement of an object included in the first video or the second video.

  16. The information processing apparatus according to claim 1, wherein each of the first video and the second video is an omnidirectional live action video.

  17. The information processing apparatus according to claim 16, wherein a camera that captures the omnidirectional live action video is installed in a stadium where a competition including sports is performed, a building where an event including a music concert is performed, inside a structure, or outdoors, and the omnidirectional live action video includes a video of a competition including sports, a video of an event including a music concert, a video of an inside of a structure, or an outdoor video.

  18. The information processing apparatus according to claim 1, wherein the display device is a head mounted display.

  19. An information processing method of an information processing apparatus, wherein the information processing apparatus controls a display device to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to the second video viewable from a second viewpoint different from the first viewpoint.

  20. A program for causing a computer to function as a display control part that controls a display device to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to the second video viewable from a second viewpoint different from the first viewpoint.

Description

TECHNICAL FIELD

[0001] The present technology relates to an information processing apparatus, an information processing method, and a program, and in particular, to an information processing apparatus, an information processing method, and a program that enable improvement of user convenience.

BACKGROUND ART

[0002] In recent years, research and development of a technology for providing a virtual reality (VR) function using devices such as a head mounted display (HMD) have been actively performed (for example, see Patent Document 1).

[0003] Patent Document 1 discloses a technology in which, on a head mounted display connected to a game machine, when a position indicated by a marker is selected as a viewpoint position, an image of the game field seen from that viewpoint position is generated and displayed.

CITATION LIST

Patent Document

[0004] Patent Document 1: Japanese Patent Application Laid-Open No. 2017-102297

SUMMARY OF THE INVENTION

Problems to be Solved by the Invention

[0005] Incidentally, in a device such as a head mounted display, when the viewpoint of a video is switched, the user may, for example, lose track of his or her own viewing direction and position, and may experience motion sickness due to a sudden change in the video or a mismatch between the change in the video and the user's actual body motions.

[0006] Therefore, in a device such as a head mounted display, there is a need for a technique for avoiding such inconveniences associated with switching the viewpoint of a video and for improving user convenience.

[0007] The present technology has been made in view of such circumstances, and is intended to enable improvement of user convenience.

Solutions to Problems

[0008] An information processing apparatus according to an aspect of the present technology is an information processing apparatus including a display control part that controls a display device to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to the second video viewable from a second viewpoint different from the first viewpoint.

[0009] The information processing apparatus according to an aspect of the present technology may be an independent apparatus, or may be an internal block included in one device.

[0010] An information processing method and a program according to an aspect of the present technology are an information processing method and a program corresponding to the above-described information processing apparatus according to an aspect of the present technology.

[0011] In the information processing apparatus, an information processing method, and a program according to an aspect of the present technology, a display device is controlled to display a transition image that changes substantially continuously and includes a background image having an information amount that is smaller than at least one of a background image of a first video or a background image of a second video, when switching from the first video viewable from a first viewpoint to a second video viewable from a second viewpoint different from the first viewpoint.

Effects of the Invention

[0012] According to an aspect of the present technology, user convenience can be improved.

[0013] Note that the effects described herein are not necessarily limited, and any of the effects described in the present disclosure may be applied.

BRIEF DESCRIPTION OF DRAWINGS

[0014] FIG. 1 is a block diagram showing a configuration example of a video reproduction system according to an embodiment to which the present technology is applied.

[0015] FIG. 2 is a diagram showing a display example of an omnidirectional live action video captured by an imaging device installed in a soccer stadium.

[0016] FIG. 3 is a diagram showing an example of the omnidirectional live action video before viewpoint movement in the soccer stadium.

[0017] FIG. 4 is a diagram showing a first example of a CG video at the time of viewpoint movement in the soccer stadium.

[0018] FIG. 5 is a diagram showing a second example of a CG video at the time of viewpoint movement in the soccer stadium.

[0019] FIG. 6 is a diagram showing a third example of a CG video at the time of viewpoint movement in the soccer stadium.

[0020] FIG. 7 is a diagram showing an example of an omnidirectional live action video after viewpoint movement in the soccer stadium.

[0021] FIG. 8 is a flowchart for explaining a flow of reproduction and display control processing.

[0022] FIG. 9 is a timing chart showing an example of highlight video distribution of soccer.

[0023] FIG. 10 is a diagram showing a display example of a miniature CG video in the field.

[0024] FIG. 11 is a diagram showing an example of the distance from the user's line of sight to the field when an omnidirectional live action video is displayed.

[0025] FIG. 12 is a diagram showing an example of the distance from the user's line of sight to the field when a miniature CG video is displayed.

[0026] FIG. 13 is a diagram showing a first example of a miniature CG video of a goal scene of soccer.

[0027] FIG. 14 is a diagram showing a second example of a miniature CG video of a goal scene of soccer.

[0028] FIG. 15 is a diagram showing a third example of a miniature CG video of a goal scene of soccer.

[0029] FIG. 16 is a diagram showing a first example of a miniature CG video of a musical instrument arrangement of an orchestra.

[0030] FIG. 17 is a diagram showing a second example of a miniature CG video of a musical instrument arrangement of an orchestra.

[0031] FIG. 18 is a diagram showing a third example of a miniature CG video of a musical instrument arrangement of an orchestra.

[0032] FIG. 19 is a timing chart showing an example of music live video distribution.

[0033] FIG. 20 is a diagram showing an example of an omnidirectional live action video in a first viewpoint in music live video distribution.

[0034] FIG. 21 is a diagram showing an example of a CG video in music live video distribution.

[0035] FIG. 22 is a diagram showing an example of an omnidirectional live action video in a second viewpoint in music live video distribution.

[0036] FIG. 23 is a diagram showing a configuration example of a computer.

MODE FOR CARRYING OUT THE INVENTION

[0037] Hereinafter, embodiments of the present technology will be described with reference to the drawings. Note that the description will be given in the following order.

[0038] 1. First embodiment: Video reproduction of soccer game

[0039] 2. Second embodiment: Video reproduction of soccer game (display scale change)

[0040] 3. Third embodiment: Video reproduction of orchestra concert (display scale change)

[0041] 4. Fourth embodiment: Music live video reproduction

[0042] 5. Modification

[0043] 6. Computer configuration

  1. First Embodiment

[0044] (Configuration Example of Video Reproduction System)

[0045] FIG. 1 is a block diagram showing a configuration example of a video reproduction system according to an embodiment to which the present technology is applied.

[0046] A video reproduction system 1 is a system that processes data such as image data captured by an imaging device such as an omnidirectional camera or computer graphics (CG) model data, and causes a display device such as a head mounted display to display a video such as an omnidirectional live action video or a CG video obtained as a result of the processing.

[0047] In FIG. 1, the video reproduction system 1 includes: an information processing apparatus 10 that performs central processing; a video and CG control data storage part 21 and a CG model data storage part 22 that store data input to the information processing apparatus 10; and a display device 31 and a speaker 32 that present data output from the information processing apparatus 10.

[0048] The information processing apparatus 10 is configured as an electronic device such as a game machine, a personal computer, or a unit equipped with a dedicated processor, for example. The information processing apparatus 10 includes a UI and content control part 101, a reproduction part 102, and a rendering part 103.

[0049] The UI and content control part 101 includes, for example, a central processing unit (CPU), a microprocessor, and the like. The UI and content control part 101 operates as the central control device of the information processing apparatus 10, performing various arithmetic processes, operation control, and the like.

[0050] The UI and content control part 101 controls the reproduction part 102 and the rendering part 103 to control the display and reproduction of a user interface (UI) and content.

[0051] For example, an operation signal according to an operation on an operation device (for example, a controller or the like) by a user wearing the head mounted display is input to the UI and content control part 101. The UI and content control part 101 controls the operation of each part of the information processing apparatus 10 on the basis of the input operation signal.

[0052] Furthermore, information obtained from a tracking signal according to a motion of the head of the user wearing the head mounted display (hereinafter, referred to as head tracking information), and information regarding the imaging position and the imaging direction of the omnidirectional live action video (hereinafter, referred to as omnidirectional live action imaging point information) are input to the UI and content control part 101.

[0053] Note that the omnidirectional live action video is a video obtained by processing image data captured by an imaging device such as an omnidirectional camera installed, for example, in a predetermined facility or outdoors, and is a 360-degree panoramic video covering all directions: up, down, left, and right.

[0054] The UI and content control part 101 performs predetermined arithmetic processing (for example, arithmetic processing for calculating a user’s viewpoint or calculating a display angle of view) using at least one of input head tracking information or omnidirectional live action imaging point information. The UI and content control part 101 controls the reproduction part 102 and the rendering part 103 on the basis of an arithmetic processing result obtained by predetermined arithmetic processing.
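The viewpoint calculation described above can be sketched as follows. This is a minimal Python example, assuming the head tracking information arrives as an orientation quaternion and that the user looks down the negative z axis at the identity pose; both are assumptions, since the patent does not specify a tracking format.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v = (x, y, z) by quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # Normalize defensively so only the orientation matters.
    n = math.sqrt(w * w + x * x + y * y + z * z)
    w, x, y, z = w / n, x / n, y / n, z / n
    vx, vy, vz = v
    # t = 2 * cross(q.xyz, v); v' = v + w*t + cross(q.xyz, t)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def view_direction(head_pose_quat):
    """Map a head-tracking quaternion to the user's gaze direction.

    Assumes a right-handed coordinate system in which the user looks
    down the negative z axis when the head pose is the identity.
    """
    return quat_rotate(head_pose_quat, (0.0, 0.0, -1.0))
```

The resulting direction vector is what a control part would then use to choose the displayed portion of the omnidirectional video.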

[0055] The UI and content control part 101 includes a reproduction control part 111 and a display control part 112.

[0056] The reproduction control part 111 controls reproduction processing performed by the reproduction part 102. The display control part 112 controls rendering processing performed by the rendering part 103.

[0057] Under the control of the reproduction control part 111, the reproduction part 102 processes video data and audio data of content input thereto and performs reproduction processing for reproducing the content.

[0058] The reproduction part 102 includes a data acquisition part 121, a demux 122, a first video decoder 123, a second video decoder 124, an audio decoder 125, a CG control data decoder 126, and a synchronization control part 127.

[0059] The data acquisition part 121 acquires input data related to the content to be reproduced from the video and CG control data storage part 21 and supplies the input data to the demux 122.

[0060] Here, for example, various types of data such as data of the omnidirectional live action video obtained from image data captured by an imaging device such as an omnidirectional camera, and CG control data for controlling a CG video are recorded in the video and CG control data storage part 21.

[0061] Note that the data recorded in the video and CG control data storage part 21 has been encoded according to a predetermined encoding method. Furthermore, the CG control data is control data of the CG model that changes over time, and includes, for example, motion data, position information, and vertex and mesh change information.

[0062] The demux 122 separates the input data supplied from the data acquisition part 121 into encoded video data, encoded audio data, and encoded CG control data. Note that, here, the input data includes two series of encoded video data (first encoded video data and second encoded video data) from different imaging devices (for example, omnidirectional cameras).

[0063] The demux 122 supplies, among the pieces of data obtained by separating the input data, the first encoded video data to the first video decoder 123, the second encoded video data to the second video decoder 124, the encoded audio data to the audio decoder 125, and the encoded CG control data to the CG control data decoder 126.
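The routing performed by the demux 122 amounts to splitting one multiplexed input into per-stream queues for the downstream decoders. A minimal sketch, assuming a hypothetical packet layout of (stream_id, payload) pairs; the stream names are illustrative, not from the patent:

```python
def demux(packets):
    """Route multiplexed packets to per-stream queues, as the demux 122 does.

    Each packet is assumed (hypothetically) to be a (stream_id, payload)
    pair; "video1"/"video2" stand for the two series of encoded video data,
    "audio" for the encoded audio data, and "cg_control" for the encoded
    CG control data.
    """
    streams = {"video1": [], "video2": [], "audio": [], "cg_control": []}
    for stream_id, payload in packets:
        streams[stream_id].append(payload)
    return streams
```

Each queue would then feed the corresponding decoder (first video decoder, second video decoder, audio decoder, or CG control data decoder).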

[0064] The first video decoder 123 decodes the first encoded video data supplied from the demux 122 according to a predetermined decoding method, and supplies the resulting first video data to the synchronization control part 127.

[0065] The second video decoder 124 decodes the second encoded video data supplied from the demux 122 according to a predetermined decoding method, and supplies the resulting second video data to the synchronization control part 127.

[0066] The audio decoder 125 decodes the encoded audio data supplied from the demux 122 according to a predetermined decoding method, and supplies the resulting audio data to the synchronization control part 127.

[0067] The CG control data decoder 126 decodes the encoded CG control data supplied from the demux 122 according to a predetermined decoding method, and supplies the resulting CG control data to the synchronization control part 127.

[0068] The first video data from the first video decoder 123, the second video data from the second video decoder 124, the audio data from the audio decoder 125, and the CG control data from the CG control data decoder 126 are input to the synchronization control part 127.

[0069] The synchronization control part 127 performs synchronization control of synchronizing the first video data, the second video data, the audio data, and the CG control data input thereto, and supplies each of the synchronized first video data, second video data, audio data, and CG control data to the rendering part 103.
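The synchronization control can be thought of as merging the decoded streams into presentation order by time stamp. A minimal sketch under the assumption that each decoder emits (timestamp, payload) tuples already sorted in time; the tuple layout and stream names are hypothetical:

```python
import heapq

def synchronize(streams):
    """Merge per-stream samples into presentation order by time stamp.

    `streams` maps a stream name to a time-sorted list of
    (timestamp, payload) tuples. The merged output interleaves all
    streams so that video, audio, and CG control data for the same
    moment are released together.
    """
    tagged = [[(ts, name, payload) for ts, payload in samples]
              for name, samples in streams.items()]
    # heapq.merge requires each input already sorted, which holds here.
    return list(heapq.merge(*tagged))
```

A real implementation would release samples against a reference clock rather than all at once, but the ordering logic is the same.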

[0070] The first video data, the second video data, the audio data, and the CG control data are synchronously input to the rendering part 103 from the synchronization control part 127 of the reproduction part 102. Furthermore, the CG model data is input to the rendering part 103 from the CG model data storage part 22.

[0071] Here, various types of data, such as CG model data, are recorded in the CG model data storage part 22. Note that the CG model data is data of the CG model that does not change over time, and includes, for example, mesh data, texture data, and material data.

[0072] Under the control of the display control part 112, the rendering part 103 processes the video data, audio data, and CG data of the content input thereto, and performs rendering processing for outputting the video and sound of the content and the CG.

[0073] Specifically, the rendering part 103 performs rendering processing on the first video data or the second video data, and outputs the resulting video output data to the display device 31 via a predetermined interface. Therefore, the display device 31 displays a video of content such as an omnidirectional live action video on the basis of the video output data output from (the rendering part 103 of) the information processing apparatus 10.

[0074] Furthermore, the rendering part 103 performs rendering processing on audio data, and outputs the resulting sound output data to the speaker 32 via a predetermined interface. Therefore, the speaker 32 outputs sound synchronized with the video of content such as an omnidirectional live action video on the basis of the sound output data output from (the rendering part 103 of) the information processing apparatus 10.

[0075] Moreover, the rendering part 103 performs rendering processing on CG model data on the basis of CG control data, and outputs the resulting CG video output data to the display device 31. Therefore, the display device 31 displays a CG video on the basis of the CG video output data output from (the rendering part 103 of) the information processing apparatus 10.

[0076] Here, in a case where the UI and content control part 101 performs display switching processing of switching between an omnidirectional live action video and a CG video, for example, the following processing is performed according to the switching target.

[0077] That is, at the time of switching from the omnidirectional live action video to the CG video, the UI and content control part 101 adjusts the position of the CG rendering camera so that the viewpoint direction of the omnidirectional live action video and the CG video match, and gives an instruction to the rendering part 103.

[0078] On the other hand, at the time of switching from the CG video to the omnidirectional live action video, the UI and content control part 101 performs, for example, the following three processes for transition to the omnidirectional live action video in the same viewpoint.

[0079] First, from among a plurality of omnidirectional live action videos, an omnidirectional live action video closest to the CG video at the time of switching is selected. Next, an instruction is given to the rendering part 103 to move the CG rendering camera position to the viewpoint of the imaging device (for example, the omnidirectional camera) from which the selected omnidirectional live action video has been captured. Then, an instruction is given to the rendering part 103 to change the front viewpoint direction of the omnidirectional live action video after the transition, according to the direction that the user has viewed in the CG.
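The first of the three steps, selecting the omnidirectional live action video whose viewpoint is closest to the CG rendering camera at the time of switching, can be sketched as a nearest-neighbor search over the camera installation positions. The camera names and coordinates below are illustrative assumptions, not from the patent:

```python
import math

def nearest_camera(cg_camera_pos, live_cameras):
    """Pick the live-action viewpoint closest to the CG rendering camera.

    `cg_camera_pos` is the (x, y, z) position of the CG rendering camera,
    and `live_cameras` maps a camera name to its installation position.
    Returns the name of the camera with the smallest Euclidean distance.
    """
    return min(live_cameras,
               key=lambda name: math.dist(cg_camera_pos, live_cameras[name]))
```

The remaining two steps (moving the CG rendering camera to the selected viewpoint, then aligning the front viewpoint direction of the live action video with the direction the user was viewing in the CG) would follow this selection.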

[0080] Note that, here, since the CG control data, which carries time stamps synchronized with the video and sound, is passed to the rendering part 103 in synchronization with them, the following three processes can be performed, for example.

[0081] That is, first, it is possible to synchronize a plurality of omnidirectional live action videos and CG videos so that a scene at the same timing is represented even when the videos are switched. Secondly, it is possible to perform trick play, such as fast-forward and rewind, through the synchronization of the omnidirectional live action video and the CG video. Thirdly, even if switching is made between a plurality of omnidirectional live action videos and CG videos, sound synchronized with those videos can be reproduced continuously.

[0082] The display device 31 is configured as an electronic device having a display, such as a head mounted display or a smartphone, for example. Note that, in the following description, a head mounted display (a head mounted display 31A in FIG. 2 as described later) will be described as an example of the display device 31.

[0083] Furthermore, in the configuration shown in FIG. 1, the speaker 32 is shown as the sound output device. However, the sound output device is not limited to the speaker 32. For example, a user wearing a head mounted display may instead use earphones or headphones, and sound may be output from them.

[0084] Note that the information processing apparatus 10, the display device 31, and the speaker 32 can be connected by a wire via a cable compliant with a predetermined standard, or can be connected by wireless communication compliant with a predetermined standard, for example.

[0085] The video reproduction system 1 is configured as described above.

[0086] Note that, in FIG. 1, description has been made that head tracking information is used as the tracking information used in the arithmetic processing in the UI and content control part 101. However, for example, position tracking information indicating a spatial position of a head mounted display, eye tracking information according to a motion of the user’s line-of-sight, or the like may be further used.

[0087] Furthermore, in FIG. 1, description has been made that various types of data such as data of an omnidirectional live action video, CG control data, and CG model data are recorded in, for example, the video and CG control data storage part 21 and the CG model data storage part 22 including a large-capacity recording medium such as a hard disk drive (HDD), a semiconductor memory, or an optical disk, and the information processing apparatus 10 obtains input data therefrom. However, the data may be obtained through another route.

[0088] For example, a communication I/F may be provided in the information processing apparatus 10 so as to be connectable to the Internet so that various types of data such as data of omnidirectional live action video distributed from a server on the Internet are received and input to the reproduction part 102. Furthermore, a tuner may be provided in the information processing apparatus 10 to enable reception of a broadcast wave via an antenna so that various types of data such as data of an omnidirectional live action video obtained from the broadcast wave are input to the reproduction part 102.

[0089] (Animation Display at the Time of Viewpoint Movement)

[0090] FIG. 2 shows a display example of an omnidirectional live action video captured by an imaging device installed in a soccer stadium 2.

[0091] Although FIG. 2 shows a field 3 in the soccer stadium 2, actually, a stand is provided so as to surround the field 3. In this example, a camera 41-1 is installed on the upper part of the stand on the near side with respect to the field 3, and a camera 41-2 is installed behind one goal fixed to the field 3.

[0092] The cameras 41-1 and 41-2 are, for example, omnidirectional cameras: imaging devices capable of capturing an omnidirectional live action video, that is, a 360-degree panoramic video covering all directions, up, down, left, and right. Note that, in the following description, an omnidirectional live action video captured by an omnidirectional camera will be described as an example, but the imaging device is not limited to an omnidirectional camera, and a live action video captured by another imaging device may be used. For example, a live action video (for example, a video having a 180-degree viewing angle) captured by a normal camera fitted with a fisheye lens or a wide-angle lens may be used.

[0093] The camera 41-1 can capture an omnidirectional live action video according to the installation position of the upper part of the stand on the near side. Furthermore, the camera 41-2 can capture an omnidirectional live action video according to the installation position behind the goal. Note that data of the omnidirectional live action video captured by the cameras 41-1 and 41-2 can be recorded in the video and CG control data storage part 21 (FIG. 1).

[0094] Then, for example, the omnidirectional live action video obtained in this manner is reproduced by the information processing apparatus 10 (FIG. 1) and displayed on the head mounted display 31A as the display device 31, so that the user wearing the head mounted display 31A can enjoy a realistic feeling, as if he or she were at the soccer stadium 2.

[0095] For example, the camera 41-1 allows the head mounted display 31A to display an omnidirectional live action video seen from the upper part of the stand. Furthermore, for example, the camera 41-2 allows the head mounted display 31A to display an omnidirectional live action video seen from behind the goal.

[0096] Note that the head mounted display 31A is a display device that is mounted on the head so as to cover both eyes of the user and allows the user to view a still image or a moving image displayed on a display screen provided in front of the user's eyes, for example. The content displayed on the head mounted display 31A includes, for example, sports programs such as soccer broadcasts, videos of concerts and live music performances, TV programs, movies, and game images.

[0097] Furthermore, FIG. 2 shows a case where the camera 41-1 is installed on the upper part of the stand on the near side and the camera 41-2 is installed behind one goal, but the installation positions of the cameras 41 are not limited to these. For example, an arbitrary number of cameras can be installed at arbitrary positions in the soccer stadium 2, such as the upper part of a stand on the far side (main stand or back stand) or behind the other goal. Furthermore, in the following description, the cameras 41-1 and 41-2 will be referred to simply as the camera 41 unless it is particularly necessary to distinguish between them.

[0098] Here, a case is assumed where the omnidirectional live action video displayed on the head mounted display 31A is switched from the omnidirectional live action video in the upper part of the stand captured by the camera 41-1 to the omnidirectional live action video behind the goal captured by the camera 41-2.

[0099] At this time, the information processing apparatus 10 (FIG. 1) causes the head mounted display 31A to display an animation of the viewpoint movement by switching to a continuous CG video display during the viewpoint movement from a first viewpoint, from which the omnidirectional live action video in the upper part of the stand can be viewed, to a second viewpoint, from which the omnidirectional live action video behind the goal can be viewed.

[0100] FIG. 3 shows an example of an omnidirectional live action video before the viewpoint movement, displayed on the head mounted display 31A. In FIG. 3, the head mounted display 31A displays an omnidirectional live action video 301 having a viewpoint corresponding to the line-of-sight direction of the user viewing the omnidirectional live action video captured by the camera 41-1 on the upper part of the stand.

[0101] FIGS. 4 to 6 show an example of a CG video displayed during the viewpoint movement on the head mounted display 31A. Note that it is assumed that the CG videos shown in FIGS. 4 to 6 are displayed in chronological order in that order.

[0102] First, as shown in FIG. 4, a CG video 302 whose viewpoint is from the direction of the upper part of the stand is displayed on the head mounted display 31A. That is, the viewpoint of the CG video 302 substantially matches the viewpoint of the omnidirectional live action video 301 (FIG. 3) before the above-described viewpoint movement.

[0103] Furthermore, unlike the omnidirectional live action video 301 (FIG. 3), the CG video 302 does not include the stand, the spectators, the players, and the like, and the lines marking the field 3 (for example, the halfway line, the touch lines, the goal lines, and the like) and the goal set at the center of each goal line are represented by a wire frame (that is, represented only by outlines).

[0104] That is, the CG video 302 includes, for example, an image represented by a predetermined single color such as black or blue as a background image, and thus has an information amount that is smaller than that of the background image of the omnidirectional live action video 301. Note that the wire frame is one of the three-dimensional modeling and rendering methods, and represents an object by a set of lines corresponding only to its three-dimensional edges.
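As an illustrative sketch (not taken from the patent), such a wire-frame representation can be held as a small set of 3-D line segments drawn over a single-color background; the 105 m x 68 m field dimensions and the function name below are assumptions.

```python
def field_wireframe(length=105.0, width=68.0):
    """Field marking as ((x, y, z), (x, y, z)) line segments; the
    105 m x 68 m dimensions are an illustrative assumption."""
    half_l, half_w = length / 2.0, width / 2.0
    return [
        # touch lines
        ((-half_l, -half_w, 0.0), (half_l, -half_w, 0.0)),
        ((-half_l, half_w, 0.0), (half_l, half_w, 0.0)),
        # goal lines
        ((-half_l, -half_w, 0.0), (-half_l, half_w, 0.0)),
        ((half_l, -half_w, 0.0), (half_l, half_w, 0.0)),
        # halfway line
        ((0.0, -half_w, 0.0), (0.0, half_w, 0.0)),
    ]

# A renderer only needs to draw these few segments over a single-color
# background, so such a frame carries far less information than a
# live-action frame that also shows the stand, spectators, and players.
```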

[0105] Next, as shown in FIG. 5, a CG video 303 having a different viewpoint from the CG video 302 (FIG. 4) is displayed on the head mounted display 31A. For example, the viewpoint of the CG video 303 is an arbitrary position on a trajectory connecting the installation position of the camera 41-1 in the upper part of the stand and the installation position of the camera 41-2 behind the goal.

[0106] Furthermore, the CG video 303 represents, by the wire frame, a line or a goal marking the field 3, similarly to the CG video 302 (FIG. 4). Moreover, the CG video 303 includes an image represented by a predetermined single color such as black, for example, as a background image, similarly to the CG video 302 (FIG. 4).
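The viewpoint of an intermediate CG video such as the CG video 303, described above as an arbitrary position on the trajectory connecting the two camera installation positions, can be pictured as linear interpolation between those positions; the coordinates and names below are hypothetical, introduced only for illustration.

```python
def lerp(p0, p1, t):
    """Linear interpolation between two 3-D points, 0.0 <= t <= 1.0."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

# Hypothetical installation positions in metres (not from the patent):
cam_41_1 = (0.0, -40.0, 20.0)   # camera 41-1, upper part of the stand
cam_41_2 = (-55.0, 0.0, 2.0)    # camera 41-2, behind the goal

# Viewpoints for the transition images; the midpoint (t = 0.5) would
# roughly correspond to an intermediate CG video such as CG video 303.
waypoints = [lerp(cam_41_1, cam_41_2, i / 4) for i in range(5)]
```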

[0107] Next, as shown in FIG. 6, a CG video 304 whose viewpoint is from the direction behind the goal is displayed on the head mounted display 31A. That is, the viewpoint of the CG video 304 substantially matches the viewpoint of the omnidirectional live action video 305 (FIG. 7) after the viewpoint movement described later.

[0108] Furthermore, the CG video 304 represents, by the wire frame, the lines and the goal marking the field 3, similarly to the CG video 302 (FIG. 4) and the CG video 303 (FIG. 5). Moreover, the CG video 304 includes an image represented by a predetermined single color such as black, for example, as a background image.

[0109] As described above, in the head mounted display 31A, when switching the viewpoint from the omnidirectional live action video of the upper part of the stand to the omnidirectional live action video behind the goal, the information processing apparatus 10 (FIG. 1) causes display of continuously changing CG videos (so to speak, transition images) like the CG video 302 (FIG. 4), the CG video 303 (FIG. 5), and the CG video 304 (FIG. 6) represented by wire frame, so that an animation of the viewpoint movement is displayed.

[0110] Furthermore, at this time, in the CG video 302, the CG video 303, and the CG video 304 as transition images, as the viewpoint moves, the scale of the lines and the goal represented by the wire frame changes. Therefore, it can be said that the transition image includes an image corresponding to a change in the convergence angle of both eyes of the user.
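The convergence angle mentioned here can be illustrated with elementary geometry: for an inter-pupillary distance d and a fixation distance D, both eyes converge by an angle of 2·atan(d / 2D). The sketch below assumes a typical inter-pupillary distance of 64 mm; the specific distances are illustrative, not from the patent.

```python
import math

def convergence_angle(distance_m, ipd_m=0.064):
    """Convergence angle (radians) of both eyes fixating a point at
    distance_m, assuming an inter-pupillary distance of ipd_m."""
    return 2.0 * math.atan((ipd_m / 2.0) / distance_m)

# Viewing the goal from the upper part of the stand (far) versus from
# behind the goal (near): moving the viewpoint closer increases the
# convergence angle, which the transition image reflects by changing
# the scale of the wire-frame lines.
angle_far = convergence_angle(50.0)
angle_near = convergence_angle(5.0)
```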

[0111] FIG. 7 shows an example of an omnidirectional live action video after the viewpoint movement displayed on the head mounted display 31A. In FIG. 7, the head mounted display 31A displays an omnidirectional live action video 305 having a viewpoint according to the line-of-sight direction of the user viewing the omnidirectional live action video captured by the camera 41-2 behind the goal.

[0112] As described above, when the viewpoint is switched from the omnidirectional live action video 301 in the upper part of the stand (FIG. 3) to the omnidirectional live action video 305 behind the goal (FIG. 7), the animation of the viewpoint movement (the transition images in which the CG videos, from the CG video 302 to the CG video 304, change continuously) is displayed, so that the video is prevented from becoming monotonous, and the user can grasp how the viewpoint has changed.

[0113] Furthermore, when transition images in which the CG videos represented by the wire frame change continuously are displayed as the animation of the viewpoint movement, the detailed information of the soccer stadium 2 is deformed, and therefore, motion sickness (so-called VR motion sickness) of the user using the head mounted display 31A can be reduced.

[0114] Note that although a case where CG videos represented by the wire frame are used as the animation of the viewpoint movement has been described, the representation by the wire frame is an example of an expression method for deforming an omnidirectional live action video, and another representation method may be used. Furthermore, in this specification, the term “deformation” has a meaning of simplifying a video and emphasizing features of the video.

[0115] (Flow of Reproduction and Display Control Processing)

[0116] Next, a flow of a reproduction and display control processing performed by the UI and content control part 101 of the information processing apparatus 10 (FIG. 1) will be described with reference to a flowchart of FIG. 8.

[0117] Note that, as a premise of the processing shown in the flowchart of FIG. 8, the information processing apparatus 10, such as a game machine or a personal computer, is connected to the head mounted display 31A. Then, the user wearing the head mounted display 31A on the head operates a controller or the like while watching the screen displayed on the display, such that the user can switch the viewpoint of the video displayed on the screen (the omnidirectional live action video or the CG video), for example.

[0118] In step S11, the UI and content control part 101 controls the reproduction part 102 to reproduce the omnidirectional live action video. Therefore, for example, the omnidirectional live action video 301 (FIG. 3) is displayed on the head mounted display 31A as the omnidirectional live action video in the upper part of the stand.

[0119] In step S12, the UI and content control part 101 determines whether or not there has been a viewpoint change instruction that is an instruction to change the viewpoint of a video from a user or a distributor, on the basis of an operation signal or the like input to the UI and content control part 101.

[0120] In a case where it is determined in step S12 that there is no viewpoint change instruction, the process returns to step S11, and the above-described process is repeated. In this case, for example, the display of the omnidirectional live action video 301 (FIG. 3) having a viewpoint according to the line-of-sight direction of the user viewing the omnidirectional live action video in the upper part of the stand is continued.

[0121] On the other hand, in step S12, for example, in a case where it is determined that the controller has been operated by the user and a viewpoint change instruction has been given, the process proceeds to step S13.

[0122] Note that, for a case where the viewpoint change instruction is given by the distributor, for example, when the content creator creates content in which the viewpoint is changed at a certain timing (for example, the switching time on the reproduction time axis of the omnidirectional live action video in the upper part of the stand), it is determined that the viewpoint change instruction has been given at the time when the timing (the switching time) is reached during the reproduction of the content.

[0123] In step S13, the UI and content control part 101 acquires the omnidirectional live action imaging point information and the head tracking information of the head mounted display 31A.

[0124] In step S14, the UI and content control part 101 calculates the display angle of view of the CG model read from the CG model data storage part 22 on the basis of the omnidirectional live action imaging point information and the head tracking information acquired in the processing of step S13.

[0125] In step S15, the UI and content control part 101 controls the rendering part 103 on the basis of the calculation result calculated in the processing of step S14, and renders the CG model at the initial position (the same position as the omnidirectional live action video). Therefore, for example, the CG video 302 (FIG. 4) corresponding to the viewpoint of the omnidirectional live action video 301 (FIG. 3) is displayed on the head mounted display 31A.

[0126] In step S16, the UI and content control part 101 acquires the head tracking information of the head mounted display 31A.

[0127] In step S17, the UI and content control part 101 calculates the line-of-sight direction of the user wearing the head mounted display 31A on the basis of the head tracking information acquired in the processing of step S16.
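The calculation of the user's line-of-sight direction from head tracking information in step S17 might, under a simple yaw/pitch convention, look like the sketch below. The convention itself is an assumption introduced here; the patent does not specify one.

```python
import math

def line_of_sight(yaw_deg, pitch_deg):
    """Unit line-of-sight vector (x, y, z) from head tracking angles.
    Convention assumed for illustration: yaw is rotation about the
    vertical axis, pitch about the lateral axis, and (0, 0) looks
    along the +z axis."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```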

[0128] In step S18, the UI and content control part 101 controls the rendering part 103 on the basis of the calculation result calculated in the processing of step S17, and renders the CG model in the latest viewpoint direction. Therefore, for example, the CG video 303 (FIG. 5) is displayed on the head mounted display 31A.

[0129] In step S19, the UI and content control part 101 determines whether or not there has been a viewpoint determination instruction that is an instruction to determine the viewpoint of a video from a user or a distributor, on the basis of an operation signal or the like input to the UI and content control part 101.

[0130] In a case where it is determined in step S19 that there is no viewpoint determination instruction, the process returns to step S16, and the above-described process is repeated. That is, by repeating the processing of steps S16 to S18, for example, the CG video according to the user's line of sight (for example, a CG video represented by the wire frame) is displayed on the head mounted display 31A, following the CG video 303 (FIG. 5).

[0131] On the other hand, in a case where it is determined in step S19 that a viewpoint determination instruction has been given, the process proceeds to step S20. In step S20, the UI and content control part 101 selects the omnidirectional live action video closest to the latest viewpoint direction from a plurality of omnidirectional live action videos.

[0132] Here, for example, in a case where a viewpoint determination instruction is given immediately after the CG video 304 (FIG. 6) is displayed, the omnidirectional live action video behind the goal corresponding to the viewpoint of the CG video 304 (FIG. 6) is selected as the omnidirectional live action video closest to the latest viewpoint direction.
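The selection in step S20 of the omnidirectional live action video closest to the latest viewpoint direction can be sketched as a cosine-similarity comparison of direction vectors. The camera names and directions below are hypothetical, introduced only for illustration.

```python
import math

def select_nearest_video(view_dir, cameras):
    """Return the name of the camera whose viewing direction is closest
    to the user's latest viewpoint direction (largest cosine similarity)."""
    def cosine(a, b):
        dot = sum(p * q for p, q in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return max(cameras, key=lambda name: cosine(view_dir, cameras[name]))

# Hypothetical viewing directions for the two cameras:
cameras = {
    "41-1 (upper stand)": (0.0, 1.0, -0.4),
    "41-2 (behind goal)": (1.0, 0.0, 0.0),
}
# Immediately after the CG video 304 the user looks roughly along the
# goal axis, so the video behind the goal is selected.
selected = select_nearest_video((0.9, 0.1, 0.0), cameras)
```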

[0133] In step S21, the UI and content control part 101 controls the reproduction part 102 to reproduce the omnidirectional live action video selected in the processing of step S20. For example, the omnidirectional live action video 305 (FIG. 7) having a viewpoint according to the line-of-sight direction of the user is displayed on the head mounted display 31A as the omnidirectional live action video behind the goal. However, when displaying the omnidirectional live action video 305 (FIG. 7) behind the goal, its front direction is determined so as to match the latest viewpoint direction of the user before the display.

[0134] The flow of the reproduction and display control processing has been described above.

[0135] In this reproduction and display control processing, the display control part 112 of the UI and content control part 101 causes sequential display of CG videos such as the CG video 302, the CG video 303, and the CG video 304, for example, as transition images that change substantially continuously, when switching is made from the first video (for example, the omnidirectional live action video 301) that can be viewed from the first viewpoint (for example, a viewpoint corresponding to the upper part of the stand) to the second video (for example, the omnidirectional live action video 305) that can be viewed from the second viewpoint (for example, a viewpoint corresponding to behind the goal).
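The overall flow of steps S11 to S21 can be condensed into the following hedged sketch. The callback names (get_instruction, get_head_tracking, render_cg, play_video, nearest_video) are assumptions introduced for illustration, not APIs from the patent.

```python
# Hypothetical condensation of steps S11-S21 of the flowchart of FIG. 8.
def reproduction_and_display_control(get_instruction, get_head_tracking,
                                     render_cg, play_video, nearest_video):
    displayed = []
    # S11/S12: reproduce the live action video until a viewpoint
    # change instruction arrives.
    while get_instruction() != "change":
        displayed.append(play_video("upper stand"))
    # S13-S15: render the CG model at the initial position
    # (the same position as the live action video).
    displayed.append(render_cg(get_head_tracking()))
    # S16-S19: keep rendering the CG model in the latest viewpoint
    # direction until a viewpoint determination instruction arrives.
    while get_instruction() != "determine":
        displayed.append(render_cg(get_head_tracking()))
    # S20/S21: reproduce the live action video closest to the
    # latest viewpoint direction.
    displayed.append(play_video(nearest_video(get_head_tracking())))
    return displayed
```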

……
……
……
