Patent: Information Processing Apparatus, Information Processing Method, And Program
Publication Number: 20200059705
Publication Date: 2020-02-20
Applicants: Sony
Abstract
A system for distributing a partial video portion in a series of imaged video to a user facilitates range correction of the video portion to reduce a burden on a person involved in video imaging. An information processing apparatus according to the present technology includes an information acquisition unit configured to acquire information of an in-point and an out-point specifying a partial range in an imaged video by an imaging device as a video portion to be distributed to a user, and a transmission control unit configured to perform control to transmit the information of an in-point and an out-point acquired by the information acquisition unit and the imaged video to an external device.
TECHNICAL FIELD
[0001] The present technology relates to an information processing apparatus, an information processing method, and a program, and in particular, relates to a technology suitable in a case where a part of each of a plurality of imaged videos by a plurality of imaging devices is set to a distribution object video to a user.
BACKGROUND ART
[0002] For example, as described in Patent Documents 1 and 2 below, an in-point and an out-point are used for editing to cut out a part of a video. For example, with regard to a video (material video) to be edited, editing to delete a video outside a range from the in-point to the out-point and cut out only a video portion in the range is performed.
[0003] Note that the following patent documents can be cited, for example, as related conventional techniques.
CITATION LIST
Patent Document
[0004] Patent Document 1: Japanese Patent Application Laid-Open No. 2005-267677
[0005] Patent Document 2: Japanese Patent Application Laid-Open No. 2004-104468
SUMMARY OF THE INVENTION
Problems to be Solved by the Invention
[0006] Here, as one of video distribution services, for example, it is conceivable to image a performer in a predetermined imaging environment such as a live house and distribute a video obtained by the imaging from a server device to the performer.
[0007] For example, a performance event in a live house often takes a form in which a plurality of performers sequentially gives a performance according to a predetermined time schedule. It is conceivable that an individual performer uploads his/her own performance video to a required video posting site to perform his/her own promotion and the like. Therefore, it is conceivable to develop a service of imaging the entire performance event, cutting out performance portions of the individual performers from the imaged video, and distributing the cut-out individual video portions to terminals of the corresponding performers.
[0008] To realize such a distribution service, it is conceivable that a terminal device provided on the imaging environment side performs setting of the in-point and the out-point for each performance portion of an individual performer, and transmits the cutout video portion according to the in-point and the out-point to the server device.
[0009] However, in the case where the cutout video portion is transmitted to the server device, correction of a range of the video portion is difficult. For example, even if a part of the performance portion is missing due to incorrect setting of the in-point and the out-point for a certain performer, correction is difficult because the video portion in a cutout-completed state has already been transmitted to the server device.
[0010] In this case, re-imaging is conceivable, but this puts a heavy burden on the people involved in the video imaging, including the performers.
[0011] Furthermore, in such a service, it is also conceivable to enable the performers to perform editing, as the video distribution to the performers. In particular, in a case where a performer wants to create his/her own promotional video, creation of a video with good appearance is desirable and thus addition of the editing function is effective.
[0012] However, if the degree of freedom of editing is increased too much with priority given to the appearance of a video, the burden on the user regarding editing work will increase, convenience will be impaired, and facilitation of use of the editing function will become difficult.
[0013] The present technology has been made in view of the above-described circumstances, and first, an object is to facilitate range correction of a video portion to reduce a burden on a person involved in video imaging in a system for distributing a partial video portion in a series of imaged video to a user.
[0014] Furthermore, second, an object of the present technology is to reduce the burden on the user regarding editing while preventing a decrease in the appearance of an edited video, and to facilitate use of the editing function.
Solutions to Problems
[0015] A first information processing apparatus according to the present technology includes an information acquisition unit configured to acquire information of an in-point and an out-point specifying a range of a partial video portion in an imaged video by an imaging device, and a transmission control unit configured to perform transmission control of the information of an in-point and an out-point acquired by the information acquisition unit and the imaged video to an external device such that the video portion specified from the in-point and the out-point is managed as a distribution object video to a user.
[0016] The imaged video and the information of an in-point and an out-point are transmitted to the external device as described above, whereby correction to a correct video range can be performed even if the in-point or the out-point is set to a wrong position. In other words, occurrence of re-imaging due to wrong setting of the in-point or the out-point is prevented.
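The advantage of transmitting the full imaged video together with in-point/out-point metadata, rather than a cut-out clip, can be sketched as follows. This is a minimal Python illustration; the class and field names are assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClipRange:
    """In-point and out-point, in seconds from the start of the imaged video."""
    in_point: float
    out_point: float

class UploadSession:
    """Pairs the full imaged video with editable range metadata.

    Because the external device receives the whole imaged video, correcting
    a mis-set in-point or out-point is a metadata update, not a re-shoot.
    """
    def __init__(self, video_id: str, clip: ClipRange):
        self.video_id = video_id
        self.clip = clip

    def correct_range(self, in_point: Optional[float] = None,
                      out_point: Optional[float] = None) -> None:
        if in_point is not None:
            self.clip.in_point = in_point
        if out_point is not None:
            self.clip.out_point = out_point

session = UploadSession("live_cam1", ClipRange(in_point=610.0, out_point=1790.0))
# The out-point was set too early and clipped the last song; fix it in place.
session.correct_range(out_point=1815.0)
print(session.clip)  # ClipRange(in_point=610.0, out_point=1815.0)
```

Had only the cut-out clip been uploaded, the same correction would require re-imaging, which is precisely the burden the first apparatus avoids.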
[0017] In the first information processing apparatus according to the above-described present technology, the transmission control unit desirably performs control to transmit, to the external device, a plurality of imaged videos by a plurality of imaging devices as the imaged video.
[0018] With the configuration, a system to distribute a viewpoint switching video can be realized.
[0019] In the first information processing apparatus according to the above-described present technology, the transmission control unit desirably performs control to transmit, to the external device, the video portion and video portion identification information for identifying the video portion in association with each other.
[0020] With the configuration, management of the video portion in the external device is facilitated.
[0021] In the first information processing apparatus according to the above-described present technology, the information acquisition unit desirably acquires object person identification information for identifying an object person to which the video portion is to be distributed, and the transmission control unit desirably performs control to transmit the video portion identification information and the object person identification information in association with each other to the external device.
[0022] With the configuration, correspondence between the video portion and the distribution object person can be managed in the external device.
[0023] In the first information processing apparatus according to the above-described present technology, the information acquisition unit desirably acquires a plurality of sets of the information of the in-point and the out-point, the sets each specifying a plurality of the different video portions in the imaged video, as the information of the in-point and the out-point, and acquires the object person identification information for each of the video portions, and the transmission control unit desirably performs control to transmit, to the external device, the video portion identification information and the object person identification information in association with each other for the each of the video portions for which the object person identification information has been acquired.
[0024] With the configuration, the video portions are each associated with different pieces of the object person identification information (user identification information) and transmitted to the external device.
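One plausible way to hold the association between video portion identification information and object person identification information is a simple record list, sketched below; the IDs and field names are hypothetical.

```python
# Hypothetical metadata for one live event: each video portion carries a
# portion ID plus the ID of the performer (distribution object person)
# it should be delivered to.
portions = [
    {"portion_id": "P001", "in": 610.0, "out": 1815.0, "performer_id": "U42"},
    {"portion_id": "P002", "in": 2005.0, "out": 3190.0, "performer_id": "U57"},
]

def portions_for(performer_id, records):
    """Return the portion IDs the external device should distribute to one user."""
    return [r["portion_id"] for r in records if r["performer_id"] == performer_id]

print(portions_for("U42", portions))  # ['P001']
```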
[0025] The first information processing apparatus according to the above-described present technology desirably further includes an information display control unit configured to display, on a screen, visual information representing a position on a time axis of the video portion in the imaged video, a pointer for indicating a position on the time axis in the imaged video, an in-point indicating operator for indicating the position indicated by the pointer as a position of the in-point, and an out-point indicating operator for indicating the position indicated by the pointer as a position of the out-point, and the information display control unit desirably makes a display form in a region close to the in-point and a display form in a region close to the out-point in a display region representing the video portion in the visual information different, and desirably matches the display form in the region close to the in-point with a display form of the in-point indicating operator, and the display form in the region close to the out-point with a display form of the out-point indicating operator.
[0026] With the configuration, when a correction operation of the in-point or the out-point is performed, it can be clearly displayed that the in-point indicating operator should be operated when the pointer is located near the region close to the in-point, and that the out-point indicating operator should be operated when the pointer is located near the region close to the out-point.
[0027] The first information processing apparatus according to the above-described present technology desirably further includes an input form generation indication unit configured to perform, in response to indication of the out-point to the imaged video, a generation indication of a purchase information input form regarding the video portion corresponding to the indicated out-point.
[0028] With the configuration, the purchase information input form is generated and the user can perform a purchase procedure even before recording of the imaged video is terminated.
[0029] Furthermore, a first information processing method, by an information processing apparatus, according to the present technology includes an information acquisition step of acquiring information of an in-point and an out-point specifying a partial range in an imaged video by an imaging device as a video portion to be distributed to a user, and a transmission control step of performing control to transmit the information of an in-point and an out-point acquired by the information acquisition step and the imaged video to an external device.
[0030] According to such an information processing method, effects similar to the effects of the first information processing apparatus according to the above-described present technology can be obtained.
[0031] Moreover, a first program according to the present technology is a program for causing a computer device to execute processing executed as the first information processing method.
[0032] This program realizes the first information processing apparatus.
[0033] A second information processing apparatus according to the present technology is an information processing apparatus including an indication acceptance unit configured to accept, as indication for generating one viewpoint switching video in which imaging viewpoints are switched over time on the basis of a plurality of imaged videos obtained by imaging a subject from different imaging viewpoints, indication of a switching interval of the imaging viewpoints, and a random selection unit configured to randomly select a video to be used in each video section of the viewpoint switching video divided by the switching interval from the plurality of imaged videos.
[0034] According to the above configuration, the viewpoint switching video in which the imaging viewpoints are randomly switched over time is generated by the user simply indicating the switching interval of the imaging viewpoints.
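Assuming the viewpoint switching video is divided into sections of equal length by the indicated switching interval, the random selection could look like the following sketch (function and parameter names are assumptions):

```python
import random

def select_viewpoints(total_length, switching_interval, num_cameras, seed=None):
    """Randomly pick one camera index for each section of the viewpoint
    switching video.

    Sections are formed by dividing the total length by the switching
    interval; the user only indicates the interval, and the per-section
    viewpoint choice is random.
    """
    rng = random.Random(seed)
    num_sections = max(1, round(total_length / switching_interval))
    return [rng.randrange(num_cameras) for _ in range(num_sections)]

# 60-second video, viewpoint switched every 5 seconds, three cameras
# (front, right, and left angles as in the embodiment)
plan = select_viewpoints(60.0, 5.0, num_cameras=3, seed=0)
print(len(plan))  # 12 sections; each entry is the camera index for one section
```

Re-executing the function with a different seed corresponds to the re-execution indication described later: a new random plan is produced without any further editing work by the user.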
[0035] In the second information processing apparatus according to the above-described present technology, the indication acceptance unit desirably accepts indication of an entire time length of a video to be generated as the viewpoint switching video, and desirably presents information regarding the switching interval calculated on the basis of the indicated entire time length to a user.
[0036] With the configuration, an appropriate viewpoint switching interval according to the time length of the viewpoint switching video can be presented to the user.
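As one assumed criterion for "appropriate", the terminal might present only those switching intervals that divide the indicated entire time length into a whole number of sections; the candidate set below is purely illustrative.

```python
def suggest_intervals(total_length, candidates=(2.0, 4.0, 5.0, 8.0, 10.0)):
    """Present switching intervals (seconds) that split the indicated total
    length into a whole number of equal sections.

    An illustrative heuristic only; the patent does not specify the
    calculation.
    """
    return [c for c in candidates if (total_length / c).is_integer()]

print(suggest_intervals(40.0))  # every candidate divides 40 s evenly
```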
[0037] In the second information processing apparatus according to the above-described present technology, sound information is desirably attached to the imaged video, and the indication acceptance unit desirably presents information regarding the switching interval calculated on the basis of a sound characteristic of the sound information to a user.
[0038] With the configuration, an appropriate viewpoint switching interval according to the sound characteristic of the sound information reproduced together with the viewpoint switching video can be presented to the user.
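One plausible sound characteristic to base the interval on is the tempo of the accompanying music; the sketch below aligns viewpoint switches to the beat. This heuristic and its parameter names are assumptions, not the patent's concrete method.

```python
def interval_from_bpm(bpm, beats_per_switch=8):
    """Suggest a viewpoint switching interval aligned to the music's beat:
    switch once every `beats_per_switch` beats of audio at `bpm` beats
    per minute."""
    seconds_per_beat = 60.0 / bpm
    return seconds_per_beat * beats_per_switch

print(interval_from_bpm(120))  # 4.0 seconds between viewpoint switches
```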
[0039] In the second information processing apparatus according to the above-described present technology, the random selection unit desirably selects an enlarged video obtained by enlarging a partial pixel region of the imaged video as at least one of the videos to be used in the video sections of the viewpoint switching video.
[0040] With the configuration, the number of switchable viewpoints can be increased.
[0041] In the second information processing apparatus according to the above-described present technology, the random selection unit desirably randomly selects whether or not to use the enlarged video as the video to be used in each video section of the viewpoint switching video.
[0042] With the configuration, randomness of the switching of the imaging viewpoints increases.
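Combining the two random choices above — which camera, and whether to use its enlarged (cropped-and-zoomed) variant — might be sketched as follows; names and the zoom probability are assumptions.

```python
import random

def select_with_zoom(num_sections, num_cameras, zoom_probability=0.5, seed=None):
    """For each video section, randomly choose a camera and randomly decide
    whether to use an enlarged variant of that camera's video, which
    effectively increases the number of switchable viewpoints."""
    rng = random.Random(seed)
    return [
        (rng.randrange(num_cameras), rng.random() < zoom_probability)
        for _ in range(num_sections)
    ]

zoom_plan = select_with_zoom(num_sections=12, num_cameras=3, seed=1)
# Each entry: (camera index, True if the enlarged video is used)
```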
[0043] In the second information processing apparatus according to the above-described present technology, the indication acceptance unit desirably accepts re-execution indication of the selection by the random selection unit.
[0044] The random selection of the video to be used in each video section is re-executed, so that the viewpoint switching video according to the user’s intention can be re-generated.
[0045] In the second information processing apparatus according to the above-described present technology, the indication acceptance unit desirably accepts indication specifying a video other than the plurality of imaged videos, and the second information processing apparatus desirably includes a video transmission control unit configured to perform control to transmit the indicated video and a selection result by the random selection unit to an external device.
[0046] With the configuration, a video in which an arbitrary video is connected to the viewpoint switching video can be generated by the external device.
[0047] The second information processing apparatus according to the above-described present technology desirably further includes an imaging unit configured to image a subject, and the indication acceptance unit desirably accepts specification indication of a video to be connected to the viewpoint switching video from the video imaged by the imaging unit.
[0048] With the configuration, in obtaining the video in which an arbitrary video is connected to the viewpoint switching video, the user can easily obtain a video to be connected using a mobile terminal with a camera such as a smartphone, for example.
[0049] The second information processing apparatus according to the above-described present technology desirably further includes an imaged video acquisition unit configured to acquire the plurality of imaged videos to which data amount reduction processing has been applied from an external device, and a video display control unit configured to perform display control of the viewpoint switching video according to a selection result by the random selection unit on the basis of the plurality of imaged videos to which data amount reduction processing has been applied.
[0050] With the configuration, a processing load regarding display of the viewpoint switching video in an information processing apparatus is reduced.
[0051] Furthermore, a second information processing method, by an information processing apparatus, according to the present technology includes an indication acceptance step of accepting, as indication for generating one viewpoint switching video in which imaging viewpoints are switched over time on the basis of a plurality of imaged videos obtained by imaging a subject from different imaging viewpoints, indication of a switching interval of the imaging viewpoints, and a random selection step of randomly selecting a video to be used in each video section of the viewpoint switching video divided by the switching interval from the plurality of imaged videos.
[0052] According to such an information processing method, effects similar to the effects of the second information processing apparatus according to the above-described present technology can be obtained.
[0053] Moreover, a second program according to the present technology is a program for causing a computer device to execute processing executed as the second information processing method.
[0054] This program realizes the second information processing apparatus.
Effects of the Invention
[0055] According to the present technology, first, a system for distributing a partial video portion in a series of imaged video to a user facilitates range correction of a video portion thereby reducing a burden on a person involved in video imaging.
[0056] Furthermore, second, according to the present technology, the burden on the user regarding editing is reduced while a decrease in the appearance of an edited video is prevented, whereby the use of the editing function can be facilitated.
[0057] Note that the effects described here are not necessarily limited, and any of effects described in the present disclosure may be exerted.
BRIEF DESCRIPTION OF DRAWINGS
[0058] FIG. 1 is a diagram illustrating an example of a video distribution system premised in an embodiment.
[0059] FIG. 2 is an explanatory diagram of screen transition regarding a start indication operation of an imaging operation by an imaging device and an indication input operation of an initial chapter mark.
[0060] FIG. 3 is an explanatory diagram of screen transition regarding a chapter mark correction operation and an imaged video upload operation.
[0061] FIG. 4 is an explanatory diagram of an operation regarding purchase acceptance of an imaged video.
[0062] FIG. 5 is an explanatory diagram of generation timing of a purchase information input form.
[0063] FIG. 6 is a diagram schematically illustrating a state of transition of an imaged video in a case where purchase of a viewpoint switching video is performed.
[0064] FIG. 7 is a block diagram illustrating a hardware configuration of a computer device in the embodiment.
[0065] FIG. 8 is a functional block diagram for describing various functions as a first embodiment realized by a control terminal.
[0066] FIG. 9 is a flowchart illustrating processing regarding indication operation acceptance of an in-point and an out-point as the initial chapter marks.
[0067] FIG. 10 is a flowchart illustrating processing regarding initial chapter mark correction operation acceptance.
[0068] FIG. 11 is a flowchart illustrating processing regarding generation of a live ID and a video portion ID.
[0069] FIG. 12 is a flowchart illustrating processing regarding purchase information input acceptance of a video portion.
[0070] FIG. 13 is a flowchart illustrating processing executed by a server device in the embodiment.
[0071] FIG. 14 is a diagram illustrating a screen example regarding video editing when generating a viewpoint switching video.
[0072] FIG. 15 is a functional block diagram for describing various functions as the first embodiment realized by a user terminal.
[0073] FIG. 16 is a diagram schematically illustrating a relationship between a switching cycle of an imaging viewpoint (viewpoint switching cycle) and a video section in the viewpoint switching video.
[0074] FIG. 17 is a flowchart illustrating processing to be executed by the user terminal as the first embodiment together with FIG. 20.
[0075] FIG. 18 is a flowchart for describing processing (S504) according to an input operation in the first embodiment.
[0076] FIG. 19 is a flowchart illustrating an example of viewpoint switching video generation processing (S506) according to input information in the first embodiment.
[0077] FIG. 20 is a flowchart illustrating the processing to be executed by the user terminal as the first embodiment together with FIG. 17.
[0078] FIG. 21 is a flowchart illustrating processing to be performed by a user terminal according to a second embodiment.
[0079] FIG. 22 is an explanatory diagram of an enlarged video.
[0080] FIG. 23 is a flowchart illustrating processing to be performed by a user terminal according to a third embodiment.
[0081] FIG. 24 is a diagram illustrating an example of screen display according to an indication operation of an additional video.
[0082] FIG. 25 is a flowchart illustrating processing to be executed by a user terminal as a fourth embodiment together with FIG. 26.
[0083] FIG. 26 is a flowchart illustrating the processing to be executed by the user terminal as the fourth embodiment together with FIG. 25.
[0084] FIG. 27 is a diagram schematically illustrating an overall configuration of an operating room system.
[0085] FIG. 28 is a diagram illustrating a display example of an operation screen on a centralized operation panel.
[0086] FIG. 29 is a diagram illustrating an example of a state of a surgical operation to which the operating room system is applied.
[0087] FIG. 30 is a block diagram illustrating an example of functional configurations of a camera head and a CCU illustrated in FIG. 29.
[0088] FIG. 31 is a block diagram illustrating an example of a schematic configuration of a vehicle control system.
[0089] FIG. 32 is an explanatory diagram illustrating an example of installation positions of a vehicle exterior information detection unit and an imaging unit.
MODE FOR CARRYING OUT THE INVENTION
[0090] Hereinafter, embodiments according to the present disclosure will be described in the following order with reference to the attached drawings.
[0091] <1. First Embodiment>
[0092] [1-1. Configuration Outline of Video Distribution System]
[0093] [1-2. Operation Outline of Video Distribution System]
[0094] [1-3. Hardware Configuration of Computer Device]
[0095] [1-4. Function of Control Terminal]
[0096] [1-5. Processing of Control Terminal, Imaging Management Terminal, and Server Device]
[0097] [1-6. Generation of Viewpoint Switching Video]
[0098] [1-7. Function of User Terminal]
[0099] [1-8. Processing of User Terminal]
[0100] [1-9. Conclusion of First Embodiment]
[0101] <2. Second Embodiment>
[0102] [2-1. Function of User Terminal]
[0103] [2-2. Processing of User Terminal]
[0104] [2-3. Conclusion of Second Embodiment]
[0105] <3. Third Embodiment>
[0106] [3-1. Function of User Terminal]
[0107] [3-2. Processing of User Terminal]
[0108] [3-3. Conclusion of Third Embodiment]
[0109] <4. Fourth Embodiment>
[0110] [4-1. Function of User Terminal]
[0111] [4-2. Processing of User Terminal]
[0112] [4-3. Conclusion of Fourth Embodiment]
[0113] <5. Program>
[0114] <6. Modification>
[0115] <7. First Application>
[0116] <8. Second Application>
[0117] <9. Present Technology>
1. First Embodiment
[0118] [1-1. Configuration Outline of Video Distribution System]
[0119] FIG. 1 illustrates an example of a video distribution system premised in an embodiment.
[0120] The video distribution system includes at least each device including a control terminal 1 provided in an event site Si, a server device 9, and a user terminal 10. The server device 9 and the user terminal 10 are devices provided with a computer and are capable of performing data communication with each other via a network 8 such as the Internet, for example.
[0121] The event site Si is a live house in the present example. Although illustration is omitted, the event site Si as a live house is provided with at least a stage as a place for performers to give a performance and sing, and a viewing space as a place for spectators and the like to view the performers on the stage.
[0122] In the event site Si, in addition to the control terminal 1, a plurality of imaging devices 2, relays 3 provided for the respective imaging devices 2, a router device 4, an imaging management terminal 5, a storage device 6, and a line termination device 7 are provided.
[0123] Each imaging device 2 is configured as a camera device capable of imaging a video. The imaging device 2 includes a microphone and is capable of generating an imaged video with sound information based on a sound collection signal by the microphone.
[0124] In the event site Si of the present example, at least three imaging devices 2 are provided, and each of the imaging devices 2 is installed at a position where the imaging device 2 can image the stage. The three imaging devices 2 have different imaging viewpoints. One of the imaging devices 2 is placed in front of the stage, another imaging device 2 is installed on the right of the stage with respect to the front, and the other imaging device 2 is installed on the left of the stage with respect to the front. The imaging devices 2 can image a performer on the stage at a front angle, a right angle, and a left angle.
[0125] Hereinafter, the imaging device 2 that performs imaging with the imaging viewpoint as the front angle is described as “first camera”, the imaging device 2 that performs imaging with the imaging viewpoint as the right angle is described as “second camera”, and the imaging device 2 that performs imaging with the imaging viewpoint as the left angle is described as “third camera”.
[0126] The router device 4 is configured as, for example, a wireless local area network (LAN) router, and has functions to enable communication between devices in the LAN built in the event site Si and to enable communication between the devices connected to the LAN and an external device via the network 8. The router device 4 in the present example has a LAN terminal and also supports wired connection using a LAN cable.
[0127] The line termination device 7 is, for example, an optical network unit (ONU: optical line termination device), and converts an optical signal input from the network 8 side into an electrical signal (digital signal) in a predetermined format or converts an electrical signal input from the router device 4 side into an optical signal, and outputs the converted signal to the network 8 side.
[0128] The relay 3 is connected to the imaging device 2 and the router device 4, and relays signals exchanged between the imaging device 2 and the router device 4. For example, video data based on an imaged video by the imaging device 2 is transferred to the router device 4 via the relay 3.
[0129] The control terminal 1 includes a computer device, and is configured to be able to perform data communication with an external device connected to the LAN of the event site Si. The control terminal 1 is, for example, a tablet-type information processing terminal, and is used as a terminal for a staff member or the like at the event site Si to perform an operation input regarding video imaging (for example, an operation input such as start or termination of imaging) using the imaging device 2.
[0130] In the present example, the control terminal 1 is wirelessly connected to the router device 4, but the connection with the router device 4 may be wired connection.
[0131] The storage device 6 includes a computer device and stores various types of information. The storage device 6 is connected to the router device 4 and stores various types of information input via the router device 4 or reads out the stored information and outputs the read information to the router device 4 side.
[0132] In the present example, the storage device 6 is mainly used as a device for storing imaged videos by the imaging devices 2 (in other words, recording the imaged videos).
[0133] The imaging management terminal 5 includes a computer device, and is configured to be able to perform data communication with an external device connected to the LAN of the event site Si and has a function to perform communication with the external device (especially, the server device 9) via the network 8.
[0134] The imaging management terminal 5 is configured as, for example, a personal computer, and performs various types of processing for managing the imaged video by the imaging device 2 on the basis of an operation input via the control terminal 1 or an operation input using an operation input device such as a mouse connected to the imaging management terminal 5. The various types of processing include processing for transmitting (uploading) the imaged video imaged by each imaging device 2 and stored in the storage device 6 to the server device 9 and processing related to purchase of the imaged video.
[0135] The user terminal 10 is assumed as a terminal device used by a performer who has performed live at the event site Si in the present example, and is configured as an information processing terminal such as a smartphone. The user terminal 10 in the present example includes an imaging unit 10a that images a subject to obtain an imaged video.
[0136] In the video distribution system of the embodiment, the imaged video by each imaging device 2 is uploaded to the server device 9, and a video based on the uploaded imaged video is distributed from the server device 9 to the user terminal 10. Specifically, the imaged video by each imaging device 2, that is, the viewpoint switching video generated on the basis of a plurality of imaged videos imaged at different imaging viewpoints is distributed to the user terminal 10. This will be described again below.
[0137] Note that, in FIG. 1, various examples of the configuration of the network 8 are assumed. For example, an intranet, an extranet, a local area network (LAN), a community antenna television (CATV) communication network, a virtual private network, a telephone network, a mobile communication network, a satellite communication network, and the like, including the Internet, are assumed.
[0138] Furthermore, various examples are assumed for transmission media that configure all or part of the network 8. For example, wired means such as Institute of Electrical and Electronics Engineers (IEEE) 1394, a universal serial bus (USB), a power line carrier, or a telephone line, infrared means such as infrared data association (IrDA), or wireless means such as Bluetooth (registered trademark), 802.11 wireless, a portable telephone network, a satellite link, or a terrestrial digital network can be used.
[0139] [1-2. Operation Outline of Video Distribution System]
[0140] Next, an outline of the operation in the video distribution system according to the above-described configuration will be described with reference to FIGS. 2 to 6.
[0141] FIG. 2 is an explanatory diagram of screen transition regarding a start indication operation of an imaging operation by the imaging device 2 and an indication input operation of an initial chapter mark.
[0142] As described above, the live event in the live house takes a form in which individual performers sequentially give a performance (sometimes accompanied by singing) according to a predetermined time schedule. In the present embodiment, in a case where the individual performers sequentially give a performance in this manner, each performance of each performer in the imaged video is divided by chapter marks as an in-point and an out-point, and each divided video portion is managed as a purchase object video portion of each performer.
[0143] Note that the “performance portion of each performer” referred to here can be rephrased as the portion where the performer is performing. In a case where the performer plays a plurality of songs in one performance, the “performance portion of each performer” means, for example, the portion from the start of the play of the first song to the end of the play of the last song.
[0144] However, in this case, it is difficult to strictly divide the performance portion of each performer in real time (in parallel with imaging). Therefore, the present embodiment enables reassignment (modification) of the chapter marks after the end of imaging (after the end of recording), and assigns chapter marks roughly dividing the video portion of each performer as “initial chapter marks” in real time.
[0145] At this time, for the imaged video as a material (as a body), imaging start timing (recording start timing) is set to be timing sufficiently before the start of the performance of the first performer and imaging end timing (recording end timing) is set to be timing sufficiently after the end of the performance of the last performer so as not to cause leakage of a necessary portion.
[0146] FIG. 2 will be described on the basis of the above premise.
[0147] First, in the present example, it is assumed that a staff member at the event site Si performs an operation (an indication of the start or termination) regarding an imaging operation of the imaging device 2 and an initial chapter mark indication input operation on the control terminal 1. In other words, the screen transition illustrated in FIG. 2 illustrates screen transition in the control terminal 1.
[0148] In the control terminal 1 in the present example, an application (application program) for performing screen display as illustrated in FIG. 2 and receiving an operation input is installed. Hereinafter, the application is described as “control operation application Ap1”.
[0149] In starting the imaging operation (the recording operation of the imaged video) by each imaging device 2, the staff member at the event site Si performs the operation input to the control terminal 1 to activate the control operation application Ap1. Then, a top screen G11 as illustrated in FIG. 2A is displayed on the control terminal 1. On the top screen G11, either new live imaging or a live list can be selected. When the live list is selected, a list of the imaged videos recorded in the storage device 6 is displayed.
[0150] When the new live imaging is selected, a status screen G12 illustrated in FIG. 2B is displayed. On the status screen G12, a connection state of the Internet, a remaining amount of a disk (storable capacity of the storage device 6), and a current imaged image by each imaging device 2 as a camera image are displayed. Note that FIG. 2 illustrates a case in which four imaging devices 2 are provided. In the present example, a still image is displayed as the camera image, and when an “image update” button in FIG. 2 is operated, an image update indication is performed to each imaging device 2 and the latest imaged image is transferred to the control terminal 1 to update a display image.
[0151] Furthermore, on the status screen G12, an “OK” button for performing an indication input of completion of confirmation of the state is displayed.
[0152] When the “OK” button is operated, a start operation screen G13 displaying a “start” button B1 for performing a start indication of recording is displayed, as illustrated in FIG. 2C. When the “start” button B1 is operated, the control terminal 1 performs the start indication of recording to the imaging management terminal 5.
[0153] The imaging management terminal 5 performs control to store the imaged video by each imaging device 2 in the storage device 6 in response to the start indication of the recording. That is, with the control, recording of the imaged video by each imaging device 2 is started.
[0154] Furthermore, when the “start” button B1 is operated, a post-start operation screen G14 as illustrated in FIG. 2D is displayed on the control terminal 1. An “in-point setting” button B2, an “out-point setting” button B3, and an “imaging termination” button B4 are displayed on the post-start operation screen G14.
[0155] The “in-point setting” button B2 and the “out-point setting” button B3 are buttons for assigning the chapter marks as the above-described initial chapter marks to the imaged video. The staff member at the event site Si respectively operates the “in-point setting” button B2 and the “out-point setting” button B3 every time the performance portion of the performer is started and ends, thereby performing indication inputs of in-point and out-point timings as the initial chapter marks to the control terminal 1.
[0156] Furthermore, the staff member operates the “imaging termination” button B4, thereby performing an indication input of recording termination of the imaged video by each imaging device 2 to the control terminal 1. The control terminal 1 performs a termination indication of the recording to the imaging management terminal 5 in response to the indication input.
[0157] The imaging management terminal 5 performs control to terminate the recording operation of the imaged video by each imaging device 2 to the storage device 6 in response to the termination indication of the recording.
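The flow described in paragraphs [0152] to [0157] (start indication, repeated in-point/out-point indications per performer, then imaging termination) can be sketched as a minimal recorder model. All class and method names below are illustrative assumptions for explanation, not part of the disclosed system; the mapping of methods to buttons B1 to B4 follows the description above.

```python
import time

class ChapterMarkRecorder:
    """Collects the initial chapter marks (in/out pairs) while recording runs."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._start = None
        self._open_in = None   # in-point waiting for its out-point
        self.marks = []        # list of (in_time, out_time), seconds from recording start

    def start_recording(self):
        # "start" button B1: recording of the imaged video by each camera begins
        self._start = self._clock()

    def set_in_point(self):
        # "in-point setting" button B2: performance portion of a performer starts
        self._open_in = self._clock() - self._start

    def set_out_point(self):
        # "out-point setting" button B3: closes the current video portion
        self.marks.append((self._open_in, self._clock() - self._start))
        self._open_in = None

    def terminate_imaging(self):
        # "imaging termination" button B4: return the initial chapter marks
        return list(self.marks)
```

An injected clock makes the timing behavior easy to exercise without real recording; the collected pairs correspond to the initial chapter marks that are later corrected on the screen of FIG. 3A.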
[0158] Next, screen transition regarding a chapter mark correction operation and an imaged video upload operation will be described with reference to FIG. 3.
[0159] FIG. 3A illustrates an example of a chapter mark correction operation screen G15.
[0160] In the present example, it is assumed that the control terminal 1 accepts the chapter mark correction operation and an operation for uploading the recorded imaged video to the server device 9. Specifically, in the present example, these operation acceptance functions are implemented in the above-described control operation application Ap1.
[0161] When trying to correct a chapter mark for a recorded imaged video, the staff member performs the operation input to activate the control operation application Ap1 and selects the live list in the state where the top screen G11 illustrated in FIG. 2A is displayed, and selects a corresponding imaged video from the list of the recorded imaged videos displayed in response to the selection.
[0162] As illustrated in FIG. 3A, a preview image ip regarding the imaged video, a full length bar ba representing the length (time length) of the entire imaged video, a video portion display bar bp representing the video portion divided by the chapter marks (including the initial chapter marks) and indicated in the full length bar ba, and waveform information aw representing a waveform of the sound information attached to the imaged video in time synchronization with the full length bar ba are displayed on the correction operation screen G15.
[0163] Furthermore, a slider SL for indicating the positions of the in-point and the out-point, an object selection button B5 for selecting a video portion as an object of chapter mark correction, an “in-point setting” button B6 for performing an indication for setting the position indicated by the slider SL as the in-point, an “out-point setting” button B7 for performing an indication for setting the position indicated by the slider SL as the out-point, and a “confirmation” button B8 for confirming the in-point and the out-point are displayed on the correction operation screen G15.
[0164] FIG. 3A illustrates an example of a case where two sets of the in-points and the out-points are specified as the initial chapter marks, two video portion display bars bp are displayed, and buttons for respectively selecting the first video portion and the second video portion are displayed as the object selection buttons B5.
[0165] The video portion display bar bp displays the video portion according to the chapter marks being set, including the initial chapter marks. In FIG. 3A, it is assumed that the video portion display bar bp representing the video portion corresponding to the initial chapter marks being set is displayed.
[0166] In correcting a chapter mark, first, the slider SL is operated to indicate a position on a time axis (a position on the full length bar ba) to be set as the in-point or the out-point. At this time, as the preview image ip, a frame image corresponding to the indicated time by the slider SL in the imaged video is appropriately displayed. The display can allow a user to easily grasp the position on the video.
[0167] Note that, in the present example, as the preview image ip, an extracted image from the imaged video by a predetermined imaging device 2 (for example, the imaging device 2 as the first camera for performing imaging at the front angle) among the imaging devices 2 is used.
[0168] For example, to correct the in-point of the first video portion, the slider SL is moved to a vicinity of a start position (a vicinity of a left end in the illustrated example) of the video portion display bar bp corresponding to the first video portion, and a desired position in the imaged video is searched for while appropriately referring to the preview image ip.
[0169] After a specific position in the imaged video is indicated with the slider SL, the video portion and whether the indicated position is to be set as its in-point or its out-point are indicated with the object selection button B5 and the “in-point setting” button B6 or the “out-point setting” button B7. For example, to set the indicated position as the in-point of the first video portion, the button described as “1” among the object selection buttons B5 is operated and then the “in-point setting” button B6 is operated. With these operations, the position indicated with the slider SL can be indicated to the control terminal 1 as the in-point position of the selected video portion.
[0170] By operating the slider SL, the object selection button B5, the “in-point setting” button B6, and the “out-point setting” button B7 as described above, resetting, in other words, correction of the in-point and the out-point set as the initial chapter marks can be performed. Note that there is a case where at least part of the initial chapter marks does not need correction. In that case, the chapter position is taken over as it is by not performing the correction operation for the initial chapter mark.
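The correction operation described above (select a video portion with button B5, then overwrite one of its points with buttons B6/B7 at the slider position) can be sketched as follows. The function name and the validity check are assumptions added for illustration; the patent does not specify how the control terminal 1 stores or validates the marks.

```python
def correct_chapter_mark(portions, index, point, position):
    """Reset the in- or out-point of one video portion.

    portions : list of (in_point, out_point) tuples in seconds
    index    : which video portion is selected (object selection button B5)
    point    : "in" or "out" ("in-point setting" B6 / "out-point setting" B7)
    position : time on the full length bar indicated by the slider SL
    """
    in_pt, out_pt = portions[index]
    if point == "in":
        in_pt = position
    else:
        out_pt = position
    if in_pt >= out_pt:
        # an in-point must precede its out-point for the portion to be valid
        raise ValueError("in-point must precede out-point")
    portions[index] = (in_pt, out_pt)
    return portions
```

A portion whose initial chapter marks need no correction is simply never passed to this function, so its marks are taken over as they are, matching paragraph [0170].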
[0171] Here, since the correction operation screen G15 of the present example displays the waveform information aw, the user can easily infer which part of the performance is being played at each position in the imaged video.
[0172] Furthermore, on the correction operation screen G15 of the present example, the region close to the in-point and the region close to the out-point in the video portion display bar bp, that is, in the display region representing the video portion, are given different display forms. Specifically, in the present example, the two regions are given different display colors: the region close to the in-point is displayed in red and the region close to the out-point in blue. Note that a gradation gradually changing in color from red to blue, from the in-point side to the out-point side, is applied to the region between the region close to the in-point and the region close to the out-point.
[0173] Then, in the present example, the display form in the region close to the in-point in the video portion display bar bp and the display form of the “in-point setting” button B6 are matched (the display colors are matched in red, for example). Furthermore, the display form in the region close to the out-point in the video portion display bar bp and the display form of the “out-point setting” button B7 are matched (the display colors are matched in blue, for example).
[0174] With the configuration, in the case premised on indicating the position on the time axis with the slider SL and performing a setting indication of the in-point or the out-point on the indicated position with the “in-point setting” button B6 or the “out-point setting” button B7, erroneous operation for chapter mark setting, such as erroneously operating the “out-point setting” button B7 despite the fact that the in-point should be set, can be prevented.
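The red-to-blue gradation of the video portion display bar bp described in paragraph [0172] amounts to a linear color interpolation along the bar. The following sketch assumes plain 8-bit RGB and linear blending; the patent does not specify the actual rendering.

```python
def bar_color(t):
    """Color at relative position t in [0, 1] along the video portion display
    bar bp: red at the in-point side, blue at the out-point side, with a
    linear gradient in between."""
    t = min(max(t, 0.0), 1.0)      # clamp to the bar's extent
    red = (255, 0, 0)              # display form matched with button B6
    blue = (0, 0, 255)             # display form matched with button B7
    return tuple(round(r + (b - r) * t) for r, b in zip(red, blue))
```

Matching the endpoint colors with the “in-point setting” button B6 (red) and the “out-point setting” button B7 (blue) is what gives the erroneous-operation prevention described in paragraph [0174].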
[0175] The staff member at the event site Si corrects the initial chapter marks as needed on the correction operation screen G15. Then, when confirming the chapter mark position being set, the staff member operates the “confirmation” button B8.
[0176] When the “confirmation” button B8 is operated, the control terminal 1 transmits confirmed information of the in-point and the out-point and performs an upload instruction of the imaged video to the imaging management terminal 5.
[0177] The imaging management terminal 5 performs control to transmit the confirmed information of the in-point and the out-point and the imaged video recorded in the storage device 6 to the server device 9 in response to the upload instruction.
[0178] Furthermore, when the “confirmation” button B8 is operated, the control terminal 1 displays an uploading screen G16, as illustrated in FIG. 3B. The uploading screen G16 indicates to the user that the imaged video is being uploaded.
[0179] Next, an operation regarding purchase acceptance of the imaged video will be described with reference to FIG. 4.
[0180] In the video distribution system of the present embodiment, a corresponding imaged video is distributed to a performer who has performed the purchase procedure. The purchase procedure of the imaged video is performed using the imaging management terminal 5 in the present example.
[0181] In the imaging management terminal 5 of the present example, an application for performing screen display as illustrated in FIG. 4 and accepting an operation input regarding purchase is installed. Hereinafter, the application is referred to as “purchase operation application Ap2”. In allowing the performer to perform the purchase procedure, the staff member at the event site Si activates the purchase operation application Ap2 on the imaging management terminal 5.
[0182] FIG. 4A illustrates a live list screen G17 displayed in response to activation of the purchase operation application Ap2. Purchasable imaged videos are displayed as a list of live IDs on the live list screen G17. Note that, although described below, the live ID is identification information generated by the imaging management terminal 5 with the start of recording of the imaged video, and a different value is assigned to each imaged video. On the live list screen G17 in the present example, to facilitate identification of each imaged video, recording start date and time of the imaged video (described as “imaging date and time” in FIG. 4A) is displayed in association with the live ID.
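Paragraph [0182] states only that the live ID is generated by the imaging management terminal 5 at the start of recording and that a different value is assigned to each imaged video. As one hedged sketch, an ID built from the recording start date and time plus a random suffix satisfies both properties and also carries the “imaging date and time” shown on the live list screen G17; the format itself is purely an assumption.

```python
import datetime
import uuid

def generate_live_id(start_time=None):
    """Issue identification information when recording starts.

    The patent does not disclose the ID format; a recording-start timestamp
    plus a random hex suffix is used here only as an illustration, ensuring a
    different value per imaged video."""
    start_time = start_time or datetime.datetime.now()
    return start_time.strftime("%Y%m%d-%H%M%S") + "-" + uuid.uuid4().hex[:8]
```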
[0183] When a required imaged video is selected on the live list screen G17, a purchase information input screen G18 as illustrated in FIG. 4B is displayed. On the purchase information input screen G18, information of each item to be input upon purchasing and a “next” button B9 are displayed for the imaged video selected on the live list screen G17. In the present example, information of items of performance time division, the number of cameras, and an object to be purchased is displayed as the information of each item.
[0184] The purchase information input screen G18 of the present example is configured to be able to accept an input of purchase information for each video portion included in the imaged video. Specifically, a tab T for selecting each video portion is displayed on the purchase information input screen G18 in this case. FIG. 4B illustrates an example in which three video portions are included in the imaged video selected on the live list screen G17, and tabs T1 to T3 for individually selecting the video portions are displayed. In the initial state after transition to the purchase information input screen G18, the tab T1 is in a selected state, and an input of the purchase information of the first video portion is available, as illustrated in FIG. 4B.
[0185] Here, as information regarding the performance time division, divisions of up to 30 minutes, 30 to 60 minutes, and 60 minutes or more are provided, for example, and the division corresponding to the time length of the video portion selected with the tab T should be selected. Furthermore, in the present example, three or four is selectable as the number of cameras. For example, in a case where the fourth camera (imaging device 2) is a camera for imaging a specific performer such as the drummer of a band, the fourth camera image is unnecessary for a band without such a performer. Under such circumstances, selection of the number of cameras is made available.
[0186] Furthermore, in the present example, at least divisions of a material set and a digest are provided as the item of the object to be purchased. The material set means a set sale of corresponding video portions in corresponding imaged videos by the imaging devices 2.
[0187] The digest means a sale of the above-described viewpoint switching video.
[0188] Note that, in the present example, a fee system is adopted in which the purchase price of each object to be purchased differs according to the combination of the performance time division and the number of cameras. In other words, the displayed purchase price corresponding to the object to be purchased is changed according to the selected states of the items of the performance time division and the number of cameras.
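The fee system of paragraph [0188] can be sketched as a price function over the three selected items. The patent discloses only that the price varies with the combination of performance time division, number of cameras, and object to be purchased; all factors and base prices below are invented for illustration.

```python
# Hypothetical fee model; the actual prices are not disclosed in the patent.
TIME_FACTOR = {"<=30min": 1.0, "30-60min": 1.5, ">=60min": 2.0}
BASE_PRICE = {"material_set": 6000, "digest": 4000}  # illustrative values, e.g. in yen

def purchase_price(time_division, num_cameras, purchase_object):
    """Price displayed for the selected object, reflecting the selected
    performance time division and number of cameras."""
    if num_cameras not in (3, 4):
        raise ValueError("three or four cameras are selectable")
    camera_factor = 1.0 if num_cameras == 3 else 1.2
    return round(BASE_PRICE[purchase_object] * TIME_FACTOR[time_division] * camera_factor)
```

In the user interface, such a function would be re-evaluated whenever the selected state of the items on the purchase information input screen G18 changes, updating the displayed price.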
[0189] The staff member or the performer operates the “next” button B9 after selecting the corresponding tab T and selecting each item for the corresponding video portion.
[0190] A purchase confirmation screen G19 having an “OK” button B10 as illustrated in FIG. 4C is displayed in response to the operation of the “next” button B9, and by operating the “OK” button B10, the user can indicate to the imaging management terminal 5 purchase confirmation of the object to be purchased selected on the purchase information input screen G18.
[0191] A confirmation screen G20 as illustrated in FIG. 4D is displayed in response to the operation of the “OK” button B10.
[0192] The confirmation screen G20 is a screen prompting the user as a purchaser (performer) to input account information or newly register account information.
[0193] In the video distribution system of the present example, the server device 9 causes the purchaser to input the account information or causes the purchaser, who has not registered the account information yet, to newly register the account information, so as to make the user as the purchaser identifiable. The account information is, for example, combination information of a user ID and a password.
[0194] An “OK” button B11 is provided on the confirmation screen G20, and the user operates the “OK” button B11 in a case of inputting or newly registering the account information.
[0195] Although illustration is omitted, a screen for selecting input or new registration of account information is displayed in response to the operation of the “OK” button B11, for example, and an account information input screen is displayed in a case where the input of account information has been selected, and an account information new registration screen is displayed in a case where the new registration of account information has been selected. The account information input on the account information input screen and the account information registered on the account information new registration screen are transmitted to the server device 9 together with video to be purchased specification information I1. Note that the video to be purchased specification information I1 is information generated by the imaging management terminal 5 according to the input information on the purchase information input screen G18 illustrated in FIG. 4B, and details will be described below.
[0196] Note that a purchase price payment method is not particularly limited.
[0197] FIG. 5 is an explanatory diagram of generation timing of a purchase information input form.
[0198] The purchase information input form is form information used in displaying the purchase information input screen G18 illustrated in FIG. 4B, and is generated for each video portion in a case where a plurality of video portions is present in the imaged video.
[0199] FIG. 5 contrasts a recording example of the imaged video with a generation example of the purchase information input forms, and illustrates the relationship between the two.
[0200] In the recording example in FIG. 5, in a case where three performers A, B, and C give a performance in order in a live event, indication of the in-point and the out-point as the initial chapter marks is performed for each of performance portions of the performers A, B, and C.
[0201] In the present example, the imaging management terminal 5 generates, in response to indication of an initial out-point for the imaged video, the purchase information input form for the video portion corresponding to the indicated out-point.
[0202] Specifically, looking at the performer A in FIG. 5, the purchase information input form for the video portion of the performer A divided by the initial out-point is generated in response to the indication of the initial out-point for the performance portion of the performer A. The purchase information input forms are similarly generated for the corresponding video portions in response to the indication of the initial out-points, for the other performers B and C.
[0203] With the configuration, the purchase information input form is generated and the user can perform a purchase procedure even before recording of the imaged video is terminated.
[0204] Therefore, the user can perform the purchase procedure of the video portion as soon as the user's own turn ends, without waiting for the recording of the imaged video to end, and the convenience of the user can be improved.
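The generation timing of FIG. 5, where each initial out-point indication immediately produces the purchase information input form for the portion just closed, can be sketched as an event handler. The function and field names are illustrative assumptions, not from the disclosure.

```python
def on_initial_out_point(live_id, portion_index, forms):
    """Called when an initial out-point is indicated: immediately generate the
    purchase information input form for the video portion just delimited, so
    the performer can start the purchase procedure before recording of the
    whole event is terminated."""
    form = {
        "live_id": live_id,
        "portion": portion_index,
        # items to be input on the purchase information input screen G18
        "items": ("performance_time_division", "number_of_cameras", "object_to_purchase"),
    }
    forms.append(form)
    return form
```

With performers A, B, and C performing in order, three out-point indications yield three forms, each available as soon as the corresponding performance ends.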
[0205] FIG. 6 schematically illustrates the transition of the imaged video in a case where the “digest”, in other words, the viewpoint switching video, has been purchased. Note that FIG. 6 illustrates an example in which the viewpoint switching video is generated using the imaged videos by the three imaging devices 2 of the first to third cameras. In this case, it is assumed that the performance portions of the three performers A to C are recorded as the imaged video, and the performer A purchases the viewpoint switching video.
[0206] For the imaged video at the event site Si, the in-point and the out-point dividing the video portion of each performer are set after the initial chapter mark indication and the chapter mark correction as needed using the control terminal 1. Note that, since recording of the imaged video is started before the start of the performance of the first performer (performer A) and is terminated after the end of the performance of the last performer (performer C) as described above, a state of preparation and the like before the start of the performance of the first performer and a state of tidying up and the like after the end of the performance of the last performer can be recorded in the imaged video.
[0207] When upload of the imaged video is indicated as described above, the imaged videos by the imaging devices 2 (the first to third cameras in this case) and the information of the in-points and the out-points are transmitted to the server device 9.
[0208] The server device 9 cuts out each video portion in each imaged video, as illustrated in FIG. 6B, according to the received information of the in-point and the out-point. Note that FIG. 6B illustrates an example in which the cutout of the video portions for all the performers A to C has been performed. However, cutout of the video portion is not necessary for the performer who has not performed the purchase procedure.
[0209] Here, the cutout of the video portion can be rephrased as generation of a video portion as an independent video file.
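The patent does not specify how the server device 9 performs the cutout of each video portion as an independent video file. As one concrete sketch, the following builds ffmpeg stream-copy commands per purchased portion and per camera; the use of ffmpeg, the file naming, and the skipping of non-purchasing performers are all assumptions for illustration.

```python
def cutout_commands(camera_files, portions, purchased):
    """Build one ffmpeg stream-copy command per camera for each purchased
    video portion; performers without a purchase procedure are skipped, as
    their portions need not be cut out.

    camera_files : {camera_id: path of the full uploaded recording}
    portions     : {performer: (in_point, out_point)} in seconds
    purchased    : performers who completed the purchase procedure
    """
    cmds = []
    for performer in purchased:
        t_in, t_out = portions[performer]
        for cam, src in sorted(camera_files.items()):
            dst = f"{performer}_cam{cam}.mp4"
            cmds.append([
                "ffmpeg", "-ss", str(t_in), "-i", src,
                "-t", str(t_out - t_in), "-c", "copy", dst,
            ])
    return cmds
```

`-ss` before `-i` seeks the input to the in-point, `-t` bounds the duration to the in/out interval, and `-c copy` produces each portion as an independent file without re-encoding.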
[0210] Note that the server device 9 of the present example performs synchronization processing (in other words, synchronization of the videos at the respective viewpoints) and the like on the uploaded imaged videos. The processing of the server device 9 will be described in detail below.