Patent: Information Processing Apparatus And Information Processing Method For Guiding A User To A Vicinity Of A Viewpoint

Publication Number: 10636185

Publication Date: 20200428

Applicants: Sony

Abstract

There is provided an information processing apparatus to present, to a user, an additional image, which is a virtual object, in a manner superimposed on a real-space image at a position corresponding to a viewpoint in the real space, the information processing apparatus including a processing unit configured to display an additional image corresponding to a viewpoint of a user in the real world, and guide the user to the vicinity of the viewpoint in the real world where the additional image has been acquired.

CROSS REFERENCE TO PRIOR APPLICATION

This application is a National Stage Patent Application of PCT International Patent Application No. PCT/JP2015/058702 (filed on Mar. 23, 2015) under 35 U.S.C. § 371, which claims priority to Japanese Patent Application No. 2014-112384 (filed on May 30, 2014), which are all hereby incorporated by reference in their entirety.

TECHNICAL FIELD

The present disclosure relates to an information processing apparatus and an information processing method.

BACKGROUND ART

In recent years, a technique called Augmented Reality (AR) is drawing attention, which presents the user with additional information in a manner superimposed on the real space. The information presented to the user through the AR technique is visualized as virtual objects of various forms such as texts, icons, animations, or the like. Virtual objects are arranged in the AR space according to the positions of real objects with which the virtual objects are associated. Virtual objects are generally displayed on a display of a mobile communication terminal such as a mobile phone, a smart phone or a tablet terminal, or a wearable terminal such as a head-mount display (abbreviated as “HMD”, in the following), an eye-glass type terminal, or the like.

CITATION LIST

Patent Literature

Patent Literature 1: JP 2011-193243A

SUMMARY OF INVENTION

Technical Problem

Here, when the virtual object is a photograph or a picture of a place that agrees with the real space, or the like, the angle of view in the AR space may not agree with that in the real space depending on the angle at which the virtual object displayed in the AR space is viewed, thereby failing to sufficiently provide a sense as if the real world has been augmented. Accordingly, it is desirable to present such a virtual object to the user at a position corresponding to the viewpoint in the real space.

Solution to Problem

According to the present disclosure, there is provided an information processing apparatus including a processing unit configured to display an additional image corresponding to a viewpoint of a user in a real world, and guide the user to a vicinity of a viewpoint in the real world at which the additional image has been acquired.

According to the present disclosure, there is provided an information processing method including: displaying an additional image corresponding to a viewpoint of a user in a real world, and guiding the user to a vicinity of a viewpoint in the real world at which the additional image has been acquired.

According to the present disclosure, the user is guided so as to be able to view an additional image, which is a virtual object, at a position corresponding to the viewpoint in the real space.

Advantageous Effects of Invention

According to the present disclosure described above, a virtual object is presented to the user, with the position corresponding to the viewpoint in the real space being taken into account. Note that the effects described above are not necessarily limitative. With or in the place of the above effects, there may be achieved any one of the effects described in this specification or other effects that may be grasped from this specification.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is an explanatory diagram illustrating an exemplary additional image presented as a virtual object via a device.

FIG. 2 is a block diagram illustrating a functional configuration of an information processing apparatus according to a first embodiment of the present disclosure.

FIG. 3 is a flowchart illustrating the content of a preliminary process performed by the information processing apparatus according to the embodiment.

FIG. 4 is a flowchart illustrating the content of an additional image presentation process performed by the information processing apparatus according to the embodiment.

FIG. 5 is an explanatory diagram illustrating an exemplary display of a billboard-type object as exemplary presentation of visual guidance information according to the embodiment.

FIG. 6 is an explanatory diagram illustrating an example in which a text object is added to the object of FIG. 5.

FIG. 7 is an explanatory diagram illustrating an exemplary display of a three-dimensional model of an additional image representing the content of the additional image three-dimensionally on a real-space image, as an exemplary presentation of visual guidance information according to the embodiment.

FIG. 8 is an explanatory diagram illustrating an example in which a superimposable-area object is presented in addition to the object of FIG. 5.

FIG. 9 is an explanatory diagram illustrating an example in which a photographer object is presented in addition to the object of FIG. 8 and the superimposable-area object.

FIG. 10 is an explanatory diagram with regard to adjustment of an angle of view between a real-space image and an additional image.

FIG. 11 is an explanatory diagram illustrating an example in which a navigable object indicating navigability is presented together with the superimposable-area object.

FIG. 12 is an explanatory diagram illustrating an example in which passability during the freezing period and also the freezing period in an average year are presented together with the superimposable-area object.

FIG. 13 is an explanatory diagram illustrating a presentation of an image with obstacles excluded from the real-space image.

FIG. 14 is an explanatory diagram illustrating a presentation process of an image with obstacles excluded from the real-space image illustrated in FIG. 13.

FIG. 15 is an explanatory diagram illustrating an exemplary image presented by the additional image presentation process according to a second embodiment of the present disclosure.

FIG. 16 is an explanatory diagram illustrating another exemplary image presented by an additional image presentation process according to the embodiment.

FIG. 17 is a hardware configuration diagram illustrating a hardware configuration of an information processing apparatus according to an embodiment of the present disclosure.

DESCRIPTION OF EMBODIMENT(S)

Hereinafter, (a) preferred embodiment(s) of the present disclosure will be described in detail with reference to the appended drawings. In this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.

Note that description will be provided in the following order.

1. First embodiment

1.1. Outline

1.2. Functional configuration

(1) First processing unit

(2) Storage unit

(3) Second processing unit

1.3. Content of process

(1) Preliminary process

(2) Additional image presentation process

2. Second embodiment

3. Exemplary hardware configuration

1. First Embodiment

[1.1. Outline]

First, an outline of a virtual object presented together with a real-space image by an information processing apparatus according to a first embodiment of the present disclosure will be described, referring to FIG. 1. FIG. 1 is an explanatory diagram illustrating an exemplary additional image presented as a virtual object via a device 10.

The information processing apparatus according to the present embodiment is a device configured to display, on the basis of the user’s position and direction in the real world, a photograph taken or a picture painted in the past in a place nearby, in a manner superimposed on the real-space image as an additional image. When the user shoots the real space with the device 10, a photograph taken or a picture painted at the position at a different time is presented in a manner superimposed on the real-space image as an additional image, as illustrated in FIG. 1, for example.

The information processing apparatus performs a process of displaying on the device 10 including an image-capturing unit and a display, such as, for example, a mobile communication terminal such as a mobile phone, a smart phone or a tablet terminal, a wearable terminal such as an HMD or an eye-glass type terminal, an image-capturing device such as a digital camera or the like. The information processing apparatus may be installed in the device 10, or may be installed in an information processing terminal, a server, or the like, which is communicable with the device 10.

When presenting the aforementioned additional image, the information processing apparatus first constructs a three-dimensional model representing the real space based on a plurality of images of the real world. The information processing apparatus then performs a matching process between preliminarily acquired additional information and the three-dimensional model based on the user’s acquired position and direction, and estimates the position and the posture of the additional image to be presented so that the viewpoint of the additional image corresponds to the real space. Furthermore, the information processing apparatus guides the user to a position at which the user can view the additional image in a manner corresponding to the real world, based on the estimated position and posture of the additional image. Accordingly, the user can view the additional image in a state in which an angle of view of the additional image agrees with an angle of view of the real space, whereby it becomes possible to sufficiently obtain a sense as if the real world is augmented.

[1.2. Functional Configuration]

Next, a functional configuration of an information processing apparatus 100 according to the present embodiment will be described, referring to FIG. 2. FIG. 2 is a block diagram illustrating the functional configuration of the information processing apparatus 100 according to the present embodiment. The information processing apparatus 100 according to the present embodiment includes a first processing unit 110, a storage unit 120, and a second processing unit 130, as illustrated in FIG. 2.

(1)* First Processing Unit*

The first processing unit 110 performs a preliminary process for displaying an additional image based on the user’s position information. The first processing unit 110 basically functions offline. The first processing unit 110 includes, as illustrated in FIG. 2, a real-space model construction unit 111, a meta-information providing unit 113, a position-and-posture estimation unit 115, a superimposable-area estimation unit 117, and an additional-image registration unit 119.

The real-space model construction unit 111 constructs a three-dimensional model of the real world. The real-space model construction unit 111 collects a large number of captured images of the real world, such as historic buildings or objects serving as landmarks, and constructs a three-dimensional model by matching common parts (characteristic points) in the respective images. On this occasion, the model may be acquired using an infrared sensor, a laser sensor, or the like in addition to the captured images of the real world in order to increase the precision of the three-dimensional model, or a 3D model acquired from design documents of buildings or the like may be used. Upon acquiring a newly captured image of the real world, the real-space model construction unit 111 reconstructs the three-dimensional model at a predetermined timing by learning. Accordingly, a three-dimensional model agreeing with the real world can be acquired. The real-space model construction unit 111 stores the constructed three-dimensional model in the storage unit 120.
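The matching of characteristic points across images described above ultimately reduces to triangulation: once the same point has been identified in two images whose camera poses are known, its three-dimensional position can be recovered. The sketch below illustrates this core step with standard linear (DLT) triangulation; the function name and interface are illustrative and not taken from the disclosure.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one characteristic point seen in two images.

    P1, P2 -- 3x4 camera projection matrices of the two views
    x1, x2 -- 2D (normalized pixel) coordinates of the matched point
    Returns the estimated 3D point in Euclidean coordinates.
    """
    # Each view contributes two linear constraints of the form x cross (P X) = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```

In a full pipeline this step would run over many matched characteristic points and many image pairs, followed by bundle adjustment, to yield the kind of real-space model the unit stores.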

The meta-information providing unit 113 provides meta information to the three-dimensional model of the real world constructed by the real-space model construction unit 111. Examples of meta information include information indicating passability in the real world and environmental information such as the material of the land surface. Such meta information may be acquired from information added to the images used to construct the three-dimensional model, or from geographic information, traffic information, map information or the like, which may be acquired via a network. The meta-information providing unit 113 stores meta information in the storage unit 120 in association with position information in the three-dimensional model of the real world. Note that the meta-information providing unit 113 need not always be installed in the information processing apparatus 100.

The position-and-posture estimation unit 115 estimates a viewpoint of an additional image superimposed on the real-space image. The position-and-posture estimation unit 115 performs matching between the three-dimensional model constructed by the real-space model construction unit 111 and the additional image, and estimates the position and the viewpoint at which the additional image has been acquired (also referred to as “additional image acquisition position” and “additional-image viewpoint”, respectively, in the following).
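The matching performed by the position-and-posture estimation unit can be illustrated by the classical camera-resection problem: given correspondences between 3D points of the real-space model and 2D points of the additional image, recover the projection matrix, from which the acquisition position and viewpoint follow. The following is a minimal DLT sketch under that assumption; the function name and interface are illustrative, not taken from the disclosure.

```python
import numpy as np

def estimate_camera_matrix(points_3d, points_2d):
    """DLT camera resection: recover a 3x4 projection matrix (up to scale)
    from at least six non-degenerate 2D-3D correspondences.

    points_3d -- iterable of (X, Y, Z) model points
    points_2d -- iterable of matching (u, v) image points
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        # Two linear constraints per correspondence, from x cross (P X) = 0.
        rows.append([0.0, 0.0, 0.0, 0.0] + [-c for c in Xh] + [v * c for c in Xh])
        rows.append(Xh + [0.0, 0.0, 0.0, 0.0] + [-u * c for c in Xh])
    # The stacked 12 entries of P form the null vector of the constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```

Decomposing the recovered matrix (for example by RQ decomposition) would then separate intrinsics from the rotation and translation, i.e. the additional-image viewpoint and acquisition position.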

The superimposable-area estimation unit 117 estimates a superimposable area, based on the additional image acquisition position and the additional-image viewpoint. The superimposable area is an area in which an additional image is visible in the real world with a predetermined size and within a predetermined angle of view. The superimposable-area estimation unit 117 outputs the superimposable area of each additional image estimated from the additional image acquisition position and the additional-image viewpoint to the additional-image registration unit 119.

The additional-image registration unit 119 stores the additional image, the additional image acquisition position, the additional-image viewpoint, and the superimposable area in the storage unit 120 in association with one another. The additional image stored in the storage unit 120 by the additional-image registration unit 119 is presented to the user by the second processing unit 130.

(2)* Storage Unit*

The storage unit 120 stores information to be processed by the information processing apparatus 100. The storage unit 120 includes, for example, a real-space model DB 121, an additional-image DB 123, and a position-and-posture index DB 125, as illustrated in FIG. 2.

The real-space model DB 121 stores the three-dimensional model of the real world constructed based on captured images of the real world. The three-dimensional model is acquired through learning by the real-space model construction unit 111, based on images captured by the device itself including the information processing apparatus 100, or images held by a server or the like capable of communicating with the information processing apparatus 100. The three-dimensional model stored in the real-space model DB 121 is updated each time it is reconstructed by the real-space model construction unit 111. In addition, the real-space model DB 121 stores meta information, which is associated with the position information of the three-dimensional model by the meta-information providing unit 113.

The additional-image DB 123 stores additional images to be displayed in a manner superimposed on the real-space image. The additional images stored in the additional-image DB 123 may be preliminarily stored, or may be acquired as appropriate from a server capable of communicating with the information processing apparatus 100 or the like, and subsequently stored. The additional images stored in the additional-image DB 123 are used by the position-and-posture estimation unit 115.