Sony Patent | Information Processing Device, Information Processing Method, And Program

Publication Number: 20200210051

Publication Date: 2020-07-02

Applicants: Sony

Abstract

The present technique relates to an information processing device, an information processing method, and a program for enabling presentation of information to a user in an easier-to-understand manner according to the situation. A control section controls an output associated with a position of an object which is disposed in a three-dimensional space, on the basis of user action information indicating a user’s action and a position relationship between a display region of a display device and the object. The present technique is applicable to an AR-use display device.

TECHNICAL FIELD

[0001] The present technique relates to an information processing device, an information processing method, and a program, and particularly, relates to an information processing device, an information processing method, and a program for enabling presentation of information to a user in an easier-to-understand manner.

BACKGROUND ART

[0002] A technique called AR (Augmented Reality) has been known which presents additional information to a user by superimposing the information on a real space. Information which is presented to a user by the AR technique may also be called an annotation. An annotation is made visible through a virtual object in various forms such as texts, icons, and animations.

[0003] For example, PTL 1 discloses a technique of associating an annotation with a position in a real space, or associating an annotation with a substance that is present in a real space.

[0004] Particularly in recent years, an AR-use HMD (Head-Mounted Display; hereinafter, referred to as AR-HMD) is becoming popular as a wearable terminal for displaying such an annotation in a real space. Note that an eyeglass-type AR-use HMD is called AR eyeglasses in some cases. In addition, besides wearable terminals, HUDs (Head-Up Displays) have been known as devices capable of performing AR display.

CITATION LIST

Patent Literature

[PTL 1]

[0005] PCT Patent Publication No. WO2014/162823

SUMMARY

Technical Problem

[0006] However, in general AR-use displays, display regions are limited, so that information cannot necessarily be presented to a user in an easy-to-understand manner depending on the situation.

[0007] The present technique has been made in view of these circumstances, and is configured to enable presentation of information to a user in an easier-to-understand manner according to the situation.

Solution to Problem

[0008] An information processing device according to a first aspect of the present technique includes a control section that controls an output associated with a position of an object which is disposed in a three-dimensional space, on the basis of user action information indicating a user’s action and a position relationship between a display region of a display device and the object.

[0009] An information processing method according to the first aspect of the present technique includes controlling an output associated with a position of an object which is disposed in a three-dimensional space, on the basis of user action information indicating a user’s action and a position relationship between a display region of a display device and the object.

[0010] A program according to the first aspect of the present technique causes a computer to execute a process including controlling an output associated with a position of an object which is disposed in a three-dimensional space, on the basis of user action information indicating a user’s action and a position relationship between a display region of a display device and the object.

[0011] In the first aspect of the present technique, an output associated with a position of an object which is disposed in a three-dimensional space is controlled on the basis of user action information indicating a user’s action and a position relationship between a display region of a display device and the object.

[0012] An information processing device according to a second aspect of the present technique includes a control section that controls a display device such that a virtual object which is given to a first real object is changed on the basis of a position relationship between the first real object and a second real object which is different from the first real object and a parameter concerning the second real object.

[0013] In the second aspect of the present technique, a display device is controlled such that a virtual object which is given to a first real object is changed on the basis of the position relationship between the first real object and a second real object which is different from the first real object and a parameter concerning the second real object.

Advantageous Effect of Invention

[0014] According to the present technique, information can be presented to a user in an easier-to-understand manner according to the situation.

[0015] Note that the effects described above are not limited, and any of effects described in the present disclosure may be provided.

BRIEF DESCRIPTION OF DRAWINGS

[0016] FIG. 1 is a diagram depicting an appearance configuration of an AR-HMD to which a technique according to the present disclosure has been applied.

[0017] FIG. 2 is a block diagram depicting a configuration example of the AR-HMD as an information processing device.

[0018] FIG. 3 is a block diagram depicting a functional configuration example of the AR-HMD.

[0019] FIG. 4 is a diagram for explaining limitations on a display region of the AR-HMD.

[0020] FIG. 5 is a block diagram depicting a functional configuration example of the AR-HMD according to the first embodiment.

[0021] FIG. 6 is a flowchart for explaining a content display process.

[0022] FIG. 7 depicts diagrams for explaining an operative layout.

[0023] FIG. 8 depicts diagrams for explaining a bird’s eye view layout.

[0024] FIG. 9 depicts diagrams for explaining an example of a user’s action.

[0025] FIG. 10 depicts diagrams for explaining an example of a user’s action.

[0026] FIG. 11 depicts diagrams for explaining an example of a content display layout.

[0027] FIG. 12 depicts diagrams for explaining an example of a content display layout.

[0028] FIG. 13 depicts diagrams for explaining limitations on the display region of the AR-HMD.

[0029] FIG. 14 is a block diagram depicting a functional configuration example of an AR-HMD according to a second embodiment.

[0030] FIG. 15 is a flowchart for explaining a feedback output process.

[0031] FIG. 16 depicts diagrams for explaining the distance from a user and a feedback output.

[0032] FIG. 17 depicts diagrams for explaining a user’s action and a feedback output.

[0033] FIG. 18 depicts diagrams for explaining a user’s action and a feedback output.

[0034] FIG. 19 depicts diagrams for explaining a user’s action and a feedback output.

[0035] FIG. 20 is a block diagram depicting a functional configuration example of an AR-HMD according to a third embodiment.

[0036] FIG. 21 is a flowchart for explaining a feedback output process.

[0037] FIG. 22 depicts diagrams for explaining an example of a feedback output.

[0038] FIG. 23 depicts diagrams for explaining an example of a feedback output.

[0039] FIG. 24 depicts diagrams for explaining an example of a feedback output.

[0040] FIG. 25 depicts diagrams for explaining an example of a feedback output.

[0041] FIG. 26 depicts diagrams for explaining an example of a feedback output.

[0042] FIG. 27 depicts diagrams for explaining an example of a feedback output.

[0043] FIG. 28 depicts diagrams for explaining limitations on a display region of the AR-HMD.

[0044] FIG. 29 is a block diagram depicting a functional configuration example of an AR-HMD according to a fourth embodiment.

[0045] FIG. 30 is a flowchart for explaining a feedback output process.

[0046] FIG. 31 depicts diagrams for explaining an example of a feedback output.

[0047] FIG. 32 depicts diagrams for explaining an example of a feedback output.

[0048] FIG. 33 depicts diagrams for explaining an example of a feedback output.

[0049] FIG. 34 depicts diagrams for explaining an example of a feedback output.

DESCRIPTION OF EMBODIMENTS

[0050] Hereinafter, modes for implementing the present disclosure (hereinafter, referred to as embodiments) will be explained. Note that explanations will be given in accordance with the following order.

[0051] 1. Outline of AR-HMD to Which Technique According to Present Disclosure Has Been Applied

[0052] 2. First Embodiment (Switching of Content Display Layout in Accordance with User’s Action)

[0053] 3. Second Embodiment (Switching of Feedback Output Format in Accordance with User’s Action)

[0054] 4. Third Embodiment (Determination of Feedback Output Pattern in Accordance with User’s Position)

[0055] 5. Fourth Embodiment (Change in Feedback Display Format in Accordance with Surrounding Environment)

<1. Outline of AR-HMD to which Technique According to Present Disclosure Has Been Applied>

(Appearance Configuration of an AR-HMD)

[0056] FIG. 1 is a diagram depicting an appearance configuration of an AR-HMD to which a technique according to the present disclosure has been applied.

[0057] An AR-HMD 10 in FIG. 1 has an eyeglass shape as a whole, and includes display sections 11 and a camera 12.

[0058] The display sections 11 correspond to lens portions of the eyeglass, and the entirety thereof is formed as a transmission type display, for example. Therefore, the display sections 11 carry out transmissive superimposition display of an annotation (virtual object) on a real world image (real object) being visually recognized directly by a user.

[0059] The camera 12 is provided at an end of the display section 11 that corresponds to the left eye of a user wearing the AR-HMD 10, and captures an image of a real space included in the visual field of the user. The camera 12 is formed by using a solid state imaging element such as a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor. Note that multiple CCD image sensors and multiple CMOS image sensors may be provided. In other words, the camera 12 may be configured as a stereo camera.

[0060] The display sections 11 can be configured to display an image acquired by the camera 12, and can be configured to perform superimposition display of an annotation on the image.

[0061] In addition, various types of sensors, buttons, and loudspeakers (not depicted) are housed or installed in a casing of the AR-HMD 10 corresponding to an eyeglass frame.

[0062] Note that the shape of the AR-HMD 10 is not limited to the shape depicted in FIG. 1, and various shapes such as a hat shape, a belt shape which is fixed around a user’s head, and a helmet shape for covering the whole head part of a user, can be adopted. In other words, the technique according to the present disclosure is applicable to HMDs in general.

Configuration Example of AR-HMD as Information Processing Device

[0063] FIG. 2 is a block diagram depicting a configuration example of the AR-HMD 10 as an information processing device.

[0064] The AR-HMD 10 in FIG. 2 includes a CPU (Central Processing Unit) 31, a memory 32, a sensor section 33, an input section 34, an output section 35, and a communication section 36, which are mutually connected via a bus 37.

[0065] The CPU 31 executes a process for implementing various types of functions included in the AR-HMD 10, in accordance with a program or data stored in the memory 32.

[0066] The memory 32 includes a storage medium such as a semiconductor memory or a hard disk, and stores a program or data for use in the process which is executed by the CPU 31.

[0067] The sensor section 33 includes various types of sensors including a microphone, a gyro sensor, and an acceleration sensor, in addition to the camera 12 in FIG. 1. Various types of sensor information acquired by the sensor section 33 are also used in a process which is executed by the CPU 31.

[0068] The input section 34 includes a button, a key, a touch panel, and the like. The output section 35 includes the display sections 11 in FIG. 1, a loudspeaker, and the like. The communication section 36 is formed as a communication interface for relaying various types of communication.

Functional Configuration Example of AR-HMD

[0069] FIG. 3 is a block diagram depicting a functional configuration example of the AR-HMD 10 to which the technique according to the present disclosure has been applied.

[0070] The AR-HMD 10 in FIG. 3 includes a control section 51, a sensor section 52, a display section 53, a loudspeaker 54, a communication section 55, an operation input section 56, and a storage section 57.

[0071] The control section 51 corresponds to the CPU 31 in FIG. 2, and executes a process for implementing various types of functions included in the AR-HMD 10.

[0072] The sensor section 52 corresponds to the sensor section 33 in FIG. 2, and includes various types of sensors.

[0073] Specifically, the sensor section 52 includes an outward camera 52a that corresponds to the camera 12 in FIG. 1, an inward camera 52b that captures an image of a user who is wearing the AR-HMD 10, and a microphone 52c that collects sounds in the surrounding area of the AR-HMD 10. In particular, with the inward camera 52b, a visual line of the user can be detected.

[0074] Further, the sensor section 52 includes a gyro sensor 52d that detects the angle (attitude) or angular velocity of the AR-HMD 10, an acceleration sensor 52e that detects the acceleration of the AR-HMD 10, and an azimuth sensor 52f that detects the bearing of the AR-HMD 10. These sensors may be separately configured, or may be integrally configured.

[0075] Moreover, the sensor section 52 includes a location positioning section 52g for positioning a location through a satellite positioning system such as GPS (Global Positioning System), and a biological sensor 52h that acquires biological information (heart rate, body temperature, brain waves, etc.) regarding the user who is wearing the AR-HMD 10.

[0076] Various types of sensor information acquired by these sensors are used in a process which is executed by the control section 51.

[0077] The display section 53 corresponds to the display section 11 in FIG. 1, and carries out annotation display under control of the control section 51, or displays an image acquired by the outward camera 52a.

[0078] The loudspeaker 54 serves as a sound source of a sound to be outputted to the user, and outputs a sound under control of the control section 51.

[0079] The communication section 55 corresponds to the communication section 36 in FIG. 2, and performs various types of communication with another device.

[0080] The operation input section 56 corresponds to the input section 34 in FIG. 2, and receives a user’s operation input performed on the AR-HMD 10.

[0081] On the basis of user action information (hereinafter, also simply referred to as action information) indicating a user’s action and the position relationship between a display region of the display section 53 of the AR-HMD 10 and a real object or virtual object which is disposed in a three-dimensional space, the control section 51 controls an output associated with the real object or virtual object. Here, the three-dimensional space may be a real space, or may be a virtual space.

[0082] Specifically, by executing a predetermined program, the control section 51 implements a sensor information acquisition section 71, a parameter calculation section 72, a determination section 73, and an output control section 74.

[0083] The sensor information acquisition section 71 acquires sensor information from the sensor section 52, and acquires user action information indicating an action of the user wearing the AR-HMD 10 on the basis of the sensor information. The user action information includes dynamic information regarding actions of the user’s entire body or each site thereof, movement of the visual line (change in the visual line position) of the user, a change in the distance between the user and the object, or the like. Further, the sensor information acquisition section 71 acquires user position/attitude information (hereinafter, also simply referred to as position/attitude information) indicating the position or the attitude of the user wearing the AR-HMD 10 on the basis of the sensor information acquired from the sensor section 52. The user position/attitude information includes static information regarding the attitude or the position of the user, the distance between the user and the object, or the like.

[0084] The parameter calculation section 72 calculates a parameter representing a user’s action, position, status, or the like on the basis of the sensor information acquired by the sensor information acquisition section 71, or specifically, the user action information and the user position/attitude information acquired from the sensor information.

[0085] The determination section 73 determines an output format of an output regarding an object that is not displayed in the display region of the display section 53 (object that is in a non-displayed state) on the basis of the parameter calculated by the parameter calculation section 72.

[0086] The output control section 74 controls the output regarding the object that is in the non-displayed state in the display region of the display section 53, in accordance with the output format determined by the determination section 73. Note that the output regarding the object that is in the non-displayed state may be provided through indications or sounds.
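The flow of paragraphs [0083] to [0086] forms a simple four-stage pipeline: sensor information is turned into action and position/attitude information, a parameter is calculated from it, an output format is determined, and the output is performed. The sketch below illustrates that structure only; all function names and the stage interfaces are assumptions, not APIs taken from the patent:

```python
# Illustrative pipeline of paragraphs [0083]-[0086]; every name and
# signature here is a hypothetical stand-in for the patent's sections.
def control_output(sensor_readings,
                   acquire_action_info,   # sensor information acquisition section 71
                   calculate_parameter,   # parameter calculation section 72
                   determine_format,      # determination section 73
                   perform_output):       # output control section 74
    # [0083] Derive user action info and position/attitude info from sensors.
    action_info, pos_attitude_info = acquire_action_info(sensor_readings)
    # [0084] Calculate a parameter representing the user's action, position, or status.
    parameter = calculate_parameter(action_info, pos_attitude_info)
    # [0085] Decide the output format for an object outside the display region.
    output_format = determine_format(parameter)
    # [0086] Perform the output (e.g., an indication or a sound).
    return perform_output(output_format)
```

With dummy callables plugged in for the four stages, the function simply threads the data through in order, which is all the sketch is meant to show.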

[0087] With this configuration, information can be presented to a user in a more easy-to-understand manner according to various types of situations such as a user’s action, position, status, or the like.

[0088] Hereinafter, embodiments of the aforementioned AR-HMD 10 will be specifically explained.

<2. First Embodiment>

[0089] In general, in an AR-HMD that presents information in a space surrounding a user, the displayed angular field of the display is limited and thus tends to be relatively narrower than the visual field of the user. For this reason, a virtual object (annotation) can be displayed in only a part of the visual field of the user. Accordingly, an overview of presented information is difficult to grasp in some cases.

[0090] FIG. 4 depicts one example of a scheme for grasping an overview of information presented by a general AR-HMD. In the example in FIG. 4, a virtual object (menu list) 102 is displayed by being resized so as to be included in a displayed angular field 101 of a display, in a real space 100 which is included in the visual field of a user.

[0091] However, when a virtual object displayed within the displayed angular field 101 is evenly resized as depicted in FIG. 4, the visibility of the displayed details may deteriorate or the operability may worsen.

[0092] Therefore, the present embodiment switches an information display layout in accordance with a user’s action such as a change in the distance between the user and a virtual object to be operated.

[0093] For example, in a case where a menu list is displayed and the menu list is viewed by a user at a short distance, items are displayed in a certain size and a certain interval such that the items are operable, or character strings describing the respective items in detail are displayed in a visible character size. In a case where the menu list is viewed by a user having stepped back, the items are displayed in a certain size and a certain interval such that the overall menu list can be grasped, or only brief descriptions of the respective items are displayed in a visible character size.

Functional Configuration Example of AR-HMD

[0094] FIG. 5 is a block diagram depicting a functional configuration example of an AR-HMD 10A according to the present embodiment.

[0095] Note that the AR-HMD 10A in FIG. 5 differs from the AR-HMD 10 in FIG. 3 in that the AR-HMD 10A is provided with a control section 51A in place of the control section 51.

[0096] On the basis of the position relationship between a display region of the display section 53 of the AR-HMD 10A and a virtual object which is displayed in the display region, and on the basis of at least any one of user action information indicating a user’s action or user position/attitude information, the control section 51A moves the virtual object located outside the display region of the display section 53, into the display region.

[0097] Specifically, the control section 51A implements a sensor information acquisition section 111, a parameter calculation section 112, a layout determination section 113, and an output control section 114.

[0098] The sensor information acquisition section 111 acquires the action information indicating an action of the user wearing the AR-HMD 10A and the position/attitude information on the basis of sensor information acquired from the sensor section 52.

[0099] The parameter calculation section 112 calculates a parameter representing a user’s action, position, status, or the like, on the basis of the action information and the position/attitude information acquired by the sensor information acquisition section 111.

[0100] On the basis of the parameter calculated by the parameter calculation section 112, the layout determination section 113 determines a display layout of a virtual object (hereinafter, referred to as content) which is displayed in the display region of the display section 53.

[0101] The output control section 114 displays, in the display region of the display section 53, the content in the display layout determined by the layout determination section 113.

(Content Display Process)

[0102] Next, a content display process in the AR-HMD 10A will be explained with reference to a flowchart in FIG. 6.

[0103] In step S11, the sensor information acquisition section 111 acquires sensor information from the sensor section 52.

[0104] In step S12, on the basis of the sensor information, the parameter calculation section 112 calculates a parameter representing the distance between the user (specifically, the head part corresponding to the user’s eye position) and content which is displayed in the display region of the display section 53.

[0105] In step S13, the layout determination section 113 determines whether or not the calculated parameter is equal to or greater than a predetermined threshold.

[0106] In a case where the parameter is determined not to be equal to or greater than the predetermined threshold in step S13, in other words, in a case where the distance between the user and the content is shorter than a predetermined distance, the process proceeds to step S14.

[0107] In step S14, the layout determination section 113 determines an operative layout as the display layout of the content which is displayed in the display region on the display section 53.

[0108] On the other hand, in a case where the parameter is determined to be equal to or greater than the predetermined threshold in step S13, in other words, in a case where the distance between the user and the content is equal to or longer than the predetermined distance, the process proceeds to step S15.

[0109] In step S15, the layout determination section 113 determines a bird’s eye view layout as the display layout of the content which is displayed in the display region on the display section 53.

[0110] After step S14 or step S15, the process proceeds to step S16, and the output control section 114 displays the content in the determined display layout in the display region on the display section 53.
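Steps S13 to S15 above reduce to a single threshold comparison on the distance parameter. The following minimal sketch shows that decision; the function name and the 1.0 m threshold are illustrative assumptions (the patent specifies neither concrete values nor APIs):

```python
# Sketch of steps S13-S15; names and the 1.0 m threshold are assumptions.
OPERATIVE = "operative layout"
BIRDS_EYE = "bird's eye view layout"

def determine_layout(user_content_distance_m: float,
                     threshold_m: float = 1.0) -> str:
    # Step S13: compare the calculated distance parameter with the threshold.
    if user_content_distance_m >= threshold_m:
        # Step S15: the user is far from the content -> bird's eye view layout.
        return BIRDS_EYE
    # Step S14: the user is close to the content -> operative layout.
    return OPERATIVE
```

Note that, per step S13, a distance exactly equal to the threshold selects the bird's eye view layout.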

[0111] FIG. 7 depicts diagrams for explaining the operative layout.

[0112] A of FIG. 7 depicts content C11 which is to be operated, when viewed from the rear side of a user U11. B of FIG. 7 depicts the content C11 when viewed from above the user U11. A and B of FIG. 7 each depict a state where the user U11 is viewing the content C11 at a short distance.

[0113] The content C11 indicates a menu including five menu icons (hereinafter, simply referred to as icons) arranged at a predetermined interval. The icons correspond to items of the menu.

[0114] In the example in FIG. 7, the icons of the content C11 are arranged at a certain wide interval so as to prevent an erroneous operation or erroneous recognition from being generated during a user’s selection operation.

[0115] For example, in a case where the content C11 is operated by a hand, the interval between the icons is set to 20 cm or longer in view of the width of a palm (approximately 15 cm) in order to prevent unintended selection of a next icon. Also, in a case where the content C11 is operated by a visual line, the interval between the icons is set in view of an error in detection of the visual line. For example, in a case where an error in detection of the visual line is X degrees and the distance to the content is N, it is sufficient that the interval between the icons is set to N·tan(X) or greater.
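The N·tan(X) rule in the preceding paragraph can be computed directly; the small sketch below does so (the function name and the sample values in the note are illustrative, not from the patent):

```python
import math

def min_gaze_icon_interval(distance_to_content: float,
                           gaze_error_deg: float) -> float:
    # Paragraph [0115]: with a visual-line detection error of X degrees
    # at distance N to the content, the icon interval should be at
    # least N * tan(X) to avoid selecting a neighboring icon.
    return distance_to_content * math.tan(math.radians(gaze_error_deg))
```

For instance, a 5-degree detection error at 2 m yields an interval of roughly 17.5 cm, on the same order as the 20 cm interval cited above for hand operation.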

[0116] Accordingly, as depicted in C of FIG. 7, only three icons out of the five icons of the content C11 are displayed in a displayed angular field 131 of the display section 53, in a real space 130 which is included in the visual field of the user. In the example in C of FIG. 7, character strings (Cart, Message, Map) respectively describing the three icons in detail are displayed in a visible size.

[0117] FIG. 8 depicts diagrams for explaining the bird’s eye view layout.

[0118] A of FIG. 8 depicts the content C11 when viewed from behind the user U11. B of FIG. 8 depicts the content C11 when viewed from above the user U11. A and B of FIG. 8 each depict a state where the user U11 having stepped back is viewing the content C11.

[0119] In the example in FIG. 8, the icons of the content C11 are arranged at a certain narrow interval such that many items (icons) can be included in the displayed angular field of the display section 53.

[0120] Accordingly, as depicted in C of FIG. 8, the five icons of the content C11 are all displayed in the displayed angular field 131 of the display section 53, in the real space 130 which is included in the visual field of the user. In the example in C of FIG. 8, only the icons are displayed while no character strings for describing the respective icons in detail are displayed. Note that, in C of FIG. 8, not only the interval between the icons but also the respective sizes of the icons are reduced, compared to the example in C of FIG. 7.

[0121] According to the aforementioned process, the content display layout is switched in accordance with a change in the distance between the user and the content being displayed in the display region of the display section 53. Therefore, the user can operate the content or confirm the details thereof, and can grasp the content entirely, without feeling any burden.

[0122] In particular, when the user has stepped back with respect to the content, content located outside the display region is moved to be displayed in the display region. Therefore, information can be presented to the user in an easier-to-understand manner.

[0123] Note that, as described above, B of FIG. 8 depicts an example in which, when the user U11 has stepped back, the virtual distance between the user’s head (the visual point of the user U11 or the AR-HMD 10A) and the content C11 (icons) is changed, and the content C11 is displayed in the bird’s eye view layout. In a case where, from this state, the user U11 approaches the content C11, the virtual distance between the visual point of the user U11 and the content C11 is reduced, so that the content C11 can be displayed in the operative layout.

[0124] Also, in each of the examples in FIGS. 7 and 8, the icons of the content C11 may move within a predetermined range in the front-rear direction when viewed from the user U11. Specifically, an upper limit value and a lower limit value are set for the distance between the icons and the user U11. The upper limit value is set to a distance at which all the icons of the content C11 can be visually recognized when the user U11 has stepped back. Further, the lower limit value is set to a distance at which a hand of the user U11 can naturally reach the icons of the content C11.

[0125] With this configuration, the virtual distance to the icons changes in the front-rear direction in accordance with whether the icons are in the operative layout or in the bird’s eye view layout, but the icons can move while following the user U11, as appropriate, in accordance with movement of the user U11. Therefore, the user U11 can move, together with the icons, to a desired position in a real space, and switching between the operative layout and the bird’s eye view layout can be performed by a natural action which is movement in the front-rear direction. Note that, in a case where the moving speed of the user U11 is equal to or greater than a predetermined value, some or all of the icons are set to a non-displayed state, irrespective of a result of determination on whether to perform switching to the operative layout or the bird’s eye view layout, so that the visual field of the user U11 may be ensured.
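The upper and lower limits described in paragraph [0124] amount to clamping the virtual user-to-icon distance into a fixed range as the icons follow the user. A sketch of that clamp follows; the concrete limit values are assumptions for illustration, since the patent gives no numbers:

```python
def follow_distance(raw_distance_m: float,
                    lower_m: float = 0.6,
                    upper_m: float = 2.5) -> float:
    # Paragraph [0124]:
    #  - lower limit: icons stay within natural reach of the user's hand;
    #  - upper limit: all icons remain visually recognizable when the
    #    user has stepped back.
    # Both limit values here are illustrative assumptions.
    return max(lower_m, min(raw_distance_m, upper_m))
```

Within the clamped range the distance tracks the user's movement, so stepping forward or back still switches between the operative and bird's eye view layouts.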

Examples of User’s Action

[0126] In the aforementioned examples, the distance between a user’s head and content is used as a user’s action which is a trigger for switching a content display layout. However, other information may be used therefor.

[0127] For example, the content display layout may be switched on the basis of the distance between a user’s palm (hand) and content.

[0128] Specifically, in a case where a palm of the user U11 approaches the content C11 or is held in front of the content C11 as depicted in A of FIG. 9, the content C11 is displayed in the operative layout. On the other hand, in a case where a palm of the user U11 is moved away from the content C11 or is moved down as depicted in B of FIG. 9, the content C11 is displayed in the bird’s eye view layout.

[0129] Further, in a case where a sensor for detecting a palm of the user U11 is provided to the AR-HMD 10A, the content C11 may be displayed in the operative layout when a palm of the user U11 enters the detection range of the sensor and is detected. As the sensor for detecting a palm, a stereo camera, a ToF (Time of Flight) type ranging sensor (IR sensor), or the like can be used, for example.

[0130] Moreover, in this configuration, the display layout of the content C11 may be changed in a case where the distance between the palm of the user U11 and the content C11 is changed, while the distance, in the front-rear direction, between the head of the user U11 and the content C11 may be substantially fixed even in a case where the head position of the user U11 is moved. Accordingly, the icons can be more naturally presented to the user U11 when the user U11 operates the icons by using hand gestures, for example.

[0131] Also, in a case where the gazing direction of the user U11 is detected with a sensor provided to the AR-HMD 10A, the content C11 may be displayed in the operative layout such that an icon being gazed at is located in the center position. With this configuration, the user U11 can intuitively perform switching between the operative layout and the bird’s eye view layout.

[0132] In addition, the content display layout may be switched on the basis of a change in the user’s visual line position with respect to content displayed in the display region of the display section 53.

[0133] Specifically, in a case where the user U11 is gazing at a specific item (icon) of the content C11 without moving the visual line as depicted in A of FIG. 10, the content C11 is displayed in the operative layout. On the other hand, in a case where the user U11 is constantly moving the visual line to look over multiple icons of the content C11 as depicted in B of FIG. 10 or the visual line is directed to something other than the icons, the content C11 is displayed in the bird’s eye view layout.
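One way to realize the gaze-based switching of paragraph [0133] is to look at the spread of recent gaze positions: a small spread indicates the user is gazing at a specific icon (operative layout), while a large spread indicates the visual line is moving over multiple icons (bird's eye view layout). The sketch below follows that reading; the one-dimensional gaze model and the threshold value are assumptions, not details from the patent:

```python
from statistics import pstdev

def layout_from_gaze(gaze_x_history: list,
                     spread_threshold: float = 0.05) -> str:
    # Small spread of recent horizontal gaze positions -> the user is
    # gazing at a specific icon without moving the visual line
    # (A of FIG. 10) -> operative layout.
    if pstdev(gaze_x_history) < spread_threshold:
        return "operative layout"
    # Large spread -> the visual line is constantly moving over
    # multiple icons (B of FIG. 10) -> bird's eye view layout.
    return "bird's eye view layout"
```

A production system would also have to handle the case where the visual line leaves the icons entirely, which paragraph [0133] likewise maps to the bird's eye view layout.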

Examples of Content Display Layout

[0134] In the aforementioned examples, the interval between the icons of content is mainly changed in accordance with whether the content is in the operative layout or in the bird’s eye view layout. However, other elements may be changed.

[0135] For example, the size of each icon of the content may be changed in accordance with whether the content is in the operative layout or in the bird’s eye view layout.

[0136] Specifically, in the operative layout, the size of each icon of content C12 is set to at least a certain size, as depicted in A of FIG. 11, such that erroneous operations and erroneous recognition are prevented during the user's selection operation.

[0137] For example, in a case where the content C12 is operated by a hand, the width of each icon is set to 20 cm or greater in view of the width of a palm (approximately 15 cm) in order to prevent unintended selection of an adjacent icon. Also, in a case where the content C12 is operated by a visual line, the width of each icon is set in view of an error in detection of the visual line. For example, in a case where the error in detection of the visual line is X (degrees) and the distance to the content is N, it is sufficient to set the width of each icon to N tan(X) or greater.
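The sizing rules in [0137] can be written down directly. The following is a minimal sketch; the function names, the 5 cm hand margin, and the use of meters are assumptions made here for illustration, with only the roughly 15 cm palm width, the 20 cm hand-operation target, and the N tan(X) gaze rule taken from the text:

```python
import math

def min_icon_width_for_gaze(distance_m: float, gaze_error_deg: float) -> float:
    """Minimum icon width (same unit as distance_m) so that a
    gaze-detection error of gaze_error_deg degrees at distance_m
    cannot land on an adjacent icon: the N * tan(X) rule."""
    return distance_m * math.tan(math.radians(gaze_error_deg))

def min_icon_width_for_hand(palm_width_m: float = 0.15,
                            margin_m: float = 0.05) -> float:
    """Hand operation: palm width plus a safety margin
    (20 cm for a ~15 cm palm, as in the text)."""
    return palm_width_m + margin_m
```

For instance, a 1 m viewing distance with a 45-degree (deliberately exaggerated) gaze error would demand 1 m wide icons, which shows why gaze-operated layouts favor few, large targets.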

[0138] On the other hand, in the bird's eye view layout, the size of each icon of the content C12 is set to at most a certain size, as depicted in B of FIG. 11, such that many items (icons) can be included within the displayed angular field of the display section 53.

[0139] Alternatively, icon arrangement in content may be changed in accordance with whether the content is in the operative layout or in the bird’s eye view layout.

[0140] Specifically, in the operative layout, icons of content C13 are horizontally arranged in a line, as depicted in A of FIG. 12, such that a user can separately select the items with ease. In this case, some of the icons may be located outside a displayed angular field 151 of the display section 53.

[0141] On the other hand, in the bird’s eye view layout, the icons of the content C13 are arranged in a matrix form as depicted in B of FIG. 12, for example, such that as many items (icons) as possible are included within the displayed angular field 151 of the display section 53.

[0142] In addition, the number of icons of content may be changed in accordance with whether the content is in the operative layout or in the bird’s eye view layout.

[0143] Specifically, in the operative layout, only three icons of content, that is, an icon being focused on and icons next thereto (for example, icons on the left and right sides of the focused icon) are displayed. On the other hand, in the bird’s eye view layout, as many icons as possible are displayed within the displayed angular field of the display section 53.
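The icon-count rule in [0143] amounts to a windowing function over the icon list. A minimal sketch, assuming icons are held in a simple list and the bird's eye view capacity is given as an integer; the function name and parameters are illustrative, not from the patent:

```python
def visible_icons(icons, focused_index, layout, field_capacity):
    """Select which icons to display.

    operative: only the focused icon and the icons immediately next
    to it (at most three icons, as in [0143]).
    birds_eye: as many icons as fit within the displayed angular
    field, here modeled as a simple capacity count.
    """
    if layout == "operative":
        lo = max(0, focused_index - 1)
        return icons[lo:focused_index + 2]
    return icons[:field_capacity]
```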

[0144] Note that, in the present embodiment, the aforementioned user’s actions and the aforementioned content display layouts may be implemented in an arbitrary combination.

3. Second Embodiment

[0145] In general, in an AR-HMD that presents information in a space surrounding a user, the displayed angular field of a display is limited, so that a virtual object (annotation) can be displayed in only a part of the visual field of the user in some cases.

[0146] For example, in a display, a virtual object can be displayed only in a region RI corresponding to a partial angle of the 360° around a user U21 with respect to the visual field direction of the user U21, as depicted in A of FIG. 13. Note that, as depicted in B of FIG. 13, the region RI is also limited in the vertical direction of the visual field of the user U21.

[0147] For this reason, even when a virtual object exists in a region RO other than the region RI, the user U21 may miss the virtual object or be unable to find it. Meanwhile, in a case where excessive feedback is outputted through a display or a sound to make such a virtual object easier to find, the feedback may inhibit the AR application experience itself or may interrupt content viewing.
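For the horizontal case in A of FIG. 13, deciding whether a virtual object falls inside the region RI reduces to comparing the object's bearing against the display's angular field centered on the user's visual field direction. A hedged sketch; the 40° horizontal field of view is an assumed placeholder (actual AR-HMD display fields vary), and the angle-wrapping arithmetic is the standard shortest-difference trick:

```python
def in_display_region(user_yaw_deg: float, object_bearing_deg: float,
                      horizontal_fov_deg: float = 40.0) -> bool:
    """Whether an object at world azimuth object_bearing_deg lies
    inside the display's horizontal angular field, centered on the
    user's visual field direction user_yaw_deg.

    Wraps the angular difference into (-180, 180] so that, e.g.,
    a user facing 350 deg still sees an object at 5 deg.
    """
    diff = (object_bearing_deg - user_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= horizontal_fov_deg / 2.0
```

Objects for which this returns `False` are in the region RO and become candidates for the positional feedback described below; a vertical (pitch) check would be analogous.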

[0148] Therefore, the present embodiment switches the feedback output format regarding a virtual object to be presented, in accordance with the user's action or situation, such as the distance between the user and the virtual object.

[0149] For example, in a case where a user is at a position far away from a target (virtual object) which is desired to be presented to the user, a simple display indicating the position (direction) of the target is outputted as feedback. Then, in a case where the user has approached the target, a display for highlighting the target itself is outputted as feedback. Furthermore, in a case where the user has approached the target but does not gaze at the target, a sound indicating that the target is near the user is outputted as feedback.

Functional Configuration Example of AR-HMD

[0150] FIG. 14 is a block diagram depicting a functional configuration example of an AR-HMD 10B according to the present embodiment.

[0151] Note that the AR-HMD 10B in FIG. 14 differs from the AR-HMD 10 in FIG. 3 in that the AR-HMD 10B is provided with a control section 51B in place of the control section 51.

[0152] The control section 51B causes a feedback output section (the display section 53 or the loudspeaker 54) to output feedback indicating the position of a virtual object located outside the display region of the display section 53 on the basis of the position relationship between the display region of the display section 53 of the AR-HMD 10B and the virtual object located outside the display region and on the basis of at least any one of user action information indicating a user’s action or user position/attitude information.

[0153] Specifically, the control section 51B implements a sensor information acquisition section 211, a parameter calculation section 212, an output format determination section 213, and an output control section 214.

[0154] The sensor information acquisition section 211 acquires the action information indicating an action of the user wearing the AR-HMD 10B and the position/attitude information on the basis of sensor information acquired from the sensor section 52.

[0155] The parameter calculation section 212 calculates a parameter representing a user’s action, position, status, or the like on the basis of the action information and the position/attitude information acquired by the sensor information acquisition section 211.

[0156] The output format determination section 213 determines a feedback output format regarding a virtual object (hereinafter, referred to as content) which is a target desired to be presented to the user, on the basis of the parameter calculated by the parameter calculation section 212.

[0157] The output control section 214 causes the display section 53 or the loudspeaker 54 to output feedback in the output format determined by the output format determination section 213.

(Feedback Output Process)

[0158] Next, a feedback output process which is executed by the AR-HMD 10B will be explained with reference to a flowchart in FIG. 15.

[0159] In step S21, the sensor information acquisition section 211 acquires sensor information from the sensor section 52.

[0160] In step S22, the parameter calculation section 212 calculates a parameter representing the distance between the user and content (virtual object) which is desired to be presented to the user, on the basis of the sensor information.

[0161] In step S23, the output format determination section 213 determines whether or not the calculated parameter is equal to or greater than a predetermined threshold.

[0162] In a case where the parameter is determined to be equal to or greater than the predetermined threshold in step S23, in other words, in a case where the distance between the user and the content is longer than a predetermined distance, the process proceeds to step S24.

[0163] In step S24, the output format determination section 213 determines a simple output format as the feedback output format regarding the content which is desired to be presented to the user.

[0164] On the other hand, in a case where the parameter is determined not to be equal to or greater than the predetermined threshold in step S23, that is, in a case where the distance between the user and the content is shorter than the predetermined distance, the process proceeds to step S25.

[0165] In step S25, the output format determination section 213 determines an outstanding output format as the feedback output format regarding the content which is desired to be presented to the user.

[0166] After step S24 or step S25, the process proceeds to step S26, and the output control section 214 causes the display section 53 or the loudspeaker 54 to output feedback in the determined output format.
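The decision in steps S23 through S25 can be sketched as a single threshold comparison on the distance parameter calculated in step S22. The 3 m threshold is an assumed placeholder value, not taken from the patent:

```python
def feedback_output_format(distance_m: float, threshold_m: float = 3.0) -> str:
    """Steps S23-S25: compare the user-to-content distance parameter
    against a predetermined threshold and pick the output format.

    distance >= threshold -> "simple" (S24): e.g. a small arrow icon
                             indicating the content's position.
    distance <  threshold -> "outstanding" (S25): a display that
                             highlights the content itself.
    """
    if distance_m >= threshold_m:
        return "simple"
    return "outstanding"
```

In step S26 the output control section would then render the chosen format on the display section 53 or the loudspeaker 54.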

[0167] For example, in a case where the user U21 is at a position far away from content C21 which is desired to be presented to the user, or remains at a certain position as depicted in A of FIG. 16, the display section 53 is caused to display feedback FB21 which is simple and has a small visual-field occupying area (drawing area) such that the visual field of the user U21 is not shielded and only minimum required information is provided. In the example in A of FIG. 16, the feedback FB21 is displayed as a triangular arrow icon for indicating the position of the content C21.

……