
Sony Patent | Information Processing Apparatus, Information Processing Method, And Program

Patent: Information Processing Apparatus, Information Processing Method, And Program

Publication Number: 20200348749

Publication Date: 20201105

Applicants: Sony

Abstract

[Problem] To provide an information processing apparatus, an information processing method, and a program capable of improving usability. [Solution] An information processing apparatus that includes an input method determination unit configured to determine an operation input method related to a virtual object that is arranged in a real space, on the basis of arrangement information on the virtual object.

FIELD

[0001] The present disclosure relates to an information processing apparatus, an information processing method, and a program.

BACKGROUND

[0002] In recent years, a head-mounted display (hereinafter, also referred to as an “HMD”) that includes a sensor has been developed. The HMD includes a display that is located in front of eyes of a user when the HMD is worn on a head of the user, and displays a virtual object in front of the user, for example. In the HMD as described above, the display may be of a transmissive type or a non-transmissive type. In an HMD including a transmissive-type display, the virtual object as described above is displayed, in a superimposed manner, on a real space that can be viewed via the display.

[0003] Operation input performed by a user on the HMD may be realized based on, for example, sensing performed by a sensor included in the HMD. For example, Patent Literature 1 described below discloses a technology in which a user who is wearing an HMD causes a camera (one example of the sensor) included in the HMD to sense various gestures using a user’s hand, and operates the HMD by gesture recognition.

CITATION LIST

[0004] Patent Literature

[0005] Patent Literature 1: JP 2014-186361 A

SUMMARY

Technical Problem

[0006] However, when the user performs operation input by using a virtual object arranged in a three-dimensional real space, in some cases, it may be difficult to perform operation input using a predetermined operation input method depending on a position of the virtual object, and usability may be reduced.

[0007] To cope with this situation, in the present disclosure, an information processing apparatus, an information processing method, and a program capable of improving usability by determining an operation input method based on arrangement of a virtual object are proposed.

Solution to Problem

[0008] According to the present disclosure, an information processing apparatus is provided that includes: an input method determination unit configured to determine an operation input method related to a virtual object that is arranged in a real space, on the basis of arrangement information on the virtual object.

[0009] Moreover, according to the present disclosure, an information processing method is provided that includes: determining an operation input method related to a virtual object that is arranged in a real space, on the basis of arrangement information on the virtual object.

[0010] Moreover, according to the present disclosure, a program is provided that causes a computer to realize a function to execute: determining an operation input method related to a virtual object that is arranged in a real space, on the basis of arrangement information on the virtual object.

Advantageous Effects of Invention

[0011] As described above, according to the present disclosure, it is possible to improve usability by switching between operation input methods based on arrangement of a virtual object.

[0012] In addition, the effects described above are not limiting. That is, any of the effects described in the present specification or other effects that may be recognized from the present specification may be achieved, in addition to or in place of the effects described above.

BRIEF DESCRIPTION OF DRAWINGS

[0013] FIG. 1 is a diagram for explaining an overview of an information processing apparatus 1 according to a first embodiment of the present disclosure.

[0014] FIG. 2 is a block diagram illustrating a configuration example of the information processing apparatus 1 according to the first embodiment.

[0015] FIG. 3 is a flowchart illustrating an example of operation of the information processing apparatus 1 according to the first embodiment.

[0016] FIG. 4 is an explanatory diagram illustrating an exemplary case in which touch operation is determined as an operation input method according to the first embodiment.

[0017] FIG. 5 is an explanatory diagram illustrating an exemplary case in which pointing operation is determined as the operation input method according to the first embodiment.

[0018] FIG. 6 is an explanatory diagram illustrating an exemplary case in which command operation is determined as the operation input method according to the first embodiment.

[0019] FIG. 7 is an explanatory diagram for explaining an overview of a second embodiment of the present disclosure.

[0020] FIG. 8 is a block diagram illustrating a configuration example of an information processing apparatus 1-2 according to the second embodiment of the present disclosure.

[0021] FIG. 9 is a flowchart illustrating an example of operation of the information processing apparatus 1-2 according to the second embodiment.

[0022] FIG. 10 is an explanatory diagram for explaining a first arrangement control example according to the second embodiment.

[0023] FIG. 11 is an explanatory diagram for explaining a second arrangement control example according to the second embodiment.

[0024] FIG. 12 is an explanatory diagram for explaining the second arrangement control example according to the second embodiment.

[0025] FIG. 13 is an explanatory diagram for explaining a third arrangement control example according to the second embodiment.

[0026] FIG. 14 is an explanatory diagram for explaining the third arrangement control example according to the second embodiment.

[0027] FIG. 15 is an explanatory diagram for explaining a fourth arrangement control example according to the second embodiment.

[0028] FIG. 16 is an explanatory diagram for explaining the fourth arrangement control example according to the second embodiment.

[0029] FIG. 17 is an explanatory diagram for explaining the fourth arrangement control example according to the second embodiment.

[0030] FIG. 18 is an explanatory diagram for explaining a modification of the second embodiment.

[0031] FIG. 19 is an explanatory diagram for explaining a modification of the second embodiment.

[0032] FIG. 20 is an explanatory diagram for explaining a modification of the second embodiment.

[0033] FIG. 21 is an explanatory diagram for explaining a modification of the second embodiment.

[0034] FIG. 22 is an explanatory diagram illustrating a hardware configuration example.

DESCRIPTION OF EMBODIMENTS

[0035] Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In this specification and the drawings, structural elements that have substantially the same functions and configurations will be denoted by the same reference symbols, and repeated explanation of the structural elements will be omitted.

[0036] Furthermore, in this specification and the drawings, a plurality of structural elements that have substantially the same or similar functions and configurations may be distinguished from one another by appending different letters after the same reference symbols. However, if the structural elements that have substantially the same or similar functions and configurations need not be specifically distinguished from one another, the structural elements will be denoted by only the same reference symbols.

[0037] In addition, hereinafter, explanation will be given in the following order.

[0038] <<1. First embodiment>>

[0039] <1-1. Overview>

[0040] <1-2. Configuration>

[0041] <1-3. Operation>

[0042] <1-4. Examples of operation input method>

[0043] <1-5. Modifications>

[0044] <1-6. Effects>

[0045] <<2. Second embodiment>>

[0046] <2-1. Overview>

[0047] <2-2. Configuration>

[0048] <2-3. Operation>

[0049] <2-4. Examples of arrangement control>

[0050] <2-5. Modification>

[0051] <2-6. Effects>

[0052] <<3. Hardware configuration example>>

[0053] <<4. Conclusion>>

1. First Embodiment

[0054] <1-1. Overview>

[0055] First, an overview of an information processing apparatus according to a first embodiment of the present disclosure will be described. FIG. 1 is a diagram for explaining an overview of an information processing apparatus 1 according to the first embodiment. As illustrated in FIG. 1, the information processing apparatus 1 according to the first embodiment is realized by, for example, a glasses-type head-mounted display (HMD) that is worn on a head of a user U. A display unit 13 that corresponds to an eyeglass lens part located in front of eyes of the user U when the HMD is worn may be a transmissive type or a non-transmissive type. The information processing apparatus 1 is able to provide a display object in front of a line of sight of the user U by displaying the display object on the display unit 13. Further, an HMD as one example of the information processing apparatus 1 is not limited to a device that provides videos for both eyes, but may be a device that provides a video for only one eye. For example, the HMD may be a one-eye type provided with the display unit 13 that displays a video for only one eye.

[0056] Further, the information processing apparatus 1 includes an out-camera 110 that captures images in a line-of-sight direction of the user U, that is, in an outward direction, when the apparatus is worn. Furthermore, while not illustrated in FIG. 1, the information processing apparatus 1 includes various sensors, such as an in-camera that captures images of the eyes of the user U when the apparatus is worn, and a microphone (hereinafter, referred to as a “mic”). A plurality of out-cameras 110 and a plurality of in-cameras may be provided. If the plurality of out-cameras 110 are provided, it is possible to obtain a depth image (distance image) using disparity information, so that it is possible to three-dimensionally sense surrounding environments.
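
By way of illustration only, the following Python sketch shows how a depth image may be derived from disparity information obtained by two out-cameras 110, using the standard pinhole stereo relation Z = f·B/d; the focal length, baseline, and disparity values used here are illustrative assumptions and are not taken from this publication.

    import numpy as np

    def depth_from_disparity(disparity_px: np.ndarray,
                             focal_length_px: float,
                             baseline_m: float) -> np.ndarray:
        """Convert a disparity map from a stereo out-camera pair into a depth map.

        Standard pinhole stereo relation: Z = f * B / d. Pixels with zero or
        negative (invalid) disparity are mapped to infinity.
        """
        disparity = np.asarray(disparity_px, dtype=np.float64)
        return np.where(disparity > 0.0,
                        focal_length_px * baseline_m / np.maximum(disparity, 1e-9),
                        np.inf)

    # Illustrative values: a 6 cm baseline between the two out-cameras,
    # a 500 px focal length, and a tiny 2x2 disparity map.
    if __name__ == "__main__":
        d = np.array([[10.0, 20.0], [0.0, 5.0]])
        print(depth_from_disparity(d, focal_length_px=500.0, baseline_m=0.06))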

[0057] Meanwhile, the shape of the information processing apparatus 1 is not limited to the example as illustrated in FIG. 1. For example, the information processing apparatus 1 may be a headband type HMD (a type that is worn by a band extended around the entire circumference of the head or a type including a band that is extended along not only the side of the head, but also the top of the head), or a helmet type HMD (a visor part of a helmet serves as a display). Further, the information processing apparatus 1 may be realized by a wearable device of a wrist band type (for example, a smart watch with or without a display), a headphone type (without a display), a neck phone type (a neck holder type with or without a display), or the like.

[0058] Furthermore, the information processing apparatus 1 according to the first embodiment is realized by the wearable device as described above and can be worn on the user U; therefore, the information processing apparatus 1 may include various operation input methods, such as voice input, gesture input using a hand or a head, and line-of-sight input, in addition to input using a button, a switch, or the like.

[0059] Moreover, the display unit 13 may display a virtual object related to operation input. For example, the user U may be allowed to perform touch operation of touching the virtual object, pointing operation of pointing at the virtual object with an operation object, such as a finger, or voice command operation of speaking a voice command indicated by the virtual object.
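
For reference in the illustrative sketches that appear later in this description, the three operation input methods mentioned here can be modeled as a small enumeration; the naming is an assumption made for illustration and does not appear in the original disclosure.

    from enum import Enum, auto

    class OperationInputMethod(Enum):
        """Operation input methods discussed in this embodiment (illustrative naming)."""
        TOUCH = auto()     # virtually touching the virtual object with a finger or hand
        POINTING = auto()  # pointing at the virtual object with an operation object or a line of sight
        COMMAND = auto()   # voice command, or command via the operation input unit 16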

[0060] Furthermore, for example, if the display unit 13 is a transmissive type, the information processing apparatus 1 is able to arrange a virtual object in a real space on the basis of information on the real space obtained by image capturing performed by the out-camera 110, and to display the virtual object such that the user U can view the virtual object as if the virtual object is located in the real space.

[0061] Meanwhile, if an apparatus includes various operation input methods like the information processing apparatus 1, it is often the case that an operation input method that is determined in advance by, for example, an application or the like is adopted with respect to a virtual object to be displayed. However, if the virtual object is arranged in the real space as described above, in some cases, depending on a position of the virtual object, it may be difficult to perform operation input by using the operation input method determined in advance and usability may be reduced. In particular, if the user is allowed to freely change arrangement of the virtual object, it is likely that the virtual object may be arranged at a position at which the operation input method determined in advance is not appropriate.

[0062] To cope with this, the information processing apparatus 1 according to the first embodiment determines an operation input method based on arrangement of a virtual object, to thereby improve usability. A configuration of the first embodiment that achieves the above-described effects will be described in detail below.

[0063] <1-2. Configuration>

[0064] The overview of the information processing apparatus 1 according to the first embodiment has been described above.

[0065] Next, a configuration of the information processing apparatus 1 according to the first embodiment will be described with reference to FIG. 2. FIG. 2 is a block diagram illustrating a configuration example of the information processing apparatus 1 according to the first embodiment. As illustrated in FIG. 2, the information processing apparatus 1 includes a sensor unit 11, a control unit 12, the display unit 13, a speaker 14, a communication unit 15, an operation input unit 16, and a storage unit 17.

[0066] (Sensor Unit 11)

[0067] The sensor unit 11 has a function to acquire various kinds of information on a user or surrounding environments. For example, the sensor unit 11 includes an out-camera 110, an in-camera 111, a mic 112, a gyro sensor 113, an acceleration sensor 114, an orientation sensor 115, a location positioning unit 116, and a biological sensor 117. The specific configuration of the sensor unit 11 described herein is merely one example, and embodiments are not limited to this example. Further, the number of each of the sensors may be two or more.

[0068] Each of the out-camera 110 and the in-camera 111 includes a lens system that includes an imaging lens, a diaphragm, a zoom lens, a focus lens, and the like, a driving system that causes the lens system to perform focus operation and zoom operation, a solid-state imaging element array that generates an imaging signal by performing photoelectric conversion on imaging light obtained by the lens system, and the like. The solid-state imaging element array may be realized by, for example, a charge coupled device (CCD) sensor array or a complementary metal oxide semiconductor (CMOS) sensor array.

[0069] The mic 112 collects voice of a user and sounds in surrounding environments, and outputs them as voice data to the control unit 12.

[0070] The gyro sensor 113 is realized by, for example, a three-axis gyro sensor, and detects an angular velocity (rotational speed).

[0071] The acceleration sensor 114 is realized by, for example, a three-axis acceleration sensor (also referred to as a G sensor), and detects acceleration at the time of movement.

[0072] The orientation sensor 115 is realized by, for example, a three-axis geomagnetic sensor (compass), and detects an absolute direction (orientation).

[0073] The location positioning unit 116 has a function to detect a current location of the information processing apparatus 1 on the basis of a signal acquired from outside. Specifically, for example, the location positioning unit 116 is realized by a global positioning system (GPS) measurement unit, receives radio waves from GPS satellites, detects a position at which the information processing apparatus 1 is located, and outputs the detected location information to the control unit 12. Further, for example, the location positioning unit 116 may be a device that detects a position through Wi-Fi (registered trademark), Bluetooth (registered trademark), transmission/reception with a mobile phone, a PHS, a smartphone, etc., near field communication, or the like, instead of the GPS.

[0074] The biological sensor 117 detects biological information on a user. Specifically, for example, the biological sensor 117 may detect heartbeats, body temperature, diaphoresis, blood pressure, pulse, breathing, eye blink, eye movement, a gaze time, a size of a pupil diameter, brain waves, body motion, body position, skin temperature, electric skin resistance, MV (micro-vibration), myopotential, SPO2 (blood oxygen saturation level), or the like.

[0075] (Control Unit 12)

[0076] The control unit 12 functions as an arithmetic processing device and a control device, and controls entire operation in the information processing apparatus 1 in accordance with various programs. Further, as illustrated in FIG. 2, the control unit 12 according to the first embodiment functions as a recognition unit 120, an arrangement control unit 122, an input method determination unit 124, an operation input receiving unit 126, and an output control unit 128.

[0077] The recognition unit 120 has a function to perform recognition on a user or recognition on surrounding conditions by using various kinds of sensor information sensed by the sensor unit 11. For example, the recognition unit 120 may recognize a position and a posture of the head of the user (including orientation or inclination of the face with respect to the body), positions and postures of arms, hands and fingers of the user, a user’s line of sight, user’s voice, user’s behavior, or the like. Further, the recognition unit 120 may recognize a three-dimensional position or shape of a real object (including the ground, floors, walls, and the like) that is present in a surrounding real space. The recognition unit 120 provides a recognition result on the user and a recognition result on the surrounding conditions to the arrangement control unit 122, the input method determination unit 124, the operation input receiving unit 126, and the output control unit 128.
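
As a minimal sketch (not the disclosed implementation), the recognition results that the recognition unit 120 provides to the downstream units could be expressed as simple data structures; all field names below are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]  # a point or direction in the real space

    @dataclass
    class UserRecognition:
        """Recognition result on the user (hypothetical fields)."""
        head_position: Vec3
        head_orientation: Vec3                              # e.g. orientation of the face
        hand_positions: List[Vec3] = field(default_factory=list)
        gaze_direction: Vec3 = (0.0, 0.0, 1.0)

    @dataclass
    class SurroundingsRecognition:
        """Recognition result on surrounding real objects (hypothetical fields)."""
        surface_points: List[Vec3] = field(default_factory=list)  # sampled points on floors, walls, tables
        moving_body_present: bool = False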

[0078] The arrangement control unit 122 controls arrangement of a virtual object that is arranged in a real space, and provides arrangement information on the arrangement of the virtual object to the input method determination unit 124 and the output control unit 128.

[0079] For example, the arrangement control unit 122 may control the arrangement of the virtual object in the real space on the basis of a setting for the arrangement of the virtual object, where the setting is determined in advance. It may be possible to determine, in advance, a setting for arranging the virtual object such that the virtual object comes into contact with a real object around the user, a setting for arranging the virtual object in the air in front of the user, or the like.

[0080] Further, it may be possible to determine, in advance, a plurality of settings with priorities, and the arrangement control unit 122 may determine whether arrangement is possible in each of the settings in order from the highest to the lowest priorities, and may control the arrangement of the virtual object based on the setting for which it is determined that the arrangement is possible. Meanwhile, the arrangement control unit 122 may acquire the setting for the arrangement of the virtual object from, for example, the storage unit 17 or from other devices via the communication unit 15.
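
A minimal sketch of the priority-based selection described above, assuming each setting carries a feasibility check; the setting names and the callback form are illustrative assumptions.

    from typing import Callable, List, Optional

    class ArrangementSetting:
        """One arrangement setting determined in advance, with a priority (illustrative)."""
        def __init__(self, name: str, priority: int, is_feasible: Callable[[], bool]):
            self.name = name
            self.priority = priority
            self.is_feasible = is_feasible

    def select_arrangement(settings: List[ArrangementSetting]) -> Optional[ArrangementSetting]:
        """Check the settings in order from the highest to the lowest priority and
        return the first one for which arrangement is determined to be possible."""
        for setting in sorted(settings, key=lambda s: s.priority, reverse=True):
            if setting.is_feasible():
                return setting
        return None

    # Illustrative usage: prefer contact with a surrounding real object, fall back to mid-air.
    if __name__ == "__main__":
        settings = [
            ArrangementSetting("on_surrounding_real_object", priority=2, is_feasible=lambda: False),
            ArrangementSetting("in_air_in_front_of_user", priority=1, is_feasible=lambda: True),
        ]
        chosen = select_arrangement(settings)
        print(chosen.name if chosen else "no feasible arrangement")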

[0081] Furthermore, the arrangement control on the virtual object performed by the arrangement control unit 122 according to the first embodiment is not limited to the example as described above. Other examples of the arrangement control performed by the arrangement control unit 122 will be described later as modifications.

[0082] The input method determination unit 124 determines an operation input method related to the virtual object on the basis of the arrangement information provided from the arrangement control unit 122. The input method determination unit 124 may determine the operation input method on the basis of the recognition result on the user or the recognition result on the surrounding environments, where the recognition result is provided from the recognition unit 120.

[0083] For example, the input method determination unit 124 may determine whether the user is able to touch the virtual object (whether the virtual object is arranged in a range in which the user is able to virtually touch the object) on the basis of the recognition result on the user, and determine the operation input method on the basis of the previous determination. The determination on whether the user is able to touch the virtual object may be performed based on a recognition result of the hands of the user or based on a distance between a head position of the user and the virtual object.
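
One possible way to perform the touchability determination based on the distance between the head position of the user and the virtual object is sketched below; the 0.7 m reach threshold is an illustrative assumption and is not a value disclosed in this publication.

    import math
    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    def is_virtually_touchable(head_position: Vec3,
                               object_position: Vec3,
                               reach_m: float = 0.7) -> bool:
        """Judge whether the virtual object lies within an assumed arm's reach of the user.

        A recognition result on the hands of the user could be used instead of,
        or in addition to, this head-to-object distance check.
        """
        return math.dist(head_position, object_position) <= reach_m

    # Illustrative usage: an object about 0.5 m in front of the head is judged touchable.
    if __name__ == "__main__":
        print(is_virtually_touchable((0.0, 1.6, 0.0), (0.0, 1.4, 0.5)))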

[0084] Furthermore, if the user is able to touch the virtual object, the input method determination unit 124 may determine touch operation as the operation input method. Meanwhile, the touch operation in this specification is operation of virtually contacting (touching) the virtual object by a finger, a hand, or the like, for example.

[0085] With this configuration, if the virtual object is arranged in the range in which the user is able to directly touch the virtual object, the touch operation that allows more direct operation is determined as the operation input method, so that the usability can be improved.

[0086] Moreover, the input method determination unit 124 may determine whether a real object present in a real space and the virtual object are in contact with each other on the basis of the recognition result on the surrounding environments, and determine the operation input method on the basis of the previous determination. The determination on whether the real object and the virtual object are in contact with each other may be performed based on a recognition result of a position or a shape of the surrounding real object and the arrangement information on the virtual object.
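
The contact determination between the virtual object and a surrounding real object could, for example, be approximated as sketched below, assuming the recognized real object is expressed as sampled surface points; both that representation and the 2 cm tolerance are illustrative assumptions.

    import math
    from typing import Iterable, Tuple

    Vec3 = Tuple[float, float, float]

    def is_in_contact_with_real_object(object_anchor: Vec3,
                                       surface_points: Iterable[Vec3],
                                       tolerance_m: float = 0.02) -> bool:
        """Judge contact by checking whether any recognized surface point lies within
        a small tolerance of the virtual object's anchor point."""
        return any(math.dist(object_anchor, p) <= tolerance_m for p in surface_points)

    # Illustrative usage: a sampled table-top point directly under the object's anchor.
    if __name__ == "__main__":
        table_points = [(0.10, 0.75, 0.40), (0.20, 0.75, 0.40)]
        print(is_in_contact_with_real_object((0.10, 0.76, 0.40), table_points))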

[0087] Furthermore, if the real object present in the real space and the virtual object are in contact with each other, the input method determination unit 124 may determine pointing operation as the operation input method. Meanwhile, the pointing operation in this specification is the operation input method of pointing at the virtual object with an operation object, such as a finger or a hand, for example. The operation object may be a finger of the user, a hand of the user, or a real object held by the user. Moreover, pointing may be performed using a user’s line of sight. The input method determination unit 124 may determine both of the pointing operation using the operation object and the pointing operation using the line of sight as the operation input methods, or may determine one of them as the operation input method.

[0088] If the virtual object is in contact with the real object, the user can easily focus on the virtual object and recognize a position of the virtual object or a distance to the virtual object, so that the user is able to perform the pointing operation more easily.

[0089] Furthermore, if the real object present in the real space and the virtual object are not in contact with each other (the virtual object is arranged in the air), the input method determination unit 124 may determine voice command operation or command operation performed by the operation input unit 16 (to be described later) as the operation input method. It is difficult to perceive a sense of distance when performing the touch operation or the pointing operation on a virtual object arranged in the air. Moreover, extending a hand into the air where a real object is absent may cause fatigue in the user. In contrast, the voice command operation or the command operation by the operation input unit 16 is effective in that a physical load on the user is small.

[0090] Meanwhile, the determination of the operation input method as described above may be performed in a combined manner. For example, if the virtual object is in contact with the real object and if the user is able to touch the virtual object, the input method determination unit 124 may determine the touch operation as the operation input method. With this configuration, the user is able to perform operation input by directly touching the real object, so that tactile feedback to a hand or a finger of the user is virtually performed and usability can further be improved.

[0091] The operation input receiving unit 126 receives operation input performed by the user, and outputs operation input information to the output control unit 128. The operation input receiving unit 126 according to the first embodiment may receive operation input performed by the operation input method determined by the input method determination unit 124, or the operation input receiving unit 126 may receive operation input performed by the user with respect to the virtual object by using information corresponding to the operation input method determined by the input method determination unit 124. In other words, the information that is used by the operation input receiving unit 126 to receive the operation input performed by the user may be different depending on the operation input method determined by the input method determination unit 124.

[0092] For example, if the input method determination unit 124 determines the touch operation or the pointing operation using the operation object as the operation input method, the operation input receiving unit 126 uses captured image information obtained by the out-camera 110. Further, if the input method determination unit 124 determines the pointing operation using the line of sight as the operation input method, the operation input receiving unit 126 uses gyro sensor information, acceleration information, orientation information, and captured image information obtained by the in-camera 111. Furthermore, if the input method determination unit 124 determines the voice command operation as the operation input method, the operation input receiving unit 126 uses voice data obtained by the mic 112. Moreover, if the input method determination unit 124 determines the command operation using the operation input unit 16 as the operation input method, the operation input receiving unit 126 uses information provided by the operation input unit 16.
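
The correspondence between the determined operation input method and the sensor information consulted by the operation input receiving unit 126, as described in this paragraph, can be summarized as a simple lookup table; the string keys and source names below are illustrative.

    from typing import Dict, FrozenSet

    # Sensor information used to receive operation input for each determined method,
    # following the correspondence described above (names are illustrative).
    SENSOR_SOURCES: Dict[str, FrozenSet[str]] = {
        "touch": frozenset({"out_camera_image"}),
        "pointing_by_operation_object": frozenset({"out_camera_image"}),
        "pointing_by_line_of_sight": frozenset({"gyro", "acceleration", "orientation", "in_camera_image"}),
        "voice_command": frozenset({"mic_voice_data"}),
        "command_by_operation_input_unit": frozenset({"operation_input_unit"}),
    }

    def sources_for(method: str) -> FrozenSet[str]:
        """Return the sensor sources assumed to be needed for the given method."""
        return SENSOR_SOURCES[method]

    if __name__ == "__main__":
        print(sorted(sources_for("pointing_by_line_of_sight")))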

[0093] The output control unit 128 controls display performed by the display unit 13 and voice output performed by the speaker 14, which will be described later. The output control unit 128 according to the first embodiment causes the display unit 13 to display the virtual object in accordance with the arrangement information on the virtual object provided by the arrangement control unit 122.

[0094] (Display Unit 13)

[0095] The display unit 13 is realized by, for example, a lens unit (one example of a transmissive display unit) that performs display using a holographic optical technology, a liquid crystal display (LCD) device, an organic light emitting diode (OLED) device, or the like. Further, the display unit 13 may be of a transmissive type, a semi-transmissive type, or a non-transmissive type.

[0096] (Speaker 14)

[0097] The speaker 14 reproduces a voice signal under the control of the control unit 12.

[0098] (Communication Unit 15)

[0099] The communication unit 15 is a communication module for performing data transmission and reception to and from other devices in a wired or wireless manner. The communication unit 15 communicates with external devices in a direct manner or via a wireless network access point by using a system, such as a wired local area network (LAN), a wireless LAN, Wireless Fidelity (Wi-Fi: registered trademark), infrared communication, Bluetooth (registered trademark), or near-field/contactless communication.

[0101] (Storage Unit 17)

[0102] The storage unit 17 stores therein programs and parameters for causing the control unit 12 as described above to implement each of the functions. For example, the storage unit 17 stores therein a three-dimensional shape of a virtual object, a setting for arrangement of the virtual object determined in advance, or the like.

[0103] Thus, the configuration of the information processing apparatus 1 according to the first embodiment has been described in detail above, but the configuration of the information processing apparatus 1 according to the first embodiment is not limited to the example illustrated in FIG. 2. For example, at least a part of the functions of the control unit 12 of the information processing apparatus 1 may be included in other devices connected via the communication unit 15.

[0104] (Operation Input Unit 16)

[0105] The operation input unit 16 is realized by an operation member having a physical structure, such as a switch, a button, or a lever.

[0106] <1-3. Operation>

[0107] The configuration example of the information processing apparatus 1 according to the first embodiment has been described above. Next, operation of the information processing apparatus 1 according to the first embodiment will be described with reference to FIG. 3. FIG. 3 is a flowchart illustrating an example of the operation performed by the information processing apparatus 1 according to the first embodiment.

[0108] First, the sensor unit 11 performs sensing, and the recognition unit 120 performs recognition on the user and recognition on the surrounding conditions by using various kinds of sensor information obtained by the sensing (S102). Subsequently, the arrangement control unit 122 controls arrangement of a virtual object (S104). Further, the input method determination unit 124 determines whether a real object present in a real space and the virtual object are in contact with each other (S106).

[0109] If it is determined that the real object present in the real space and the virtual object are in contact with each other (Yes at S106), the input method determination unit 124 determines whether the user is able to touch the virtual object (S108). If it is determined that the user is able to touch the virtual object (Yes at S108), the input method determination unit 124 determines the touch operation as the operation input method (S110). In contrast, if it is determined that the user is not able to touch the virtual object (No at S108), the input method determination unit 124 determines the pointing operation as the operation input method (S112).

[0110] In contrast, if it is determined that the real object present in the real space and the virtual object are not in contact with each other (No at S106), the input method determination unit 124 determines the command operation as the operation input method (S114).

[0111] Finally, the output control unit 128 causes the display unit 13 to display (output) the virtual object in accordance with the arrangement control on the virtual object performed by the arrangement control unit 122 (S116). Meanwhile, the processes at Steps S102 to S116 as described above may be repeated sequentially.
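
The determination steps S106 to S114 of FIG. 3 can be mirrored by a single function, sketched here for illustration; how the two boolean judgments are obtained is suggested in the earlier sketches, and the returned method names are assumptions.

    def determine_operation_input_method(object_in_contact_with_real_object: bool,
                                         user_can_touch_object: bool) -> str:
        """Mirror the determination branch of FIG. 3 (S106 to S114)."""
        if object_in_contact_with_real_object:        # S106
            if user_can_touch_object:                 # S108
                return "touch"                        # S110
            return "pointing"                         # S112
        return "command"                              # S114

    # Illustrative usage covering the three branches of the flowchart.
    if __name__ == "__main__":
        print(determine_operation_input_method(True, True))    # touch
        print(determine_operation_input_method(True, False))   # pointing
        print(determine_operation_input_method(False, False))  # command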

[0112] <1-4. Examples of Operation Input Method>

[0113] Examples of the operation input method according to the first embodiment will be described in detail below with reference to FIG. 4 to FIG. 6. In FIG. 4 to FIG. 6, the user U is wearing the information processing apparatus 1 that is a glasses-type HMD as illustrated in FIG. 1. Further, the display unit 13 of the information processing apparatus 1 located in front of the eyes of the user U is a transmissive type, and virtual objects V11 to V14 displayed on the display unit 13 are viewed by the user U as if the virtual objects V11 to V14 are present in a real space.

[0114] (Touch Operation)

[0115] FIG. 4 is an explanatory diagram illustrating an exemplary case in which the touch operation is determined as the operation input method. In the example illustrated in FIG. 4, the virtual objects V11 to V14 are arranged so as to come into contact with a table 3 (one example of the real object) in front of the user U, and the user U is able to touch the virtual objects. Therefore, the input method determination unit 124 determines the touch operation as the operation input method. In the example illustrated in FIG. 4, the user U performs operation input by touching the virtual object V12 by using a finger UH.

[0116] (Pointing Operation)

[0117] FIG. 5 is an explanatory diagram illustrating an exemplary case in which the pointing operation is determined as the operation input method. In the example illustrated in FIG. 5, the virtual objects V11 to V14 are arranged so as to come into contact with a floor 7 (one example of the real object) that is not reachable for the user U (the user U is not able to touch them). Therefore, the input method determination unit 124 determines the pointing operation as the operation input method. In the example illustrated in FIG. 5, the user U performs operation input by pointing at the virtual object V12 with the finger UH. Meanwhile, if the pointing operation is determined as the operation input method, the output control unit 128 may display, on the display unit 13, a pointer V16 indicating a position pointed at by the finger UH of the user U as illustrated in FIG. 5.

[0118] (Command Operation)

[0119] FIG. 6 is an explanatory diagram illustrating an exemplary case in which the command operation is determined as the operation input method. In the example illustrated in FIG. 6, the virtual objects V11 to V14 are arranged in the air. Therefore, the input method determination unit 124 determines the command operation as the operation input method. In the example illustrated in FIG. 6, the user U performs operation input by speaking a voice command “AA” indicated by the virtual object V11.

[0120] <1-5. Modifications>

[0121] The first embodiment of the present disclosure has been described above. In the following, some modifications of the first embodiment will be described. Meanwhile, the modifications described below may independently be applied to the first embodiment, or may be applied to the first embodiment in a combined manner. Further, each of the modifications may be applied in place of the configurations described in the first embodiment, or may be applied in addition to the configurations described in the first embodiment.

[0122] (Modification 1-1)

[0123] If a plurality of virtual objects are present, the input method determination unit 124 may determine the operation input method in accordance with a density of the virtual objects. For example, if the density of the virtual objects is high and the objects are arranged densely, it is likely that operation that is not intended by a user may be performed through the touch operation and the pointing operation; therefore, the input method determination unit 124 may determine the command operation as the operation input method. In contrast, if the density of the virtual objects is low, the input method determination unit 124 may determine the touch operation or the pointing operation as the operation input method.
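
A hedged sketch of the density-based determination described in this modification: density is approximated here by the minimum pairwise spacing between virtual objects, and the 5 cm threshold is an illustrative assumption rather than a disclosed value.

    import math
    from itertools import combinations
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    def choose_method_by_density(object_positions: List[Vec3],
                                 dense_spacing_m: float = 0.05,
                                 default_method: str = "touch") -> str:
        """Fall back to command operation when virtual objects are packed too densely."""
        if len(object_positions) < 2:
            return default_method
        min_spacing = min(math.dist(a, b) for a, b in combinations(object_positions, 2))
        return "command" if min_spacing < dense_spacing_m else default_method

    # Illustrative usage: tightly packed objects lead to command operation.
    if __name__ == "__main__":
        dense = [(0.00, 0.75, 0.40), (0.03, 0.75, 0.40), (0.06, 0.75, 0.40)]
        sparse = [(0.00, 0.75, 0.40), (0.20, 0.75, 0.40)]
        print(choose_method_by_density(dense))   # command
        print(choose_method_by_density(sparse))  # touch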

[0124] (Modification 1-2)

[0125] The input method determination unit 124 may determine whether a moving body, such as a person, is present in a surrounding area on the basis of the recognition result on the surrounding conditions obtained by the recognition unit 120, and determine the operation input method on the basis of the previous determination. If the moving body is present around the user, it is likely that the user’s line of sight may follow the moving body or the pointing operation may be disturbed due to blocking by the moving body or the like; therefore, the input method determination unit 124 may determine the command operation as the operation input method.

[0126] (Modification 1-3)

[0127] Further, the example in which the arrangement control unit 122 controls the arrangement of the virtual object in the real space on the basis of the setting for the arrangement of the virtual object determined in advance has been described above, but embodiments are not limited to this example.

[0128] The arrangement control unit 122 may control the arrangement of the virtual object in the real space on the basis of the operation input method determined by the input method determination unit 124.

[0129] For example, the arrangement control unit 122 may control an interval between virtual objects in accordance with the operation input method. For example, the touch operation allows operation input to be performed with higher accuracy than in the pointing operation, and therefore, if the touch operation is determined as the operation input method, the interval between the virtual objects may be reduced as compared to a case in which the pointing operation is determined as the operation input method. Further, the command operation is less likely to be influenced by the interval between the virtual objects; therefore, if the command operation is determined as the operation input method, it may be possible to further reduce the interval between the virtual objects, and, for example, the virtual objects may come into contact with each other.
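
The interval control described above can be summarized by a lookup such as the following; the concrete spacing values are illustrative assumptions, and only their ordering (command ≤ touch ≤ pointing) follows the description.

    from typing import Dict

    # Assumed interval (in metres) left between adjacent virtual objects per method.
    SPACING_BY_METHOD_M: Dict[str, float] = {
        "pointing": 0.10,  # coarser input accuracy, keep objects further apart
        "touch": 0.04,     # finer accuracy allows a smaller interval
        "command": 0.0,    # interval barely matters; objects may even contact each other
    }

    def spacing_for(method: str) -> float:
        """Return the assumed interval between adjacent virtual objects for a method."""
        return SPACING_BY_METHOD_M[method]

    if __name__ == "__main__":
        for m in ("pointing", "touch", "command"):
            print(m, spacing_for(m))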

[0130] Furthermore, the arrangement control unit 122 may control an arrangement direction of virtual objects in accordance with the operation input method. For example, if the virtual objects are arranged in a vertical direction with respect to a user, it may be difficult to perform the touch operation and the pointing operation. Therefore, if the touch operation or the pointing operation is determined as the operation input method, the arrangement control unit 122 may control arrangement such that the virtual objects are arranged in a horizontal direction with respect to the user. Moreover, the command operation is less likely to be influenced by the arrangement direction of the virtual objects; therefore, if the command operation is determined as the operation input method, the virtual objects may be arranged in the vertical direction or may be arranged in the horizontal direction. For example, if the command operation is determined as the operation input method, the arrangement control unit 122 may select, as the arrangement direction, a direction in which the virtual objects can be displayed in a more compact manner.
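
One way the arrangement direction could be selected in line with this paragraph is sketched below; comparing horizontal and vertical extents to decide which layout is "more compact" is an illustrative assumption.

    def choose_arrangement_direction(method: str,
                                     horizontal_extent_m: float,
                                     vertical_extent_m: float) -> str:
        """Pick the direction in which to line up the virtual objects.

        Touch and pointing prefer a horizontal arrangement in front of the user;
        command operation is direction-agnostic, so the more compact extent wins.
        """
        if method in ("touch", "pointing"):
            return "horizontal"
        return "horizontal" if horizontal_extent_m <= vertical_extent_m else "vertical"

    # Illustrative usage: with command operation, the more compact (vertical) layout is chosen.
    if __name__ == "__main__":
        print(choose_arrangement_direction("touch", 0.8, 0.3))    # horizontal
        print(choose_arrangement_direction("command", 0.8, 0.3))  # vertical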

……
……
……
