

Patent: Information processing device, information processing method, information processing program


Publication Number: 20210158623

Publication Date: 2021-05-27

Applicant: Sony

Abstract

An information processing device acquires first information from a detection device attached to a real object, acquires second information from a display device, places a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space, and transmits information on the virtual space to the display device.

Claims

  1. An information processing device that acquires first information from a detection device attached to a real object, acquires second information from a display device, places a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space, and transmits information on the virtual space to the display device.

  2. The information processing device according to claim 1, wherein the first information is state information of the real object, and the virtual object is placed in the virtual space when the real object is in a first state.

  3. The information processing device according to claim 1, wherein, in a state in which the virtual object is placed in the virtual space, the virtual object is not placed in the virtual space when the real object is in a second state.

  4. The information processing device according to claim 1, wherein the first information is position information of the real object, and the virtual object is placed at a position within the virtual space corresponding to a position of the detection device.

  5. The information processing device according to claim 1, wherein the first information is identification information of the detection device, and the virtual object associated with the identification information in advance is placed in the virtual space.

  6. The information processing device according to claim 1, wherein the first information is attitude information of the real object, and the virtual object is placed in the virtual space in an attitude corresponding to the attitude information.

  7. The information processing device according to claim 1, wherein the second information is position information of the display device, and the virtual camera is placed at a position within the virtual space corresponding to the position information.

  8. The information processing device according to claim 1, wherein the second information is attitude information of the display device, and the virtual camera is placed in the virtual space in an attitude corresponding to the attitude information.

  9. The information processing device according to claim 1, wherein the second information is visual field information of the display device, and a visual field of the virtual camera is set according to the visual field information.

  10. The information processing device according to claim 9, wherein the information on the virtual space is information on an inside of the visual field of the virtual camera set according to the visual field information of the display device.

  11. The information processing device according to claim 1, wherein the information on the virtual space is information on an inside of a predetermined range in the virtual space.

  12. The information processing device according to claim 11, wherein the predetermined range is determined in advance in the display device, and is a range approximately centered on an origin of the visual field.

  13. An information processing method comprising: acquiring first information from a detection device attached to a real object; acquiring second information from a display device; placing a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space; and transmitting information on the virtual space to the display device.

  14. An information processing program that causes a computer to execute an information processing method including acquiring first information from a detection device attached to a real object; acquiring second information from a display device; placing a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space; and transmitting information on the virtual space to the display device.

Description

TECHNICAL FIELD

[0001] The present technique relates to an information processing device, an information processing method, and an information processing program.

BACKGROUND ART

[0002] In recent years, a technique called augmented reality (AR), which virtually augments the world in front of the user's eyes by overlaying a virtual object such as CG (Computer Graphics) and/or visual information on a real-world landscape, has attracted attention, and various proposals using AR have been made (PTL 1).

CITATION LIST

Patent Literature

[PTL 1]

JP 2012-155654A

SUMMARY

Technical Problem

[0003] In AR, a mark called a “marker” is usually used. When the user recognizes the position of the marker and then captures an image of the marker with a camera of an AR device such as a smartphone, a virtual object and/or visual information are overlaid and displayed on the live image captured by the camera.

[0004] In this method, the virtual object and/or the visual information are not displayed on the AR device unless the image of the marker is captured by the camera of the AR device, so that there is a problem that the use environment and the use application are limited.

[0005] The present technique has been made in view of such problems, and an object thereof is to provide an information processing device, an information processing method, and an information processing program capable of displaying a virtual object without recognizing the position of a mark such as a marker.

Solution to Problem

[0006] In order to solve the above-described problem, a first technique is an information processing device that acquires first information from a detection device attached to a real object, acquires second information from a display device, places a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space, and transmits information on the virtual space to the display device.

[0007] Further, a second technique is an information processing method including acquiring first information from a detection device attached to a real object, acquiring second information from a display device, placing a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space, and transmitting information on the virtual space to the display device.

[0008] Further, a third technique is an information processing program that causes a computer to execute an information processing method including acquiring first information from a detection device attached to a real object, acquiring second information from a display device, placing a virtual object corresponding to the first information and a virtual camera corresponding to the second information in a virtual space, and transmitting information on the virtual space to the display device.

Advantageous Effects of Invention

[0009] According to the present technique, it is possible to display a virtual object without recognizing the position of a mark such as a marker. Note that the advantageous effect described here is not necessarily limited, and any advantageous effects described in the description may be enjoyed.

BRIEF DESCRIPTION OF DRAWINGS

[0010] FIG. 1 is a block diagram illustrating a configuration of an information processing system according to an embodiment of the present technique.

[0011] FIG. 2A is a block diagram illustrating a configuration of a detection device, and FIG. 2B is a block diagram illustrating a configuration of a display device.

[0012] FIG. 3 is an explanatory diagram of a visual field and a peripheral range.

[0013] FIG. 4 is a block diagram illustrating a configuration of an information processing device.

[0014] FIG. 5 is an explanatory diagram of arrangement of a virtual object and a virtual camera in a virtual space.

[0015] FIG. 6 is an explanatory diagram of arrangement position and arrangement attitude of a virtual object in a virtual space.

[0016] FIG. 7 is an explanatory diagram of position and attitude of the display device, and position and attitude of the virtual camera.

[0017] FIG. 8A illustrates a stand signboard serving as a real object in a first specific embodiment, and FIG. 8B is a display example of a display device in the first specific embodiment.

[0018] FIG. 9A is a situation explanatory view of a second specific embodiment, and FIG. 9B is a display example of a display device in the second specific embodiment.

[0019] FIG. 10A is a second display example of the display device in the second specific embodiment, and FIG. 10B is a third display example of the display device in the second specific embodiment.

[0020] FIG. 11 is a schematic explanatory diagram of a third specific embodiment.

[0021] FIG. 12 is a display example of a display device in the third specific embodiment.

[0022] FIG. 13 is a diagram illustrating a modified example of the third specific embodiment.

[0023] FIG. 14A is a situation explanatory view of a fourth specific embodiment, and FIG. 14B is a display example of a display device in the fourth specific embodiment.

[0024] FIG. 15A is a situation explanatory view of a fifth specific embodiment, and FIG. 15B is a display example of a display device in the fifth specific embodiment.

DESCRIPTION OF EMBODIMENTS

[0025] Hereinafter, embodiments of the present technique will be described with reference to the drawings. Note that the description will be given in the following order.

<1. Embodiments>

[1-1. Configuration of Information Processing System]

[1-2. Configuration of Detection Device]

[1-3. Configuration of Display Device]

[1-4. Configuration of Information Processing Device]

<2. Specific Embodiments>

[2-1. First Specific Embodiment]

[2-2. Second Specific Embodiment]

[2-3. Third Specific Embodiment]

[2-4. Fourth Specific Embodiment]

[2-5. Fifth Specific Embodiment]

[2-6. Other Specific Embodiments]

<3. Modified Examples>

1. EMBODIMENTS

1-1. Configuration of Information Processing System

[0026] An information processing system 10 includes a detection device 100, a display device 200, and an information processing device 300, in which the detection device 100 and the information processing device 300 can communicate with each other via a network or the like, and the information processing device 300 and the display device 200 can communicate with each other via a network or the like.

[0027] The detection device 100 is attached to a real object 1000 in the real world, for example, a signboard, a sign, a fence, or the like. Attachment of the detection device 100 to the real object 1000 is performed by a business operator who provides the information processing system 10, a business operator who uses the information processing system 10 to provide a service to customers, a user who wants to show a CG video to another user with the information processing system 10, or the like.

[0028] The detection device 100 transmits to the information processing device 300 identification information for identifying the detection device 100 itself, and position information, attitude information, state information, and time information of the attached real object 1000. These pieces of information transmitted from the detection device 100 to the information processing device 300 correspond to first information recited in the claims. The time information is used for synchronization between the detection device 100 and the information processing device 300, confirmation of display timing, and the like. Details of the other pieces of information will be described below.

[0029] The display device 200 is an AR device or a VR device having at least a video display function, for example, a smartphone or a head-mounted display, and is used by a user who uses the information processing system 10.

[0030] The display device 200 transmits to the information processing device 300 identification information of the display device 200 itself, and position information, attitude information, visual field information, peripheral range information, and time information of the display device 200. These pieces of information transmitted from the display device 200 to the information processing device 300 correspond to second information recited in the claims. The time information is used for synchronization between the display device 200 and the information processing device 300, confirmation of display timing, and the like. Details of the other pieces of information will be described below.
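
To make the data flow concrete, the first and second information might be modeled as simple records, as in the following sketch. All type and field names (DetectionReport, DisplayReport, and so on) are illustrative assumptions; the patent only enumerates the kinds of information carried.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DetectionReport:
    """First information sent by the detection device 100 (names assumed)."""
    device_id: str                          # identification information
    position: Tuple[float, float, float]    # e.g., latitude, longitude, altitude
    attitude: Tuple[float, float, float]    # e.g., roll, pitch, yaw of the real object
    in_use: bool                            # state: True = first state, False = second state
    timestamp: float                        # time information, used for synchronization

@dataclass
class DisplayReport:
    """Second information sent by the display device 200 (names assumed)."""
    device_id: str                          # identification information
    position: Tuple[float, float, float]
    attitude: Tuple[float, float, float]
    h_viewing_angle: float                  # visual field information
    v_viewing_angle: float
    visible_limit: float
    peripheral_range: float                 # peripheral range information
    timestamp: float
```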

[0031] The information processing device 300 forms a virtual space, and places a virtual object 2000 in the virtual space according to the position information and attitude information transmitted from the detection device 100. The virtual object 2000 is CG of an object or living thing existing in the real world, or CG of anything having any shape, such as animated characters, letters, numbers, diagrams, images, and videos.

[0032] Further, the information processing device 300 places a virtual camera 3000 that virtually captures an image in the virtual space according to the position information and attitude information of the display device 200 transmitted from the display device 200. Then, information on the inside of the capture range of the virtual camera 3000 in the virtual space is transmitted to the display device 200.

[0033] The display device 200 renders and displays a CG video based on the information on the virtual space (hereinafter referred to as virtual space information, which will be described in detail below) transmitted from the information processing device 300. In a case where the display device 200 is an AR device, the CG video is overlaid and displayed on a video captured by a camera included in the AR device. Further, in a case where the display device 200 is a VR device, the created CG video and other CG videos as needed are synthesized and displayed. Further, in a case where the display device 200 is a transmissive AR device called smart glasses, the created CG video is displayed on its display unit.
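
As a rough sketch of this device-dependent display step, the branch might look as follows; the device_type strings and the overlay/composite helpers are assumptions standing in for real compositing pipelines.

```python
def overlay(camera_frame, cg_video):
    # Stand-in for AR compositing: draw the CG video over the live camera image.
    return ("overlay", camera_frame, cg_video)

def composite(cg_video, *other_cg):
    # Stand-in for VR synthesis with other CG videos as needed.
    return ("composite", cg_video, *other_cg)

def present(cg_video, device_type, camera_frame=None):
    if device_type == "ar":          # AR device: overlay on the camera video
        return overlay(camera_frame, cg_video)
    if device_type == "vr":          # VR device: synthesize with other CG
        return composite(cg_video)
    return cg_video                  # transmissive smart glasses: display as-is
```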

1-2. Configuration of Detection Device

[0034] FIG. 2A is a block diagram illustrating a configuration of the detection device 100. The detection device 100 includes a position detection unit 101, an attitude detection unit 102, a state detection unit 103, and a transmission unit 104.

[0035] The position detection unit 101 detects the current position of the detection device 100 itself as position information by, for example, GPS (Global Positioning System). Since the detection device 100 is attached to the real object 1000, this position information can be said to represent the current position of the real object 1000. In addition to a point represented by coordinates (X, Y), the position information may include an altitude (Z) and point information suitable for use (building name, store name, floor number, road name, intersection name, address, map code, distance mark (km post), etc.).

[0036] Note that the method of detecting the position is not limited to GPS, and GNSS (Global Navigation Satellite System), INS (Inertial Navigation System), a beacon, Wi-Fi, a geomagnetic sensor, a depth camera, an infrared sensor, an ultrasonic sensor, a barometer, a radio wave detection device, or the like may be used, and these may be used in combination.

[0037] The attitude detection unit 102 detects the attitude of the detection device 100, thereby detecting the attitude of the real object 1000 to which it is attached. The attitude is, for example, the orientation of the real object 1000, or whether the real object 1000 is upright, tilted, or lying sideways.

[0038] The state detection unit 103 detects a state of the real object 1000 to which the detection device 100 is attached. The state detection unit 103 detects at least a first state of the real object 1000 and a second state in which the first state is released. The first state and the second state of the real object 1000 referred to here are whether or not the real object 1000 is in a use state. The first state refers to a state in which the real object 1000 is in use, and the second state refers to a state in which the real object 1000 is not in use.

[0039] For example, for the real object 1000 being a stand signboard of a store, a state in which the real object 1000 is installed upright on the ground or on a stand is referred to as the first state in which it is in use, and a state in which the real object 1000 is placed sideways is referred to as the second state in which it is not in use. Further, for the real object 1000 being a hanging signboard of a store, a state in which the real object 1000 is hung on a wall is referred to as the first state in which it is in use, and a state in which the real object 1000 is placed sideways is referred to as the second state in which it is not in use. Furthermore, for the real object 1000 being a free standing fence, a state in which the real object 1000 is installed upright on the ground or on a stand is referred to as the first state in which it is in use, and a state in which the real object 1000 is placed sideways is referred to as the second state in which it is not in use. In this way, the first state and the second state differ depending on what the real object 1000 is.

[0040] The first state and the second state of the real object 1000 detected by the detection device 100 determine whether or not the information processing device 300 causes the virtual object 2000 to appear in the virtual space. When the real object 1000 is in the first state, the virtual object 2000 is placed in the virtual space and is displayed on the display device 200. Then, when the real object 1000 enters the second state while the virtual object 2000 is placed in the virtual space, the virtual object 2000 is deleted (no longer placed) from the virtual space. Which physical states of the real object 1000 constitute the first state and the second state, and that the first state and the second state correspond to the placement and deletion of the virtual object 2000, respectively, are determined in advance and registered in the detection device 100 and the information processing device 300.
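
In code, the server-side reaction to the state information might reduce to a single branch, as in this sketch; on_detection_report and the place_object/remove_object helpers are hypothetical names (a VirtualSpace sketch with these methods appears in section 1-4).

```python
def on_detection_report(report, virtual_space):
    # First state: the real object is in use, so the virtual object appears.
    if report.in_use:
        virtual_space.place_object(report.device_id, report.position, report.attitude)
    # Second state: the first state is released, so the virtual object is deleted.
    else:
        virtual_space.remove_object(report.device_id)
```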

[0041] Such detection of the state of the real object 1000 may be performed automatically by static detection and attitude detection using an inertial measurement unit (IMU: Inertial Measurement Unit) or the like, or may be performed by a button-shaped sensor or the like that is pressed down by contact with a supporting surface when the real object 1000 is installed.

[0042] The transmission unit 104 is a communication module that communicates with the information processing device 300 to transmit the first information, which includes the identification information, the position information, the attitude information, the state information, and the time information, to the information processing device 300. Note that it is not always necessary to transmit all the pieces of information as the first information, and only the necessary piece or pieces of information may be transmitted. If the distance between the detection device 100 and the information processing device 300 is long, communication with the information processing device 300 may be performed over a network such as the Internet or a wireless LAN such as Wi-Fi; if the distance is short, it may be performed by wireless communication such as Bluetooth (registered trademark) or ZigBee, or by wired communication such as USB (Universal Serial Bus) communication.

[0043] The detection device 100 continues to transmit the first information to the information processing device 300 at predetermined time intervals as long as the real object 1000 is in the first state. Then, when the real object 1000 enters the second state, the transmission of the first information ends.
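
A minimal sketch of this transmission loop, assuming a hypothetical read_sensors() that returns a DetectionReport and a send() that forwards it to the information processing device 300; the interval value is an assumption, since the patent does not specify it.

```python
import time

SEND_INTERVAL_S = 1.0   # "predetermined time interval"; the actual value is not specified

def transmit_loop(read_sensors, send):
    while True:
        report = read_sensors()
        send(report)            # also sends the final second-state report,
        if not report.in_use:   # so the server can delete the virtual object
            break
        time.sleep(SEND_INTERVAL_S)
```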

1-3. Configuration of Display Device

[0044] FIG. 2B is a block diagram illustrating a configuration of the display device 200. The display device 200 includes a position detection unit 201, an attitude detection unit 202, a visual field information acquisition unit 203, a peripheral range information acquisition unit 204, a transmission unit 205, a reception unit 206, a rendering processing unit 207, and a display unit 208. The display device 200 is a smartphone serving as an AR device having a camera function and an image display function, a head-mounted display serving as a VR device, or the like.

[0045] The position detection unit 201 and the attitude detection unit 202 are similar to those included in the detection device 100, and detect the position and attitude of the display device 200, respectively.

[0046] The visual field information acquisition unit 203 acquires a horizontal viewing angle, a vertical viewing angle, and a visible limit distance of display on the display unit 208. As illustrated in FIG. 3A, the visible limit distance is the limit distance that can be seen from the position of the user's viewpoint (the origin of the visual field). Further, the horizontal viewing angle is the horizontal extent at the visible limit distance, and the vertical viewing angle is the vertical extent at the visible limit distance. The horizontal viewing angle and the vertical viewing angle define the viewing range, that is, the range that the user can see.

[0047] In a case where the display device 200 is an AR device having a camera function, the horizontal viewing angle, the vertical viewing angle, and the visible limit distance, which are the visual field information, are determined by the camera settings. Further, in a case where the display device 200 is a VR device, the horizontal viewing angle, the vertical viewing angle, and the visible limit distance are set to predetermined values in advance depending on that device. As illustrated in FIG. 3B, the horizontal viewing angle, the vertical viewing angle, and the visible limit distance of the virtual camera 3000 placed in the virtual space are set to be the same as those of display on the display unit 208.
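
A sketch of these definitions, and of mirroring them onto the virtual camera as in FIG. 3B, follows. Per the patent's wording, the viewing angles are treated here as extents measured at the visible limit distance; VisualField and the helper names are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class VisualField:
    h_extent: float   # horizontal extent at the visible limit distance
    v_extent: float   # vertical extent at the visible limit distance
    limit: float      # visible limit distance from the origin of the visual field

def half_angles(field: VisualField):
    # Convert the extents at the limit distance into conventional half-angles,
    # which are convenient for later in-view tests.
    return (math.atan2(field.h_extent / 2, field.limit),
            math.atan2(field.v_extent / 2, field.limit))

def mirror_to_virtual_camera(camera, field: VisualField):
    # FIG. 3B: the virtual camera 3000 copies the display's visual field.
    camera.h_extent = field.h_extent
    camera.v_extent = field.v_extent
    camera.limit = field.limit
```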

[0048] The peripheral range information acquisition unit 204 acquires information indicating a peripheral range. The peripheral range is a range of a predetermined size approximately centered on the position of the viewpoint of the user who sees a video on the display device 200 (the origin of the visual field), as illustrated in FIG. 3A. The peripheral range is defined in advance by the provider of a service using the information processing system 10, or by the user. The peripheral range information corresponds to information on a predetermined range in the virtual space, recited in the claims.

[0049] As illustrated in FIG. 3B, the display device 200 receives from the information processing device 300 information on the region of the virtual space corresponding to the peripheral range, approximately centered on the virtual camera 3000 placed in the virtual space formed by the information processing device 300.

[0050] The visible limit distance and the peripheral range are distances in the virtual space. Distances in the virtual space may be defined to be the same as distances in the real world, so that 1 m in the virtual space equals 1 m in the real world. However, they do not have to be the same; in that case, a mapping such as "one meter in the virtual space corresponds to ten meters in the real world" needs to be defined. Further, distances in the virtual space may be defined in pixels; in that case, a mapping such as "one pixel in the virtual space corresponds to one centimeter in the real world" needs to be defined.
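
The two example mappings in the text reduce to constant scale factors, as in this minimal sketch; the constant names are assumptions, and the values are the patent's own examples.

```python
REAL_M_PER_VIRTUAL_M = 10.0    # "one meter in the virtual space corresponds to ten meters in the real world"
REAL_M_PER_VIRTUAL_PX = 0.01   # "one pixel in the virtual space corresponds to one centimeter in the real world"

def virtual_m_to_real_m(virtual_m: float) -> float:
    return virtual_m * REAL_M_PER_VIRTUAL_M

def virtual_px_to_real_m(virtual_px: float) -> float:
    return virtual_px * REAL_M_PER_VIRTUAL_PX
```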

[0051] The transmission unit 205 is a communication module that communicates with the information processing device 300 to transmit position information, attitude information, visual field information, peripheral range information, and time information, to the information processing device 300. These pieces of information transmitted from the display device 200 to the information processing device 300 correspond to second information recited in the claims. Note that it is not always necessary to transmit all the pieces of information as the second information, and only a piece or pieces of necessary information may be transmitted.

[0052] If the distance between the display device 200 and the information processing device 300 is long, communication with the information processing device 300 may be performed over a network such as the Internet or a wireless LAN such as Wi-Fi; if the distance is short, it may be performed by wireless communication such as Bluetooth (registered trademark) or ZigBee, or by wired communication such as USB communication.

[0053] The reception unit 206 is a communication module for communicating with the information processing device 300 to receive the virtual space information. The received virtual space information is supplied to the rendering processing unit 207.

[0054] The virtual space information includes the visual field information of the virtual camera 3000, determined from the horizontal viewing angle, vertical viewing angle, and visible limit distance of the virtual camera 3000, and information on the inside of the peripheral range. The visual field information of the virtual camera 3000 indicates the range that is presented to the user as a video on the display device 200.

[0055] The rendering processing unit 207 performs rendering processing based on the virtual space information received from the information processing device 300, thereby creating a CG video to be displayed on the display unit 208 of the display device 200.

[0056] The display unit 208 is a display device including, for example, an LCD (Liquid Crystal Display), a PDP (Plasma Display Panel), or an organic EL (Electro Luminescence) panel. The display unit 208 displays the CG video created by the rendering processing unit 207, a user interface serving as an AR device or a VR device, and the like.

[0057] When the display device 200 enters a mode in which the information processing system 10 is used (e.g., a service application using the information processing system 10 is activated), the display device 200 continuously transmits the second information, which includes the identification information, the position information, the attitude information, the visual field information, the peripheral range information, and the time information, to the information processing device 300 at predetermined time intervals. Then, the display device 200 ends the transmission of the second information when the mode of using the information processing system 10 ends.

1-4. Configuration of Information Processing Device

[0058] FIG. 4 is a block diagram illustrating a configuration of the information processing device 300. The information processing device 300 includes a first reception unit 310, a second reception unit 320, a 3DCG modeling unit 330, and a transmission unit 340. The 3DCG modeling unit 330 includes a virtual object storage unit 331, a virtual camera control unit 332, and a virtual space modeling unit 333.

[0059] The first reception unit 310 is a communication module for communicating with the detection device 100 to receive the first information transmitted from the detection device 100. The first information from the detection device 100 is supplied to the 3DCG modeling unit 330.

[0060] The second reception unit 320 is a communication module for communicating with the display device 200 to receive the second information transmitted from the display device 200. The second information from the display device 200 is supplied to the 3DCG modeling unit 330.

[0061] The 3DCG modeling unit 330 includes a DSP (Digital Signal Processor) or a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like. The ROM stores programs to be loaded and run by the CPU. The RAM is used as a work memory for the CPU. The CPU executes various kinds of processing in accordance with the programs stored in the ROM and issues commands, thereby functioning as the 3DCG modeling unit 330.

[0062] The virtual object storage unit 331 stores data (shape, color, size, etc.) that defines each virtual object 2000 created in advance. If data on a plurality of virtual objects is stored in the virtual object storage unit 331, each virtual object 2000 has a unique ID. Associating this ID with the identification information of the detection device 100 makes it possible to place the virtual object 2000 corresponding to the detection device 100 in the virtual space.
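
A minimal sketch of the storage unit and the advance association, assuming a plain dictionary layout; all keys and file names are illustrative.

```python
# Virtual object ID -> defining data (shape, color, size, etc.); contents assumed.
VIRTUAL_OBJECTS = {
    "obj-001": {"shape": "signboard_character.glb", "color": "red", "size": 1.0},
    "obj-002": {"shape": "mascot.glb", "color": "blue", "size": 0.5},
}

# Registered in advance: detection device identification information -> object ID.
DEVICE_TO_OBJECT = {
    "detector-A": "obj-001",
    "detector-B": "obj-002",
}

def lookup_virtual_object(detector_id: str) -> dict:
    return VIRTUAL_OBJECTS[DEVICE_TO_OBJECT[detector_id]]
```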

[0063] The virtual camera control unit 332 performs controls such as changing or adjusting the position, attitude, and viewing range of the virtual camera 3000 in the virtual space. Note that in a case where a plurality of virtual cameras 3000 are used, it is necessary to give a unique ID to each virtual camera 3000. Associating this ID with the identification information of the display device 200 makes it possible to place the virtual camera 3000 corresponding to each display device 200 in the virtual space.

[0064] The virtual space modeling unit 333 performs modeling processing of the virtual space. When the state information included in the first information supplied from the detection device 100 indicates the first state, which corresponds to the placement of the virtual object 2000, the virtual space modeling unit 333 reads from the virtual object storage unit 331 the virtual object 2000 having the ID corresponding to the identification information of the detection device 100, and places it in the virtual space as illustrated in FIG. 5. At that time, the virtual object 2000 is placed at a position in the virtual space corresponding to the position information transmitted from the detection device 100.

[0065] The position in the virtual space corresponding to the position information may be a position having the same coordinates in the virtual space as the coordinates of the position of the detection device 100 (the position of the real object 1000), or may be a position at a predetermined distance from the position of the detection device 100 (the position of the real object 1000) serving as a reference. Where the virtual object 2000 is placed relative to the position information may be defined in advance; if it is not defined, the virtual object 2000 may be placed, by default, at the position indicated by the position information. Further, the virtual object 2000 is placed in the virtual space in an attitude corresponding to the attitude information transmitted from the detection device 100.

[0066] When receiving the identification information, the position information, and the attitude information from the display device 200, the virtual space modeling unit 333 further places the virtual camera 3000 having the ID corresponding to the identification information in the virtual space. At that time, the virtual camera 3000 is placed in a position in the virtual space corresponding to the position information transmitted from the display device 200. Similar to the placement of the virtual object 2000 described above, the virtual camera 3000 may be placed in a position having the same coordinates in the virtual space as the coordinates of the display device 200, or may be placed in a position at a predetermined distance from the display device 200 serving as a reference. Further, the virtual camera 3000 is placed in the virtual space in an attitude corresponding to the attitude information from the display device 200.
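
Pulling paragraphs [0064] to [0066] together, the placement logic might be sketched as follows. VirtualSpace and its methods are hypothetical; an offset of zero corresponds to placing at the same coordinates as reported, and a nonzero offset to placing at a predetermined distance from the reference.

```python
class VirtualSpace:
    def __init__(self):
        self.objects = {}   # object ID -> (position, attitude)
        self.cameras = {}   # camera ID -> (position, attitude)

    def place_object(self, obj_id, position, attitude, offset=(0.0, 0.0, 0.0)):
        pos = tuple(p + o for p, o in zip(position, offset))
        self.objects[obj_id] = (pos, attitude)

    def place_camera(self, cam_id, position, attitude, offset=(0.0, 0.0, 0.0)):
        pos = tuple(p + o for p, o in zip(position, offset))
        self.cameras[cam_id] = (pos, attitude)

    def remove_object(self, obj_id):
        self.objects.pop(obj_id, None)
```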

[0067] As illustrated in FIG. 6A, the virtual space is a 3D stereoscopic space model designed in advance. A world coordinate system is defined in the virtual space, so that a position and attitude in the space can be uniquely expressed using that system. Further, the virtual space may include settings that affect the entire environment, such as definitions of the ambient light, as well as the sky and floor.

[0068] The virtual object 2000 is object data of a 3D model designed in advance, and unique identification information (ID) is given to each virtual object 2000. As illustrated in FIG. 6B, a unique local coordinate system is defined for each virtual object 2000, and the position of the virtual object 2000 is represented as a position from the base point of the local coordinate system.

[0069] As illustrated in FIG. 6C, when the virtual object 2000 is placed in the virtual space, the position and attitude of the local coordinate system containing the virtual object 2000 change based on the received position information and attitude information. When the attitude information is updated, the virtual object 2000 is rotated about the base point of its local coordinate system. When the position information is updated, the base point of the local coordinate system is moved to the corresponding coordinates on the world coordinate system of the virtual space.
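
As a worked illustration of paragraph [0069], an attitude update rotates each vertex of the object about the base point of its local coordinate system, and a position update relocates that base point in world coordinates. The sketch below restricts rotation to the vertical axis for brevity; nothing in the patent limits it this way.

```python
import math

def rotate_about_base(vertex, base, yaw_rad):
    # Right-handed rotation about the vertical (Y) axis, centered on the base point.
    x, y, z = (v - b for v, b in zip(vertex, base))
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (base[0] + c * x + s * z,
            base[1] + y,
            base[2] - s * x + c * z)

def move_base(base, new_world_position):
    # A position update moves the local origin to the new world coordinates.
    return new_world_position
```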

[0070] Note that if the created CG video needs to be displayed at actual size, a larger range must be displayed on a large screen and a smaller range on a small screen, even when the same virtual object 2000 is displayed, as illustrated in FIG. 6D. This viewing range can be specified by the visual field information transmitted from the display device 200 to the information processing device 300. The display device 200 can transmit visual field information appropriate to the screen size of its display unit and the characteristics of its camera to the information processing device 300, thereby adjusting the displayed virtual object 2000 to actual size.

[0071] Associating the identification information of the display device 200 with the ID of the virtual camera 3000 in advance makes it possible to place, in a case where a plurality of display devices 200 are used at the same time, a plurality of virtual cameras 3000 corresponding to the plurality of display devices 200, respectively, in the virtual space.

[0072] Furthermore, when receiving the visual field information from the display device 200, the virtual camera control unit 332 adjusts the horizontal viewing angle, the vertical viewing angle, and the visible limit distance of the virtual camera 3000 according to the visual field information. Also, when receiving the peripheral range information from the display device 200, the virtual camera control unit 332 sets the peripheral range preset in the display device 200 in the virtual space.

[0073] The display device 200 constantly transmits the position information and the attitude information to the information processing device 300 at predetermined intervals, and the virtual camera control unit 332 changes the position, orientation, and attitude of the virtual camera 3000 in the virtual space according to changes of the position, orientation, and attitude of the display device 200.

[0074] When the virtual object 2000 and the virtual camera 3000 are placed in the virtual space, the 3DCG modeling unit 330 provides to the transmission unit 340 the virtual space information, which is information on the inside of the visual field of the virtual camera 3000 in the virtual space specified by the horizontal viewing angle, the vertical viewing angle, and the visible limit distance, and information on the inside of the peripheral range in the virtual space.
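
A hedged sketch of assembling the virtual space information: objects within the virtual camera's visible limit distance or within the peripheral range around it are selected. The test below uses distance only; the real selection would also involve the viewing angles, which the patent does not spell out.

```python
import math

def gather_virtual_space_info(space, cam_id, camera, peripheral_range):
    cam_pos, _ = space.cameras[cam_id]   # space is a VirtualSpace as sketched above
    info = []
    for obj_id, (obj_pos, obj_att) in space.objects.items():
        d = math.dist(cam_pos, obj_pos)
        if d <= camera.limit or d <= peripheral_range:
            info.append((obj_id, obj_pos, obj_att))
    return info
```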

[0075] The transmission unit 340 is a communication module for communicating with the display device 200 to transmit the virtual space information supplied from the 3DCG modeling unit 330 to the display device 200. Note that although the first reception unit 310, the second reception unit 320, and the transmission unit 340 are described as separate units in the block diagram of FIG. 4, one communication module having a transmitting and receiving function may serve as all three.

[0076] When the display device 200 receives the virtual space information from the information processing device 300, the rendering processing unit 207 performs rendering processing based on the virtual space information to create a CG video and display the CG video on the display unit 208. When the position and attitude of the display device 200 in the real world are as illustrated in FIG. 7A, the virtual camera 3000 is placed in the virtual space corresponding to the position and attitude of the display device 200 as illustrated in FIG. 7B. Then, when the virtual object 2000 is within the viewing range of the virtual camera 3000, the virtual object 2000 is displayed on the display unit 208 of the display device 200 as illustrated in FIG. 7C.

[0077] When the position and/or attitude of the display device 200 changes from the state of FIG. 7A as illustrated in FIG. 7D, the position and/or attitude of the virtual camera 3000 in the virtual space also correspondingly changes as illustrated in FIG. 7E. Then, as illustrated in FIG. 7E, when the virtual object 2000 deviates from the viewing range of the virtual camera 3000, the virtual object 2000 is no longer displayed on the display unit 208 of the display device 200 as illustrated in FIG. 7F.

[0078] When the virtual object 2000 enters the viewing range of the virtual camera 3000 again from the state where it deviates from the viewing range as illustrated in FIGS. 7D to 7F, the virtual object 2000 is displayed on the display unit 208 of the display device 200. Accordingly, the user who uses the display device 200 needs to adjust the position and attitude of the display device 200 in order to display the virtual object 2000 on the display unit 208. However, in the present technique, the user needs neither to recognize the position of the detection device 100 nor to capture the detection device 100 with a camera in order to display the virtual object 2000 on the display device 200.

[0079] Note that when the state information indicating that the real object 1000 is in the second state is received from the detection device 100, the 3DCG modeling unit 330 deletes the virtual object 2000 from the virtual space.

[0080] Note that the peripheral range is set as a fixed range in advance, but when updated peripheral range information is received from the display device 200, the virtual camera control unit 332 changes the peripheral range in the virtual space accordingly.

[0081] As described above, the display device 200 creates a CG video by performing the rendering processing based on the virtual space information received from the information processing device 300. Then, in a case where the display device 200 is an AR device, the CG video is overlaid and displayed on a video captured by a camera included in the AR device. Further, in a case where the display device 200 is a VR device, the created CG video and other CG videos as needed are synthesized and displayed. Further, in a case where the display device 200 is a transmissive AR device called smart glasses, the created CG video is displayed on its display unit.

[0082] The detection device 100, the display device 200, and the information processing device 300 are configured as described above. Note that the information processing device 300 is configured to operate in, for example, a server of a company that provides the information processing system 10.

[0083] The information processing device 300 is implemented by a program, and the program may be installed in advance on a processor such as a DSP or on a computer that performs signal processing, or may be distributed by downloading, a storage medium, or the like, to be installed by the user himself/herself. Further, the information processing device 300 may be implemented not only by a program but also by combining a dedicated device, a circuit, or the like with hardware having the functions.

[0084] In the conventional AR technique, the user needs to keep capturing an AR marker with the camera in order to display a created CG video on the AR device, which causes a problem that the virtual object 2000 suddenly disappears when the AR marker deviates from the capture range of the camera. On the other hand, in the present technique, the user does not need to capture the real object 1000 to which the detection device 100 is attached, or even to know the position of the real object 1000, in order to display a created CG video on the display device 200. Therefore, there is no problem that the virtual object 2000 is not displayed and cannot be seen because the real object 1000 to which the detection device 100 is attached cannot be captured by the camera, or that the virtual object 2000 disappears because the camera deviates from the real object 1000 during display.

[0085] In the conventional AR technique, the virtual object 2000 is displayed and appears at the moment when the user changes the orientation of the camera to capture the marker. The surrounding environment, such as a shadow and a sound, that should always be present if the virtual object 2000 existed is absent until the virtual object 2000 appears. On the other hand, in the present technique, the virtual object 2000 exists as long as it is placed in the virtual space, even if it is not visible because it is not displayed on the display device 200. Therefore, it is possible to present the surrounding environment, such as a shadow of the virtual object 2000, to the user even in a state where the virtual object 2000 is not displayed on the display device 200.

……
