Samsung Patent | Method and device for providing augmented reality (AR) service based on viewer environment
Patent: Method and device for providing augmented reality (AR) service based on viewer environment
Publication Number: 20220358728
Publication Date: 2022-11-10
Assignee: Samsung Electronics
Abstract
The disclosure relates to a 5G or 6G communication system for supporting a higher data transmission rate. A method for operating a terminal for an augmented reality (AR) service in a mobile communication system includes generating terminal anchoring metadata based on environment information obtained from at least one sensor included in the terminal, transmitting the terminal anchoring metadata to a server, receiving, from the server, a 3D model generated based on the terminal anchoring metadata, and rendering a virtual object based on the 3D model and the environment information.
Claims
What is claimed is:
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)
This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2021-0060030, filed on May 10, 2021, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
BACKGROUND
1. Field
The disclosure generally relates to a method and device for providing a service using augmented reality (AR) content.
2. Description of the Related Art
5th generation (5G) mobile communication technologies define broad frequency bands such that high transmission rates and new services are possible, and can be implemented not only in “sub 6 GHz” bands such as 3.5 GHz, but also in “above 6 GHz” bands referred to as mmWave, including 28 GHz and 39 GHz. In addition, it has been considered to implement 6th generation (6G) mobile communication technologies (referred to as beyond 5G systems) in terahertz bands (for example, 95 GHz to 3 THz bands) in order to accomplish transmission rates fifty times faster than those of 5G mobile communication technologies and ultra-low latencies one-tenth of those of 5G mobile communication technologies.
At the beginning of the development of 5G mobile communication technologies, in order to support services and to satisfy performance requirements in connection with enhanced mobile broadband (eMBB), ultra reliable low latency communications (URLLC), and massive machine-type communications (mMTC), there has been ongoing standardization regarding beamforming and massive multiple input multiple output (MIMO) for mitigating radio-wave path loss and increasing radio-wave transmission distances in mmWave, supporting numerologies (for example, operating multiple subcarrier spacings) for efficiently utilizing mmWave resources and dynamic operation of slot formats, initial access technologies for supporting multi-beam transmission and broadbands, definition and operation of BWP (bandwidth part), new channel coding methods such as an LDPC (low density parity check) code for large amount of data transmission and a polar code for highly reliable transmission of control information, L2 pre-processing, and network slicing for providing a dedicated network specialized to a specific service.
Currently, there are ongoing discussions regarding improvement and performance enhancement of initial 5G mobile communication technologies in view of services to be supported by 5G mobile communication technologies, and there has been physical layer standardization regarding technologies such as V2X (vehicle-to-everything) for aiding driving determination by autonomous vehicles based on information regarding positions and states of vehicles transmitted by the vehicles and for enhancing user convenience, NR-U (new radio unlicensed) aimed at system operations conforming to various regulation-related requirements in unlicensed bands, NR user equipment (UE) power saving, non-terrestrial network (NTN) which is UE-satellite direct communication for providing coverage in an area in which communication with terrestrial networks is unavailable, and positioning.
Moreover, there has been ongoing standardization in air interface architecture/protocol regarding technologies such as industrial Internet of things (IIoT) for supporting new services through interworking and convergence with other industries, IAB (integrated access and backhaul) for providing a node for network service area expansion by supporting a wireless backhaul link and an access link in an integrated manner, mobility enhancement including conditional handover and DAPS (dual active protocol stack) handover, and two-step random access for simplifying random access procedures (2-step RACH for NR). There also has been ongoing standardization in system architecture/service regarding a 5G baseline architecture (for example, service based architecture or service based interface) for combining network functions virtualization (NFV) and software-defined networking (SDN) technologies, and mobile edge computing (MEC) for receiving services based on UE positions.
As 5G mobile communication systems are commercialized, connected devices that have been exponentially increasing will be connected to communication networks, and it is expected that enhanced functions and performances of 5G mobile communication systems and integrated operations of connected devices will be necessary. To this end, new research is scheduled in connection with eXtended reality (XR) for efficiently supporting AR, VR (virtual reality), MR (mixed reality) and the like, 5G performance improvement and complexity reduction by utilizing artificial intelligence (AI) and machine learning (ML), AI service support, metaverse service support, and drone communication.
Furthermore, such development of 5G mobile communication systems will serve as a basis for developing not only new waveforms for providing coverage in terahertz bands of 6G mobile communication technologies, multi-antenna transmission technologies such as full dimensional MIMO (FD-MIMO), array antennas and large-scale antennas, metamaterial-based lenses and antennas for improving coverage of terahertz band signals, high-dimensional space multiplexing technology using OAM (orbital angular momentum), and RIS (reconfigurable intelligent surface), but also full-duplex technology for increasing frequency efficiency of 6G mobile communication technologies and improving system networks, AI-based communication technology for implementing system optimization by utilizing satellites and AI from the design stage and internalizing end-to-end AI support functions, and next-generation distributed computing technology for implementing services at levels of complexity exceeding the limit of UE operation capability by utilizing ultra-high-performance communication and computing resources.
As communication technology develops, the demand for various devices and eXtended reality (XR) services is increasing. XR may include at least one of VR, AR, or MR. XR services may include, e.g., location-based service applications, XR calls based on XR objects configured in three dimensions (3D), XR streaming, and the like. Here, an “XR call” means a service in which 3D object creation and playback functions are added to a general video and audio call, and “XR streaming” means a service that allows an XR device to receive XR content from a server and play it. AR is a technique that synthesizes virtual objects with the real world seen by the user, thereby providing additional information that is difficult to obtain from the real world alone. An AR service may provide a plurality of virtual objects and may describe the physical relationships between the virtual objects using a scene description technique. A virtual object may be provided as a 3D model, and the AR service viewer may analyze the real world to determine a position (anchor) at which to synthesize the 3D model, and render the 3D model at the identified position.
When the AR service viewer's environment differs from the environment that the AR service provider assumed when creating the 3D model, the 3D model may not be playable on the viewer's AR device, or may be played differently than the AR service provider intended.
SUMMARY
To address the foregoing issues, an embodiment of the disclosure may provide a method and device by which an AR service provider may adapt to the AR service viewer's real world in real-time.
An embodiment of the disclosure may provide an AR service provider that can adapt to the AR service viewer's real world in real-time.
According to an embodiment, a method for operating a terminal for an augmented reality (AR) service in a mobile communication system includes generating terminal anchoring metadata based on environment information obtained from at least one sensor included in the terminal, transmitting the terminal anchoring metadata to a server, receiving, from the server, a 3D model generated based on the terminal anchoring metadata, and rendering a virtual object based on the 3D model and the environment information.
According to an embodiment, a method for operating a server for an augmented reality (AR) service in a mobile communication system includes receiving, from a terminal, terminal anchoring metadata generated based on environment information obtained from at least one sensor included in the terminal, generating a 3D model using the terminal anchoring metadata, and transmitting the 3D model to the terminal. According to an embodiment, a virtual object may be rendered by the terminal based on the 3D model and the environment information.
According to an embodiment, a terminal for an augmented reality (AR) service in a mobile communication system includes a transceiver and a controller configured to generate terminal anchoring metadata based on environment information obtained from at least one sensor included in the terminal, control the transceiver to transmit the terminal anchoring metadata to a server, control the transceiver to receive, from the server, a 3D model generated based on the terminal anchoring metadata, and render a virtual object based on the 3D model and the environment information.
According to an embodiment, a server for an augmented reality (AR) service in a mobile communication system includes a transceiver and a controller configured to control the transceiver to receive, from a terminal, terminal anchoring metadata generated based on environment information obtained from at least one sensor included in the terminal, generate a 3D model using the terminal anchoring metadata, and control the transceiver to transmit the 3D model to the terminal. According to an embodiment, a virtual object may be rendered by the terminal based on the 3D model and the environment information.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates a flowchart of an example of an AR service playback procedure;
FIG. 2 illustrates an example of an AR terminal's functional structure;
FIG. 3 illustrates an example of an AR terminal's functional structure according to an embodiment;
FIG. 4 illustrates a block diagram of an AR server's functional structure according to an embodiment;
FIG. 5 illustrates a flowchart of an AR service playback procedure based on terminal anchoring metadata;
FIG. 6 illustrates a flowchart of an AR service playback procedure based on content anchoring metadata;
FIG. 7 illustrates a block diagram of a structure of a terminal according to an embodiment; and
FIG. 8 illustrates a block diagram of a structure of a server according to an embodiment.
DETAILED DESCRIPTION
Hereinafter, embodiments of the disclosure are described in detail with reference to the accompanying drawings. Detailed descriptions of known functions or configurations may be omitted when they would obscure the subject matter of the disclosure. The terms used herein are defined in consideration of the functions in the disclosure and may be replaced with other terms according to the intention or practice of users or operators. Therefore, the terms should be defined based on the overall content of the disclosure.
Embodiments of the disclosure may also be applicable, with minor changes, to communication systems with a similar technical background without significantly departing from the scope of the disclosure, and this may be possible under the determination of those skilled in the art to which the disclosure pertains. As used herein, the term “communication system” encompasses broadcast systems; however, when a broadcast service is the main service, the communication system may be explicitly referred to as a broadcast system.
Advantages and features of the disclosure, and methods for achieving the same, may be understood through the embodiments described below taken in conjunction with the accompanying drawings. However, the disclosure is not limited to the embodiments disclosed herein, and various changes may be made thereto. The embodiments disclosed herein are provided only to inform one of ordinary skill in the art of the scope of the disclosure. The disclosure is defined only by the appended claims. The same reference numeral denotes the same element throughout the specification.
Methods described below in connection with embodiments are described on the basis of hardware. However, embodiments of the disclosure encompass technology that uses both hardware and software and thus do not exclude software-based methods. The disclosure is not limited to the terms used herein, and other terms having equivalent technical meanings may also be used.
The 3D model for the AR service considered in the disclosure may be defined as continuous volumetric frames that change over time. A volumetric frame may be regarded as a set of primitive elements, such as points, lines, and planes, existing in a three-dimensional (3D) space at a specific time, and the primitive elements may have attributes, such as color, reflectance, and the like. The volumetric frames may be stored and transmitted in a format specialized for the characteristics and applications of the content. For example, the GL Transmission Format (glTF) spatially and logically structures and expresses a scene in a three-dimensional space. More specifically, the scene may be structured with nodes having a tree or graph structure and expressed in JavaScript object notation (JSON) format, and the actual media data referenced by a node may be specified in the above-described primitive-element-based 3D model structure.
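As an illustration of this node-based structuring, the following is a minimal sketch, in Python, of a scene expressed as a JSON node tree loosely following the glTF convention described above; the node and mesh names and values are illustrative assumptions, not taken from the patent.

```python
import json

# A minimal, illustrative scene in the spirit of glTF: the scene is a tree of
# nodes expressed in JSON, and a node may reference actual media data
# (here, a mesh built from primitive elements with attributes).
scene = {
    "scene": 0,
    "scenes": [{"nodes": [0]}],                       # root nodes of the scene graph
    "nodes": [
        {"name": "room", "children": [1]},            # logical grouping node
        {"name": "virtual_chair", "mesh": 0,          # node referencing mesh 0
         "translation": [0.0, 0.0, -1.5]},
    ],
    "meshes": [
        {"name": "chair_mesh",
         "primitives": [{"attributes": {"POSITION": 0, "COLOR_0": 1}}]},
    ],
}

print(json.dumps(scene, indent=2))
```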
FIG. 1 illustrates a flowchart of a procedure for reproducing an AR service on an AR terminal.
Referring to FIG. 1, an AR terminal may initiate at step 100 an AR service by executing an application program (or application) and at step 110, obtain a 3D model constituting the AR service. The 3D model constituting the AR service is a part of the application program and may be present in the storage space of the AR terminal or be received from an AR server located over the network.
The AR terminal may determine at step 130 an anchor where the 3D model is located via real world analysis at step 120 and at step 140, render a virtual object represented as the 3D model. The real world analysis includes a process in which the AR terminal recognizes the environment around the AR terminal using a sensor, e.g., a camera, and the anchor may be a geometrical structure referenced for synthesizing the 3D model with the real world context.
For example, in a service in which the user of the AR terminal in a room with a table is able to put a virtual object on the table, the real world analysis may be a process for figuring out the size of the room and the position and size of the table, and the process of determining the anchor may be a process for determining the position on the top surface of the table where the 3D model representing the virtual object may be placed. Since the real world may be varied by the movement of the AR terminal or external factors, the position of the anchor may be changed, if necessary, by continuously performing the real world analysis after the virtual object rendering at step 140.
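The following is a toy sketch of the anchor-determination idea in the table example above; the plane representation, footprint check, and numeric values are assumptions for illustration, not the patent's algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectedPlane:
    # A surface found by real world analysis (e.g., a table top).
    center: Tuple[float, float, float]   # meters, in the terminal's coordinate system
    extent: Tuple[float, float]          # width/depth of the surface in meters
    is_horizontal: bool

def choose_anchor(planes: List[DetectedPlane],
                  object_footprint: Tuple[float, float]) -> Optional[Tuple[float, float, float]]:
    """Pick the first horizontal plane large enough to hold the virtual object."""
    for plane in planes:
        if (plane.is_horizontal
                and plane.extent[0] >= object_footprint[0]
                and plane.extent[1] >= object_footprint[1]):
            return plane.center            # anchor position on the surface
    return None                            # no suitable anchor in the current real world

# Example: a 0.9 m x 1.6 m table top can host a 0.3 m x 0.3 m virtual object.
anchor = choose_anchor(
    [DetectedPlane((0.0, 0.75, -1.0), (0.9, 1.6), True)], (0.3, 0.3))
print(anchor)
```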
FIG. 2 illustrates an example of a functional structure of an AR terminal for playing the above-described AR service.
Referring to FIG. 2, the AR terminal 200 may include an AR application 201, a network interface 202, a vision engine 203, an AR renderer 204, and a pose correction 205 function.
According to an embodiment, the network interface 202 may be denoted as a communication interface or a transceiver. The AR application 201, the vision engine 203, the AR renderer 204, and the pose correction 205 function are collectively referred to as a controller or a processor.
The AR application 201 denotes the service logic that executes the AR service based on the user's input. The vision engine 203 may perform the real world analysis 120 of FIG. 1 based on data input from a device, e.g., a sensor, a camera, or a microphone, and generate data for the anchor determination 130. The anchor determination 130 may be executed by the AR application using the 3D model and the data generated by the vision engine 203, and the AR application 201 may use user input during the anchor determination 130. The 3D model, as part of the AR application 201, may be stored in the AR terminal 200 or received from the AR server 210 through the network interface 202.
Meanwhile, for the above-described AR terminal to determine the anchor for rendering the virtual object, information on the 3D model representing the virtual object is required. For example, the user of the AR terminal may run an application that places a 3D model representing a piece of furniture, as a virtual object, in the user's room in order to decide which piece of furniture to purchase. The 3D model representing the piece of furniture in the application should be rendered so that the user of the AR terminal recognizes it as having the same size as the real piece of furniture and, to that end, the anchor should be set in a position where there is enough empty space to place the real piece of furniture. Depending on the structure of the AR terminal user's room, some pieces of furniture may be impossible to place. In this case, the AR service may provide the AR terminal user with only the 3D models for placeable furniture, using the terminal anchoring metadata described below.
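A minimal sketch of how such filtering might look, assuming the terminal anchoring metadata has been reduced to a single free-space bounding box; the catalog, field names, and sizes are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FurnitureModel:
    name: str
    size: Tuple[float, float, float]   # real-world width, depth, height in meters

def placeable_models(catalog: List[FurnitureModel],
                     free_space: Tuple[float, float, float]) -> List[FurnitureModel]:
    """Keep only models whose real size fits in the empty space reported by the terminal."""
    return [m for m in catalog
            if all(s <= f for s, f in zip(m.size, free_space))]

# free_space would be derived from the terminal anchoring metadata (room geometry).
catalog = [FurnitureModel("sofa", (2.2, 0.9, 0.8)), FurnitureModel("stool", (0.4, 0.4, 0.5))]
print([m.name for m in placeable_models(catalog, free_space=(1.5, 1.0, 2.4))])
```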
As another example, an application in which a virtual object is not placed in a fixed position but moves around in the AR terminal user's real world may be executed. To correctly synthesize the virtual object with the AR terminal user's real world, information on the space necessary to represent the movement of the virtual object, as well as information regarding the size of the virtual object, is required. Accordingly, in this case, the AR service may transfer information on the movement range of the virtual object to the AR terminal using the content anchoring metadata which is described below.
The terminal anchoring metadata may include at least one of the geometry of the real world where the AR terminal is located, coordinate system, the display resolution of the user AR terminal, display resolution corresponding to the region of interest, the position and direction (or pose) of the user AR terminal, anchor position and attributes, and information on content processing required for anchoring. The geometry of the real world may be represented as a 3D model, and additional information may be specified as the attributes of the nodes constituting the 3D model depending on its application range.
Examples of the parameters constituting the above-described terminal anchoring metadata are as follows:
Geometry of the real world where the AR terminal is located: may be represented as a set of primitive elements, such as points, lines, and planes, present in a 3D space, and may further include information regarding an object if an object such as a desk or a chair is recognized;
Coordinate system: The relationship between the coordinate system representing the geometry and the real coordinate system;
Pose of the user AR terminal: The position and direction of the user AR terminal may be used when the AR server performs view frustum culling on the 3D model;
Anchor position and attributes: The position of the anchor in the AR terminal's real world, or a candidate area where the anchor may be positioned; may further include attributes such as the anchor normal, geometric characteristics (horizontal plane, vertical plane, boundary plane, etc.), and information on real objects (next to a water cup, in a frame, etc.). The anchor position and attributes may be set based on the requirements in the content anchoring metadata described below, the results of real world analysis, and user input;
Content processing required for anchoring: The type of content processing required to anchor the 3D model in the AR terminal's real world and the parameters for that content processing. Examples of content processing include scaling, rotation, and translation, and the processing is applicable to individual virtual objects or to the entire scene composed of virtual objects. The content processing may be set based on the requirements in the content anchoring metadata described below, the results of real world analysis, and user input;
Display resolution of user AR terminal: The resolution corresponding to the entire field of view (FoV) of the AR terminal; may be used by the AR server when setting the precision of the 3D model; and
Display resolution corresponding to the region of interest: The resolution of the AR terminal's display corresponding to the AR terminal's region of interest. The region of interest may be a region in which the geometry specified in the content anchoring metadata described below, or the 3D model, is to be rendered at the position of the anchor.
In response to the content anchoring metadata described below, the terminal anchoring metadata may further include parameters defined in the content anchoring metadata.
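A possible JSON rendering of terminal anchoring metadata covering the parameters listed above is sketched below; the field names and values are illustrative assumptions, since the patent does not fix a concrete schema.

```python
import json

# Illustrative terminal anchoring metadata mirroring the parameters listed above.
terminal_anchoring_metadata = {
    "geometry": {                       # geometry of the real world around the terminal
        "planes": [{"type": "horizontal", "center": [0.0, 0.75, -1.0], "extent": [0.9, 1.6]}],
        "recognized_objects": [{"label": "desk", "node": 0}],
    },
    "coordinate_system": {"origin": "terminal_startup_pose", "unit": "meter"},
    "terminal_pose": {"position": [0.0, 1.6, 0.0], "orientation": [0.0, 0.0, 0.0, 1.0]},
    "anchors": [{"position": [0.0, 0.75, -1.0], "normal": [0, 1, 0],
                 "surface": "horizontal", "near_object": "desk"}],
    "required_content_processing": {"scaling": {"max_reduction_ratio": 0.5}},
    "display_resolution": {"full_fov": [1920, 1080], "region_of_interest": [640, 480]},
}

print(json.dumps(terminal_anchoring_metadata, indent=2))
```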
The content anchoring metadata may include at least one of the space to be occupied by the scene representing the AR content, the space to be occupied by each of the virtual objects constituting the scene, anchor requirements, the frontal direction of the scene and virtual object, and possible content processing. The space to be occupied by the scene and the space to be occupied by each of the virtual objects constituting the scene may be represented as a 3D model, and additional information may be specified as the attributes of the nodes constituting the 3D model depending on its application range.
Examples of the parameters constituting the above-described content anchoring metadata are as follows:
Space to be occupied by the scene: may be represented as a set of primitive elements, such as points, lines, and planes, present in a 3D space;
Space to be occupied by each of the virtual objects constituting the scene: may be represented as a set of primitive elements, such as points, lines, and planes, present in a 3D space, and may further include information for recognizing virtual objects (e.g., a desk or a chair);
Anchor requirements: Requirements for locating the anchor; may further include attributes such as the anchor normal, geometric characteristics (horizontal plane, vertical plane, boundary plane, etc.), and information on real objects (next to a water cup, in a frame, etc.);
Frontal direction of the scene and virtual object; and
Possible content processing: The type of content processing that may be provided by the AR server to anchor the 3D model in the AR terminal's real world and the parameters for that content processing. Examples of content processing include scaling, rotation, and translation, and the processing is applicable to individual virtual objects or to the entire scene composed of virtual objects. The possible content processing may further include the range (e.g., maximum reduction ratio) allowed for each type of processing.
The space to be occupied by each of the virtual objects constituting the scene may be represented as a set of simple structures, e.g., boxes or cylinders, which enclose the virtual objects or precisely specify the outer shape of the actual virtual object. The space to be occupied by the scene may also be represented as a set of simple structures, e.g., boxes or cylinders, which enclose the scene or precisely specify the area to be occupied by the scene.
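Analogously, content anchoring metadata might be serialized as follows, using simple boxes for the occupied spaces as suggested above; again, the field names and values are illustrative assumptions rather than a schema defined by the patent.

```python
import json

# Illustrative content anchoring metadata mirroring the parameters listed above.
content_anchoring_metadata = {
    "scene_space": {"shape": "box", "size": [1.2, 1.0, 2.0]},            # space the scene occupies
    "object_spaces": [
        {"object": "virtual_chair", "shape": "box", "size": [0.5, 0.9, 0.5]},
    ],
    "anchor_requirements": [
        {"surface": "horizontal", "min_extent": [0.5, 0.5], "normal": [0, 1, 0]},
    ],
    "frontal_direction": [0.0, 0.0, 1.0],
    "possible_content_processing": {
        "scaling": {"max_reduction_ratio": 0.7},                          # allowed range per processing type
        "rotation": {"axes": ["y"]},
        "translation": {},
    },
}

print(json.dumps(content_anchoring_metadata, indent=2))
```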
Each of the terminal anchoring metadata and the content anchoring metadata may be created, stored, transmitted, and processed in a format such as JSON, a binary object, or XML and, according to an implementation, the terminal anchoring metadata and the content anchoring metadata may be created, stored, transmitted, and processed as a single data unit. Further, the terminal anchoring metadata and the content anchoring metadata may be created, stored, transmitted, and processed in the same file as, or a separate file from, the 3D model for the AR service and, when present as a separate file, the AR service provider may provide a signaling method by which the AR terminal user may associate the 3D model with the terminal anchoring metadata and the content anchoring metadata. Examples of the signaling method include an electronic service guide (ESG), a user service description, and service access information (SAI) of 3GPP 5G media streaming (5GMS).
The terminal anchoring metadata and the content anchoring metadata may be shared, in the form of an HTTP resource, between the AR terminal and the AR server or may be transmitted between the AR terminal and the AR server through a separate control protocol or media transport protocol.
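As a sketch of the HTTP-resource style of sharing, the exchange might look as follows; the server address, endpoint paths, and the use of the `requests` client are assumptions, and a separate control or media transport protocol could be substituted.

```python
import json
import requests  # generic third-party HTTP client; any HTTP stack would do

AR_SERVER = "https://ar-server.example.com"  # hypothetical server address

def exchange_anchoring_metadata(terminal_metadata: dict) -> tuple:
    """Share anchoring metadata as HTTP resources and fetch the resulting 3D model."""
    # Content anchoring metadata exposed by the server as an HTTP resource.
    content_metadata = requests.get(f"{AR_SERVER}/content-anchoring-metadata").json()

    # Terminal anchoring metadata pushed to the server (endpoint path is illustrative).
    requests.post(
        f"{AR_SERVER}/terminal-anchoring-metadata",
        data=json.dumps(terminal_metadata),
        headers={"Content-Type": "application/json"},
    )

    # The server returns a 3D model reconstructed for this terminal's real world.
    model = requests.get(f"{AR_SERVER}/reconstructed-model").content
    return content_metadata, model
```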
FIG. 3 illustrates an example of an AR terminal's functional structure according to an embodiment.
Referring to FIG. 3, an AR terminal 300 may include an AR application 301, a network interface 302, a vision engine 303, an anchoring metadata processor 304, an AR renderer 305, and a pose correction 306 function. Referring to FIGS. 2 and 3, compared with the AR terminal 200 of FIG. 2, the AR terminal 300 of FIG. 3 further includes the anchoring metadata processor 304.
The network interface 302 may be referred to as a communication interface or a transceiver. The AR application 301, the vision engine 303, the anchoring metadata processor 304, the AR renderer 305, and the pose correction 306 function are collectively referred to as a controller or a processor.
The following are examples of the functions of the anchoring metadata processor 304 shown in FIG. 3:
Generation of terminal anchoring metadata: generates the above-described terminal anchoring metadata using information on the real world of the AR terminal processed by the vision engine 303. In this case, the above-described terminal anchoring metadata may be visualized to the AR user using the AR renderer 305, and detailed parameters may be determined using user input information.
Content anchoring metadata processing: processes the content anchoring metadata received from the AR server to generate control information for real world analysis and anchor identification. Further, terminal anchoring metadata reflecting the results of anchor identification and the real world analysis may be generated and transferred to the AR server 310. In this case, the above-described content anchoring metadata and terminal anchoring metadata may be visualized to the AR user using the AR renderer 305, and detailed parameters may be determined using user input information.
FIG. 4 illustrates a block diagram of an AR server's functional structure according to an embodiment.
Referring to FIG. 4, an AR server 410 may include original contents 411, an anchoring metadata processor 412, a network interface 413, and a 3D model reconstruction 414 function.
The network interface 413 may be referred to as a communication interface or a transceiver. The original contents 411, anchoring metadata processor 412, network interface 413, and 3D model reconstruction 414 function are collectively referred to as a controller or a processor.
The following are examples of the functions of the anchoring metadata processor 412 shown in FIG. 4.
Generation of content anchoring metadata: generates content anchoring metadata by analyzing the original contents 411 and transfers the generated content anchoring metadata to the AR terminal or client 400. The content anchoring metadata may be generated in real-time or may be generated by a separate device or function and be transferred to the anchoring metadata processor 412 depending on the configuration of the AR service.
Generation of 3D model reconstruction parameter: generates content processing parameters for the 3D model reconstruction 414 of the original contents 411 using the received terminal anchoring metadata. Examples of content processing may include scaling, rotation, and translation, and the processing is applicable to individual virtual objects or the entire scene constituted of virtual objects. The terminal anchoring metadata may be received directly from the AR terminal requesting the service or may be obtained through a separate route.
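A minimal sketch of how one such reconstruction parameter, a uniform scale, might be derived from the two kinds of metadata; the function name, inputs, and uniform-scaling policy are assumptions for illustration, not the server's defined behavior.

```python
from typing import Sequence

def scale_for_anchor(scene_size: Sequence[float],
                     available_size: Sequence[float],
                     max_reduction_ratio: float) -> float:
    """Derive a uniform scaling parameter for 3D model reconstruction.

    scene_size comes from content anchoring metadata (space the scene occupies),
    available_size from terminal anchoring metadata (space around the anchor),
    and max_reduction_ratio bounds the processing allowed by the content.
    """
    # Largest uniform scale at which every dimension of the scene still fits.
    fit_scale = min(a / s for a, s in zip(available_size, scene_size))
    fit_scale = min(fit_scale, 1.0)              # never enlarge in this sketch
    if fit_scale < max_reduction_ratio:
        raise ValueError("Anchor space too small for the allowed reduction ratio")
    return fit_scale

# A 2 m-wide scene anchored in a 1.2 m-wide space is scaled to 0.6, within a 0.5 limit.
print(scale_for_anchor((2.0, 1.0, 2.0), (1.2, 1.0, 2.4), max_reduction_ratio=0.5))
```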
FIG. 5 illustrates a flowchart of a procedure for playing an AR service based on terminal anchoring metadata by an AR terminal according to an embodiment.
Referring to FIG. 5, at step 500, the AR terminal may initiate an AR service by executing an application program (or application), analyze the real world to generate terminal anchoring metadata at step 510, and transmit the generated terminal anchoring metadata to the AR server at step 520.
The AR server may analyze the terminal anchoring metadata to generate a 3D model and transmit the generated 3D model to the AR terminal. The AR terminal may obtain the 3D model from the AR server at step 530, determine at step 540 the anchor where the 3D model is to be positioned based on the result of the real world analysis at step 510, and render the virtual object represented as the 3D model at step 550.
Since the real world may be varied by the movement of the AR terminal or external factors, the position of the anchor may be changed or terminal anchoring metadata may be updated, if necessary, by continuously performing the real world analysis after the virtual object rendering at step 550.
FIG. 6 illustrates a flowchart of a procedure for playing an AR service based on content anchoring metadata by an AR terminal according to an embodiment.
Referring to FIG. 6, at step 600, an AR terminal may initiate an AR service by executing an application program (or application) and obtain, at step 610, content anchoring metadata. Thereafter, at step 620, the AR terminal may analyze the real world, perform anchor identification at step 630 using the content anchoring metadata, generate the terminal anchoring metadata, and at step 640, transmit the terminal anchoring metadata to the AR server.
The AR server may analyze the terminal anchoring metadata to generate a 3D model and transmit the generated 3D model to the AR terminal. The AR terminal may obtain, at step 650, the 3D model from the AR server and render, at step 660, the virtual object represented as the 3D model based on the real world analysis at step 620 and the anchor identification at step 630. Since the real world may be varied by the movement of the AR terminal or external factors, the position of the anchor may be changed or terminal anchoring metadata may be updated, if necessary, by continuously performing the real world analysis after the virtual object rendering at step 660.
In the above-described content anchoring metadata-based AR service playback procedure of the AR terminal, the anchor identification at step 630 may extract one or more anchor candidate groups, in which case the final anchor determination may be additionally performed after the 3D model is obtained at step 650. Further, the content anchoring metadata may be updated by the AR service provider and, in this case, the process after step 610 of obtaining the content anchoring metadata may be repeated.
According to an implementation, the AR terminal may generate terminal anchoring metadata by performing real world analysis before obtaining the content anchoring metadata, select content anchoring metadata matching the terminal anchoring metadata, and then perform the subsequent processes.
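Putting the FIG. 6 steps together, the terminal-side flow could be sketched as below; the `server` and `terminal` objects and their method names are hypothetical wrappers around the network interface, vision engine, and anchoring metadata processor described earlier, not interfaces defined by the patent.

```python
def play_content_anchored_ar_service(server, terminal):
    """Sketch of the FIG. 6 flow using assumed server/terminal interfaces."""
    content_md = server.get_content_anchoring_metadata()            # step 610
    environment = terminal.analyze_real_world()                     # step 620
    anchors = terminal.identify_anchors(environment, content_md)    # step 630
    terminal_md = terminal.build_terminal_anchoring_metadata(environment, anchors)
    server.send_terminal_anchoring_metadata(terminal_md)            # step 640
    model = server.get_reconstructed_model()                        # step 650
    while terminal.service_running():
        terminal.render(model, anchors)                             # step 660
        environment = terminal.analyze_real_world()                 # re-analysis after rendering
        anchors = terminal.identify_anchors(environment, content_md)
```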
FIG. 7 illustrates a block diagram of a structure of a terminal according to an embodiment.
The terminal described above in connection with FIGS. 1 to 6 may correspond to the terminal of FIG. 7. Referring to FIG. 7, the terminal may include a transceiver 710, a memory 720, and a controller 730. The transceiver 710, controller 730, and memory 720 of the terminal may operate according to the above-described communication methods by the terminal. However, the components of the terminal are not limited thereto. For example, the terminal may include more or fewer components than the above-described components. The transceiver 710, the controller 730, and the memory 720 may be implemented in the form of a single chip. The controller 730 may include one or more processors.
The transceiver 710 collectively refers to a transmitter and a receiver of the terminal and may transmit and receive signals to/from a base station, network entity, server, or another terminal. The signals transmitted and received to/from the base station, network entity, server, or the other terminal may include control information and data. To that end, the transceiver 710 may include a radio frequency (RF) transmitter for up-converting the frequency of and amplifying transmitted signals and an RF receiver for low-noise amplifying received signals and down-converting their frequency. However, this is merely an example of the transceiver 710, and the components of the transceiver 710 are not limited to the RF transmitter and the RF receiver.
The transceiver 710 may receive signals via a radio channel, output the signals to the controller 730, and transmit signals output from the controller 730 via a radio channel.
The memory 720 may store programs and data necessary for the operation of the terminal. The memory 720 may store control information or data that is included in the signal obtained by the terminal. The memory 720 may include a storage medium, such as ROM, RAM, hard disk, CD-ROM, and DVD, or a combination of storage media. Rather than being separately provided, the memory 720 may be embedded in the controller 730.
The controller 730 may control a series of processes for the terminal to be able to operate according to the above-described embodiments. For example, the controller 730 may generate terminal anchoring metadata based on environment information obtained from at least one sensor included in a terminal, transmit the terminal anchoring metadata to a server, receive, from the server, a 3D model generated based on the terminal anchoring metadata, and render a virtual object based on the 3D model and the environment information. There may be provided a plurality of controllers 730. The controller 730 may control the components of the terminal by executing a program stored in the memory 720.
FIG. 8 illustrates a block diagram of a structure of a server according to an embodiment.
The server described above in connection with FIGS. 1 to 6 may correspond to the server of FIG. 8. Referring to FIG. 8, the server may include a transceiver 810, a memory 820, and a controller 830. The transceiver 810, controller 830, and memory 820 of the server may operate according to the above-described communication methods by the server. However, the components of the server are not limited thereto. For example, the server may include more or fewer components than the above-described components. The transceiver 810, the controller 830, and the memory 820 may be implemented in the form of a single chip. The controller 830 may include one or more processors.
The transceiver 810 collectively refers to a transmitter and a receiver of the server and may transmit and receive signals to/from a terminal, base station, or network entity. The signals transmitted and received to/from the terminal, base station, or network entity may include control information and data. To that end, the transceiver 810 may include an RF transmitter for up-converting the frequency of and amplifying transmitted signals and an RF receiver for low-noise amplifying received signals and down-converting their frequency. However, this is merely an example of the transceiver 810, and the components of the transceiver 810 are not limited to the RF transmitter and the RF receiver.
The transceiver 810 may receive signals via a radio channel, output the signals to the controller 830, and transmit signals output from the controller 830 via a radio channel.
The memory 820 may store programs and data necessary for the operation of the server. The memory 820 may store control information or data that is included in the signal obtained by the server. The memory 820 may include a storage medium, such as ROM, RAM, hard disk, CD-ROM, and DVD, or a combination of storage media. Rather than being separately provided, the memory 820 may be embedded in the controller 830.
The controller 830 may control a series of processes for the server to be able to operate according to the above-described embodiments. For example, the controller 830 may control to receive, from a terminal, terminal anchoring metadata generated based on environment information obtained from at least one sensor included in a terminal, generate a 3D model using the terminal anchoring metadata, and control to transmit the 3D model to the terminal. In this case, a virtual object may be rendered by the terminal based on the 3D model and the environment information. There may be provided a plurality of controllers 830. The controller 830 may control the components of the server by executing a program stored in the memory 820.
The methods according to the embodiments described in the specification or claims of the disclosure may be implemented in hardware, software, or a combination of hardware and software.
When implemented in software, there may be provided a computer readable storage medium storing one or more programs (software modules). One or more programs stored in the computer readable storage medium are configured to be executed by one or more processors in an electronic device. One or more programs include instructions that enable the electronic device to execute methods according to the embodiments described in the specification or claims of the disclosure.
The programs (software modules or software) may be stored in random access memories, non-volatile memories including flash memories, read-only memories (ROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic disc storage devices, compact disc ROMs (CD-ROMs), digital versatile discs (DVDs), other types of optical storage devices, or magnetic cassettes. Alternatively, the programs may be stored in a memory constituted of a combination of some or all of these. A plurality of each constituent memory may be included.
The programs may be stored in an attachable storage device that may be accessed via a communication network, such as the Internet, an intranet, a local area network (LAN), a wide area network (WAN), or a storage area network (SAN), or a communication network configured of a combination thereof. Such a storage device may connect to the device that performs embodiments of the disclosure via an external port. A separate storage device over the communication network may also be connected to the device that performs embodiments of the disclosure.
In the above-described specific embodiments, the components included in the disclosure are represented in singular or plural forms depending on the specific embodiment proposed. However, the singular or plural forms are selected to suit the presented context for ease of description, and the disclosure is not limited to singular or plural components. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Although specific embodiments of the disclosure have been described above, various changes may be made thereto without departing from the scope of the disclosure. Thus, the scope of the disclosure should not be limited to the above-described embodiments, and should rather be defined by the following claims and equivalents thereof.