Apple Patent | Presenting environment based on physical dimension

Publication Number: 20210097731

Publication Date: 2021-04-01

Applicant: Apple

Abstract

Various implementations disclosed herein include devices, systems, and methods for generating a dimensionally accurate computer-generated reality (CGR) environment with a scaled CGR object. In some implementations, a method includes obtaining environmental data corresponding to a physical environment. A known physical article located within the physical environment is identified based on the environmental data. The known physical article is associated with a known dimension. A physical dimension of the physical environment is determined based on the known dimension of the known physical article. A CGR environment is generated that represents the physical environment. A virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

Claims

  1. A method comprising: at a device including a non-transitory memory and one or more processors coupled with the non-transitory memory: obtaining environmental data corresponding to a physical environment; identifying a known physical article located within the physical environment based on the environmental data, wherein the known physical article is associated with a known dimension; determining a physical dimension of the physical environment based on the known dimension of the known physical article; and generating a computer-generated reality (CGR) environment that represents the physical environment, wherein a virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

  2. The method of claim 1, wherein obtaining the environmental data comprises receiving an image of the physical environment from an image sensor.

  3. The method of claim 2, further comprising determining a pose of the image sensor and determining a scaling factor as a function of the pose.

  4. The method of claim 1, wherein obtaining the environmental data comprises receiving depth data from a depth sensor.

  5. The method of claim 1, further comprising performing at least one of semantic segmentation or instance segmentation on the environmental data to identify the known physical article.

  6. The method of claim 1, further comprising identifying an optical machine-readable representation of data associated with the known physical article.

  7. The method of claim 1, further comprising obtaining the known dimension of the known physical article.

  8. The method of claim 1, wherein the known physical article corresponds to a portion of the environmental data.

  9. The method of claim 8, further comprising: sending a query for an image search based on the portion of the environmental data to which the known physical article corresponds; and receiving, in response to the query, dimension information for the known physical article or dimension information for a physical article within a similarity threshold of the known physical article.

  10. The method of claim 1, further comprising: sending a query based on a product identifier corresponding to the known physical article; and receiving, in response to the query, dimension information for the known physical article or dimension information for a physical article within a similarity threshold of the known physical article.

  11. The method of claim 1, further comprising receiving a user input indicating the known dimension of the known physical article.

  12. The method of claim 1, further comprising determining the physical dimension of the physical environment based on the known dimension of the known physical article and a proportion of the known physical article to the physical environment.

  13. A device comprising: an environmental sensor; a display; one or more processors; a non-transitory memory; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: obtain, via the environmental sensor, environmental data corresponding to a physical environment; identify a known physical article located within the physical environment based on the environmental data, wherein the known physical article is associated with a known dimension; determine a physical dimension of the physical environment based on the known dimension of the known physical article; and generate a computer-generated reality (CGR) environment that represents the physical environment, wherein a virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

  14. The device of claim 13, wherein obtaining the environmental data comprises receiving an image of the physical environment from an image sensor.

  15. The device of claim 14, wherein the one or more programs further cause the device to determine a pose of the image sensor and determine a scale factor as a function of the pose.

  16. The device of claim 13, wherein obtaining the environmental data comprises receiving depth data from a depth sensor.

  17. The device of claim 13, wherein the one or more programs further cause the device to perform at least one of semantic segmentation or instance segmentation on the environmental data to identify the known physical article.

  18. The device of claim 13, wherein the one or more programs further cause the device to identify an optical machine-readable representation of data associated with the known physical article.

  19. The device of claim 13, wherein the one or more programs further cause the device to obtain the known dimension of the known physical article.

  20. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, cause the device to: obtain, via an environmental sensor, environmental data corresponding to a physical environment; identify a known physical article located within the physical environment based on the environmental data, wherein the known physical article is associated with a known dimension; determine a physical dimension of the physical environment based on the known dimension of the known physical article; and generate a computer-generated reality (CGR) environment that represents the physical environment, wherein a virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

Description

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. patent application No. 62/906,659, filed on Sep. 26, 2019, which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure generally relates to rendering of computer-generated reality (CGR) environments and objects.

BACKGROUND

[0003] Some devices are capable of generating and presenting computer-generated reality (CGR) environments. Some CGR environments include virtual environments that are simulated replacements of physical environments. Some CGR environments include augmented environments that are modified versions of physical environments. Some devices that present CGR environments include mobile communication devices, such as smartphones, head-mountable displays (HMDs), eyeglasses, heads-up displays (HUDs), and optical projection systems.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.

[0005] FIG. 1A illustrates an exemplary operating environment in accordance with some implementations.

[0006] FIG. 1B illustrates another exemplary operating environment in accordance with some implementations.

[0007] FIG. 2 illustrates an example system that generates a CGR environment according to various implementations.

[0008] FIG. 3 is a block diagram of an example CGR content module in accordance with some implementations.

[0009] FIGS. 4A-4C are a flowchart representation of a method for generating a CGR environment in accordance with some implementations.

[0010] FIG. 5 is a block diagram of a device in accordance with some implementations.

[0011] FIG. 6 illustrates an example system that displays a CGR object in an augmented reality (AR) environment according to various implementations.

[0012] FIG. 7 is a block diagram of an example CGR content module in accordance with some implementations.

[0013] FIGS. 8A-8C are a flowchart representation of a method for displaying a CGR object in an AR environment in accordance with some implementations.

[0014] FIG. 9 is a block diagram of a device in accordance with some implementations.

[0015] In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

SUMMARY

[0016] Various implementations disclosed herein include devices, systems, and methods for generating a dimensionally accurate computer-generated reality (CGR) environment with a scaled CGR object. In some implementations, a method includes obtaining environmental data corresponding to a physical environment. A known physical article located within the physical environment is identified based on the environmental data. The known physical article is associated with a known dimension. A physical dimension of the physical environment is determined based on the known dimension of the known physical article. A CGR environment is generated that represents the physical environment. A virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

[0017] Various implementations disclosed herein include devices, systems, and methods for instantiating a CGR object in an augmented reality (AR) environment and scaling the CGR object based on dimension information associated with the CGR object and a known dimension of a known physical article. In some implementations, a method includes displaying an AR environment that corresponds to a physical environment. It is determined to display a CGR object in the AR environment. The CGR object represents a physical article associated with a physical dimension. A known physical article located within the physical environment is identified. The known physical article is associated with a known dimension. A virtual dimension for the CGR object is determined based on the known dimension of the known physical article and the physical dimension of the physical article that the CGR object represents. The CGR object is displayed in the AR environment in accordance with the virtual dimension.

[0018] In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs. In some implementations, the one or more programs are stored in the non-transitory memory and are executed by the one or more processors. In some implementations, the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions that, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.

Description

[0019] Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.

[0020] The present disclosure provides methods, systems, and/or devices for generating a dimensionally accurate computer-generated reality (CGR) environment with a scaled CGR object. In some implementations, a method includes obtaining environmental data corresponding to a physical environment. A known physical article located within the physical environment is identified based on the environmental data. The known physical article is associated with a known dimension. A physical dimension of the physical environment is determined based on the known dimension of the known physical article. A CGR environment is generated that represents the physical environment. A virtual dimension of the CGR environment is a function of the physical dimension of the physical environment.

[0021] In some implementations, a device generates and presents computer-generated reality (CGR) content that includes a CGR environment with virtual dimensions that are proportional to physical dimensions of a physical environment. In some implementations, based on sensor information, a controller detects a physical object in the physical environment and obtains the dimensions of the physical object, e.g., by searching a database that includes information regarding the physical object. In some implementations, the controller generates a semantic construction of the physical environment. The semantic construction may include a CGR representation of the physical object with virtual dimensions that are proportional to the physical dimensions of the physical object. In some implementations, if a detected physical object is within a degree of similarity to a physical object of a known size, the controller uses the known size of the physical object to determine relative sizes of other physical objects and the physical environment based on the sensor information.

[0022] The present disclosure provides methods, systems, and/or devices for instantiating a CGR object in an augmented reality (AR) environment and scaling the CGR object based on dimension information associated with the CGR object and a known dimension of a known physical article. In some implementations, a method includes displaying an AR environment that corresponds to a physical environment. It is determined to display a CGR object in the AR environment. The CGR object represents a physical article associated with a physical dimension. A known physical article located within the physical environment is identified. The known physical article is associated with a known dimension. A virtual dimension for the CGR object is determined based on the known dimension of the known physical article and the physical dimension of the physical article that the CGR object represents. The CGR object is displayed in the AR environment in accordance with the virtual dimension.

[0023] In some implementations, a CGR object in an augmented reality (AR) environment is scaled based on a known dimension of a known physical article. For example, an electrical outlet may be identified in an AR environment corresponding to a living room. Electrical outlets are governed by a standard and have a known height (e.g., 4 inches or approximately 10 centimeters). In some implementations, a CGR object, such as a chair, is scaled based on the electrical outlet. More generally, a CGR object may be scaled based on one or more of a known dimension of a known physical article, a distance of the known physical article from a device, a dimension of a physical article corresponding to the CGR object, and/or a distance at which the CGR object is to be placed.
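
The scaling described above can be sketched as follows. This is an illustrative, non-limiting sketch; the function names, the pinhole-model assumption, and the numeric values are assumptions for illustration and are not part of the disclosure:

```python
def apparent_scale(known_pixel_height: float, known_physical_height: float) -> float:
    """Pixels per unit of physical length at the known article's distance."""
    return known_pixel_height / known_physical_height


def rendered_height(scale: float, article_distance: float,
                    object_physical_height: float, object_distance: float) -> float:
    """Under a pinhole camera model, apparent size falls off linearly with
    distance, so rescale by the ratio of distances when the CGR object is
    placed at a different depth than the known article."""
    return scale * object_physical_height * (article_distance / object_distance)


# An electrical outlet of known 4-inch height, 2 m from the device, spans
# 80 pixels, i.e., 20 pixels per inch at that depth. A 36-inch chair placed
# twice as far away (4 m) should then span 360 pixels.
chair_px = rendered_height(apparent_scale(80.0, 4.0), 2.0, 36.0, 4.0)
```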

[0024] A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.

[0025] In contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person’s physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person’s head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands).

[0026] A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.

[0027] Examples of CGR include virtual reality and mixed reality.

[0028] A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person’s presence within the computer-generated environment, and/or through a simulation of a subset of the person’s physical movements within the computer-generated environment.

[0029] In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.

[0030] In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.

[0031] Examples of mixed realities include augmented reality and augmented virtuality.

[0032] An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment.

[0033] An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.

[0034] An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.

[0035] There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head-mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person’s eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head-mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head-mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head-mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head-mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person’s eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one implementation, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person’s retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.

[0036] FIG. 1A illustrates an exemplary operating environment 100 in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the operating environment 100 includes an electronic device 102 and a controller 104. In some implementations, the electronic device 102 is or includes a smartphone, a tablet, a laptop computer, and/or a desktop computer. The electronic device 102 may be worn by or carried by a user 106.

[0037] As illustrated in FIG. 1A, the electronic device 102 and/or the controller 104 obtains (e.g., receives, retrieves, and/or detects) environmental data corresponding to a physical environment 108. For example, the environmental data may include an image or a video captured by an image sensor 110, such as a camera. In some implementations, the environmental data includes depth information captured by a depth sensor.

[0038] In some implementations, the electronic device 102 and/or the controller 104 identifies a known physical article 112 in the physical environment 108 based on the environmental data. For example, in some implementations, the electronic device 102 and/or the controller 104 perform semantic segmentation and/or instance segmentation on the environmental data to detect the known physical article 112. In some implementations, the electronic device 102 and/or the controller 104 identify an optical machine-readable representation (e.g., a barcode or a QR code) of data associated with the physical article. The optical machine-readable representation of data may be used to identify the known physical article 112.

[0039] The known physical article 112 is associated with a known dimension 114 (e.g., a height, a length, a width, a volume and/or an area of the known physical article 112). In some implementations, the electronic device 102 and/or the controller 104 determine (e.g., estimate) a physical dimension 116 of the physical environment 108 (e.g., a height, a length, a width, a volume and/or an area of the physical environment 108) based on the known dimension 114. In some implementations, the electronic device 102 and/or the controller 104 obtain the known dimension 114, e.g., from a datastore or via a network. In some implementations, the electronic device 102 and/or the controller 104 perform an image search based on a portion of the environmental data that corresponds to the known physical article 112. In some implementations, the electronic device 102 and/or the controller 104 determine the physical dimension 116 based on the known dimension 114 and a proportion of the known physical article 112 to the physical environment 108.
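
The proportion-based determination described above can be sketched as follows; the function name and the door example are illustrative assumptions, not part of the disclosure:

```python
def physical_dimension(known_dim: float, article_extent: float,
                       env_extent: float) -> float:
    """Estimate a physical dimension of the environment by scaling its
    measured extent (e.g., in pixels or depth-map units) by the known
    article's physical size per unit of the article's measured extent."""
    return known_dim * (env_extent / article_extent)


# A door known to be 2.0 m tall spans 400 units of the environmental data;
# the wall containing it spans 600 units, so the wall is about 3.0 m tall.
wall_height = physical_dimension(2.0, 400.0, 600.0)
```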

[0040] Referring now to FIG. 1B, in some implementations, the electronic device 102 and/or the controller 104 may present a computer-generated reality (CGR) environment 120 that represents the physical environment 108. In some implementations, the CGR environment 120 includes a virtual environment that is a simulated replacement of the physical environment 108. For example, the CGR environment 120 may be simulated by the electronic device 102 and/or the controller 104. In such implementations, the CGR environment 120 is different from the physical environment 108 in which the electronic device 102 is located.

[0041] In some implementations, the CGR environment 120 includes an augmented environment that is a modified version of the physical environment 108. For example, in some implementations, the electronic device 102 and/or the controller 104 modify (e.g., augment) the physical environment 108 in which the electronic device 102 is located in order to generate the CGR environment 120. In some implementations, the electronic device 102 and/or the controller 104 generate the CGR environment 120 by simulating a replica of the physical environment 108 in which the electronic device 102 is located. In some implementations, the electronic device 102 and/or the controller 104 generate the CGR environment 120 by removing and/or adding items from the simulated replica of the physical environment 108 in which the electronic device 102 is located.

[0042] In some implementations, the CGR environment 120 is associated with a virtual dimension 122 (e.g., a height, a length, a width, a volume and/or an area of the CGR environment 120). In some implementations, the virtual dimension 122 is a function of the physical dimension 116 of the physical environment 108. For example, the virtual dimension 122 is proportional to the physical dimension 116 (e.g., a ratio between a physical height and a physical width of the physical environment 108 is approximately the same as a ratio between a virtual height and a virtual width of the CGR environment 120).
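
A minimal sketch of the proportionality described above, with illustrative names and values only: scaling the physical dimensions by a common factor yields virtual dimensions whose ratio matches the physical ratio.

```python
def proportional_virtual_dims(physical_width: float, physical_height: float,
                              scale: float) -> tuple[float, float]:
    """Virtual dimensions obtained by scaling the physical ones uniformly,
    which preserves the environment's aspect ratio."""
    return physical_width * scale, physical_height * scale


# A 6 m x 3 m room rendered at half scale keeps its 2:1 width-to-height ratio.
virtual_w, virtual_h = proportional_virtual_dims(6.0, 3.0, 0.5)
```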

[0043] In some implementations, the CGR environment 120 is an augmented reality (AR) environment that corresponds to the physical environment 108. For example, the CGR environment 120 may be rendered as an optical pass-through of the physical environment 108 in which one or more CGR objects are rendered with the physical environment 108 as a background, e.g., overlaid over the physical environment. In some implementations, the image sensor 110 obtains image data corresponding to the physical environment 108, and the CGR environment 120 is rendered as a video pass-through of the physical environment 108. In a video pass-through, the electronic device 102 and/or the controller 104 display one or more CGR objects with a CGR representation of the physical environment 108.

[0044] In some implementations, the electronic device 102 and/or the controller 104 determine to display a CGR object 124 in the CGR environment 120. The CGR object 124 represents a physical article associated with a physical dimension. For example, the electronic device 102 and/or the controller 104 may determine to display a CGR chair that represents a physical chair that is associated with a physical dimension, e.g., a height of the physical chair.

[0045] In some implementations, the electronic device 102 and/or the controller 104 identify a known physical article 126 in the physical environment 108. The known physical article 126 may be the same physical article as the known physical article 112 shown in FIG. 1A or may be a different physical article. The known physical article 126 is associated with a known dimension 128. For example, if the known physical article 126 is an electrical outlet, the electronic device 102 and/or the controller 104 may determine that electrical outlets have a known height (e.g., 4 inches or approximately 10 centimeters, for example, as defined by a standard published by a standards body such as the National Electrical Code (NEC)). In some implementations, the electronic device 102 and/or the controller 104 obtain the known dimension 128 from a datastore or via a network.

[0046] In some implementations, the electronic device 102 and/or the controller 104 determine a virtual dimension 130 of the CGR object 124 based on the known dimension 128 and the physical dimension of the physical article that the CGR object 124 represents. For example, if the CGR object 124 represents a chair, the virtual dimension 130 may be a height of the CGR object 124. The electronic device 102 and/or the controller 104 may determine the height of the CGR object 124 based on the height of an electrical outlet in the physical environment 108 and the height of a physical chair that the CGR object 124 represents.
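By way of illustration only, the scaling described in this paragraph might be sketched as follows. The function names and numeric values (the known article's on-screen span, the chair's height) are hypothetical and are not taken from the disclosure:

```python
# Illustrative sketch only: derive a rendering scale from a known
# physical article and use it to size a CGR object. All names and
# values here are hypothetical, not part of the disclosure.

def scale_from_known_article(known_dim_m, rendered_px):
    """Pixels per meter: how large the known article appears on screen
    relative to its known physical dimension."""
    return rendered_px / known_dim_m

def virtual_dimension(physical_dim_m, px_per_m):
    """Virtual (on-screen) dimension for a CGR object whose physical
    counterpart has the given physical dimension."""
    return physical_dim_m * px_per_m

# A known article 0.10 m tall spans 50 pixels in the scene (~500 px/m),
# so a 0.90 m chair would be rendered at roughly 450 pixels.
px_per_m = scale_from_known_article(0.10, 50.0)
chair_px = virtual_dimension(0.90, px_per_m)
```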

[0047] In some implementations, a head-mountable device (HMD) worn by the user 106 presents (e.g., displays) the computer-generated reality (CGR) environment 120 according to various implementations. In some implementations, the HMD includes an integrated display (e.g., a built-in display) that displays the CGR environment 120. In some implementations, the HMD includes a head-mountable enclosure. In various implementations, the head-mountable enclosure includes an attachment region to which another device with a display can be attached. For example, in some implementations, the electronic device 102 of FIG. 1A can be attached to the head-mountable enclosure. In various implementations, the head-mountable enclosure is shaped to form a receptacle for receiving another device that includes a display (e.g., the electronic device 102). For example, in some implementations, the electronic device 102 slides or snaps into or otherwise attaches to the head-mountable enclosure. In some implementations, the display of the device attached to the head-mountable enclosure presents (e.g., displays) the CGR environment 120. In various implementations, examples of the electronic device 102 include smartphones, tablets, media players, laptops, etc.

[0048] FIG. 2 illustrates an example system 200 that generates a CGR environment according to various implementations. In some implementations, an environmental sensor 202 obtains environmental data 204 corresponding to a physical environment. For example, in some implementations, the environmental sensor 202 comprises an image sensor 206, such as a camera, that obtains an image 208 of the environment.

[0049] In some implementations, the image 208 is a still image. In some implementations, the image 208 is an image frame forming part of a video feed. The image 208 includes a plurality of pixels. Some of the pixels, e.g., a first set of pixels, represent an object. Other pixels, e.g., a second set of pixels, represent a background, e.g., portions of the image 208 that do not represent the object. It will be appreciated that pixels that represent one object may represent the background for a different object.

[0050] In some implementations, the environmental sensor 202 comprises a depth sensor 210 that obtains depth data 212 corresponding to the physical environment. The depth data 212 may be used independently of or in connection with the image 208 to identify one or more objects in the physical environment.

[0051] In some implementations, a CGR content module 214 receives the environmental data 204 from the environmental sensor 202. In some implementations, the CGR content module 214 identifies a known physical article in the physical environment based on the environmental data 204. For example, the CGR content module 214 may perform semantic segmentation and/or instance segmentation on the environmental data 204 to identify the known physical article. In some implementations, the environmental data 204 includes an image, and the CGR content module 214 applies one or more filters and/or masks to the image to characterize pixels in the image as being associated with respective objects, such as the known physical article.
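As a toy illustration of the pixel-characterization step (not the segmentation machinery itself), one can imagine a per-pixel label map, the kind of output a semantic-segmentation pass might produce, from which the pixels associated with a known physical article are collected. The label map and class names below are invented for the example:

```python
# Illustrative sketch only: collect the pixels associated with a known
# physical article from a per-pixel label map. The map and the class
# names ("wall", "outlet", "floor") are hypothetical.

def pixels_for_label(label_map, target):
    """Return (row, col) coordinates whose label matches `target`."""
    return [(r, c)
            for r, row in enumerate(label_map)
            for c, lbl in enumerate(row)
            if lbl == target]

label_map = [
    ["wall",  "wall",   "outlet"],
    ["wall",  "outlet", "outlet"],
    ["floor", "floor",  "floor"],
]
outlet_pixels = pixels_for_label(label_map, "outlet")
# the remaining pixels are background with respect to the outlet
```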

[0052] In some implementations, the image 208 includes an optical machine-readable representation (e.g., a barcode or a QR code) of data associated with the known physical article. The CGR content module 214 may send a query (e.g., to a product database) to obtain information identifying the known physical article.

[0053] The known physical article is associated with a known dimension. In some implementations, the CGR content module 214 obtains dimension information for the known physical article. For example, the CGR content module 214 may send a query including information identifying the known physical article to a datastore 216 or to a service via a network 218, such as a local area network (LAN) or the Internet. In some implementations, the information identifying the known physical article includes a semantic label, a product identifier, and/or an image. In response to the query, the CGR content module 214 may receive dimension information for the known physical article. In some implementations, if dimension information for the known physical article is not available, the CGR content module 214 receives dimension information for a physical article that is within a degree of similarity to the known physical article.

[0054] In some implementations, the CGR content module 214 determines a physical dimension of the physical environment based on the known dimension of the known physical article. In some implementations, the CGR content module 214 determines the physical dimension of the physical environment based on the known dimension (e.g., the dimension information received in response to the query) of the known physical article and a proportion of the known physical article to the physical environment. For example, if the CGR content module 214 identifies the known physical article as a desk having a known width of two meters and the desk occupies half of the length of a wall, the CGR content module 214 may determine that the wall is four meters long.

[0055] In some implementations, the CGR content module 214 generates a CGR environment that represents the physical environment. The CGR environment is associated with a virtual dimension that is a function of the physical dimension of the physical environment. The CGR content module 214 may provide the CGR environment to a display engine 220, which prepares the CGR environment for output using a display 222.

[0056] FIG. 3 is a block diagram of an example CGR content module 300 in accordance with some implementations. In some implementations, the CGR content module 300 implements the CGR content module 214 shown in FIG. 2. A data obtainer 310 may obtain environmental data 302 corresponding to a physical environment.

[0057] In some implementations, the environmental data 302 includes an image 304. In some implementations, the image 304 is a still image. In some implementations, the image 304 is an image frame forming part of a video feed. The image 304 includes a plurality of pixels. Some of the pixels, e.g., a first set of pixels, represent an object. Other pixels, e.g., a second set of pixels, represent a background, e.g., portions of the image 304 that do not represent the object. It will be appreciated that pixels that represent one object may represent the background for a different object.

[0058] In some implementations, the environmental data 302 includes depth data 306 corresponding to the physical environment. The depth data 306 may be used independently of or in connection with the image 304 to identify one or more objects in the physical environment.

[0059] In some implementations, the data obtainer 310 may obtain an optical machine-readable representation 308 of data associated with a physical article. The optical machine-readable representation 308 may be implemented, for example, as a barcode or a QR code. In some implementations, the optical machine-readable representation 308 is part of the image 304. In some implementations, the optical machine-readable representation 308 is captured separately from the image 304.
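For illustration, once a barcode or QR decoder has produced a payload string, the data obtainer might extract a product identifier to use in later queries. The query-string payload format and the `model` field below are invented for the example, not drawn from the disclosure:

```python
# Hypothetical sketch: extract a product identifier from an already-
# decoded machine-readable payload. The payload format is invented.

from urllib.parse import parse_qs

def product_id_from_payload(payload):
    """Parse a query-string style payload, e.g. 'model=CHAIR-123&rev=2',
    and return the product identifier if present."""
    fields = parse_qs(payload)
    values = fields.get("model")
    return values[0] if values else None

pid = product_id_from_payload("model=CHAIR-123&rev=2")  # 'CHAIR-123'
```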

[0060] In some implementations, an object analyzer 320 identifies a known physical article in the physical environment based on one or more of the image 304, the depth data 306, and/or the optical machine-readable representation 308. In some implementations, the object analyzer 320 performs semantic segmentation and/or instance segmentation on the environmental data 302 (e.g., the image 304) to identify the known physical article. In some implementations, the known physical article is represented by a portion of the image 304, and the object analyzer 320 performs semantic segmentation and/or instance segmentation on that portion of the image 304 to identify the known physical article.

[0061] In some implementations, the object analyzer 320 determines an object identifier 322, such as a semantic label and/or a product identifier, that identifies the known physical article. In some implementations, the object analyzer 320 determines the object identifier 322 for the known physical article based on available information relating to a physical article corresponding to the known physical article or within a degree of similarity to the known physical article. This information can be obtained from one or more sources.

[0062] For example, in some implementations, the object analyzer 320 determines the object identifier 322 based on information received from a database 324 (e.g., a local database). For example, the database 324 may store a product specification for a physical article (e.g., a chair) corresponding to the known physical article (e.g., of the same model of the known physical article). In some implementations, the database 324 stores a product specification for a physical article that is within a degree of similarity to (e.g., within a similarity threshold of) the known physical article. For example, if a product specification is not available for the same model of chair corresponding to the known physical article, the object analyzer 320 may use a product specification for a similar model of chair.

[0063] In some implementations, a dimension determiner 330 receives the object identifier 322 and determines a known dimension of the known physical article. In some implementations, the dimension determiner 330 obtains dimension information for the known physical article. For example, the dimension determiner 330 may send a query to a datastore 326 or to a service accessible via a network 328 (e.g., a local area network or the Internet). The datastore 326 may store dimension information for a plurality of known physical articles.

[0064] The query may include information that identifies the known physical article, such as the object identifier 322 or an image of the known physical article. In response to the query, the dimension determiner 330 may receive dimension information for the known physical article. In some implementations, if dimension information for the known physical article is not available, the dimension determiner 330 receives dimension information for a physical article that is within a degree of similarity to the known physical article. For example, if the known physical article is a chair and the datastore 326 does not store dimension information for the same model of chair corresponding to the known physical article, the dimension determiner 330 may instead receive dimension information for a similar model of chair.
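The lookup-with-fallback behavior can be sketched as below. The datastore contents, model identifiers, and similarity mapping are all hypothetical; in practice the "degree of similarity" determination would be more involved than a static table:

```python
# Illustrative sketch only: look up dimension information for a known
# physical article, falling back to a similar model when the exact
# model is absent. All identifiers and values are hypothetical.

DIMENSIONS = {                    # model id -> (height_m, width_m)
    "CHAIR-100": (0.92, 0.55),
    "CHAIR-200": (0.95, 0.60),
}

SIMILAR = {"CHAIR-150": "CHAIR-100"}   # model -> similar model

def lookup_dimensions(model_id):
    """Return dimension info for the model, a similar model, or None."""
    if model_id in DIMENSIONS:
        return DIMENSIONS[model_id]
    fallback = SIMILAR.get(model_id)
    if fallback is not None:
        return DIMENSIONS[fallback]
    return None

dims = lookup_dimensions("CHAIR-150")  # falls back to CHAIR-100
```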

[0065] In some implementations, the dimension determiner 330 determines a physical dimension of the physical environment based on the known dimension of the known physical article. In some implementations, the dimension determiner 330 determines the physical dimension of the physical environment based on the known dimension and a proportion of the known physical article to the physical environment. For example, the known physical article may be a desk located along a wall of an office, and the known dimension of the desk may be a width of two meters. If the proportion of the width of the desk to the wall is 1:2 (e.g., the desk occupies half of the wall along which the desk is located), the dimension determiner 330 may determine that the wall along which the desk is located is four meters long. In some implementations, the dimension determiner 330 determines other physical dimensions of the physical environment based on this determination. For example, if the height of the wall is three-fourths of the length of the wall, the dimension determiner 330 may determine that the wall is three meters high.
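The desk-and-wall arithmetic in this paragraph reduces to a simple proportion. A minimal sketch, with the proportions supplied directly rather than measured from an image as they would be in practice:

```python
# Minimal sketch of the proportion arithmetic: a 2.0 m desk occupying
# half of a wall implies a 4.0 m wall; a height of 3/4 the length
# implies 3.0 m. Proportions here are assumed inputs.

def dimension_from_proportion(known_dim_m, proportion):
    """Scale a known dimension by the article-to-environment proportion
    (proportion = article extent / environment extent)."""
    return known_dim_m / proportion

wall_length_m = dimension_from_proportion(2.0, 0.5)   # 4.0 m
wall_height_m = wall_length_m * 0.75                  # 3.0 m
```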

[0066] In some implementations, an environment generator 340 generates a CGR environment that represents (e.g., models) the physical environment. In some implementations, the CGR environment is a computer-generated model of the physical environment. The CGR environment may be output as part of a CGR content item 342, which may also include one or more CGR objects. In some implementations, the CGR environment has a virtual dimension, e.g., a number of pixels. The virtual dimension is a function of the physical dimension of the physical environment. For example, in some implementations, the environment generator 340 determines a number of pixels to use in rendering the physical dimension of the physical environment.
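The mapping from a physical dimension to a virtual dimension in pixels can be sketched as follows; the pixels-per-meter rendering scale is an assumed input, not something specified by the disclosure:

```python
# Illustrative sketch only: map a physical dimension of the environment
# to a virtual dimension in pixels. The rendering scale is assumed.

def pixels_for_dimension(physical_m, px_per_m):
    """Number of pixels used to render a physical dimension."""
    return round(physical_m * px_per_m)

# A 4.0 m wall rendered at 120 px/m spans 480 pixels.
wall_px = pixels_for_dimension(4.0, 120.0)
```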

[0067] FIGS. 4A-4C are a flowchart representation of a method 400 for generating a CGR environment in accordance with some implementations. In various implementations, the method 400 is performed by a device (e.g., the electronic device 102 or the controller 104 shown in FIGS. 1A and 1B, or the system 200 shown in FIG. 2). In some implementations, the method 400 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 400 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory). Briefly, in various implementations, the method 400 includes obtaining environmental data corresponding to a physical environment, identifying a known physical article located in the physical environment based on the environmental data, determining a physical dimension of the physical environment based on a known dimension of the known physical article, and generating a CGR environment representing the physical environment with a virtual dimension based on the physical dimension of the physical environment.
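The dimensional steps of method 400 (determining a physical dimension from a known dimension and a proportion, then deriving a virtual dimension from it) might be sketched end to end with hypothetical inputs as:

```python
# Hypothetical end-to-end sketch of the dimensional steps of method
# 400. The proportion and rendering scale are assumed inputs; nothing
# here is taken from an actual implementation.

def generate_cgr_dimensions(known_dim_m, article_proportion, px_per_m):
    # Determine the physical dimension of the environment from the
    # known dimension and the article-to-environment proportion.
    physical_dim_m = known_dim_m / article_proportion
    # Derive the virtual dimension (in pixels) from the physical one.
    virtual_px = round(physical_dim_m * px_per_m)
    return physical_dim_m, virtual_px

# A 2.0 m desk occupying half a wall, rendered at 120 px/m:
phys_m, virt_px = generate_cgr_dimensions(2.0, 0.5, 120.0)
```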

