
Facebook Patent | Audio sample phase alignment in an artificial reality system

Patent: Audio sample phase alignment in an artificial reality system


Publication Number: 20210152966

Publication Date: 2021-05-20

Applicant: Facebook

Abstract

This disclosure describes techniques that include aligning processing of audio samples collected by multiple audio sensors or microphones. In one example, this disclosure describes a method comprising enabling a first microphone; processing, by an audio processor and using a first processing pipeline, audio data samples collected by the first microphone; enabling a second microphone a period of time after enabling the first microphone; processing, by the audio processor and using a second processing pipeline, a sample of audio data collected by the second microphone by synchronizing starting times for the first and second processing pipelines.

Claims

  1. A system comprising: a plurality of microphones, including a first microphone and second microphone; a control system configured to selectively transition the second microphone between an enabled state and a disabled state; and an audio processing system configured to detect a transition by the second microphone from the disabled state to the enabled state, and responsive to detecting the transition, perform phase alignment between audio samples collected by the first microphone and audio samples collected by the second microphone and perform sound source identification using the phase aligned audio samples.

  2. A system comprising: a plurality of microphones, including a first microphone and second microphone; a control system configured to selectively transition the second microphone between an enabled state and a disabled state; and an audio processing system configured to detect a transition by the second microphone from the disabled state to the enabled state, and responsive to detecting the transition, perform phase alignment between audio samples collected by the first microphone and audio samples collected by the second microphone and process the phase aligned audio samples, wherein the audio processing system is further configured to: process the audio samples collected by the first microphone using a first pipeline, wherein the first pipeline starts periodically at each of a plurality of starting clock cycles; and process the audio samples collected by the second microphone using a second pipeline.

  3. The system of claim 2, wherein to perform phase alignment, the audio processing system is further configured to: start the second pipeline during each of the plurality of starting clock cycles.

  4. The system of claim 3, wherein to start the second pipeline during each of the plurality of starting clock cycles, the audio processing system is further configured to: introduce a delay in starting the second pipeline after detecting the transition by the second microphone from the disabled state to the enabled state, wherein the delay is calculated based on the length of the first pipeline and an amount of time until one of the starting clock cycles.

  5. The system of claim 4, wherein the first pipeline operates at a first sampling frequency, wherein the second pipeline operates at a second sampling frequency that is different than the first sampling frequency, and wherein to introduce the delay in starting the second pipeline, the audio processing system is further configured to: calculate the delay further based on the difference between the first sampling frequency and the second sampling frequency.

  6. The system of claim 5, wherein the second sampling frequency is higher than the first sampling frequency.

  7. The system of claim 3, wherein to start the second pipeline during each of the plurality of starting clock cycles, the audio processing system is further configured to: detect a synchronization signal associated with the plurality of starting clock cycles; and upon detecting the synchronization signal, start the second pipeline.

  8. The system of claim 7, wherein the first pipeline operates at a first sampling frequency, and wherein the second pipeline operates at a second sampling frequency that is different than the first sampling frequency, and wherein to start the second pipeline, the audio processing system is further configured to: generate, prior to detecting the synchronization signal, second pipeline data by processing audio samples collected by the second microphone prior to detecting the synchronization signal; and upon detecting the synchronization signal, discard at least some of the second pipeline data.

  9. The system of claim 2, wherein to process the phase aligned audio samples, the audio processing system is further configured to perform at least one of: sound source identification, directional alignment, localization, mixing.

  10. The system of claim 1, wherein the system is an artificial reality system, and wherein the control system is configured to: detect a status change associated with the artificial reality system requiring more robust audio processing; and responsive to detecting the status change, transition the second microphone from the disabled state to the enabled state.

  11. The artificial reality system of claim 10, wherein the status change is a first status change, and wherein the audio processing system is further configured to: detect a second status change associated with the artificial reality system; determine that the second status change requires less robust audio processing; and responsive to detecting the second status change, enter a low-power mode, wherein to enter the low-power mode, the audio processing system is further configured to transition the second microphone from the enabled state to the disabled state.

  12. The system of claim 1, wherein the artificial reality system further includes a head-mounted display (HMD), and wherein the HMD is configured to perform at least one of: detect input; detect a change in a mode of the artificial reality system; detect audio data associated with a plurality of voices; detect a transition to a noisier physical environment; and detect a change in the physical environment.

  13. A method comprising: receiving, by an audio processing system in an artificial reality system having a first microphone and a second microphone, audio samples collected by a first microphone; processing, by the audio processing system, the audio samples collected by the first microphone using a first pipeline, wherein the first pipeline starts periodically at each of a plurality of starting clock cycles; processing, by the audio processing system, audio samples collected by the second microphone using a second pipeline; detecting, by the audio processing system, a transition by the second microphone from a disabled state to an enabled state; performing, by the audio processing system and in response to detecting the transition, phase alignment between the audio samples collected by the first microphone and the audio samples collected by the second microphone; and processing, by the audio processing system, the aligned audio samples.

  14. (canceled)

  15. The method of claim 13, wherein performing phase alignment includes: starting the second pipeline during each of the plurality of starting clock cycles.

  16. The method of claim 15, wherein starting the second pipeline during each of the plurality of starting clock cycles includes: introducing a delay in starting the second pipeline after detecting the transition by the second microphone from the disabled state to the enabled state, wherein the delay is calculated based on the length of the first pipeline and an amount of time until one of the starting clock cycles.

  17. The method of claim 16, wherein the first pipeline operates at a first sampling frequency, wherein the second pipeline operates at a second sampling frequency that is different than the first sampling frequency, and wherein introducing the delay in starting the second pipeline includes: calculating the delay further based on the difference between the first sampling frequency and the second sampling frequency.

  18. The method of claim 17, wherein the second sampling frequency is higher than the first sampling frequency.

  19. The method of claim 15, wherein starting the second pipeline during each of the plurality of starting clock cycles includes: detecting a synchronization signal associated with the plurality of starting clock cycles; and upon detecting the synchronization signal, starting the second pipeline.

  20. A non-transitory computer-readable storage medium comprising instructions that, when executed, configure an audio processing system of an artificial reality system to perform operations comprising: receiving audio samples collected by a first microphone; detecting a transition by a second microphone from a disabled state to an enabled state; performing, by the audio processing system and responsive to detecting the transition, phase alignment between the audio samples collected by the first microphone and audio samples collected by the second microphone; and performing, by the audio processing system, directional alignment using aligned audio samples.

Description

CROSS REFERENCE

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 62/938,114 filed on Nov. 20, 2019, which is hereby incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] This disclosure generally relates to audio processing, including audio processing in artificial reality systems, such as virtual reality, mixed reality and/or augmented reality systems.

BACKGROUND

[0003] Artificial reality systems are becoming increasingly ubiquitous with applications in many fields such as computer gaming, health and safety, industrial, and education. For example, artificial reality systems are being incorporated into mobile devices, gaming consoles, personal computers, movie theaters, and theme parks. In general, artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivatives thereof.

SUMMARY

[0004] This disclosure describes techniques that include aligning processing of audio samples collected by multiple audio sensors or microphones. In some examples, techniques are described for aligning processing of audio samples collected by two microphones, where one is enabled or turned on at an arbitrary time after the other is enabled or turned on. In some examples, audio samples collected by each such microphone may be processed by an audio processor in processing pipelines started at different times. As a result, the pipelines may complete processing at different times, thereby complicating use of such samples in further processing. To avoid this result, in one example, the audio processor may introduce a delay in starting the audio processing pipeline for a channel associated with the later-enabled microphone to ensure that the pipeline starts at the same time that a pipeline for the channel associated with the earlier-enabled microphone is started. In another example, the audio processor may use a synchronization signal to communicate to the later-started audio channel when to start its audio processing pipeline. If the later-started audio channel is signaled when the earlier-started audio channel is starting to process a new pipeline, the processing of audio data by the two channels may be aligned. Techniques are described for aligning processing of audio samples for channels that operate at the same frequency and at different frequencies.
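
The delay-based approach can be illustrated with a short sketch. The following is a minimal, hypothetical C++ example (the cycle-counting scheme, integer clock units, and function and parameter names are assumptions rather than anything taken from the disclosure): it computes how long the audio processor might wait after the second microphone is enabled so that the second pipeline starts on the same periodic starting clock cycle as the first pipeline.

```cpp
#include <cstdint>

// Hypothetical sketch: compute the delay (in clock cycles) before starting the
// second processing pipeline so that it begins on one of the first pipeline's
// periodic starting clock cycles. Assumes enable_cycle >= first_start_cycle.
//
// frame_len_cycles:  restart period of the first pipeline (its "length")
// enable_cycle:      clock cycle at which the second microphone became enabled
// first_start_cycle: any clock cycle at which the first pipeline (re)started
uint64_t alignmentDelay(uint64_t frame_len_cycles,
                        uint64_t enable_cycle,
                        uint64_t first_start_cycle) {
    // Position of the enable event inside the first pipeline's current frame.
    const uint64_t offset = (enable_cycle - first_start_cycle) % frame_len_cycles;
    // Wait until the first pipeline's next starting clock cycle.
    return (offset == 0) ? 0 : frame_len_cycles - offset;
}
```

When the two channels run at different sampling frequencies, as in claims 5 and 17, the delay would additionally account for the difference between the two frequencies, for example by expressing both pipeline periods in a common clock domain before applying the same modular arithmetic.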

[0005] The disclosed techniques may, in various implementations, provide one or more technical advantages. For instance, by aligning processing of audio samples, techniques for performing certain operations on audio samples (e.g., sound source identification, directional alignment, localization, mixing) are simplified and/or feasible. Further, by implementing techniques for aligning processing of audio samples, power-saving modes involving selectively turning on and off various microphones can be performed with little or no loss in actual or effective functionality when transitioning from a low power mode that uses only a small subset of microphones in a microphone array to a more robust power mode that uses a larger subset of microphones in the microphone array.

[0006] In some examples, this disclosure describes operations performed by an audio processing system in accordance with one or more aspects of this disclosure. In one specific example, this disclosure describes a system comprising a plurality of microphones, including a first microphone and second microphone; a control system configured to selectively transition the second microphone between an enabled state and a disabled state; and an audio processing system configured to: receive audio samples collected by the first microphone, detect a transition by the second microphone from the disabled state to the enabled state, perform phase alignment between audio samples collected by the first microphone and audio samples collected by the second microphone, and process the audio samples collected by the first microphone and the second microphone.
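
The synchronization-signal alternative mentioned in the summary can be sketched in a similar spirit. The class below is purely illustrative (the names SecondChannel, onSampleReady, and onSyncSignal are invented for this example): the later-enabled channel buffers samples provisionally and discards any partially processed pipeline data when the synchronization pulse tied to the first pipeline's starting clock cycle arrives, roughly as claim 8 describes.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of a later-enabled audio channel that aligns itself to a
// synchronization signal issued at each starting clock cycle of the first pipeline.
class SecondChannel {
public:
    // Called for every sample delivered by the second microphone.
    void onSampleReady(int32_t sample) {
        pending_.push_back(sample);  // provisional data collected before alignment
    }

    // Called when the synchronization signal is observed.
    void onSyncSignal() {
        if (!aligned_) {
            pending_.clear();  // discard pre-synchronization pipeline data
            aligned_ = true;   // subsequent frames start in phase with the first pipeline
        }
        // From here, pending_ would be handed to the shared processing stage
        // (sound source identification, localization, mixing, and so on).
    }

    bool aligned() const { return aligned_; }

private:
    std::vector<int32_t> pending_;
    bool aligned_ = false;
};
```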

[0007] In another example, this disclosure describes a method comprising receiving, by an audio processing system in an artificial reality system having a first microphone and a second microphone, audio samples collected by a first microphone; detecting, by the audio processing system, a transition by the second microphone from a disabled state to an enabled state; performing, by the audio processing system, phase alignment between audio samples collected by the first microphone and audio samples collected by the second microphone; and processing, by the audio processing system, the audio samples collected by the first microphone and the second microphone.

[0008] In another example, this disclosure describes a computer-readable storage medium comprising instructions that, when executed, configure processing circuitry of a computing system to perform operations comprising receiving audio samples collected by a first microphone; detecting a transition by a second microphone from a disabled state to an enabled state; performing, by the audio processing system, phase alignment between audio samples collected by the first microphone and audio samples collected by the second microphone; and processing, by the audio processing system, the audio samples collected by the first microphone and the second microphone.

[0009] The details of one or more examples of the techniques of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

[0010] FIG. 1A is an illustration depicting an example artificial reality system, in accordance with one or more aspects of the present disclosure.

[0011] FIG. 1B is an illustration depicting another example artificial reality system, in accordance with one or more aspects of the present disclosure.

[0012] FIG. 2A is an illustration depicting an example HMD configured to collect audio samples from a microphone array, in accordance with one or more aspects of the present disclosure.

[0013] FIG. 2B is an illustration depicting another example HMD configured to collect audio samples from a microphone array, in accordance with one or more aspects of the present disclosure.

[0014] FIG. 3 is a block diagram showing example implementations of a console and HMD of an artificial reality system that may selectively turn on and off various audio sensors, in accordance with one or more aspects of the present disclosure.

[0015] FIG. 4 is a block diagram depicting an example HMD of an artificial reality system that may selectively turn on and off various audio sensors, in accordance with one or more aspects of the present disclosure.

[0016] FIG. 5 is a block diagram illustrating a more detailed example implementation of a distributed architecture for a multi-device artificial reality system in which one or more devices are implemented using one or more SoC integrated circuits within each device, in accordance with one or more aspects of the present disclosure.

[0017] FIG. 6A, FIG. 6B, and FIG. 6C are timing diagrams illustrating processing of audio samples collected from multiple microphones, in accordance with one or more aspects of the present disclosure.

[0018] FIG. 7A, FIG. 7B, and FIG. 7C are timing diagrams illustrating processing of audio samples collected from multiple microphones operating at different sampling frequencies, in accordance with one or more aspects of the present disclosure.

[0019] FIG. 8 is a flow diagram illustrating an example process for transitioning between audio processing states in accordance with one or more aspects of the present disclosure.

[0020] FIG. 9 is a flow diagram illustrating operations performed by an example HMD in accordance with one or more aspects of the present disclosure.

DETAILED DESCRIPTION

[0021] FIG. 1A is an illustration depicting an example artificial reality system 10, in accordance with one or more aspects of the present disclosure. In the example of FIG. 1A, artificial reality system 10 includes head mounted device (HMD) 112, console 106 and, in some examples, one or more external sensors 90. In some examples, external sensors 90 may include microphones and/or audio sensors.

[0022] As shown, HMD 112 is typically worn by user 110 and comprises an electronic display and optical assembly for presenting artificial reality content 122 to user 110. In addition, HMD 112 includes one or more sensors (e.g., accelerometers) for tracking motion of the HMD and may include one or more image capture devices 138, e.g., cameras, line scanners and the like, for capturing image data of the surrounding physical environment. Although illustrated as a head-mounted display, AR system 10 may alternatively, or additionally, include glasses or other display devices for presenting artificial reality content 122 to user 110.

[0023] In this example, console 106 is shown as a single computing device, such as a gaming console, workstation, a desktop computer, or a laptop. In other examples, console 106 may be distributed across a plurality of computing devices, such as a distributed computing network, a data center, or a cloud computing system. Console 106, HMD 112, and sensors 90 may, as shown in this example, be communicatively coupled via network 104, which may be a wired or wireless network, such as WiFi, a mesh network or a short-range wireless communication medium. Although HMD 112 is shown in this example as in communication with, e.g., tethered to or in wireless communication with, console 106, in some implementations HMD 112 operates as a stand-alone, mobile artificial reality system.

[0024] In general, artificial reality system 10 uses information captured from a real-world, 3D physical environment to render artificial reality content 122 for display to user 110. In the example of FIG. 1A, user 110 views the artificial reality content 122 constructed and rendered by an artificial reality application executing on console 106 and/or HMD 112. In some examples, artificial reality content 122 may comprise a mixture of real-world imagery (e.g., hand 132, earth 120, wall 121) and virtual objects (e.g., virtual content items 124, 126, 140 and 142). In the example of FIG. 1A, artificial reality content 122 comprises virtual content items 124, 126, which represent virtual tables and may be mapped (e.g., pinned, locked, placed) to a particular position within artificial reality content 122. Similarly, artificial reality content 122 comprises virtual content item 142, which represents a virtual display device that is also mapped to a particular position within artificial reality content 122. A position for a virtual content item may be fixed, for instance relative to a wall or the earth, or variable, for instance relative to a user. In some examples, the particular position of a virtual content item within artificial reality content 122 is associated with a position within the real-world, physical environment (e.g., on a surface of a physical object).

[0025] In the example artificial reality experience shown in FIG. 1A, virtual content items 124, 126 are mapped to positions on the earth 120 and/or wall 121. The artificial reality system 10 may render one or more virtual content items in response to a determination that at least a portion of the location of virtual content items is in the field of view 130 of user 110. That is, virtual content appears only within artificial reality content 122 and does not exist in the real world, physical environment.

[0026] During operation, an artificial reality application constructs artificial reality content 122 for display to user 110 by tracking and computing pose information for a frame of reference, typically a viewing perspective of HMD 112. Using HMD 112 as a frame of reference, and based on a current field of view 130 as determined by a current estimated pose of HMD 112, the artificial reality application renders 3D artificial reality content which, in some examples, may be overlaid, at least in part, upon the real-world, 3D physical environment of user 110. During this process, the artificial reality application uses sensed data received from HMD 112, such as movement information and user commands, and, in some examples, data from any external sensors 90, such as external cameras or microphones, to capture 3D information within the real world, physical environment, such as motion by user 110 and/or feature tracking information with respect to user 110. Based on the sensed data, the artificial reality application determines a current pose for the frame of reference of HMD 112 and, in accordance with the current pose, renders the artificial reality content 122.

[0027] Artificial reality system 10 may trigger generation and rendering of virtual content items based on a current field of view 130 of user 110, as may be determined by real-time gaze tracking of the user, or other conditions. More specifically, image capture devices 138 of HMD 112 capture image data representative of objects in the real-world, physical environment that are within a field of view 130 of image capture devices 138. Field of view 130 typically corresponds with the viewing perspective of HMD 112. In some examples, the artificial reality application presents artificial reality content 122 comprising mixed reality and/or augmented reality. In some examples, the artificial reality application may render images of real-world objects, such as the portions of hand 132 and/or arm 134 of user 110, that are within field of view 130 along with the virtual objects, such as within artificial reality content 122. In other examples, the artificial reality application may render virtual representations of the portions of hand 132 and/or arm 134 of user 110 that are within field of view 130 (e.g., render real-world objects as virtual objects) within artificial reality content 122. In either example, user 110 is able to view the portions of their hand 132, arm 134, and/or any other real-world objects that are within field of view 130 within artificial reality content 122. In other examples, the artificial reality application may not render representations of the hand 132 or arm 134 of the user.

[0028] During operation, artificial reality system 10 performs object recognition within image data captured by image capture devices 138 of HMD 112 to identify hand 132, including optionally identifying individual fingers or the thumb, and/or all or portions of arm 134 of user 110. Further, artificial reality system 10 tracks the position, orientation, and configuration of hand 132 (optionally including particular digits of the hand), and/or portions of arm 134 over a sliding window of time.

[0029] Rather than supporting only artificial reality applications that are typically fully immersive, occupying the whole field of view 130 within artificial reality content 122, artificial reality system 10 may enable generation and display of artificial reality content 122 by a plurality of artificial reality applications that are concurrently running and which output content for display in a common scene. Artificial reality applications may include environment applications, placed applications, and floating applications. Environment applications may define a scene for the AR environment that serves as a backdrop for one or more applications to become active. For example, environment applications place the user in a scene, such as a beach, an office, an environment from a fictional location (e.g., from a game or story), an environment of a real location, or any other environment. In the example of FIG. 1A, the environment application provides a living room scene within artificial reality content 122.

[0030] A placed application is a fixed application that is expected to remain rendered (e.g., no expectation to close the applications) within artificial reality content 122. For example, a placed application may include surfaces to place other objects, such as a table, shelf, or the like. In some examples, a placed application includes decorative applications, such as pictures, candles, flowers, game trophies, or any ornamental item to customize the scene. In some examples, a placed application includes functional applications (e.g., widgets) that allow quick glancing at important information (e.g., agenda view of a calendar). In the example of FIG. 1A, artificial reality content 122 includes virtual tables 124 and 126 that include surfaces to place other objects.

[0031] A floating application may include an application implemented on a “floating window.” For example, a floating application may include 2D user interfaces, 2D applications (e.g., clock, calendar, etc.), or the like. In the example of FIG. 1A, a floating application may include clock application 128 that is implemented on a floating window within artificial reality content 122. In some examples, floating applications may integrate 3D content. For example, a floating application may be a flight booking application that provides a 2D user interface to view and select from a list of available flights and is integrated with 3D content such as a 3D visualization of a seat selection. As another example, a floating application may be a chemistry teaching application that provides a 2D user interface of a description of a molecule and also shows 3D models of the molecules. In another example, a floating application may be a language learning application that may also show a 3D model of objects with the definition and/or 3D charts for learning progress. In a further example, a floating application may be a video chat application that shows a 3D reconstruction of the face of the person on the other end of the line.

[0032] As further described below, artificial reality system 10 includes an application engine 107 that is configured to execute one or more artificial reality applications, including those that may collaboratively build and share a common artificial reality environment. In one example, application engine 107 receives modeling information of objects of a plurality of artificial reality applications. For instance, application engine 107 receives modeling information of agenda object 140 of an agenda application to display agenda information. Application engine 107 also receives modeling information of virtual display object 142 of a media content application to display media content (e.g., GIF, photo, application, live-stream, video, text, web-browser, drawing, animation, 3D model, representation of data files (including two-dimensional and three-dimensional datasets), or any other visible media).

[0033] In some examples, the artificial reality applications may, in accordance with the techniques, specify any number of offer areas (e.g., zero or more) that define objects and surfaces suitable for placing the objects. In some examples, the artificial reality application includes metadata describing the offer area, such as a specific node to provide the offer area, the pose of the offer area relative to that node, the surface shape of the offer area, and the size of the offer area. In the example of FIG. 1A, the agenda application defines offer area 150 on the surface of virtual table 124 to display agenda object 140. The agenda application may specify, for example, that the position and orientation (e.g., pose) of offer area 150 is on the top of virtual table 124, that the shape of offer area 150 is a rectangle, and that the size of offer area 150 is sufficient for placing agenda object 140. As another example, a media content application defines offer area 152 of virtual display object 142. The media content application may specify, for example, the position and orientation (i.e., pose) of offer area 152 for placing virtual display object 142, the shape of offer area 152 as a rectangle, and the size of offer area 152 for placing virtual display object 142.

[0034] The artificial reality applications may also request one or more attachments that describe connections between offer areas and the objects placed on them. In some examples, attachments include additional attributes, such as whether the object can be interactively moved or scaled. In the example of FIG. 1A, the agenda application requests an attachment between offer area 150 and agenda object 140 and includes additional attributes indicating that agenda object 140 may be interactively moved and/or scaled within offer area 150. Similarly, the media content application requests an attachment between offer area 152 and virtual display object 142 and includes additional attributes indicating that virtual display object 142 is fixed within offer area 152.

[0035] Alternatively, or additionally, objects are automatically placed on offer areas. For example, a request for attachment for an offer area may specify dimensions of the offer area and the object being placed, semantic information of the offer area and the object being placed, and/or physics information of the offer area and the object being placed. Dimensions of an offer area may include the necessary amount of space for the offer area to support the placement of the object, and dimensions of the object may include the size of the object. In some examples, an object is automatically placed in a scene based on semantic information, such as the type of object, the type of offer area, and what types of objects can be found on this type of area. For example, an offer area on a body of water may have semantic information specifying that only water compatible objects (e.g., boat) can be placed on the body of water. In some examples, an object is automatically placed in a scene based on physics (or pseudo-physics) information, such as whether an object has enough support in the offer area, whether the object will slide or fall, whether the object may collide with other objects, or the like.
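
As a rough illustration of the offer-area, attachment, and automatic-placement metadata described in paragraphs [0033]-[0035], the following sketch shows one possible data layout. All type and field names are hypothetical and are not drawn from the disclosure.

```cpp
#include <string>

// Hypothetical pose of an offer area relative to its scene node.
struct Pose { float x, y, z, yaw, pitch, roll; };

// Metadata an application might attach to an offer area.
struct OfferArea {
    std::string node;         // scene node providing the area (e.g., a virtual table)
    Pose pose;                // pose of the area relative to that node
    float width, depth;       // size of the (rectangular) surface
    std::string semanticTag;  // e.g., "table_top", "water_surface"
};

// Object requesting placement on an offer area.
struct PlaceableObject {
    float width, depth;
    std::string semanticTag;  // e.g., "agenda_widget", "boat"
};

// Attachment attributes describing the connection between an area and an object.
struct Attachment {
    bool movable;   // object may be interactively moved within the area
    bool scalable;  // object may be interactively scaled
};

// Simplified automatic-placement check combining the dimensional and semantic
// rules sketched in paragraph [0035]; physics checks are omitted.
bool canPlace(const OfferArea& area, const PlaceableObject& obj,
              bool semanticsCompatible) {
    const bool fits = obj.width <= area.width && obj.depth <= area.depth;
    return fits && semanticsCompatible;
}
```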

[0036] In some examples, console 106, HMD 112, and/or other components of system 10 of FIG. 1A may be implemented to control an array of microphones, including selectively enabling and disabling such microphones to conserve power when some of the microphones might not be needed by system 10 and/or HMD 112. In some examples, console 106, HMD 112, and/or other components of system 10 may, when such microphones are enabled or disabled, perform operations to align processing of audio samples, where such microphones may be turned on asynchronously and/or at arbitrary times.

[0037] The system and techniques may provide one or more technical advantages and practical applications. For example, by aligning processing of audio samples, techniques for performing certain operations on audio samples (e.g., sound source identification, directional alignment, localization, mixing) are simplified and/or feasible. Further, by implementing techniques for aligning processing of audio samples, power-saving modes involving selectively turning on and off various microphones can be performed with little or no loss in functionality when transitioning from a low power mode that uses only a small subset of microphones in a microphone array to a more robust power mode that uses a larger subset of microphones in the microphone array.

[0038] FIG. 1B is an illustration depicting another example artificial reality system 20 that generates an artificial reality scene, in accordance with one or more aspects of the present disclosure. Similar to artificial reality system 10 of FIG. 1A, in some examples, artificial reality system 20 of FIG. 1B may generate and render a common scene including objects for a plurality of artificial reality applications within a multi-user artificial reality environment. Artificial reality system 20 may also, in various examples, provide interactive placement and/or manipulation of virtual objects in response to detection of one or more particular gestures of a user within the multi-user artificial reality environment.

[0039] In the example of FIG. 1B, artificial reality system 20 includes external cameras 102A and 102B (collectively, “external cameras 102”), HMDs 112A-112C (collectively, “HMDs 112”), controllers 114A and 114B (collectively, “controllers 114”), console 106, and sensors 90. As shown in FIG. 1B, artificial reality system 20 represents a multi-user environment in which a plurality of artificial reality applications executing on console 106 and/or HMDs 112 may be concurrently running and displayed on a common rendered scene presented to each of users 110A-110C (collectively, “users 110”) based on a current viewing perspective of a corresponding frame of reference for the respective user. That is, in this example, each of the plurality of artificial reality applications constructs artificial content by tracking and computing pose information for a frame of reference for each of HMDs 112. Artificial reality system 20 uses data received from cameras 102, HMDs 112, and controllers 114 to capture 3D information within the real world environment, such as motion by users 110 and/or tracking information with respect to users 110 and objects 108, for use in computing updated pose information for a corresponding frame of reference of HMDs 112. As one example, the plurality of artificial reality applications may render on the same scene, based on a current viewing perspective determined for HMD 112C, artificial reality content 122 having virtual objects 124, 126, 140, and 142 as spatially overlaid upon real world objects 108A-108C (collectively, “real world objects 108”). Further, from the perspective of HMD 112C, artificial reality system 20 renders avatars 122A, 122B based upon the estimated positions for users 110A, 110B, respectively.

[0040] Each of HMDs 112 concurrently operates within artificial reality system 20. In the example of FIG. 1B, each of users 110 may be a “participant” (or “player”) in the plurality of artificial reality applications, and any of users 110 may be a “spectator” or “observer” in the plurality of artificial reality applications. HMD 112C may operate substantially similar to HMD 112 of FIG. 1A by tracking hand 132 and/or arm 134 of user 110C, and rendering the portions of hand 132 that are within field of view 130 as virtual hand 136 within artificial reality content 122. HMD 112A may also operate substantially similar to HMD 112 of FIG. 1A and receive user inputs by tracking movements of hands 132A, 132B of user 110A. HMD 112B may receive user inputs from controllers 114 held by user 110B. Controllers 114 may be in communication with HMD 112B using near-field communication, short-range wireless communication such as Bluetooth, wired communication links, or another type of communication link.

[0041] As shown in FIG. 1B, in addition to or alternatively to image data captured via camera 138 of HMD 112C, input data from external cameras 102 may be used to track and detect particular motions, configurations, positions, and/or orientations of hands and arms of users 110, such as hand 132 of user 110C, including movements of individual and/or combinations of digits (fingers, thumb) of the hand.

[0042] In some aspects, the artificial reality application can run on console 106, and can utilize image capture devices 102A and 102B to analyze configurations, positions, and/or orientations of hand 132B to identify input gestures that may be performed by a user of HMD 112A. The application engine 107 may render virtual content items, responsive to such gestures, motions, and orientations, in a manner similar to that described above with respect to FIG. 1A. For example, application engine 107 may provide interactive placement and/or manipulation of agenda object 140 and/or virtual display object 142 responsive to such gestures, motions, and orientations, in a manner similar to that described above with respect to FIG. 1A.

[0043] Image capture devices 102 and 138 may capture images in the visible light spectrum, the infrared spectrum, or other spectrum. Image processing described herein for identifying objects, object poses, and gestures, for example, may include processing infrared images, visible light spectrum images, and so forth.

[0044] In some examples, console 106, HMD 112, and/or other components of system 20 of FIG. 1B may be implemented to control an array of microphones, including selectively enabling and disabling such microphones to conserve power when some of the microphones might not be needed by system 20 and/or HMD 112. In some examples, console 106, HMD 112, and/or other components of system 20 may, when such microphones are enabled or disabled, align processing of audio samples collected by microphones turned on asynchronously and/or at arbitrary times.

[0045] FIG. 2A is an illustration depicting an example HMD 112 capable of and/or configured to collect audio samples from a microphone array, in accordance with one or more aspects of the present disclosure. HMD 112 of FIG. 2A may be an example of any of HMDs 112 of FIGS. 1A and 1B. HMD 112 may be part of an artificial reality system, such as artificial reality systems 10, 20 of FIGS. 1A, 1B, or may operate as a stand-alone, mobile artificial reality system configured to implement the techniques described herein.

[0046] In this example, HMD 112 includes a front rigid body and a band to secure HMD 112 to a user. In addition, HMD 112 includes an interior-facing electronic display 203 configured to present artificial reality content to the user. Electronic display 203 may be any suitable display technology, such as liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating visual output. In some examples, the electronic display is a stereoscopic display for providing separate images to each eye of the user. In some examples, the known orientation and position of display 203 relative to the front rigid body of HMD 112 is used as a frame of reference, also referred to as a local origin, when tracking the position and orientation of HMD 112 for rendering artificial reality content according to a current viewing perspective of HMD 112 and the user. In other examples, HMD 112 may take the form of other wearable head mounted displays, such as glasses or goggles.

[0047] As further shown in FIG. 2A, in this example, HMD 112 further includes one or more sensors 206, such as one or more motion sensors, accelerometers (also referred to as inertial measurement units or “IMUs”) that output data indicative of current acceleration of HMD 112, GPS sensors that output data indicative of a location of HMD 112, radar or sonar that output data indicative of distances of HMD 112 from various objects, or other sensors that may provide indications of a location or orientation of HMD 112 or other objects within a physical environment. HMD 112 may include one or more audio sensors or microphones 207 for capturing audio from the physical environment. Such microphones 207 may be arranged in an array and may be capable of being used for performing directional alignment, sound source identification, direction of arrival estimation, audio localization, and other procedures. In some examples, each of microphones 207 can be selectively enabled and disabled (i.e., turned on or off) to conserve power.
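
As one illustration of why phase-aligned samples matter for the procedures listed above, the sketch below estimates a direction of arrival from the arrival-time difference between two microphones of the array. This is the standard far-field approximation rather than anything specific to the patent, and the function name and parameters are assumptions; the point is that an unaligned channel skews the measured delay and therefore the estimated angle.

```cpp
#include <algorithm>
#include <cmath>

// Estimate the direction of a sound source (in radians, relative to the
// broadside direction) from the inter-microphone time difference.
// delaySeconds:  arrival-time difference between the two microphones
// spacingMeters: distance between the two microphones
double directionOfArrival(double delaySeconds, double spacingMeters) {
    constexpr double kSpeedOfSound = 343.0;  // m/s in air at room temperature
    double s = kSpeedOfSound * delaySeconds / spacingMeters;
    s = std::clamp(s, -1.0, 1.0);  // guard against noisy delay estimates
    return std::asin(s);
}
```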

[0048] Moreover, HMD 112 may include integrated image capture devices 138A and 138B (collectively, “image capture devices 138”), such as video cameras, laser scanners, Doppler radar scanners, depth scanners, or the like, configured to output image data representative of the physical environment. More specifically, image capture devices 138 capture image data representative of objects (including hand 132) in the physical environment that are within a field of view 130A, 130B of image capture devices 138, which typically corresponds with the viewing perspective of HMD 112. HMD 112 includes an internal control unit 210, which may include an internal power source and one or more printed-circuit boards having one or more processors, memory, and hardware to provide an operating environment for executing programmable operations to process sensed data and present artificial reality content on display 203.

[0049] In some examples, application engine 107 controls interactions with the objects in the scene and delivers input and other signals to interested artificial reality applications. For example, control unit 210 is configured to, based on the sensed data, identify a specific gesture or combination of gestures performed by the user and, in response, perform an action. As explained herein, control unit 210 may perform object recognition within image data captured by image capture devices 138 to identify a hand 132, fingers, thumb, arm or another part of the user, and track movements of the identified part to identify pre-defined gestures performed by the user. In response to identifying a pre-defined gesture, control unit 210 takes some action, such as generating and rendering artificial reality content that is interactively placed or manipulated for display on electronic display 203.

[0050] In accordance with the techniques described herein, HMD 112 may detect gestures of hand 132 and, based on the detected gestures, shift application content items placed on offer areas within the artificial reality content to another location within the offer area or to another offer area within the artificial reality content. For instance, image capture devices 138 may be configured to capture image data representative of a physical environment. Control unit 210 may output artificial reality content on electronic display 203. Control unit 210 may render a first offer area (e.g., offer area 150 of FIGS. 1A and 1B) that includes an attachment that connects an object (e.g., agenda object 140 of FIGS. 1A and 1B) to the offer area. Control unit 210 may identify, from the image data, a selection gesture, where the selection gesture is a configuration of hand 132 that performs a pinching or grabbing motion directed at the object within the first offer area, and a subsequent translation gesture (e.g., moving) of hand 132 from the first offer area to a second offer area (e.g., offer area 152 of FIGS. 1A and 1B). In response to control unit 210 identifying the selection gesture and the translation gesture, control unit 210 may process the attachment to connect the object to the second offer area and render the object placed on the second offer area.
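
A compact sketch of that selection-and-translation flow might look like the following. The gesture enumeration, types, and re-binding logic are illustrative assumptions only, not the control unit's actual implementation.

```cpp
#include <string>

// Hypothetical gesture events recognized from the captured image data.
enum class Gesture { Select, Translate, Release };

// Hypothetical attachment connecting an object to an offer area.
struct ObjectAttachment {
    std::string objectId;     // e.g., an agenda object
    std::string offerAreaId;  // e.g., the offer area it currently sits on
};

// Tracks a pinch/grab selection followed by a translation to another offer
// area, and re-binds the attachment when the gesture completes.
class AttachmentMover {
public:
    void onGesture(Gesture g, const std::string& hoveredOfferArea,
                   ObjectAttachment& attachment) {
        switch (g) {
        case Gesture::Select:                 // pinch or grab on the object
            selecting_ = true;
            break;
        case Gesture::Translate:              // hand moves toward another area
            if (selecting_) pendingArea_ = hoveredOfferArea;
            break;
        case Gesture::Release:                // gesture ends: re-attach and re-render
            if (selecting_ && !pendingArea_.empty()) {
                attachment.offerAreaId = pendingArea_;
            }
            selecting_ = false;
            pendingArea_.clear();
            break;
        }
    }

private:
    bool selecting_ = false;
    std::string pendingArea_;
};
```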

[0051] FIG. 2B is an illustration depicting another example HMD 112 capable of and/or configured to collect audio samples from a microphone array, in accordance with one or more aspects of the present disclosure. As shown in FIG. 2B, HMD 112 may take the form of glasses. HMD 112 of FIG. 2B may be an example of any of HMDs 112 of FIGS. 1A and 1B. HMD 112 may be part of an artificial reality system, such as artificial reality systems 10, 20 of FIGS. 1A, 1B, or may operate as a stand-alone, mobile artificial reality system configured to implement the techniques described herein.

[0052] In this example, HMD 112 takes the form of glasses comprising a front frame including a bridge to allow the HMD 112 to rest on a user’s nose and temples (or “arms”) that extend over the user’s ears to secure HMD 112 to the user. In addition, HMD 112 of FIG. 2B includes interior-facing electronic displays 203A and 203B (collectively, “electronic displays 203”) configured to present artificial reality content to the user. Electronic displays 203 may be any suitable display technology, such as liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating visual output. In the example shown in FIG. 2B, electronic displays 203 form a stereoscopic display for providing separate images to each eye of the user. In some examples, the known orientation and position of display 203 relative to the front frame of HMD 112 is used as a frame of reference, also referred to as a local origin, when tracking the position and orientation of HMD 112 for rendering artificial reality content according to a current viewing perspective of HMD 112 and the user.

[0053] As further shown in FIG. 2B, in this example, HMD 112 further includes one or more sensors 206, such as one or more motion sensors or accelerometers (also referred to as inertial measurement units or “IMUs”) that output data indicative of current acceleration of HMD 112, GPS sensors that output data indicative of a location of HMD 112, radar or sonar that output data indicative of distances of HMD 112 from various objects, or other sensors that provide indications of a location or orientation of HMD 112 or other objects within a physical environment. HMD 112 of FIG. 2B may also include one or more audio sensors or microphones 207 for capturing audio from the physical environment. Such microphones 207 may be arranged in an array and capable of being used for performing directional alignment, sound source identification, direction of arrival estimation, audio localization, and other procedures. In some examples, each of microphones 207 can be selectively turned on or off to conserve power. Moreover, HMD 112 of FIG. 2B may include integrated image capture devices 138A and 138B (collectively, “image capture devices 138”), such as video cameras, laser scanners, Doppler radar scanners, depth scanners, or the like, configured to output image data representative of the physical environment. HMD 112 includes an internal control unit 210, which may include an internal power source and one or more printed-circuit boards having one or more processors, memory, and hardware to provide an operating environment for executing programmable operations to process sensed data and present artificial reality content on display 203.

[0054] FIG. 3 is a block diagram showing example implementations of a console 106 and HMD 112 of an artificial reality system that may selectively turn on and off various audio sensors, in accordance with one or more aspects of the present disclosure. In the example of FIG. 3, console 106 performs pose tracking, gesture detection, and generation and rendering of multiple artificial reality applications 322 that may be concurrently running and outputting content for display within a common 3D AR scene on electronic display 203 of HMD 112.

[0055] In this example, HMD 112 includes one or more processors 302 and memory 304 that, in some examples, provide a computer platform for executing an operating system 305, which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system. In turn, operating system 305 provides a multitasking operating environment for executing one or more software components 307, including application engine 107. As discussed with respect to the examples of FIGS. 2A and 2B, processors 302 are coupled to electronic display 203, sensors 206 and image capture devices 138. In some examples, processors 302 and memory 304 may be separate, discrete components. In other examples, memory 304 may be on-chip memory collocated with processors 302 within a single integrated circuit.

[0056] HMD 112 may include audio processing module 390, which may perform operations relating to processing audio samples collected by one or more audio sensors or microphones 207. Audio processing module 390 may include a control system or controller logic that is capable of or configured to selectively transition each of sensors 207 into an enabled or disabled state (e.g., “turn on” or “turn off” microphones 207).
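
The kind of control logic this paragraph attributes to audio processing module 390 might be organized as in the hedged sketch below: microphones are transitioned between enabled and disabled states when the system needs more or less robust audio processing, echoing claims 10 and 11. The Microphone type, AudioDemand states, and controller interface are assumptions made for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a physical microphone that can be enabled or disabled.
struct Microphone {
    bool enabled = false;
    void enable()  { enabled = true; }
    void disable() { enabled = false; }
};

// Coarse processing demand derived from status changes in the artificial reality system.
enum class AudioDemand { LowPower, Robust };

class MicArrayController {
public:
    explicit MicArrayController(std::vector<Microphone>* mics) : mics_(mics) {}

    // Keep a small subset of microphones on in low-power mode and the whole
    // array on when more robust processing (e.g., localization) is required.
    void onStatusChange(AudioDemand demand) {
        const std::size_t active = (demand == AudioDemand::Robust)
                                       ? mics_->size()
                                       : std::min<std::size_t>(1, mics_->size());
        for (std::size_t i = 0; i < mics_->size(); ++i) {
            if (i < active) (*mics_)[i].enable();
            else            (*mics_)[i].disable();
        }
        // The audio processing system would then detect these transitions and
        // phase-align any newly enabled channels before further processing.
    }

private:
    std::vector<Microphone>* mics_;
};
```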

……
……
……
