

Patent: Virtual Reality, Augmented Reality, And Mixed Reality Systems With Spatialized Audio

Publication Number: 10448189

Publication Date: 20191015

Applicants: Magic Leap

Abstract

A spatialized audio system includes a sensor to detect a head pose of a listener. The system also includes a processor to render audio data in first and second stages. The first stage includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources. The second stage includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

FIELD OF THE INVENTION

The present disclosure relates to virtual reality, augmented reality, and/or mixed reality systems with spatialized audio systems, and methods for generating a virtual reality, augmented reality, and/or mixed reality experience including spatialized audio using same.

BACKGROUND

Modern computing and display technologies have facilitated the development of systems for so-called “virtual reality” (“VR”), “augmented reality” (“AR”), and/or “mixed reality” (“MR”) experiences. This can be done by presenting computer-generated imagery to the user through a head-mounted display. This imagery creates a sensory experience which immerses the user in the simulated environment. A VR scenario typically involves presentation of digital or virtual image information without transparency to actual real-world visual input.

AR systems generally supplement a real-world environment with simulated elements. For example, AR systems may provide a user with a view of the surrounding real-world environment via a head-mounted display. Computer-generated imagery can also be presented on the display to enhance the real-world environment. This computer-generated imagery can include elements that are contextually related to the real-world environment, such as simulated text, images, objects, etc. MR systems also introduce simulated objects into a real-world environment, but these objects typically feature a greater degree of interactivity than in AR systems, and the simulated elements can often be interactive in real time. VR/AR/MR scenarios can be presented with spatialized audio to improve the user experience.

Various optical systems generate images at various depths for displaying VR/AR/MR scenarios. Some such optical systems are described in U.S. Utility patent application Ser. No. 14/738,877 and U.S. Utility patent application Ser. No. 14/555,585, filed on Nov. 27, 2014, the contents of which have been previously incorporated by reference herein.

Current spatialized audio systems can cooperate with 3-D optical systems, such as those in 3-D cinema, 3-D video games, virtual reality, augmented reality, and/or mixed reality systems, to render, both optically and sonically, virtual objects. Objects are “virtual” in that they are not real physical objects located in respective positions in three-dimensional space. Instead, virtual objects only exist in the brains (e.g., the optical and/or auditory centers) of viewers and/or listeners when stimulated by light beams and/or soundwaves respectively directed to the eyes and/or ears of audience members. Unfortunately, the listener position and orientation requirements of current spatialized audio systems limit their ability to create the audio portions of virtual objects in a realistic manner for out-of-position listeners.

Current spatialized audio systems, such as those for home theaters and video games, utilize the “5.1” and “7.1” formats. A 5.1 spatialized audio system includes left and right front channels, left and right rear channels, a center channel and a subwoofer. A 7.1 spatialized audio system includes the channels of the 5.1 audio system and left and right channels aligned with the intended listener. Each of the above-mentioned channels corresponds to a separate speaker. Cinema audio systems and cinema grade home theater systems include DOLBY ATMOS, which adds channels configured to be delivered from above the intended listener, thereby immersing the listener in the sound field and surrounding the listener with sound.
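For reference, these channel formats can be summarized as a simple data structure. The sketch below is illustrative only; the azimuth angles are common placement conventions (0° directly ahead of the intended listener, negative to the left, positive to the right) and are not values specified in this disclosure.

```python
# Illustrative sketch: nominal loudspeaker layouts for the 5.1 and 7.1 formats.
# Azimuth angles are typical placement conventions, not values taken from this
# disclosure. The subwoofer/LFE channel has no fixed direction, so it is None.

CHANNEL_LAYOUTS = {
    "5.1": {
        "front_left": -30, "center": 0, "front_right": 30,
        "rear_left": -110, "rear_right": 110,
        "subwoofer": None,
    },
    "7.1": {
        "front_left": -30, "center": 0, "front_right": 30,
        "side_left": -90, "side_right": 90,        # in line with the intended listener
        "rear_left": -135, "rear_right": 135,
        "subwoofer": None,
    },
}

if __name__ == "__main__":
    for name, layout in CHANNEL_LAYOUTS.items():
        speakers = [ch for ch in layout if ch != "subwoofer"]
        print(f"{name}: {len(speakers)} full-range channels + 1 LFE")
```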

Despite improvements in spatialized audio systems, current spatialized audio systems are not capable of taking into account the location and orientation of a listener, let alone the respective locations and orientations of a plurality of listeners. As a result, current spatialized audio systems generate sound fields on the assumption that all listeners are positioned adjacent the center of the sound field and oriented facing the center channel of the system, and they impose listener position and orientation requirements for optimal performance. Accordingly, in a classic one-to-many system, spatialized audio may be delivered to a listener such that the sound appears to be backwards if that listener happens to be facing opposite the expected orientation. Such misaligned sound can lead to sensory and cognitive dissonance and degrade the spatialized audio experience, as well as any VR/AR/MR experience presented therewith. In serious cases, sensory and cognitive dissonance can cause physiological side effects, such as headaches, nausea, discomfort, etc., that may lead users to avoid spatialized audio experiences or the VR/AR/MR experiences presented therewith.

In a similar technology space, mixed media systems such as those found in theme park rides (e.g., DISNEY’S STAR TOURS) add real-life special effects such as lights and motion to 3-D film and spatialized audio. Users of 3-D mixed media systems are typically required to wear glasses that facilitate the system’s generation of 3-D imagery. Such glasses may contain left and right lenses with different polarizations or color filters, as in traditional anaglyph stereoscopic 3-D systems. The 3-D mixed media system projects overlapping images with different polarizations or colors such that users wearing stereoscopic glasses see slightly different images in their left and right eyes. The differences between these images are exploited to generate 3-D optical images. However, such systems are prohibitively expensive. Moreover, such mixed media systems do not address the inherent user position and orientation requirements of current spatialized audio systems.

To address these issues, some VR/AR/MR systems include head mounted speakers operatively coupled to a spatialized audio system, so that spatialized audio can be rendered using a “known” position and orientation relationship between speakers and a user/listener’s ears. Various examples of such VR/AR/MR systems are described in U.S. Provisional Patent Application Ser. No. 62/369,561, the contents of which have been previously incorporated-by-reference herein. While these VR/AR/MR systems address the listener position issue described above, the systems still have limitations related to processing time, lag and latency that can result in cognitive dissonance with rapid user head movements.

For instance, some VR/AR/MR systems deliver spatialized audio to a user/listener through head mounted speakers. Accordingly, if a virtual sound source (e.g., a bird) is virtually located to the right of a user/listener in a first pose (which may be detected by the VR/AR/MR system), the VR/AR/MR system may deliver generated sound (e.g., chirping) corresponding to the virtual sound source that appears to originate from the right of the user/listener. The VR/AR/MR system may deliver the sound mostly through one or more speakers mounted adjacent the user/listener’s right ear. If the user/listener turns her head to face the virtual sound source, the VR/AR/MR system may detect this second pose and deliver generated sound corresponding to the virtual sound source that appears to originate from in front of the user/listener.

However, if the user/listener rapidly turns her head to face the virtual sound source, the VR/AR/MR system will experience a lag or latency related to various limitations of the system and the method of generating virtual sound based on a pose of a user/listener. An exemplary virtual sound generation method includes, inter alia, (1) detecting a pose change, (2) communicating the detected pose change to the processor, (3) generating new audio data based on the changed pose, (4) communicating the new audio data to the speakers, and (5) generating virtual sound based on the new audio data. These steps between detecting a pose change and generating virtual sound can result in lag or latency that can lead to cognitive dissonance in a VR/AR/MR experience with associated spatialized audio when the user/listener rapidly changes her pose.
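As a rough illustration of how steps (1) through (5) compound into motion-to-sound latency, the sketch below sums per-step delays for such a single-stage pipeline. Every step name and duration is a hypothetical placeholder, not a measurement of any particular system.

```python
# Hypothetical illustration of the single-stage path in steps (1)-(5) above.
# All durations are made-up placeholders; the point is that the delays add up
# into one motion-to-sound latency figure.

PIPELINE_STEPS_MS = {
    "detect_pose_change": 2.0,               # (1) sensor sampling / pose estimation
    "communicate_pose_to_processor": 1.0,    # (2)
    "render_audio_for_new_pose": 10.0,       # (3) full re-render of every source
    "communicate_audio_to_speakers": 2.0,    # (4)
    "generate_virtual_sound": 5.0,           # (5) output buffering / transduction
}

def motion_to_sound_latency_ms(steps=PIPELINE_STEPS_MS):
    """Total delay between a head movement and the corresponding sound."""
    return sum(steps.values())

if __name__ == "__main__":
    print(f"Estimated motion-to-sound latency: {motion_to_sound_latency_ms():.1f} ms")
```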

Spatialized audio associated with a VR/AR/MR experience makes this cognitive dissonance especially apparent, because a virtual sound (e.g., a chirp) may appear to emanate from a location different from that of the image of the virtual object (e.g., a bird). However, all spatialized audio systems (with or without a VR/AR/MR system) can produce cognitive dissonance under rapid pose changes, because all spatialized audio systems include virtual sound sources with virtual locations and orientations relative to the user/listener. For instance, if a virtual bird is located to the right of the listener, its chirp should appear to emanate from the same point in space regardless of the orientation of the user’s head, or how quickly that orientation changes.
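Keeping the chirp anchored to a fixed point in space, whatever the head does, amounts to re-expressing a world-fixed source position in the listener's head frame on every pose update. The following sketch shows that transformation for a simple head yaw, under assumed coordinate conventions; it is an illustration, not the patent's algorithm.

```python
import numpy as np

# Assumed conventions: x = right, y = forward, z = up, head at the origin.
# A positive yaw turns the head to the left (counterclockwise from above).

def yaw_rotation(yaw_deg: float) -> np.ndarray:
    """Rotation matrix mapping head-frame vectors to world-frame vectors."""
    t = np.radians(yaw_deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def to_head_frame(source_world: np.ndarray, head_yaw_deg: float) -> np.ndarray:
    """Express a world-fixed source position relative to the rotated head."""
    return yaw_rotation(head_yaw_deg).T @ source_world

bird_world = np.array([1.0, 0.0, 0.0])     # virtual bird fixed 1 m to the listener's right
print(to_head_frame(bird_world, 0.0))      # head facing forward: bird is to the right
print(to_head_frame(bird_world, -90.0))    # head turned right toward the bird: bird is straight ahead
```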

SUMMARY

In one embodiment, a spatialized audio system includes a sensor to detect a head pose of a listener. The system also includes a processor to render audio data in first and second stages. The first stage includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources. The second stage includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In another embodiment, a spatialized audio system includes a sensor to detect a first head pose at a first time and a second head pose of a listener at a second time, the second time being after the first time. The system also includes a processor to render audio data in first and second stages. The first stage includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources based on the detected first head pose of the listener. The second stage includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected second head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In still another embodiment, a method of rendering spatialized audio includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources. The method also includes detecting a head pose of a listener. The method further includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In yet another embodiment, a method of rendering spatialized audio includes detecting a first head pose of a listener. The method also includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources based on the detected first head pose of the listener. The method further includes detecting a second head pose of the listener. Moreover, the method includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected second head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In still another embodiment, a computer program product is embodied in a non-transitory computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor, causes the processor to execute a method for rendering spatialized audio. The method includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources. The method also includes detecting a head pose of a listener. The method further includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In yet another embodiment, a computer program product is embodied in a non-transitory computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor, causes the processor to execute a method for rendering spatialized audio. The method includes detecting a first head pose of a listener. The method also includes rendering first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources based on the detected first head pose of the listener. The method further includes detecting a second head pose of the listener. Moreover, the method includes rendering the second audio data corresponding to the second plurality of sources to third audio data corresponding to a third plurality of sources based on the detected second head pose of the listener. The second plurality of sources consists of fewer sources than the first plurality of sources.

In one or more embodiments, the sensor is an inertial measurement unit. The first and/or second pluralities of sources may be virtual sound sources. The sensor may detect the head pose of the listener after the first stage and before the second stage. The sensor may detect the head pose of the listener immediately before the second stage.

In one or more embodiments, the third plurality of sources consists of fewer sources than, or an equal number of sources as, the second plurality of sources. The first audio data may be a full audio stream data set. The second plurality of sources may consist of 8 or fewer sources.

In one or more embodiments, each of the first, second, and/or third pluralities of sources corresponds to a different position/orientation. The first plurality of sources may correspond to a first plurality of positions. The second plurality of sources may correspond to a second plurality of positions, and each of the second plurality of positions may be closer to the listener than each of the first plurality of positions. The second plurality of positions may not be located in a single plane.
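One way to satisfy these constraints, shown purely as an illustrative assumption, is to place the second plurality of sources at the vertices of a small cube centered on the listener's head: eight sources, all close to the listener, and not confined to a single plane.

```python
import itertools
import numpy as np

# Illustrative assumption only: eight intermediate sources at the vertices of
# a cube centered on the head, scaled so each sits about 0.25 m (roughly 10
# inches) from the head center.

def intermediate_source_positions(radius_m: float = 0.25) -> np.ndarray:
    """Positions of 8 intermediate sources in the head frame (meters)."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    return corners / np.linalg.norm(corners[0]) * radius_m  # push each vertex onto a sphere of the given radius

positions = intermediate_source_positions()
print(positions.shape)                 # (8, 3): eight sources, three coordinates each
print(np.unique(positions[:, 2]))      # two distinct heights, so the set spans more than one plane
```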

In one or more embodiments, the system also includes a plurality of speakers corresponding to the third plurality of sources to produce sound based on the third audio data. Each of the third plurality of sources may correspond to a different position, and each of the plurality of speakers may correspond to a respective source of the third plurality of sources at a respective different position.

In one or more embodiments, the second stage may include rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources based on the detected head pose of the listener and respective positions/orientations of the second plurality of sources. The second stage may be more sensitive to rotation than translation of the listener. The second stage may be a rotation-only audio transformation. Each of the second plurality of sources may be located from about 6 inches to about 12 inches from the listener’s head.
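A rotation-only second stage can be sketched as re-orienting the first-stage source directions by the head rotation that has occurred since the first stage, while ignoring head translation (reasonable when the sources sit only inches from the head). The sketch below is an assumption-laden illustration of such a transformation, not the patent's exact formulation.

```python
import numpy as np

# Rotation-only late correction: directions rendered in the first stage are
# re-expressed using only the head rotation accumulated since then; head
# translation is deliberately ignored.

def yaw_matrix(deg: float) -> np.ndarray:
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def warp_directions(dirs: np.ndarray, r_first: np.ndarray, r_latest: np.ndarray) -> np.ndarray:
    """Rotate first-stage source directions into the newest head frame (rows are unit vectors)."""
    delta = r_latest.T @ r_first       # head rotation since the first stage
    return dirs @ delta.T

ahead = np.array([[0.0, 1.0, 0.0]])    # a source straight ahead in the first-stage head frame
# Head turned 30 degrees to the left since the first stage, so at playback the
# source should appear about 30 degrees to the right of straight ahead.
print(warp_directions(ahead, yaw_matrix(0.0), yaw_matrix(30.0)))
```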

In one or more embodiments, the sensor detects the first head pose of the listener before the first stage. The sensor may detect the second head pose of the listener after the first stage and before the second stage. The sensor may detect the second head pose of the listener immediately before the second stage.

In one or more embodiments, the second stage includes rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources based on the detected second head pose of the listener and respective positions/orientations of the second plurality of sources.

In one or more embodiments, the method also includes detecting the head pose of the listener after rendering the first audio data and before rendering the second audio data. The method may also include detecting the head pose of the listener immediately before rendering the second audio data. The method may also include producing sound based on the third audio data through a plurality of speakers corresponding to the third plurality of sources. The method may also include rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources based on the detected head pose of the listener and respective positions/orientations of the second plurality of sources.

In one or more embodiments, rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources is more sensitive to rotation than translation of the listener. Rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources may be a rotation-only audio transformation.

In one or more embodiments, the method also includes detecting the first head pose of the listener before rendering the first audio data. The method may also include detecting the second head pose of the listener after rendering the first audio data and before rendering the second audio data. The method may also include detecting the second head pose of the listener immediately before rendering the second audio data.

In one or more embodiments, the method also includes rendering the second audio data corresponding to the second plurality of sources to the third audio data corresponding to the third plurality of sources based on the detected second head pose of the listener and respective positions/orientations of the second plurality of sources.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings illustrate the design and utility of various embodiments of the present invention. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. In order to better appreciate how to obtain the above-recited and other advantages and objects of various embodiments of the invention, a more detailed description of the present inventions briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 depicts a user’s view of augmented reality/mixed reality through a wearable AR/MR user device according to one embodiment;

FIG. 2 is a top schematic view of a spatialized audio system according to one embodiment worn on a user/listener’s head;

FIG. 3 is a back schematic view of the spatialized audio system worn on the user/listener’s head as depicted in FIG. 2;

FIG. 4 is a more detailed top schematic view of the spatialized audio system worn on the user/listener’s head as depicted in FIG. 2;

FIGS. 5 to 8 are partial perspective and partial schematic views of spatialized audio systems worn on a user/listener’s head according to various embodiments;

FIG. 9 is a detailed schematic view of a pose-sensitive spatialized audio system according to one embodiment;

FIG. 10 is a schematic view of a spatialized sound field generated by a real physical audio source;

FIG. 11 is a back schematic view of a spatialized audio experience including various virtual sound sources and a virtual object according to one embodiment;

FIG. 12 is a side schematic view of the spatialized audio experience depicted in FIG. 11;

FIGS. 13 and 14 are top views of a user/listener receiving a pose-sensitive spatialized audio experience according to one embodiment; in FIG. 13, the user/listener is facing forward, while in FIG. 14, the user/listener is facing to the left;

FIGS. 15 and 17 are flowcharts depicting methods of late-frame time warp, pose-sensitive audio processing utilizing a spatialized audio system according to two embodiments;

FIG. 16 schematically depicts late-frame time warp audio processing according to one embodiment.

DETAILED DESCRIPTION

Various embodiments of the invention are directed to systems, methods, and articles of manufacture for spatialized audio systems in a single embodiment or in multiple embodiments. Other objects, features, and advantages of the invention are described in the detailed description, figures, and claims.

Various embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples of the invention so as to enable those skilled in the art to practice the invention. Notably, the figures and the examples below are not meant to limit the scope of the present invention. Where certain elements of the present invention may be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present invention will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the invention. Further, various embodiments encompass present and future known equivalents to the components referred to herein by way of illustration.

The spatialized audio systems may be implemented independently of AR/MR systems, but many embodiments below are described in relation to AR/MR systems for illustrative purposes only. Further, the spatialized audio systems described herein may also be used in an identical manner with VR systems.

Summary of Problems and Solutions

Spatialized audio systems, such as those for use with or forming parts of 2-D/3-D cinema systems, 2-D/3-D video games and VR/AR/MR systems, render, present and emit spatialized audio corresponding to virtual objects with virtual locations in real-world, physical, 3-D space. As used in this application, “emitting,” “producing” or “presenting” audio or sound includes, but is not limited to, causing formation of sound waves that may be perceived by the human auditory system as sound (including sub-sonic low frequency sound waves). These virtual locations are typically “known” to (i.e., recorded in) the spatialized audio system using a coordinate system (e.g., a coordinate system with the spatialized audio system at the origin and a known orientation relative to the spatialized audio system). Virtual audio sources associated with virtual objects have content, position and orientation. Another characteristic of virtual audio sources is volume, which falls off with the square of the distance from the listener. However, current spatialized audio systems (e.g., 5.1 spatialized audio systems, 7.1 spatialized audio systems, cinema audio systems and even some head-worn audio systems) all have listener position and orientation restrictions that limit the number and characteristics of listeners for which the spatialized audio systems can generate realistic spatialized audio.
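A minimal sketch of this distance falloff follows, with the usual physical convention made explicit: intensity falls off with the square of the distance, so pressure amplitude falls off linearly with distance. The reference distance and near-field clamp are illustrative assumptions.

```python
# Sketch of distance-based attenuation for a virtual audio source. The
# reference distance and the near-field clamp are illustrative assumptions.

def distance_gains(distance_m: float, reference_m: float = 1.0, min_distance_m: float = 0.1):
    """Amplitude and intensity gains at `distance_m`, relative to `reference_m`."""
    d = max(distance_m, min_distance_m)   # avoid a divide-by-zero for a source at the head itself
    amplitude = reference_m / d           # pressure amplitude falls off as 1/d
    intensity = amplitude ** 2            # intensity falls off as 1/d^2 (inverse square)
    return amplitude, intensity

for d in (0.5, 1.0, 2.0, 4.0):
    a, i = distance_gains(d)
    print(f"{d:4.1f} m -> amplitude gain {a:.3f}, intensity gain {i:.3f}")
```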

Head-worn spatialized audio systems according to some embodiments described herein track a pose (e.g., position and orientation) of a user/listener to more accurately render spatialized audio such that audio associated with various virtual objects appears to originate from the virtual positions corresponding to the respective virtual objects. Systems according to some embodiments described herein also track a head pose of a user/listener to more accurately render spatialized audio such that directional audio associated with various virtual objects appears to propagate in the virtual directions appropriate for the respective virtual objects (e.g., out of the mouth of a virtual character, and not out of the back of the virtual character’s head). Moreover, systems according to some embodiments described herein include other real physical and virtual objects in their rendering of spatialized audio such that audio associated with various virtual objects appears to reflect appropriately off of the real physical and virtual objects.
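As an example of directional audio tied to a virtual object's orientation, the sketch below applies a simple cardioid-style directivity weight: sound aimed at the listener (a character speaking toward her) plays at full strength, while sound from directly behind the source is attenuated. The directivity model and attenuation value are assumptions for illustration, not the patent's rendering model.

```python
import numpy as np

# Illustrative cardioid-style directivity weight for a virtual source. A
# source aimed at the listener plays at full strength; a source facing
# directly away is attenuated to `rear_attenuation`.

def directivity_gain(source_facing: np.ndarray, source_to_listener: np.ndarray,
                     rear_attenuation: float = 0.25) -> float:
    """Gain in [rear_attenuation, 1.0] based on how directly the source faces the listener."""
    f = source_facing / np.linalg.norm(source_facing)
    v = source_to_listener / np.linalg.norm(source_to_listener)
    alignment = 0.5 * (1.0 + float(f @ v))                # 1.0 face-on, 0.0 facing directly away
    return rear_attenuation + (1.0 - rear_attenuation) * alignment

facing_listener = np.array([0.0, -1.0, 0.0])               # character's mouth points at the listener
facing_away = np.array([0.0, 1.0, 0.0])                    # back of the character's head points at the listener
toward_listener = np.array([0.0, -1.0, 0.0])               # direction from the character to the listener
print(directivity_gain(facing_listener, toward_listener))  # -> 1.0
print(directivity_gain(facing_away, toward_listener))      # -> 0.25
```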

However, even head-worn spatialized audio systems including pose tracking based audio rendering are susceptible to system lag and latency between detecting a pose change and presentation of virtual sound associated therewith. This system lag and latency may lead to cognitive dissonance between a virtual position of a virtual sound source and a real position of virtual sound corresponding to the virtual sound source. System lag and latency are especially problematic with rapid pose changes (e.g., rapid head movements), which can increase the magnitude/extent of the cognitive dissonance.

Spatialized audio systems described herein perform a two-stage audio data rendering process. In the first stage, the system renders first audio data corresponding to a first plurality of sources to second audio data corresponding to a second plurality of sources. The first stage may take into account a head pose estimate. The second plurality of sources has fewer sources than the first plurality of sources, thereby simplifying the audio data. In the second stage, the system renders the second audio data to third audio data corresponding to a third plurality of sources (e.g., system speakers). The second stage takes into account the most recently available head pose estimate of the user/listener to more accurately render the third audio data. The processing performed in the first stage reduces the processor cycles and time required to render the third audio data. Therefore, splitting the audio processing into two stages, and taking the more current head pose into account in the later and simpler second stage, reduces the system lag and latency between estimating a head pose and presenting virtual sound based thereon.
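The two-stage flow can be sketched at a high level as follows. The function names, intermediate-source count, nearest-neighbor encoding, and panning scheme are all illustrative assumptions; the point is simply where the newest head pose enters the pipeline, namely in the short second stage just before playback.

```python
import numpy as np

NUM_INTERMEDIATE = 8     # "second plurality" of sources (example value)
NUM_SPEAKERS = 2         # "third plurality": e.g., left/right head-worn speakers

def estimate_head_pose() -> float:
    """Placeholder pose-sensor read (e.g., an IMU); returns a yaw angle in degrees."""
    return 0.0

def first_stage(source_signals: np.ndarray, source_positions: np.ndarray,
                intermediate_positions: np.ndarray, pose_estimate: float) -> np.ndarray:
    """Expensive stage: fold many virtual sources into a few intermediate sources.

    Each source is simply added to the nearest intermediate source, a stand-in
    for a real spatial encoding step. A real first stage could use
    `pose_estimate`; this toy mix does not.
    """
    mixed = np.zeros((len(intermediate_positions), source_signals.shape[1]))
    for signal, position in zip(source_signals, source_positions):
        nearest = np.argmin(np.linalg.norm(intermediate_positions - position, axis=1))
        mixed[nearest] += signal
    return mixed

def second_stage(intermediate_signals: np.ndarray, intermediate_positions: np.ndarray,
                 latest_yaw_deg: float) -> np.ndarray:
    """Cheap stage: rotation-only warp using the newest pose, then a naive left/right pan."""
    t = np.radians(-latest_yaw_deg)                       # undo the head rotation
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    rotated_xy = intermediate_positions[:, :2] @ rot.T
    radius = np.maximum(np.linalg.norm(rotated_xy, axis=1), 1e-9)
    pan = 0.5 * (1.0 + rotated_xy[:, 0] / radius)         # 0 = fully left, 1 = fully right
    left = (np.sqrt(1.0 - pan)[:, None] * intermediate_signals).sum(axis=0)
    right = (np.sqrt(pan)[:, None] * intermediate_signals).sum(axis=0)
    return np.stack([left, right])

# Example: 32 virtual sources of 256 samples each, folded down to 8
# intermediate sources, then warped and panned to 2 speaker channels.
rng = np.random.default_rng(0)
signals = rng.standard_normal((32, 256))
positions = rng.uniform(-5.0, 5.0, size=(32, 3))
intermediate_positions = rng.uniform(-0.3, 0.3, size=(NUM_INTERMEDIATE, 3))

pose_before_first_stage = estimate_head_pose()
intermediate = first_stage(signals, positions, intermediate_positions, pose_before_first_stage)
latest_pose = estimate_head_pose()      # pose is re-read just before the short second stage
speaker_feeds = second_stage(intermediate, intermediate_positions, latest_pose)
print(speaker_feeds.shape)              # (2, 256), i.e. NUM_SPEAKERS channels
```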
