Facebook Patent | Determination Of Material Acoustic Parameters To Facilitate Presentation Of Audio Content
Patent: Determination Of Material Acoustic Parameters To Facilitate Presentation Of Audio Content
Publication Number: 20200382895
Publication Date: 20201203
Applicants: Facebook
Abstract
Determination of material acoustic parameters for a headset is presented herein. A value of a material acoustic parameter is initialized. A simulation is performed using the value of the material acoustic parameter and a model. The model includes a three-dimensional representation of a local area occupied by the headset. During the simulation, the value of the material acoustic parameter is dynamically modified until a reverberation time calculated based on the modified value of the material acoustic parameter falls within a threshold value of a target reverberation time. The model is updated with the modified value of the material acoustic parameter. The model is used to determine one or more acoustic parameters. Audio content is rendered based on the one or more acoustic parameters so that the audio content appears originating from an object in the local area.
BACKGROUND
[0001] The present disclosure relates generally to presentation of audio content, and specifically relates to determination of material acoustic parameters that facilitate presentation of audio content.
[0002] In an artificial reality environment, simulating sound propagation from an object to a listener may use knowledge about acoustic parameters of the room. To seamlessly place a virtual sound source in an environment, sound signals to each ear are determined based on sound propagation paths from the source, through an environment, to a listener (receiver). While models may be used to simulate sound propagation within an environment, it can be difficult to determine appropriate material properties for objects in the environment. Current techniques rely on tables of measured acoustic material data that are manually assigned by an administrator to objects in the room. However, assigning these properties is a time-consuming manual process that requires an in-depth user knowledge of acoustic materials. Also, the resulting simulation may not match known acoustic characteristics of the room due to differences between the manually assigned data and actual materials in the room.
SUMMARY
[0003] Embodiments of the present disclosure support a method, computer readable medium, and apparatus for determining material acoustic parameters to facilitate presentation of audio content (e.g., via an audio assembly on a headset). A material acoustic parameter (e.g., acoustic absorption coefficient, acoustic scattering coefficient, etc.) describes an acoustic property of a surface of an object. One or more material acoustic parameters may be used to determine acoustic parameters (e.g., room impulse response) that may be used (e.g., by the audio assembly) to present audio content.
[0004] In some embodiments, a value is initialized (e.g., by an audio server) for a material acoustic parameter describing a portion of a local area (e.g., a room). A simulation is performed using a model and the value of the material acoustic parameter. The simulation dynamically modifies the value of the material acoustic parameter until a simulated reverberation time calculated using the value of the material acoustic parameter is within a threshold value of a target reverberation time. The model is updated based on the modified value of the material acoustic parameter that causes the simulated reverberation time to be within the threshold value of the target reverberation time. The updated model is used to render audio content presented by a headset (e.g., via an audio system on the headset). For example, the updated model may be used to determine one or more acoustic parameters that are sent to the headset, and the headset may use the one or more acoustic parameters to present audio content.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is a block diagram of a system environment for a headset, in accordance with one or more embodiments.
[0006] FIG. 2A is a block diagram of an audio server, in accordance with one or more embodiments.
[0007] FIG. 2B is a block diagram of an audio assembly, in accordance with one or more embodiments.
[0008] FIG. 3 illustrates sound propagation paths of a spatialized sound from a virtual sound source to a user of a headset, in accordance with one or more embodiments.
[0009] FIG. 4 is a perspective view of a headset including an audio assembly, in accordance with one or more embodiments.
[0010] FIG. 5 is a flowchart illustrating a process for determining one or more material acoustic parameters that facilitate presentation of audio content, in accordance with one or more embodiments.
[0011] FIG. 6 is a block diagram of a system that includes a headset and an audio server, in accordance with one or more embodiments.
[0012] The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.
DETAILED DESCRIPTION
[0013] Embodiments of the present disclosure may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a headset, a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a near-eye display (NED), a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
[0014] An audio system for determination of material acoustic parameters to facilitate presentation of audio content is presented herein. The audio system includes an audio assembly communicatively coupled to an audio server. The audio assembly may be implemented on a headset. The headset may also include one or more imaging sensors. The audio assembly may request (e.g., over a network) one or more acoustic parameters from the audio server. The request may include, e.g., location information of the headset within a local area, visual information (depth information, color information, etc.) captured by the imaging sensors, audio data (e.g., reverberation time) measured by the microphone assembly, information describing the audio content (e.g., location information of the sound source of the audio content), etc.
[0015] The audio server determines material acoustic parameters for a local area occupied by the audio assembly. The audio server identifies and/or generates a model of the local area using the information in the request. The model is a 3-dimensional (3D) virtual representation of at least a portion of the local area and uses one or more material acoustic parameters to describe acoustic properties of surfaces within the local area. A material acoustic parameter may be, e.g., an acoustic absorption coefficient, an acoustic scattering coefficient, an acoustic transmission coefficient, an acoustic bidirectional scattering distribution function (BSDF), or some other parameter that describes acoustic properties of a surface.
[0016] The audio server initializes a value of each of one or more material acoustic parameters describing a portion of the local area. The audio server performs a simulation of reverberation time using the model and the value of each material acoustic parameter. The simulation dynamically modifies the value of each material acoustic parameter until a simulated reverberation time calculated using the value of the material acoustic parameter is within a threshold value of a target reverberation time. In some embodiments, the target reverberation time is determined based on one or more reverberation times measured by the audio assembly that are included in the request from the audio assembly. The audio server updates the model based on the modified value of each material acoustic parameter that causes the simulated reverberation time to be within the threshold value of the target reverberation time. In some embodiments, the audio server performs the simulation for each of a plurality of target reverberation times and updates the model with a modified value of each material acoustic parameter for each surface within the local area that causes the simulated reverberation time to be within the threshold value of the target reverberation time.
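For illustration, the iterative adjustment described above can be sketched as a simple calibration loop. The following Python sketch is a minimal illustration under assumed names (e.g., calibrate_absorption, simulate_rt60); it is not the embodiment's implementation:

```python
def calibrate_absorption(a_init, target_rt60, simulate_rt60,
                         tolerance=0.05, max_iters=20):
    """Iteratively adjust one material acoustic parameter (an absorption
    coefficient) until the simulated reverberation time is within a
    fractional `tolerance` of the target reverberation time."""
    a = a_init
    for _ in range(max_iters):
        rt60_s = simulate_rt60(a)        # run the acoustic simulation
        d = rt60_s / target_rt60         # D = RT60_S / RT60_T
        if abs(d - 1.0) <= tolerance:    # simulated value close enough to target
            break
        a = min(max(a * d, 1e-4), 1.0)   # Sabine-style update: RT60 ~ 1/a
    return a

# Toy usage with a Sabine-style stand-in simulator (V = 100 m^3, S = 130 m^2).
simulate = lambda a: 0.161 * 100.0 / (a * 130.0)
print(calibrate_absorption(0.1, target_rt60=0.5, simulate_rt60=simulate))
```

In practice, the injected simulator would be a ray-tracing or wave-based acoustic simulation rather than the closed-form Sabine stand-in used here.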
[0017] The audio server uses the updated model to determine one or more acoustic parameters. For example, the audio server uses the updated model, location information of the headset, and location information of the sound source of the audio content to determine sound propagation paths (e.g., direct path, early reflection, late reverberation etc.) in the local area. The audio server determines the acoustic parameters based on the sound propagation and transmits the acoustic parameters to the headset. The headset uses (e.g., via the audio assembly) the acoustic parameters to render audio content. In some embodiments, the audio content is spatialized audio content. Spatialized audio content is audio content that is presented in a manner such that it appears to originate from one or more points in an environment surrounding the user (e.g., from a virtual object in a local area of the user) and propagate toward the user.
[0018] FIG. 1 is a block diagram of a system environment 100 for a headset 110, in accordance with one or more embodiments. The system environment 100 includes the headset 110, which can be worn by a user 140 in a room 150. The headset 110 is connected to an audio server 130 via a network 120.
[0019] The network 120 connects the headset 110 to the audio server 130. The network 120 may include any combination of local area and/or wide area networks using both wireless and/or wired communication systems. For example, the network 120 may include the Internet, as well as mobile telephone networks. In one embodiment, the network 120 uses standard communications technologies and/or protocols. Hence, the network 120 may include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 2G/3G/4G mobile communications protocols, digital subscriber line (DSL), asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, etc. Similarly, the networking protocols used on the network 120 can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the User Datagram Protocol (UDP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc. The data exchanged over the network 120 can be represented using technologies and/or formats including image data in binary form (e.g., Portable Network Graphics (PNG)), hypertext markup language (HTML), extensible markup language (XML), etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), Internet Protocol security (IPsec), etc. The network 120 may also connect multiple headsets located in the same or different rooms to the same audio server 130.
[0020] The headset 110 presents media to a user. In one embodiment, the headset 110 may be, e.g., a NED or an HMD. In general, the headset 110 may be worn on the face of a user such that content (e.g., media content) is presented using one or both lenses of the headset. However, the headset 110 may also be used such that media content is presented to a user in a different manner. Examples of media content presented by the headset 110 include one or more images, video content, audio content, or some combination thereof. The headset 110 includes an audio assembly, and may also include at least one depth camera assembly (DCA) and/or at least one passive camera assembly (PCA). As described in detail below with regard to FIG. 4, a DCA generates depth image data that describes the 3D geometry for some or all of the local area (e.g., the room 150), and a PCA generates color image data for some or all of the local area. In some embodiments, the DCA and the PCA of the headset 110 are part of simultaneous localization and mapping (SLAM) sensors mounted on the headset 110 for determining visual information of the room 150. Thus, the depth image data captured by the at least one DCA and/or the color image data captured by the at least one PCA can be referred to as visual information determined by the SLAM sensors of the headset 110. Furthermore, the headset 110 may include position sensors or an inertial measurement unit (IMU) that tracks the position (e.g., location and pose) of the headset 110 within the local area. The headset 110 may also include a Global Positioning System (GPS) receiver to further track location of the headset 110 within the local area. The position (including orientation) of the headset 110 within the local area is referred to as location information.
[0021] The audio assembly presents audio content to the user 140 of the headset 110. In some embodiments, the audio content is spatialized. To create spatialized audio content, it is beneficial to have accurate acoustic parameters for the local area. The audio assembly may measure audio data (e.g., reverberation time) in the local area (e.g., using a speaker assembly and a microphone assembly). The audio assembly generates an acoustic parameter query for sending to the audio server 130. An acoustic parameter query is a request for one or more acoustic parameters that the audio assembly can use to present audio content (e.g., spatialized audio content). The acoustic parameter query may include audio data measured by the audio assembly, visual information describing some or all of the local area, location information of the headset 110 within the local area, information of the audio content, or some combination thereof. Audio data includes, e.g., a reverberation time as measured/determined by the audio system from a particular position within the local area (i.e., the room 150). Visual information describes a 3D geometry of some or all of the local area and may also include color image data of some or all of the local area. Information of the audio content includes, e.g., information describing a location of a sound source of the audio content. The sound source of the audio content can be a real object in the local area or a virtual object. The headset 110 may communicate the acoustic parameter query via the network 120 to the audio server 130.
[0022] In some embodiments, the headset 110 obtains one or more acoustic parameters from the audio server 130. Acoustic parameters are parameters describing the local area of the headset that may be used by the audio assembly to render audio content. Acoustic parameters may include, e.g., a reverberation time from a sound source to the headset for each of a plurality of frequency bands, a reverberant level for each frequency band, a direct to reverberant ratio for each frequency band, a direction of a direct sound from the sound source to the headset for each frequency band, an amplitude of the direct sound for each frequency band, a propagation time for the direct sound from the sound source to the headset, relative linear and angular velocities between the sound source and headset, a time of early reflection of a sound from the sound source to the headset, an amplitude of early reflection for each frequency band, a direction of early reflection, room mode frequencies, room mode locations, or some combination thereof.
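For illustration only, the acoustic parameter query and the returned acoustic parameters might be carried as simple data containers such as the following Python sketch; the field names are assumptions rather than the disclosed schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class AcousticParameterQuery:
    """Request sent from the headset's audio assembly to the audio server."""
    headset_position: Tuple[float, float, float]             # location in the local area
    headset_orientation: Tuple[float, float, float, float]   # pose as a quaternion
    source_position: Tuple[float, float, float]              # sound source of the audio content
    measured_rt60: Dict[str, float] = field(default_factory=dict)  # per frequency band, seconds
    depth_image: Optional[bytes] = None                      # DCA output (visual information)
    color_image: Optional[bytes] = None                      # PCA output (visual information)

@dataclass
class AcousticParameters:
    """Per-frequency-band parameters returned by the audio server."""
    bands_hz: List[float]                # center frequency of each band
    rt60: List[float]                    # reverberation time per band, seconds
    reverberant_level: List[float]       # reverberant level per band
    direct_to_reverberant: List[float]   # direct-to-reverberant ratio per band
    direct_direction: Tuple[float, float, float]  # unit vector toward the source
    direct_propagation_time: float       # seconds
```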
[0023] The headset 110 uses the acoustic parameters to present the audio content to the user 140. For example, the audio assembly may use the one or more acoustic parameters, head-related transfer functions (HRTFs), and convolution to render spatialized audio content to the user. In some embodiments, the rendered audio content is spatialized audio content. Additional details regarding operations and components of the headset 110 are discussed below in connection with FIG. 2B, FIG. 4, and FIG. 6.
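As a rough sketch of this rendering step (assuming the acoustic parameters have already been expanded into a room impulse response and a pair of head-related impulse responses), spatialization can be approximated by convolution; the impulse responses below are random placeholders, not a headset's actual data:

```python
import numpy as np

def render_binaural(dry_signal, room_ir, hrir_left, hrir_right):
    """Convolve a mono source with a room impulse response and an HRTF pair
    (head-related impulse responses) to produce left/right ear signals."""
    wet = np.convolve(dry_signal, room_ir)   # apply room acoustics
    left = np.convolve(wet, hrir_left)       # apply left-ear HRTF
    right = np.convolve(wet, hrir_right)     # apply right-ear HRTF
    return left, right

# Toy usage with placeholder impulse responses.
fs = 48_000
dry = np.random.randn(fs)                            # 1 s of source audio
room_ir = np.exp(-np.arange(fs // 2) / (0.3 * fs))   # decaying-exponential "room"
hl = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)
hr = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)
left, right = render_binaural(dry, room_ir, hl, hr)
```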
[0024] The audio server 130 determines one or more acoustic parameters based on the acoustic parameter query received from the headset 110. The audio server 130 determines the one or more acoustic parameters using a model of the local area and information within the acoustic parameter query. The model is a 3-dimensional (3D) virtual representation of the local area. The model uses one or more material acoustic parameters to describe acoustic properties of surfaces within the virtual area. A material acoustic parameter may be, e.g., an acoustic absorption coefficient, an acoustic scattering coefficient, an acoustic transmission coefficient, an acoustic bidirectional scattering distribution function (BSDF), or some other parameter that describes acoustic properties of a surface. In some embodiments, the audio server 130 obtains the model using information from the acoustic parameter query. For example, the audio server 130 may update and/or generate the model based on the visual information of the local area. As another example, the audio server 130 may retrieve the model from a database based on the location information of the headset.
[0025] The audio server 130 initializes values for the one or more material acoustic parameters. For example, the audio server 130 may set a value of a material acoustic parameter to some default value for all surfaces of the model, use machine learning to predict the value for some or all of the surfaces of the model based in part on the visual information and/or audio data (e.g., room impulse responses), or some combination thereof.
[0026] For a given material acoustic parameter, the audio server 130 performs a simulation (e.g., a ray tracing, finite-difference time-domain, or boundary element method simulation) using the model and the value of the material acoustic parameter. The simulation dynamically modifies the value of the material acoustic parameter until a simulated reverberation time calculated using the value of the material acoustic parameter is within a threshold value of a target reverberation time (e.g., as provided by the headset 110). The audio server 130 updates the model based on the modified value of the material acoustic parameter that causes the simulated reverberation time to be within the threshold value of the target reverberation time. The audio server 130 may perform the simulation for some or all of the one or more material acoustic parameters. The audio server 130 may perform the simulation for each of a plurality of target reverberation times. Additional details of the initialization and simulation are discussed below with regard to FIG. 2A.
[0027] The audio server 130 determines one or more acoustic parameters using the updated model. The one or more acoustic parameters can be a reverberation time from the sound source of the audio content to the headset 110 for each of a plurality of frequency bands, a reverberant level for each frequency band, a direct to reverberant ratio for each frequency band, a direction of a direct sound from the sound source to the headset for each frequency band, an amplitude of the direct sound for each frequency band, a propagation time for the direct sound from the sound source to the headset, relative linear and angular velocities between the sound source and headset, a time of early reflection of a sound from the sound source to the headset 110, an amplitude of early reflection for each frequency band, a direction of early reflection, room mode frequencies, and room mode locations. In some embodiments, the one or more acoustic parameters parametrize impulse responses from the sound source to the headset in the local area. In some cases, the one or more acoustic parameters may have previously been determined and stored, and the audio server 130 simply retrieves them based on the location information of the headset 110 in the acoustic parameter query. The audio server 130 provides the one or more acoustic parameters to the audio assembly on the headset 110.
[0028] In some embodiments, the audio server 130 also determines sound propagation paths of the audio content in the local area based on the updated model. The sound propagation paths may include direct paths, early reflections that correspond to first order acoustic reflections from nearby surfaces, and late reverberations that correspond to the first order acoustic reflections from farther surfaces or higher order acoustic reflections. The audio server 130 provides the sound propagation paths to the headset 110 for rendering the audio content. The audio server 130 may provide to the headset 110 one or more of the acoustic parameters that are determined using the updated model.
[0029] FIG. 2A is a block diagram of the audio server 130, in accordance with one or more embodiments. The audio server 130 determines one or more acoustic parameters in response to an acoustic parameter query from an audio assembly. The audio server 130 includes a database 210, a mapping module 220, an initialization module 230, an acoustic simulation module 240, and an acoustic analysis module 250. In other embodiments, the audio server 130 can have any combination of the modules listed with any additional modules. In some other embodiments, the audio server 130 includes one or more modules that combine functions of the modules illustrated in FIG. 2A. One or more processors of the audio server 130 (not shown) may run some or all of the modules within the audio server 130.
[0030] The database 210 stores data for the audio server 130. The stored data may include, e.g., a virtual model, material acoustic parameters for various materials described by the virtual model, acoustic parameters for locations described by the virtual model, target reverberation times for locations in the virtual model, HRTFs for various users, audio data, visual information (depth information, color information, etc.), audio parameter queries, location information of a headset, some other information that may be used by the audio server 130, or some combination thereof. The virtual model describes one or more physical spaces and acoustic properties of those physical spaces. The acoustic properties include values of one or more material acoustic parameters determined by the acoustic simulation module 240 for those physical spaces. The acoustic properties can also include acoustic parameters of those spaces, which are determined based on the values of the material acoustic parameter of those spaces.
[0031] A particular location in the virtual model may correspond to a current physical location of the headset 110 within the room 150. Each location in the virtual model is associated with a set of acoustic parameters for a corresponding physical space that represents one configuration of the local area. The set of acoustic parameters of a location describes various acoustic properties of that one particular configuration of the local area. In some embodiments, the physical spaces whose acoustic properties are described in the virtual model include, but are not limited to, a conference room, a bathroom, a hallway, an office, a bedroom, a dining room, and a living room. Hence, the room 150 of FIG. 1 may be a conference room, a bathroom, a hallway, an office, a bedroom, a dining room, or a living room. In some embodiments, the physical spaces can be certain outside spaces (e.g., patio, garden, etc.) or a combination of various inside and outside spaces. Acoustic parameters of the room 150 can be retrieved from the virtual model based on a location in the virtual model obtained from the mapping module 220.
[0032] The database 210 can also store audio parameter queries from the headset 110. An audio parameter query is a request for acoustic parameters of a local area occupied by the headset 110 (such as the room 150 of FIG. 1) that are used to render audio content. The acoustic parameter query includes information of the local area, the headset 110, and/or the audio content that the audio server 130 can use to determine the requested acoustic parameters. Information of the local area may include depth image data of the local area, color image data of the local area, or some combination thereof. Information of the headset 110 may include location information of the headset 110. Information of the audio content may include location information of a sound source of the audio content.
[0033] The mapping module 220 maps information in the audio parameter query to a location within the virtual model. The mapping module 220 determines the location within the virtual model corresponding to a current physical space where the headset 110 is located, i.e., a current configuration of the room 150. In some embodiments, the mapping module 220 searches the virtual model to identify a mapping between (i) the visual information (which includes at least, e.g., information about the geometry of surfaces of the physical space and information about acoustic materials of the surfaces) or location information of the headset 110 and (ii) a corresponding configuration of a virtual space within the virtual model. In one embodiment, the mapping is performed by matching a geometry of the received visual information with a geometry of the virtual space within the virtual model. In another embodiment, the mapping is performed by matching location information of the headset 110 with a location within the virtual model. A match suggests that the virtual space in the model is a representation of the physical space. Note that in some instances, there may be multiple matches. In these cases, the mapping module 220 may select one of the matches. For example, the mapping module 220 uses GPS location data (e.g., from the headset 110) to select one of the matches.
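A minimal sketch of the location-based matching (the geometry-matching variant would compare room shapes instead of positions) might look like the following; the function name and the distance threshold are illustrative assumptions:

```python
import math

def find_model_location(headset_location, model_locations, max_distance=5.0):
    """Return the key of the stored virtual-model location closest to the
    headset, or None when nothing is within `max_distance` meters (no match)."""
    best_key, best_dist = None, float("inf")
    for key, loc in model_locations.items():
        dist = math.dist(headset_location, loc)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= max_distance else None

# Example: two known room configurations stored in the virtual model.
rooms = {"conference_room": (0.0, 0.0, 0.0), "living_room": (12.0, 3.0, 0.0)}
print(find_model_location((0.5, -0.2, 0.0), rooms))   # -> "conference_room"
print(find_model_location((100.0, 0.0, 0.0), rooms))  # -> None (no match)
```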
[0034] If a match is found, the mapping module 220 retrieves the acoustic parameters that are associated with the virtual space from the virtual model and sends them to the headset 110 for rendering the audio content.
[0035] If no match is found, this is an indication that a current configuration of the local area occupied by the headset 110 is not yet described by the virtual model. In such case, the mapping module 220 may develop a 3D virtual representation of the local area based on the visual information received from the headset 110 and update the virtual model with the 3D virtual representation. The 3D virtual representation of the local area includes virtual representations of surfaces within the local area, such as walls, surfaces of furniture, surfaces of appliances, surfaces of other types of objects, and so on. The virtual model uses one or more material acoustic parameters to describe acoustic properties of the surfaces within the virtual area. In some embodiments, the mapping module 220 may develop a new model that includes the 3D virtual representation and uses one or more material acoustic parameters to describe acoustic properties of the surfaces within the virtual area. The new model can be saved in the database 210.
[0036] The mapping module 220 may also inform at least one of the initialization module 230, the acoustic simulation module 240, and the acoustic analysis module 250 that no match is found, so that the initialization module 230 and the acoustic simulation module 240 can determine the one or more material acoustic parameters and the acoustic analysis module 250 can use the one or more material acoustic parameters to determine acoustic parameters of the local area.
[0037] The initialization module 230 determines an initial value of each of one or more material acoustic parameters for the local area. In some embodiments, the initialization module 230 assigns the same value (e.g., 0.1) of a material acoustic parameter to the surfaces described in the model. In some other embodiments, the initialization module 230 assigns different initial values of a material acoustic parameter to different surfaces in the model. For example, the initialization module 230 classifies a material of each surface based on the visual information of the local area in the acoustic parameter query. The initialization module 230 determines an initial value of each material acoustic parameter for the surface based on the material classification.
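For illustration, classification-based initialization might map each classified material to a starting absorption value, falling back to a shared default; the numeric values below are placeholders, not measured acoustic data:

```python
# Illustrative initialization of per-surface absorption coefficients from a
# material classification; the numbers are placeholder starting values only.
DEFAULT_ABSORPTION = 0.1
INITIAL_ABSORPTION_BY_MATERIAL = {
    "carpet": 0.30,
    "drywall": 0.10,
    "glass": 0.05,
    "concrete": 0.02,
}

def initialize_absorption(surface_materials):
    """Map each surface id to an initial absorption value based on its
    classified material, falling back to a single default value."""
    return {
        surface_id: INITIAL_ABSORPTION_BY_MATERIAL.get(material, DEFAULT_ABSORPTION)
        for surface_id, material in surface_materials.items()
    }

print(initialize_absorption({"wall_0": "drywall", "floor": "carpet", "window": "mirror"}))
```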
[0038] In one embodiment, the initialization module 230 uses machine learning techniques for the material classification. The initialization module 230 can input the image data (or a part of the image data that is related to the surface) and/or audio data into a machine learning model, and the machine learning model outputs a category of material. The machine learning model can be trained with different machine learning techniques, such as linear support vector machine (linear SVM), boosting for other algorithms (e.g., AdaBoost), neural networks, logistic regression, naive Bayes, memory-based learning, random forests, bagged trees, decision trees, boosted trees, or boosted stumps. As part of the training of the machine learning model, a training set is formed. The training set includes image data and/or audio data of a group of surfaces and material categories of the surfaces in the group.
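A minimal training sketch, assuming image-derived feature vectors and material-category labels are available (the synthetic data below merely stands in for a real training set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 200, 64                 # e.g., color/texture descriptors per surface patch
X = rng.normal(size=(n_samples, n_features))    # placeholder image-derived features
y = rng.integers(0, 4, size=n_samples)          # placeholder material categories (0..3)

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X, y)                            # train on labeled surface patches

new_patch = rng.normal(size=(1, n_features))    # features for an unseen surface
predicted_category = classifier.predict(new_patch)[0]
print(predicted_category)
```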
[0039] The acoustic simulation module 240 performs a simulation of acoustic properties of the local area using the virtual model and the value of each material acoustic parameter. The acoustic simulation module 240 receives one or more acoustic probes that describe frequency-dependent acoustic properties of a particular location (i.e., probe location) within the local area.
[0040] An acoustic probe represents a target of the simulation for a particular location within the local area. An acoustic probe may be, e.g., a reverberation time measured from a particular location within the local area. The acoustic simulation module 240 dynamically modifies the one or more material acoustic parameters such that the simulated acoustic properties match the acoustic probes, e.g., the simulated acoustic properties fall within threshold values of the acoustic probes. In some embodiments, the acoustic simulation module 240 performs the simulation at each probe location. In the simulation, the sound source and listener are coincident at a particular probe location, and a direct sound propagation path is not computed. In some embodiments, the simulation is a ray-tracing based simulation. During the simulation, the acoustic simulation module 240 determines the number of rays that bounce off each of the surfaces within the 3D virtual representation of the local area and/or sound energy that bounces off the surfaces. The sound energy of each ray is based in part on the material acoustic parameters of materials the ray interacts with. Accordingly, as a simulated ray leaves a probe location, propagates within the local area, and returns to the probe location via one or more reflections off surfaces within the local area, the material acoustic parameters associated with the surfaces can affect the sound ray. The acoustic simulation module 240 computes an impulse response of the local area at the probe location based on the simulated rays and the material acoustic parameters of surfaces within the local area. The acoustic simulation module 240 determines acoustic properties (e.g., reverberation time) based on the impulse response.
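One common way to extract a reverberation time from a simulated impulse response is Schroeder backward integration followed by a linear fit of the energy decay curve; the sketch below assumes that approach, which the embodiment does not mandate:

```python
import numpy as np

def rt60_from_impulse_response(ir, fs, decay_db=30.0):
    """Estimate RT60 from an impulse response: Schroeder backward integration
    of the energy, a linear fit from -5 dB to -(5 + decay_db) dB, and
    extrapolation to a 60 dB decay."""
    energy = np.asarray(ir, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]                 # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)      # normalized to 0 dB at t = 0
    t = np.arange(len(edc)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -(5.0 + decay_db))
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)     # decay rate in dB/s
    return -60.0 / slope

# Toy check: noise with an exponential envelope whose true RT60 is 0.5 s.
fs = 48_000
t = np.arange(fs) / fs
ir = np.random.randn(fs) * 10.0 ** (-3.0 * t / 0.5)
print(rt60_from_impulse_response(ir, fs))               # approximately 0.5
```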
[0041] Note that in some cases, there may be multiple probes within a particular local area. In these cases, data from each probe may have a weight (referred to as an influence weight) for each surface in the simulation, and the weights may be different from each other. A probe with a higher weight for a particular surface means that the surface has a larger impact on the acoustic parameters at the probe location. Probes may be weighted according to how much impact each surface has on the acoustic parameters at the probe location. In some embodiments, these weights may be determined by calculating the total sound energy emitted from the sound source at the probe location that reflects from each surface in the local area. The weights may also be determined by the age of the probe, the confidence of the acoustic parameters at the probe location, or any combination thereof.
[0042] In some embodiments, the acoustic probes represent target reverberation times, e.g., reverberation times measured by the headset 110. During the simulation, the acoustic simulation module 240 dynamically modifies the value of the material acoustic parameter until a reverberation time calculated using the value of the material acoustic parameter (e.g., an RT60, referred to hereinafter as RT60.sub.S) is within a threshold value of a target reverberation time (e.g., an RT60, referred to hereinafter as RT60.sub.T). For example, the threshold may be satisfied when RT60.sub.S falls between 95% and 105% of RT60.sub.T. The simulation may be frequency dependent. In some embodiments, the acoustic simulation module 240 may perform a simulation for a number of frequency bands or perform a simulation for an individual frequency band.
[0043] In some embodiments, the acoustic simulation module 240 uses the Sabine reverberation time equation, given below as Equation (1), to perform the simulation.
RT60=0.161*V/(a*S) (1)
where RT60 is reverberation time, V is local area volume, a is a material acoustic parameter, such as a material absorption coefficient, and S is surface area. Based on the Sabine reverberation time equation, the acoustic simulation module 240 can derive a relationship between the ratio of RT60.sub.S to RT60.sub.T (referred to hereinafter as D) and the ratio of the value of the material acoustic parameter corresponding to the simulated reverberation time (referred to hereinafter as a.sub.S) to the value of the material acoustic parameter corresponding to the target reverberation time (referred to hereinafter as a.sub.T). The relationship is represented by Equation (2) in the following:
RT60.sub.T/RT60.sub.S=a.sub.S/a.sub.T (2)
[0044] The acoustic simulation module 240 further obtains Equation (3) to calculate a.sub.T:
a.sub.T=a.sub.S*(RT60.sub.S/RT60.sub.T)=a.sub.S*D (3)
[0045] The acoustic simulation module 240 can use Equation (3) to run a plurality of iterations. In each iteration, the acoustic simulation module 240 obtains a different value of the material acoustic parameter from the previous iteration. For instance, the acoustic simulation module 240 obtains a.sub.n for iteration n, and obtains a.sub.n+1 for the next iteration, iteration n+1. In one embodiment, the acoustic simulation module 240 determines a.sub.n+1 based on a.sub.n by using Equation (4):
a.sub.n+1=a.sub.n*D (4)
[0046] In another embodiment, the acoustic simulation module 240 modifies the value of the material acoustic parameter in each iteration by a pre-determined increment. In yet another embodiment, the change in the value of the material acoustic parameter in an iteration decreases as D approaches 1. For example, after D falls in the range from 0.9 to 1.1, the acoustic simulation module 240 slows down the modification, meaning the acoustic simulation module 240 makes a smaller change in a in each later iteration.
[0047] In some embodiments, the acoustic simulation module 240 performs the simulation for each surface in the model. For example, for surface m, the acoustic simulation module 240 obtains a value of the material acoustic parameter a.sub.m,n in iteration n and determines a.sub.m,n+1 in the next iteration based on a.sub.m,n using Equation (5):
a.sub.m,n+1=a.sub.m,n*D.sub.m (5)
where D.sub.m=RT60.sub.S,m/RT60.sub.T,m.
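A vectorized sketch of this per-surface update, with one absorption value and one ratio D.sub.m per surface (the array names and example values are illustrative):

```python
import numpy as np

def update_surface_absorption(a, rt60_sim, rt60_target, a_min=1e-3, a_max=1.0):
    """One iteration of the per-surface update a_{m,n+1} = a_{m,n} * D_m,
    where D_m = RT60_S,m / RT60_T,m and every array is indexed by surface m."""
    d = np.asarray(rt60_sim) / np.asarray(rt60_target)
    return np.clip(np.asarray(a) * d, a_min, a_max)

# Example: three surfaces whose simulated reverberation times are too long,
# about right, and too short, respectively.
a = np.array([0.10, 0.25, 0.40])
rt60_sim = np.array([0.90, 0.52, 0.30])
rt60_target = np.array([0.50, 0.50, 0.50])
print(update_surface_absorption(a, rt60_sim, rt60_target))  # absorption rises where RT60 is too long
```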
[0048] In some embodiments, the acoustic simulation module 240 determines RT60.sub.T based on one or more reverberation times in the acoustic parameter query. The reverberation times can be measured by the audio assembly or multiple audio assemblies at different positions in the local area. The acoustic simulation module 240 determines an influence weight (w) that each measured reverberation time (referred to hereinafter as RT60.sub.p) may have. The acoustic simulation module 240 determines RT60.sub.T as a weighted average of RT60.sub.p based on Equation (6).
RT60.sub.T=SUM(RT60.sub.p*w.sub.p)/SUM(w.sub.p) (6)
[0049] For each surface, the acoustic simulation module 240 may determine a weighted average ratio D.sub.m,avg based on Equation (7).
D.sub.m,avg=SUM(D.sub.p*w.sub.m,p)/SUM(w.sub.m,p) (7)
where D.sub.m,avg is the weighted average D for surface m, D.sub.p is the D for measured reverberation time p, and w.sub.m,p is the influence weight of measured reverberation time p for surface m.
[0050] The acoustic simulation module 240 may determine an importance weight (W) for each measured reverberation time. A measured reverberation time with a higher weight has more control over the simulation. The acoustic simulation module 240 determines RT60.sub.T based on Equation (8) and determines D.sub.m,avg based on Equation (9).
RT60.sub.T=SUM(RT60.sub.p*w.sub.p*W.sub.p)/SUM(w.sub.p*W.sub.p) (8)
D.sub.m,avg=SUM(D.sub.p*w.sub.m,p*W.sub.p)/SUM(w.sub.m,p*W.sub.p) (9)
where W.sub.p is the importance weight of measured reverberation time p.
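The weighted averages in Equations (6) through (9) reduce to the same computation with different weight products; a small sketch, with placeholder weights, follows:

```python
import numpy as np

def weighted_average(values, influence_w, importance_w=None):
    """Weighted average used for both RT60_T (Equations (6)/(8)) and
    D_m,avg (Equations (7)/(9)); weights are per measured reverberation time."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(influence_w, dtype=float)
    if importance_w is not None:
        w = w * np.asarray(importance_w, dtype=float)   # combine influence and importance
    return float(np.sum(values * w) / np.sum(w))

# Example: three probe measurements; the influence and importance values are placeholders.
rt60_p = [0.45, 0.55, 0.70]
influence = [1.0, 0.8, 0.3]     # e.g., reflected energy seen by a given surface per probe
importance = [0.5, 0.7, 1.0]    # e.g., favoring newer or more trusted measurements
print(weighted_average(rt60_p, influence, importance))  # weighted target RT60_T
```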
[0051] The acoustic simulation module 240 may undo an iteration n in response to D.sub.n being significantly different from another ratio RT60.sub.S,n+1/RT60.sub.S,n. For example, the acoustic simulation module 240 may undo iteration n in response to a determination that a difference between D.sub.n and RT60.sub.S,n+1/RT60.sub.S,n exceeds a threshold value. To undo the iteration, the acoustic simulation module 240 replaces D.sub.n with a value determined based on D.sub.n-1. In one embodiment, the value equals (1-b)*D.sub.n-1+b*D.sub.n, where b is a value between 0 and 1. The value of b indicates the effectiveness of iteration n, i.e., how close D.sub.n is to RT60.sub.S,n+1/RT60.sub.S,n.
[0052] In some embodiments, the acoustic simulation module 240 stops the simulation after RT60.sub.S falls within a threshold value of RT60.sub.T. For example, the acoustic simulation module 240 monitors D and stops the simulation after D falls in a threshold range, such as a range from 0.95 to 1.05. In some embodiments, the acoustic simulation module 240 stops the simulation after D is equal to (or substantially close to) 1, meaning RT60.sub.S matches RT60.sub.T. In some embodiments, the acoustic simulation module 240 stops the simulation after a threshold number of iterations (e.g., 20 iterations) has been reached or a maximum computation time has been exceeded, even though RT60.sub.S has not fallen within the threshold value of RT60.sub.T. Data generated during the simulation can be stored at the database 210.
[0053] The acoustic simulation module 240 uses the value of the material acoustic parameter that causes RT60.sub.S to fall within a threshold value of RT60.sub.T to update the model. In embodiments where the acoustic simulation module 240 stops the simulation before RT60.sub.S falls within a threshold value of RT60.sub.T, the acoustic simulation module 240 may use the value of the material acoustic parameter obtained from the last iteration to update the model. The updated model can be stored in the database 210.
[0054] The acoustic analysis module 250 uses the updated model to determine one or more acoustic parameters. In some embodiments, the acoustic analysis module 250 determines the one or more acoustic parameters based on information in the acoustic parameter query, such as the location information of the headset 110 and the location information of the sound source of the audio content. The location information of the headset 110 indicates a location of a listener in the model. The location information of the sound source of the audio content indicates a location of the sound source in the model. The sound source can be a real object in the local area or a virtual sound source. The acoustic analysis module 250 can update the virtual model stored in the database 210 with the one or more acoustic parameters of the local area.
[0055] The acoustic analysis module 250 may also use the updated model and information in the acoustic parameter query to determine sound propagation paths from the sound source to the listener (e.g., the headset 110). The sound propagation paths may include, e.g., direct sound path, early reflections, or late reverberations. The acoustic analysis module 250 transmits the acoustic parameters and/or sound propagation paths to the headset 110, such as the audio assembly implemented on the headset 110, for rendering the audio content.
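Only the direct-path portion of this computation has a simple closed form; a sketch of it is shown below (the speed of sound and 1/r spherical spreading are the only assumptions), while early reflections and late reverberations would come from the simulation over the updated model:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def direct_path(source_pos, listener_pos):
    """Direct sound propagation from source to listener: propagation time,
    1/r amplitude attenuation (relative to 1 m), and the direction of travel."""
    dx, dy, dz = (listener_pos[i] - source_pos[i] for i in range(3))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    delay = r / SPEED_OF_SOUND              # propagation time in seconds
    amplitude = 1.0 / max(r, 1e-6)          # spherical spreading loss
    direction = (dx / r, dy / r, dz / r)    # unit vector from source toward listener
    return delay, amplitude, direction

# Example: source and listener about 2.24 m apart at the same height.
print(direct_path((0.0, 0.0, 1.5), (2.0, 1.0, 1.5)))
```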
……
……
……