Patent: Directional Emphasis In Ambisonics
Publication Number: 10264386
Publication Date: 2019-04-16
Applicants: Google
Abstract
Techniques of rendering high-order ambisonics (HOA) involve adjusting the weights of a spherical harmonic (SH) expansion of a sound field based on weights of a SH expansion of a direction emphasis function that multiplies a monopole density that, when its product with a Green's function is integrated over the unit sphere, produces the sound field. An advantage of the improved techniques lies in the ability to better reproduce the directionality of a given sound field in a computationally efficient manner, whether the sound field is a temporal function or a time-frequency function.
TECHNICAL FIELD
This description relates to rendering of sound fields in virtual reality (VR) and similar environments and, in particular, to directional emphasis in ambisonics.
BACKGROUND
Ambisonics provides a full-sphere surround sound technique. In addition to providing surround sound in the horizontal plane, ambisonics also covers sound sources above and below the listener. Unlike other multichannel surround formats, ambisonics transmission channels do not carry speaker signals but instead contain a speaker-independent representation of a sound field called B-format, which is then decoded to the listener’s speaker setup. This extra step allows the producer to design audio in terms of source directions rather than in terms of loudspeaker positions, and offers the listener a considerable degree of flexibility as to the layout and number of speakers used for playback.
In ambisonics, an array of virtual loudspeakers surrounding a listener can generate a sound field by decoding a B-format sound file generated from a sound source that is isotropically recorded. In an example implementation, such decoding can be used in the delivery of audio through headphone speakers in Virtual Reality (VR) systems. Binaurally rendered high-order ambisonics (HOA) refers to the creation of many (e.g., at least 16) virtual loudspeakers that combine to provide a pair of signals to left- and right-channel speakers.
SUMMARY
In one general aspect, a method can include receiving, by controlling circuitry of a sound rendering computer configured to render directional sound fields for a listener, sound data resulting from a sound field detected at a microphone, the sound field being represented as a first expansion in spherical harmonic (SH) functions and including a vector of coefficients of the first expansion. The method can also include obtaining, by the controlling circuitry, a vector of coefficients of a second expansion of a direction emphasis field in SH functions, the direction emphasis field producing a direction-emphasized monopole density field upon multiplication with a monopole density field. The method can further include performing, by the controlling circuitry, a direction emphasis operation on the vector of coefficients of the first expansion based on the vector of coefficients of the second expansion to produce a vector of coefficients of a third expansion into SH functions, the third expansion representing a direction-emphasized sound field that reproduces a directional sound field with a perceived directionality and timbre.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram that illustrates an example electronic environment for implementing improved techniques described herein.
FIG. 2 is a diagram that illustrates example observer position and reference sphere along which monopole sources are distributed with respect to a microphone according to the improved techniques described herein.
FIG. 3 is a flow chart that illustrates an example method of performing the improved techniques within the electronic environment shown in FIG. 1.
FIG. 4 illustrates an example of a computer device and a mobile computer device that can be used with circuits described here.
DETAILED DESCRIPTION
Rendering HOA sound fields can involve summing a weighted sequence of components from each HOA channel and from each source direction. When expressed in a spherical coordinate basis, each component itself can have temporal, angular, and radial terms. The angular term can be expressed as a spherical harmonic function, while the radial factor can be expressed as a spherical Bessel function. Truncation of the sequence of components leads to an accurate description of the sound field within a certain radius (region of sufficient fidelity, or SF) and below a certain frequency. For some applications, the SF can be on the order of the size of a human head.
However, because the size of the SF is inversely proportional to the frequency for a given truncation order, low frequencies have a greater reach, and the signal timbre therefore generally changes as one moves away from the origin. Increasing the number of components $\mathbb{Q}=(Q+1)^2$, where $Q$ is the truncation order, is an inefficient way of improving performance because, at a particular frequency, the size of the SF is approximately proportional to the square root of the number of components. In some cases, this size can be smaller than the human head.
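To make the scaling above concrete, a short illustrative Python sketch (the proportionality constants are omitted; only the channel-count arithmetic comes from the text):

```python
def num_channels(order):
    """An order-Q HOA signal carries (Q+1)^2 ambisonic channels."""
    return (order + 1) ** 2

# Doubling the SF radius at a fixed frequency requires roughly doubling
# the order, and hence roughly quadrupling the channel count:
# order 1 -> 4 channels, order 2 -> 9, order 4 -> 25, order 8 -> 81.
counts = {q: num_channels(q) for q in (1, 2, 4, 8)}
```

The quadratic growth in channels against only linear growth in SF radius is why simply raising the order is an inefficient remedy.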
One conventional approach to rendering ambisonics outside of the SF involves determining a set of source driving signals that produce the $\mathbb{Q}$ coefficients ("ambisonic signals") $B$ of a spherical harmonic (SH) expansion of the measured sound field in the SF. Determining these source driving signals involves solving an underdetermined linear system. Because such an underdetermined system admits multiple sets of signals that produce the measured sound field, one may apply the additional constraint of minimizing the energy of the signals to obtain a single solution or a reduced number of solutions.
Nevertheless, such a conventional approach can result in unnatural sound fields outside the SF, because the additional constraint of minimizing energy of the source driving signals tends to spread audio energy out evenly over a sphere on which the sources are placed. This spreading of the audio energy minimizes the ability of a decoder to describe directionality.
Thus, as described herein and in contrast with the above-described conventional approaches to rendering HOA sound fields, improved techniques can include adjusting the coefficients B based on coefficients of a spherical harmonic (SH) expansion of an emphasis function that multiplies a monopole density that, when its product with a Green’s function is integrated over the unit sphere, produces the sound field. An advantage of the improved techniques is the ability to better reproduce directionality of a given sound field in a computationally efficient manner. The sound field may be a temporal function or a time-frequency function.
FIG. 1 is a diagram that illustrates an example system 100 in which the above-described improved techniques may be implemented. The system 100 can include a sound rendering computer 120 that is configured to render sound fields for a listener. The sound rendering computer 120 can include a network interface 122, one or more processing units 124, and memory 126. The network interface 122 can include, for example, Ethernet adaptors, Token Ring adaptors, etc., for converting electronic and/or optical signals received from a network into an electronic form for use by the sound rendering computer 120. The set of processing units 124 can include one or more processing chips and/or assemblies. The memory 126 can include both volatile memory (e.g., RAM) and non-volatile memory, such as one or more ROMs, disk drives, solid state drives, etc. The set of processing units 124 and the memory 126 together form control circuitry, which is configured and arranged to carry out various methods and functions as described herein.
In some embodiments, one or more of the components of the sound rendering computer 120 can include processors (e.g., processing units 124) configured to process instructions stored in the memory 126. Examples of such instructions include a sound acquisition manager 130, a direction emphasis acquisition manager 140, and a direction emphasis operation manager 150. In addition, the memory 126 can be configured to store various data, which is described with respect to the respective managers that use such data.
The sound acquisition manager 130 can be configured to acquire sound field spherical harmonic (SH) coefficient data 132. The sound acquisition manager 130 may obtain the sound field SH coefficient data 132 from an optical drive or over the network interface 122 and can store the obtained sound field SH coefficient data 132 in memory 126.
In some implementations, the sound field SH coefficient data 132 corresponds to B-format, or first-order ambisonics with four components, or ambisonic channels. In some implementations, the sound field SH coefficient data 132 corresponds to higher-order ambisonics, e.g., to order $Q$, in which case there are $\mathbb{Q}=(Q+1)^2$ ambisonic channels, with each channel corresponding to a term in a spherical harmonic (SH) expansion of a sound field emanating from distant sources over a sphere.
In general, a sound field can be represented as an expansion of a pressure field p into spherical harmonics as follows:
$$p(r,\theta,\phi,k)=\sum_{n=0}^{\infty}\sum_{m=-n}^{n}B_n^m(k)\,j_n(kr)\,Y_n^m(\theta,\phi), \#(1)$$
where $k$ is the wavenumber, $c$ is the speed of sound waves, $j_n$ is the spherical Bessel function of the first kind, $Y_n^m$ is a spherical harmonic, $(\theta,\phi)$ is a point on the unit sphere, and the $B_n^m$ are the (frequency-dependent) coefficients of the spherical harmonic expansion of the pressure field $p$. The spherical harmonics can take the form
$$Y_n^m(\theta,\phi)=\sqrt{\frac{2n+1}{4\pi}\,\frac{(n-|m|)!}{(n+|m|)!}}\;P_n^{|m|}(\cos\theta)\,e^{jm\phi}, \#(2)$$
where the $P_n^{|m|}$ are the associated Legendre functions.
The pressure field can be truncated to order $Q$, so that there are $\mathbb{Q}=(Q+1)^2$ terms in the sum as disclosed above. These $\mathbb{Q}$ terms can be defined by a coefficient vector $B^{(\mathbb{Q})}$ having $\mathbb{Q}$ elements, such that the $q$-th element of $B^{(\mathbb{Q})}$ is $B_q(k)=B_{n(q)}^{m(q)}(k)$, where
$$n(q)=\lfloor\sqrt{q}\rfloor \quad\text{and}\quad m(q)=q-n(q)^2-n(q).$$
The elements of the coefficient vector $B^{(\mathbb{Q})}$ can form the sound field SH coefficient data 132.
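The index maps $n(q)$ and $m(q)$ can be implemented directly; the following small Python sketch (illustrative, not code from the patent) converts between the flat index $q$ and the degree/order pair $(n,m)$:

```python
import math

def acn_to_nm(q):
    """Map a flat SH coefficient index q to its degree/order pair
    (n(q), m(q)) = (floor(sqrt(q)), q - n^2 - n), as in the text."""
    n = math.isqrt(q)
    m = q - n * n - n
    return n, m

def nm_to_acn(n, m):
    """Inverse map: the flat index of coefficient B_n^m is n^2 + n + m."""
    return n * n + n + m

# A first-order (Q = 1) sound field has (Q+1)^2 = 4 channels:
# q = 0 -> (0, 0); q = 1 -> (1, -1); q = 2 -> (1, 0); q = 3 -> (1, 1).
```

This flat ordering groups all orders $m$ of degree $n$ into the index range $[n^2,\,(n+1)^2)$.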
The above-defined pressure field $p$ has an alternative representation in terms of a monopole density $\mu$ distributed over a sphere centered at the origin and having a radius $r'$, as follows:
$$p(r,\theta,\phi,k)=\int\!\!\int_{\Omega}\mu(\theta',\phi',k)\,G(x,x',k)\,r'^2\sin\theta'\,d\theta'\,d\phi', \#(3)$$
where $\Omega$ is the surface of the sphere (i.e., $4\pi$ steradians, with $\theta'\in[0,\pi]$ and $\phi'\in[0,2\pi]$), $x=(r,\theta,\phi)$ is an observation point, $x'=(r',\theta',\phi')$ is a point on the sphere over which the monopole density is distributed, and the Green's function $G$ is written as
$$G(x,x',k)=\frac{e^{-jk|x-x'|}}{4\pi|x-x'|}, \#(4a)$$
or, alternatively, for $r'>r$, as an expansion in SHs:
$$G(x,x',k)=-jk\sum_{n=0}^{\infty}\sum_{m=-n}^{n}h_n^{(2)}(kr')\,j_n(kr)\,Y_n^{m*}(\theta',\phi')\,Y_n^m(\theta,\phi), \#(4b)$$
where $h_n^{(2)}$ is a spherical Hankel function of the second kind. Accordingly, the monopole density may be considered a driving field that provides a source of the pressure field.
The geometry of the driving field/observer situation described above is illustrated in FIG. 2, which illustrates an example sound field environment 200 according to the improved techniques. Within this environment 200, there is an origin 210 (open disk) at which a listener may be located. The monopole density/driving field .mu. is distributed over a sphere 230 centered at a microphone that can be a spherical microphone located at the origin 210 that measures and records sound field amplitudes as a function of direction away from the origin.
The sound rendering computer 120 is configured to faithfully reproduce the sound field that would exist at an observation point 220 (gray disk) based on sound field data 132 recorded at the origin 210. In doing this, the sound rendering computer 120 is configured to provide a directionality of the sound field at the observation point 220 by determining the amplitude of the driving field over the sphere 230. The directionality of the sound field is a property that allows a listener to discern from which direction a particular sound appears to originate. In this sense, a first sample of a pressure signal over a first window of time (e.g., one second) results in first coefficients of the driving signal, a second sample of the pressure signal over a second window of time results in second coefficients, and so on. For each sample of the sound field over a window of time, the frequency-domain coefficients of the pressure signal in Eq. (1) are Fourier transforms of the time-domain coefficients of the spherical harmonic expansion of the sound field.
As shown in FIG. 2, the observation point 220 is at a position $x$ with respect to the microphone 210. The position $x$ of the observation point 220 is outside of a region of sufficient fidelity (RSF) 250 but inside the sphere 230. In some implementations, the size $R$ of the RSF 250 can be defined such that $\lceil kR\rceil=Q$. A common situation involves a listener's ears being outside of the RSF 250 for higher frequencies.
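As a rough, illustrative calculation of the scale involved (assuming the rule $\lceil kR\rceil=Q$ above and a speed of sound of 343 m/s), the RSF radius can be estimated as $R\approx Q/k=Qc/(2\pi f)$:

```python
import math

def rsf_radius(order, freq_hz, c=343.0):
    """Approximate radius of the region of sufficient fidelity,
    R ~ Q / k with k = 2*pi*f / c (an estimate based on ceil(kR) = Q)."""
    k = 2.0 * math.pi * freq_hz / c
    return order / k

# For order Q = 3 at 8 kHz the RSF radius is about 2 cm, smaller than
# a human head (radius ~ 9 cm), so a listener's ears fall outside the
# RSF at high frequencies.
r = rsf_radius(3, 8000.0)
```

These numbers illustrate why the ears commonly sit outside the RSF at higher frequencies.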
Returning to FIG. 1, the monopole density .mu. may be written as an expansion in SHs as follows:
$$\mu(\theta',\phi',k)=\sum_{n=0}^{\infty}\sum_{m=-n}^{n}\gamma_n^m(k)\,Y_n^m(\theta',\phi'). \#(5)$$
The coefficients $\gamma_n^m(k)$ may be expressed in terms of the pressure field coefficients $B_n^m(k)$. To see this, the expressions for the monopole density $\mu$ in Eq. (5) and the Green's function in Eq. (4b) may be inserted into Eq. (3). Using the orthogonality of the SHs, the following expression for the pressure field $p$ results:
$$p(r,\theta,\phi,k)=-jk\,r'^2\sum_{n=0}^{\infty}\sum_{m=-n}^{n}\gamma_n^m(k)\,h_n^{(2)}(kr')\,j_n(kr)\,Y_n^m(\theta,\phi). \#(6)$$
By matching the modes in Eqs. (6) and (1), the coefficients of the pressure field and the monopole density may be related as follows:
$$\gamma_n^m(k)=\frac{B_n^m(k)}{-jk\,r'^2\,h_n^{(2)}(kr')}. \#(7a)$$
Of interest is the case where the radius $r'$ of the sphere over which the monopole density is distributed is much larger than the size of the SF. In this case, the Hankel function may be replaced by its large-argument asymptotic approximation $h_n^{(2)}(kr')\approx j^{n+1}e^{-jkr'}/(kr')$, so that the relation in Eq. (7a) is simplified to
$$\gamma_n^m(k)\approx(-j)^{n}\,\frac{e^{jkr'}}{r'}\,B_n^m(k), \#(7b)$$
so that the monopole density may be simplified to
$$\mu(\theta',\phi',k)\approx\frac{e^{jkr'}}{r'}\sum_{n=0}^{\infty}\sum_{m=-n}^{n}(-j)^{n}\,B_n^m(k)\,Y_n^m(\theta',\phi'). \#(8)$$
In some implementations, the pressure field has an explicit time dependence and is operated on in the time domain. In some implementations, the pressure field has both a time dependence and a frequency dependence and is operated on in a mixed time-frequency domain. In this case, a pressure signal $p(r,\theta,\phi,k,t)$ and a driving field signal $\mu(\theta',\phi',k,t)$ may be considered, where $t$ represents the time. In some implementations, when the signals are evaluated, the wavenumber takes sampled values $k=2\pi f/c$ for sampled frequencies $f$, where $c$ is the speed of sound, and $t$ takes sampled time values. In addition, the sound field SH coefficient data 132 includes a number of SH coefficient sets corresponding to samples of the pressure signal over time.
Returning to FIG. 1, the direction emphasis acquisition manager 140 is configured to produce a direction emphasis function $v$ by which the directionality of the pressure signal $p$ may be emphasized. In some implementations, the direction emphasis function $v$ has a dependence on the time $t$; in other implementations, it is independent of the time $t$. Accordingly, the direction emphasis function $v$ can be defined as follows:
$$\tilde\mu(\theta',\phi',k,t)=v(\theta',\phi',k)\,\mu(\theta',\phi',k,t), \#(9)$$
where $\tilde\mu$ is a direction-emphasized driving field. Accordingly, the direction emphasis function $v$ can be a multiplier of the driving signal $\mu(\theta',\phi',k,t)$. Nevertheless, it is not the driving field or monopole density that is of interest, but rather the pressure signal or field.
An objective then is to derive an expression for the SH coefficients of a direction-emphasized pressure signal without computing the driving signal. Thus, the direction emphasis acquisition manager 140 can be configured to acquire direction emphasis SH coefficient data 142 that encapsulates coefficients V.sub.n.sup.m(k) of a SH expansion of the direction emphasis function v:
$$v(\theta',\phi',k)=\sum_{n=0}^{\infty}\sum_{m=-n}^{n}V_n^m(k)\,Y_n^m(\theta',\phi'). \#(10)$$
To derive the SH coefficients of a direction-emphasized pressure signal, the product $v\mu$ can be expressed in a SH expansion. To begin, it is recognized that the expansions of the factors $\mu$ and $v$ are each truncated rather than infinite. In particular, the driving signal $\mu$, like the pressure field, is truncated to order $Q$, so that there are $\mathbb{Q}=(Q+1)^2$ terms in the sum as disclosed above. These $\mathbb{Q}$ terms are defined by a coefficient vector $\gamma^{(\mathbb{Q})}$ having $\mathbb{Q}$ elements, such that the $q$-th element of $\gamma^{(\mathbb{Q})}$ is $\gamma_q(k)=\gamma_{n(q)}^{m(q)}(k)$, where, as before, $n(q)=\lfloor\sqrt{q}\rfloor$ and $m(q)=q-n(q)^2-n(q)$. Similarly, the direction emphasis function $v$ is truncated to order $L$, so that there are $\mathbb{L}=(L+1)^2$ terms in the sum. These $\mathbb{L}$ terms are defined by a coefficient vector $V^{(\mathbb{L})}$ having $\mathbb{L}$ elements, such that the $l$-th element of $V^{(\mathbb{L})}$ is $V_l(k)=V_{n(l)}^{m(l)}(k)$, where, as before, $n(l)=\lfloor\sqrt{l}\rfloor$ and $m(l)=l-n(l)^2-n(l)$. The respective SH expansions at a particular time sample then take the form
$$\mu(\theta',\phi',k)=\sum_{q=0}^{\mathbb{Q}-1}\gamma_q(k)\,Y_q(\theta',\phi'), \#(11)$$
$$v(\theta',\phi',k)=\sum_{l=0}^{\mathbb{L}-1}V_l(k)\,Y_l(\theta',\phi'), \#(12)$$
where the terms $Y_q(\theta',\phi')=Y_{n(q)}^{m(q)}(\theta',\phi')$ are elements of a SH vector $Y^{(\mathbb{Q})}(\theta',\phi')=[Y_0(\theta',\phi'),Y_1(\theta',\phi'),\ldots,Y_{\mathbb{Q}-1}(\theta',\phi')]^T$. Similarly, the terms $Y_l(\theta',\phi')=Y_{n(l)}^{m(l)}(\theta',\phi')$ are elements of a SH vector $Y^{(\mathbb{L})}(\theta',\phi')=[Y_0(\theta',\phi'),Y_1(\theta',\phi'),\ldots,Y_{\mathbb{L}-1}(\theta',\phi')]^T$.
The product of the two SH expansions above, of degrees $Q$ and $L$, respectively, may be written as a single SH expansion of degree $P\le Q+L$. Such a SH expansion may be generated using Clebsch-Gordan coefficients. The result is an expansion over a SH vector $Y^{(\mathbb{P})}(\theta',\phi')$ related to the above SH vectors $Y^{(\mathbb{Q})}(\theta',\phi')$ and $Y^{(\mathbb{L})}(\theta',\phi')$ as follows:
$$Y^{(\mathbb{Q})}\otimes Y^{(\mathbb{L})}=C\,Y^{(\mathbb{P})}, \#(13)$$
where $\mathbb{P}=(P+1)^2$, $C\in\mathbb{R}^{\mathbb{Q}\mathbb{L}\times\mathbb{P}}$ is a conversion matrix that includes, as elements, the Clebsch-Gordan coefficients, and $\otimes$ denotes the Kronecker product. The conversion matrix $C$ depends only on the degrees of the SH representations of the driving signal and the direction emphasis function. Accordingly, the conversion matrix $C$ may be computed offline once and stored. In addition, the conversion matrix $C$ is sparse, i.e., it has few nonzero entries.
The direction emphasis operation manager 150 can be configured to generate the coefficients of the SH expansion of the above-described product, i.e., the direction-emphasized sound field SH coefficient data 156. Specifically, the direction emphasis operation manager 150 can include a conversion matrix manager 152 that is configured to generate the conversion matrix data 154 encapsulating the conversion matrix C.
In some implementations, the conversion matrix manager 152 can be configured to produce the conversion matrix data 154 from Eq. (13) based on a random sample of $\mathbb{P}$ points on the unit sphere $\{(\theta_i,\phi_i)\}$, $i\in\{0,\ldots,\mathbb{P}-1\}$. Once the points on the unit sphere have been determined, the conversion matrix manager 152 can be configured to generate, at each of the points, samples of $Y^{(\mathbb{Q})}$, $Y^{(\mathbb{L})}$, and $Y^{(\mathbb{P})}$ to form $\mathbb{P}$ column vectors $Y^{(\mathbb{Q})}(\theta_i,\phi_i)\otimes Y^{(\mathbb{L})}(\theta_i,\phi_i)$ (i.e., the Kronecker product of the first two vectors) and $\mathbb{P}$ column vectors $Y^{(\mathbb{P})}(\theta_i,\phi_i)$. The conversion matrix manager 152 is then configured to invert the $\mathbb{P}\times\mathbb{P}$ matrix $[Y^{(\mathbb{P})}(\theta_0,\phi_0),\ldots,Y^{(\mathbb{P})}(\theta_{\mathbb{P}-1},\phi_{\mathbb{P}-1})]$ and to right-multiply the matrix of Kronecker-product columns by this inverse to produce the conversion matrix data 154.
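This sampling-based construction can be sketched in a few lines of numpy. The sketch below is an illustration, not code from the patent: it hard-codes complex SHs up to degree 2 under one fixed convention, uses order-1 expansions for both factors (so their product lies within degree 2), and solves a least-squares system over extra sample points instead of a bare inverse for robustness:

```python
import numpy as np

def sh_vector(order, theta, phi):
    """Complex spherical harmonics up to degree 2, stacked in flat
    order q = n^2 + n + m; theta is the polar angle, phi the azimuth."""
    ct, st, e = np.cos(theta), np.sin(theta), np.exp(1j * phi)
    a, b = np.sqrt(15 / (2 * np.pi)), np.sqrt(3 / (2 * np.pi))
    Y = [0.5 * np.sqrt(1 / np.pi),                   # (0,  0)
         0.5 * b * st / e,                           # (1, -1)
         0.5 * np.sqrt(3 / np.pi) * ct,              # (1,  0)
         -0.5 * b * st * e,                          # (1,  1)
         0.25 * a * st ** 2 / e ** 2,                # (2, -2)
         0.5 * a * st * ct / e,                      # (2, -1)
         0.25 * np.sqrt(5 / np.pi) * (3 * ct ** 2 - 1),  # (2, 0)
         -0.5 * a * st * ct * e,                     # (2,  1)
         0.25 * a * st ** 2 * e ** 2]                # (2,  2)
    return np.array(Y[: (order + 1) ** 2])

def conversion_matrix(Q=1, L=1, n_pts=40, seed=0):
    """Fit C in Y^(Q) kron Y^(L) = C Y^(P) from sampled directions."""
    P = Q + L
    rng = np.random.default_rng(seed)
    theta = np.arccos(rng.uniform(-1, 1, n_pts))   # uniform on the sphere
    phi = rng.uniform(0, 2 * np.pi, n_pts)
    K = np.column_stack([np.kron(sh_vector(Q, t, p), sh_vector(L, t, p))
                         for t, p in zip(theta, phi)])    # QL x n_pts
    Mp = np.column_stack([sh_vector(P, t, p)
                          for t, p in zip(theta, phi)])   # P x n_pts
    # K = C Mp; solve the (overdetermined, consistent) system for C.
    C_T, *_ = np.linalg.lstsq(Mp.T, K.T, rcond=None)
    return C_T.T
```

Evaluating both sides of Eq. (13) at a held-out direction then agrees to numerical precision, and the recovered matrix comes out real and sparse, consistent with its Clebsch-Gordan (Gaunt) structure.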
By substituting the relation in Eq. (13) and the SH expansions in Eqs. (11) and (12) into Eq. (9), the following SH expansion of the direction-emphasized driving signal results:
$$\tilde\mu(\theta',\phi',k)=\left(C^T\left(\gamma^{(\mathbb{Q})}\otimes V^{(\mathbb{L})}\right)\right)^T Y^{(\mathbb{P})}(\theta',\phi'). \#(14)$$
Substituting the result in Eq. (7b) into Eq. (14) produces the direction-emphasized pressure signal SH expansion coefficients encapsulated by the direction-emphasized sound field SH coefficient data 156 (the common scalar factor $e^{jkr'}/r'$ appears on both sides and cancels):
$$g^{(\mathbb{P})}\circ\tilde B^{(\mathbb{P})}=C^T\left(\left(g^{(\mathbb{Q})}\circ B^{(\mathbb{Q})}\right)\otimes V^{(\mathbb{L})}\right), \#(15)$$
where $\circ$ denotes the Hadamard (element-wise) product and $g^{(\mathbb{Q})}$ is a vector whose $q$-th element is $(-j)^{n(q)}$, and similarly for $g^{(\mathbb{P})}$. Thus, Eq. (15) implies that the direction emphasis results in a higher-order ambisonics representation. The direction-emphasized pressure signal is then, by Eq. (1):
$$\tilde p(r,\theta,\phi,k)=\sum_{q=0}^{\mathbb{P}-1}\tilde B_q(k)\,j_{n(q)}(kr)\,Y_q(\theta,\phi). \#(16)$$
Accordingly, the direction emphasis operation manager 150 can be configured to produce the coefficients $\tilde B^{(\mathbb{P})}$ as in Eq. (15) and to generate the direction-emphasized pressure signal (or field, if static) as in Eq. (16).
Because the conversion matrix C is sparse, the computation of the direction-emphasized pressure signal SH expansion coefficients is efficient. For example, when Q=1 (i.e., degree-1 pressure signal) and L=2 (degree-2 direction emphasis function), the transpose of the conversion matrix C.sup.T has dimensions 16.times.36. However, only 48 of the 576 matrix elements are nonzero, resulting in four multiplies per output channel per time sample. One issue is that the selection of those non-zero entries by the direction emphasis operation manager 150 may require additional operations.
In some implementations, when the direction emphasis function $v$ is independent of the time $t$, the direction emphasis operation manager 150 is configured to generate the direction-emphasized sound field SH coefficient data 156 using a more efficient process. Defining $1^{(\mathbb{Q})}$ as a $\mathbb{Q}$-dimensional vector of ones, $I^{(\mathbb{Q})}$ as the $\mathbb{Q}\times\mathbb{Q}$ identity matrix, and the matrix $A^{(\mathbb{Q}\mathbb{L})}=I^{(\mathbb{Q})}\otimes 1^{(\mathbb{L})}$, such that $B^{(\mathbb{Q})}\otimes 1^{(\mathbb{L})}=A^{(\mathbb{Q}\mathbb{L})}B^{(\mathbb{Q})}$, Eq. (15) may be rewritten as
$$\tilde B^{(\mathbb{P})}=\left(\mathrm{diag}^{-1}(g^{(\mathbb{P})})\,\tilde C^T A^{(\mathbb{Q}\mathbb{L})}\,\mathrm{diag}(g^{(\mathbb{Q})})\right)B^{(\mathbb{Q})}, \#(17)$$
where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix with the argument vector along the diagonal and $\tilde C^T=C^T\circ[1^{(\mathbb{P})}(1^{(\mathbb{Q})}\otimes V^{(\mathbb{L})})^T]$. Because the quantity in parentheses in Eq. (17) is time invariant, that quantity may be computed offline. Accordingly, only $\mathbb{P}\mathbb{Q}$ multiplies are needed for each time sample for the direction emphasis operation performed by the direction emphasis operation manager 150. Again, when $Q=1$ and $L=2$, there are four multiplies per output channel.
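The equivalence of the direct form in Eq. (15) and the precomputed form in Eq. (17) is easy to check numerically. In the sketch below, the conversion matrix and coefficient vectors are random stand-ins (the Clebsch-Gordan structure is not modeled; only the Hadamard/Kronecker bookkeeping is exercised), with dimensions matching the $Q=1$, $L=2$ example:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
Qn, Ln, Pn = 4, 9, 16        # (Q+1)^2, (L+1)^2, (Q+L+1)^2 for Q=1, L=2
gQ = np.array([(-1j) ** math.isqrt(q) for q in range(Qn)])
gP = np.array([(-1j) ** math.isqrt(q) for q in range(Pn)])
C = rng.standard_normal((Qn * Ln, Pn))   # stand-in conversion matrix
B = rng.standard_normal(Qn) + 1j * rng.standard_normal(Qn)
V = rng.standard_normal(Ln) + 1j * rng.standard_normal(Ln)

# Direct form, Eq. (15): g(P) o B~(P) = C^T ((g(Q) o B) kron V).
B_direct = (1 / gP) * (C.T @ np.kron(gQ * B, V))

# Precomputed form, Eq. (17): fold V into C^T, pull B out of the kron.
A = np.kron(np.eye(Qn), np.ones((Ln, 1)))   # so that x kron 1(L) = A x
Ct = C.T * np.kron(np.ones(Qn), V)          # C^T o [1(P) (1(Q) kron V)^T]
T = np.diag(1 / gP) @ Ct @ A @ np.diag(gQ)  # time-invariant operator
B_fast = T @ B
```

Only the final `T @ B` is applied per time sample; everything else can be computed once, offline.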
In some implementations, the direction emphasis acquisition manager 140 can be configured to generate the coefficients of the SH expansion of the direction emphasis function based on the sound field SH coefficient data 132. In this case, the generation is based on a particular formulation of the direction emphasis function in terms of time-dependent driving signals, assuming the pressure signal is a stationary stochastic process, as follows:
$$v(\theta',\phi',k)=\frac{E\!\left[\left|\mu(\theta',\phi',k,t)\right|^{\alpha}\right]}{\displaystyle\int\!\!\int_{\Omega}E\!\left[\left|\mu(\theta',\phi',k,t)\right|^{\alpha}\right]\sin\theta'\,d\theta'\,d\phi'}, \#(18)$$
where $E$ denotes an ensemble average that in practice can be approximated by an average over time (i.e., over time samples) and $\alpha>1$ is a real constant. The denominator in Eq. (18) represents a normalization, so that the integral of $v$ over the unit sphere is unity. The time-dependent driving signal may be written in a fashion similar to the time-independent formulation shown in Eq. (8) when $kr'\to\infty$:
$$\mu(\theta',\phi',k,t)\approx\frac{e^{jkr'}}{r'}\sum_{n=0}^{\infty}\sum_{m=-n}^{n}(-j)^{n}\,B_n^m(k,t)\,Y_n^m(\theta',\phi'), \#(19a)$$
or, in terms of a single sum,
$$\mu(\theta',\phi',k,t)\approx\frac{e^{jkr'}}{r'}\sum_{q=0}^{\infty}(-j)^{n(q)}\,B_q(k,t)\,Y_q(\theta',\phi'). \#(19b)$$
In the same limit ($kr'\to\infty$), the complex conjugate of the driving signal may be written as
$$\mu^*(\theta',\phi',k,t)\approx\frac{e^{-jkr'}}{r'}\sum_{q=0}^{\infty}j^{n(q)}\,\check B_q(k,t)\,Y_q(\theta',\phi'), \#(20)$$
where $\check B_n^m=(B_n^{-m})^*$. Again, the coefficients of the SH functions are time-dependent.
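Under the flat indexing $q=n^2+n+m$ assumed above, forming $\check B$ amounts to conjugating $B$ and flipping the sign of $m$ within each degree $n$. A minimal sketch (the indexing convention is an assumption carried over from the truncation discussion):

```python
import math
import numpy as np

def reflect_conjugate(B):
    """Given flat SH coefficients B (q = n^2 + n + m), return B-check
    with B-check_n^m = conj(B_n^{-m}): conjugate and flip m per degree."""
    B = np.asarray(B, dtype=complex)
    out = np.empty_like(B)
    for q in range(len(B)):
        n = math.isqrt(q)
        m = q - n * n - n
        out[q] = np.conj(B[n * n + n - m])   # flat index of (n, -m)
    return out

# For an order-1 vector [B_0^0, B_1^-1, B_1^0, B_1^1], the result is
# [conj(B_0^0), conj(B_1^1), conj(B_1^0), conj(B_1^-1)].
```

The operation is a fixed permutation plus a conjugation, so it costs nothing to precompute.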
When $\alpha=2$, the direction emphasis function may be determined based on the sound field SH coefficient data 132. Thus, it can be shown that:
$$E\!\left[\left|\mu(\theta',\phi',k,t)\right|^{2}\right]=\frac{1}{r'^2}\sum_{n=0}^{\infty}\sum_{n'=0}^{\infty}\sum_{m,m'}(-j)^{n}\,j^{n'}\,E\!\left[B_n^m(k,t)\,\check B_{n'}^{m'}(k,t)\right]Y_n^m(\theta',\phi')\,Y_{n'}^{m'}(\theta',\phi'). \#(21)$$
Eq. (21) may then be written in terms of a single SH expansion as described previously. It is assumed here that the driving signal $\mu$ has a SH expansion that has been truncated with degree $Q$, with the number of terms in the expansion being $\mathbb{Q}=(Q+1)^2$. When the direction emphasis function is normalized such that $v(\theta',\phi',k)=r'^2\,E[|\mu(\theta',\phi',k,t)|^2]$, the SH expansion of the direction emphasis function becomes
$$V^{(\mathbb{P})}(k)=C^T\,E\!\left[\left(g^{(\mathbb{Q})}\circ B^{(\mathbb{Q})}\right)\otimes\left(\bar g^{(\mathbb{Q})}\circ\check B^{(\mathbb{Q})}\right)\right], \#(22)$$
where $\mathbb{P}=(2Q+1)^2$ and $\bar g^{(\mathbb{Q})}$ is the complex conjugate of $g^{(\mathbb{Q})}$, whose $q$-th element is $j^{n(q)}$. Note that the expression derived in Eq. (22) can be used to compute an emphasized monopole density and an emphasized pressure field by using Eqs. (14) and (16), respectively.
Accordingly, the direction emphasis acquisition manager 140 can be configured to generate the direction emphasis SH coefficient data 142 according to Eq. (22) with the above assumptions. The direction emphasis acquisition manager 140 also can be configured to generate the ensemble average of the sound field SH coefficient data 132 to perform the generation of the direction emphasis SH coefficient data 142.
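The estimate in Eq. (22) can be sketched with an explicit time average standing in for the ensemble average. Everything below is a placeholder illustration: the conversion matrix and coefficient samples are random, so only the averaging and bookkeeping structure is shown:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
Qn = 4                      # (Q+1)^2 for order Q = 1
Pn = 9                      # (2Q+1)^2 for combined order 2Q = 2
g = np.array([(-1j) ** math.isqrt(q) for q in range(Qn)])
C = rng.standard_normal((Qn * Qn, Pn))   # stand-in conversion matrix

def reflect_conjugate(B):
    """B-check_n^m = conj(B_n^{-m}) in flat indexing q = n^2 + n + m."""
    out = np.empty_like(B)
    for q in range(len(B)):
        n = math.isqrt(q)
        m = q - n * n - n
        out[q] = np.conj(B[n * n + n - m])
    return out

# Approximate the ensemble average E[...] by an average over T samples.
T = 500
acc = np.zeros(Qn * Qn, dtype=complex)
for _ in range(T):
    B = rng.standard_normal(Qn) + 1j * rng.standard_normal(Qn)
    acc += np.kron(g * B, np.conj(g) * reflect_conjugate(B))
V = C.T @ (acc / T)   # emphasis-function coefficients, as in Eq. (22)
```

In a real pipeline the per-sample Kronecker products would come from the recorded coefficient sets in the sound field SH coefficient data 132 rather than from a random generator.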
FIG. 3 is a flow chart that illustrates an example method 300 of rendering high-order ambisonics (HOA). The method 300 may be performed by software constructs described in connection with FIG. 1, which reside in memory 126 of the sound rendering computer 120 and which are run by the set of processing units 124.
At 302, the sound acquisition manager 130 receives sound data resulting from a sound field detected at a microphone. The sound field is represented as a first expansion in spherical harmonic (SH) functions including a vector of coefficients of the first expansion, e.g., the vector B.sup.(Q).
At 304, the direction emphasis acquisition manager 140 obtains a vector of coefficients of a second expansion of a direction emphasis field in SH functions, e.g., the vector $V^{(\mathbb{L})}$. The direction emphasis field $v$ defines a direction-emphasized monopole density field $\tilde\mu$ upon multiplication with a monopole density field $\mu$, e.g., as in Eq. (9). It is noted that neither the monopole density field nor the direction-emphasized monopole density field is actually computed; rather, the concept of these fields provides the basis for defining the direction emphasis field. The monopole density field $\mu$, when represented as an expansion in SH functions, includes a vector of coefficients, and this vector is based on the vector of coefficients of the first expansion, e.g., as in Eq. (7b).
At 306, the direction emphasis operation manager 150 performs a direction emphasis operation, e.g., Eq. (15), on the vector of coefficients of the first expansion based on the vector of coefficients of the second expansion to produce a vector of coefficients of a third expansion into SH functions, e.g., {tilde over (B)}.sup.(Q). The third expansion represents a direction-emphasized sound field, e.g., {tilde over (p)}, that reproduces a directional sound field with a perceived directionality and timbre.
In some implementations, the conversion matrix manager 152 generates conversion matrix data, e.g., the conversion matrix data 154 representing a conversion matrix, e.g., $C$ defined in Eq. (13), resulting from conversion of an expansion in pairs of SHs into an expansion over single SHs. The direction emphasis operation manager 150 then produces the vector of coefficients of the third expansion based on the conversion matrix.
In some implementations, the conversion matrix manager 152 generates, as an element of the conversion matrix, a Clebsch-Gordan coefficient representing a weight of a SH function in the expansion in pairs of SHs. In some implementations, the conversion matrix manager 152 generates the elements of the conversion matrix by generating a plurality of points on a unit sphere $\{(\theta_i,\phi_i)\}$, $i\in\{0,\ldots,\mathbb{P}-1\}$; generating, at each of the plurality of points, samples of a first vector of SH functions $Y^{(\mathbb{Q})}$ to produce a first matrix, samples of a second vector of SH functions $Y^{(\mathbb{L})}$ to produce a second matrix, and samples of a third vector of SH functions $Y^{(\mathbb{P})}$ to produce a third matrix; and producing, as the conversion matrix, a product of the columnwise Kronecker product of the first matrix and the second matrix, e.g., the columns $Y^{(\mathbb{Q})}(\theta_i,\phi_i)\otimes Y^{(\mathbb{L})}(\theta_i,\phi_i)$, and an inverse of the third matrix, e.g., the matrix with columns $Y^{(\mathbb{P})}(\theta_i,\phi_i)$.
In some implementations, the direction emphasis operation manager 150 generates a Kronecker product of the vector of coefficients of the first expansion and the vector of coefficients of the second expansion to produce a vector of coefficient products, e.g., $B^{(\mathbb{Q})}\otimes V^{(\mathbb{L})}$ in Eq. (15). The direction emphasis operation manager 150 then produces, as the vector of coefficients of the third expansion, a product of a transpose of the conversion matrix and the vector of coefficient products, e.g., as in Eq. (15).
In some implementations, the direction emphasis operation manager 150 generates a Kronecker product of a first vector of ones and the vector of coefficients of the second expansion to produce a first product vector, e.g., $1^{(\mathbb{Q})}\otimes V^{(\mathbb{L})}$ in Eq. (17). The direction emphasis operation manager 150 then generates an outer product of a second vector of ones and a transpose of the first product vector, e.g., $1^{(\mathbb{P})}(1^{(\mathbb{Q})}\otimes V^{(\mathbb{L})})^T$ in Eq. (17), and generates a Hadamard product of a transpose of the conversion matrix and that outer product to produce a second conversion matrix, e.g., $\tilde C^T=C^T\circ[1^{(\mathbb{P})}(1^{(\mathbb{Q})}\otimes V^{(\mathbb{L})})^T]$ in Eq. (17). The direction emphasis operation manager 150 then generates a Kronecker product of an identity matrix and a third vector of ones to produce a matrix of units, e.g., $A^{(\mathbb{Q}\mathbb{L})}=I^{(\mathbb{Q})}\otimes 1^{(\mathbb{L})}$ in Eq. (17). The direction emphasis operation manager 150 then produces, as the vector of coefficients of the third expansion, a product of the second conversion matrix, the matrix of units, and the vector of coefficients of the first expansion, e.g., $\tilde B^{(\mathbb{P})}=(\mathrm{diag}^{-1}(g^{(\mathbb{P})})\,\tilde C^T A^{(\mathbb{Q}\mathbb{L})}\,\mathrm{diag}(g^{(\mathbb{Q})}))B^{(\mathbb{Q})}$ in Eq. (17), where $g^{(\mathbb{Q})}$ is a vector whose $q$-th element is $(-j)^{n(q)}$, and similarly for $g^{(\mathbb{P})}$.
In some implementations, the direction emphasis acquisition manager 140 performs an ensemble average over time of a power of a magnitude of the monopole density field, e.g., as in Eq. (18). In some implementations, the power is equal to 2. In that case, the direction emphasis acquisition manager 140 generates an ensemble average over time of a Kronecker product of the vector of coefficients of the first expansion and a complex conjugate of the vector of coefficients of the first expansion to produce a first vector of ensemble-averaged coefficient products, e.g., as in Eq. (22) with the complex conjugate being the vector {hacek over (B)}.sup.(Q). The direction emphasis acquisition manager 140 then generates a Hadamard product of a vector of powers of an imaginary unit, e.g., g, and the first vector of ensemble-averaged coefficient products to produce a second vector of ensemble-averaged coefficient products, e.g., as in Eq. (22). The direction emphasis acquisition manager 140 then produces, as an element of the vector of coefficients of the second expansion, a product of a transpose of the conversion matrix and a corresponding element of the second vector of ensemble-averaged coefficient products, e.g., as in Eq. (22). Again, it is noted that, in the framework described here, the ensemble average may be approximated with a time average.
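The core of the power-2 case, approximating the ensemble average by a time average of the Kronecker product of each coefficient frame with its conjugate, can be sketched as follows. The frame data here is random and hypothetical, and the patent's subsequent application of the phase vector g and the conversion matrix (Eq. (22)) is omitted; the sketch shows only the averaged quantity and its structure.

```python
import numpy as np

# Hypothetical: 64 time frames of order-Q ambisonic coefficients (n_q = 4).
rng = np.random.default_rng(2)
frames = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))

# Eq. (22) core (power = 2): time average (approximating the ensemble
# average) of the Kronecker product of each frame with its complex conjugate.
avg = np.mean([np.kron(b, np.conj(b)) for b in frames], axis=0)

# Reshaped, this is the coefficient covariance <B B^H>: Hermitian, with
# real, nonnegative mean powers <|B_i|^2> on the diagonal.
R = avg.reshape(4, 4)
```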
In some implementations, the vector of coefficients of the second expansion is based on the vector of coefficients of the first expansion.
In some implementations, the memory 126 can be any type of memory such as a random-access memory, a disk drive memory, flash memory, and/or so forth. In some implementations, the memory 126 can be implemented as more than one memory component (e.g., more than one RAM component or disk drive memory) associated with the components of the sound rendering computer 120. In some implementations, the memory 126 can be a database memory. In some implementations, the memory 126 can be, or can include, a non-local memory. For example, the memory 126 can be, or can include, a memory shared by multiple devices (not shown). In some implementations, the memory 126 can be associated with a server device (not shown) within a network and configured to serve the components of the sound rendering computer 120.
The components (e.g., managers, processing units 124) of the sound rendering computer 120 can be configured to operate based on one or more platforms (e.g., one or more similar or different platforms) that can include one or more types of hardware, software, firmware, operating systems, runtime libraries, and/or so forth.
The components of the sound rendering computer 120 can be, or can include, any type of hardware and/or software configured to process attributes. In some implementations, one or more portions of the components shown in the components of the sound rendering computer 120 in FIG. 1 can be, or can include, a hardware-based module (e.g., a digital signal processor (DSP), a field programmable gate array (FPGA), a memory), a firmware module, and/or a software-based module (e.g., a module of computer code, a set of computer-readable instructions that can be executed at a computer). For example, in some implementations, one or more portions of the components of the sound rendering computer 120 can be, or can include, a software module configured for execution by at least one processor (not shown). In some implementations, the functionality of the components can be included in different modules and/or different components than those shown in FIG. 1.
In some implementations, the components of the sound rendering computer 120 (or portions thereof) can be configured to operate within a network. Thus, the components of the sound rendering computer 120 (or portions thereof) can be configured to function within various types of network environments that can include one or more devices and/or one or more server devices. For example, the network can be, or can include, a local area network (LAN), a wide area network (WAN), and/or so forth. The network can be, or can include, a wired and/or wireless network implemented using, for example, gateway devices, bridges, switches, and/or so forth. The network can include one or more segments and/or can have portions based on various protocols such as Internet Protocol (IP) and/or a proprietary protocol. The network can include at least a portion of the Internet.
In some embodiments, one or more of the components of the sound rendering computer 120 can be, or can include, processors configured to process instructions stored in a memory. For example, the sound acquisition manager 130 (and/or a portion thereof), the direction emphasis acquisition manager 140 (and/or a portion thereof), and the direction emphasis operation manager 150 (and/or a portion thereof) can include a combination of a memory storing instructions related to a process to implement one or more functions and a processor configured to execute the instructions.
FIG. 4 shows an example of a computer device 400 and a mobile computer device 450, which may be used with the techniques described here. The computing device 400 is intended to represent various forms of digital computers, such as laptops, desktops, tablets, workstations, personal digital assistants, televisions, servers, blade servers, mainframes, and other appropriate computing devices. The computing device 450 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
The computing device 400 includes a processor 402, memory 404, a storage device 406, a high-speed interface 408 connecting to memory 404 and high-speed expansion ports 410, and a low speed interface 412 connecting to low speed bus 414 and storage device 406. The processor 402 can be a semiconductor-based processor. The memory 404 can be a semiconductor-based memory. Each of the components 402, 404, 406, 408, 410, and 412 is interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 402 can process instructions for execution within the computing device 400, including instructions stored in the memory 404 or on the storage device 406 to display graphical information for a GUI on an external input/output device, such as display 416 coupled to high speed interface 408. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 400 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 404 stores information within the computing device 400. In one implementation, the memory 404 is a volatile memory unit or units. In another implementation, the memory 404 is a non-volatile memory unit or units. The memory 404 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 406 is capable of providing mass storage for the computing device 400. In one implementation, the storage device 406 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 404, the storage device 406, or memory on processor 402.
The high speed controller 408 manages bandwidth-intensive operations for the computing device 400, while the low speed controller 412 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 408 is coupled to memory 404, display 416 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 410, which may accept various expansion cards (not shown). In the implementation, low-speed controller 412 is coupled to storage device 406 and low-speed expansion port 414. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 400 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 420, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 424. In addition, it may be implemented in a personal computer such as a laptop computer 422. Alternatively, components from computing device 400 may be combined with other components in a mobile device (not shown), such as device 450. Each of such devices may contain one or more of computing device 400, 450, and an entire system may be made up of multiple computing devices 400, 450 communicating with each other.
The computing device 450 includes a processor 452, memory 464, an input/output device such as a display 454, a communication interface 466, and a transceiver 468, among other components. The device 450 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 450, 452, 464, 454, 466, and 468 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 452 can execute instructions within the computing device 450, including instructions stored in the memory 464. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 450, such as control of user interfaces, applications run by device 450, and wireless communication by device 450.
Processor 452 may communicate with a user through control interface 458 and display interface 456 coupled to a display 454. The display 454 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 456 may comprise appropriate circuitry for driving the display 454 to present graphical and other information to a user. The control interface 458 may receive commands from a user and convert them for submission to the processor 452. In addition, an external interface 462 may be provided in communication with processor 452, so as to enable near area communication of device 450 with other devices. External interface 462 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
The memory 464 stores information within the computing device 450. The memory 464 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 474 may also be provided and connected to device 450 through expansion interface 472, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 474 may provide extra storage space for device 450, or may also store applications or other information for device 450. Specifically, expansion memory 474 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 474 may be provided as a security module for device 450, and may be programmed with instructions that permit secure use of device 450. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 464, expansion memory 474, or memory on processor 452 that may be received, for example, over transceiver 468 or external interface 462.
The computing device 450 may communicate wirelessly through communication interface 466, which may include digital signal processing circuitry where necessary. Communication interface 466 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 468. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 470 may provide additional navigation- and location-related wireless data to device 450, which may be used as appropriate by applications running on device 450.
The computing device 450 may also communicate audibly using audio codec 460, which may receive spoken information from a user and convert it to usable digital information. Audio codec 460 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 450. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 450.
The computing device 450 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 480. It may also be implemented as part of a smart phone 482, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Although certain example methods, apparatuses and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. It is to be understood that terminology employed herein is for the purpose of describing particular aspects, and is not intended to be limiting. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.