Patent: Audio System And Method
Publication Number: 20170208417
Publication Date: 20170720
Applicants: Facebook
Abstract
Embodiments relate to, for a scene comprising a representation of at least one object and at least one sound source: obtaining a decomposition of the at least one object, the decomposition comprising at least one geometric component; modelling at least one interaction of the at least one object and the at least one sound source using the at least one geometric component; and, in dependence on the modelling of the at least one interaction, processing an audio input associated with the at least one sound source to obtain an audio output.
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority under 35 U.S.C. § 119(a) to United Kingdom Patent Application No. 1601000.1 filed on Jan. 19, 2016, which is incorporated by reference herein in its entirety.
BACKGROUND
[0002] The present invention relates to an audio system and method, for example an audio system and method for obtaining an audio output for a scene comprising at least one object and at least one sound source.
[0003] It is known that a computer game or other interactive application may be made up of multiple computer-generated objects in a computer-generated scene. The objects may be represented in 3D. For example, objects may be represented as polygonal meshes, which may also be referred to as a wire-frame representation.
[0004] A scene may also contain multiple sound sources. A given sound source may be associated with an object in the scene, or may be independent of the objects in the scene. Sound associated with a given sound source may comprise, for example, recorded sound and/or computer-generated sound.
[0005] A scene may further contain a position with respect to which sound is determined. The position with respect to which sound is determined may be referred to as the position of a listener. The position of the listener may be used to determine what a user (for example, a player of the game) hears.
[0006] The position of the listener may be attached to a virtual camera position. If the position of the listener is attached to a virtual camera position, the position from which the user sees the scene may be the same as the position from which the user hears sounds associated with the scene. However, while the user may only be able to see objects that are in front of the virtual camera, in some circumstances the user may be able to hear sound sources at any angle relative to the listener, including sound sources behind the listener.
[0007] 3D audio may refer to a technique used to process audio in such a way that a sound may be positioned anywhere in 3D space. The positioning of sounds in 3D space may give a user the effect of being able to hear a sound over a pair of headphones, or from another source, as if it came from any direction (for example, above, below or behind). The positioning of sound in 3D space may be referred to as spatialisation. 3D audio may be used in applications such as games, virtual reality or augmented reality to enhance the realism of computer-generated sound effects supplied to the user.
[0008] One method of obtaining 3D audio may be binaural synthesis. Binaural synthesis may aim to process monaural sound (a single channel of sound) into binaural sound (a plurality of channels, for instance at least one channel for each ear, for example a channel for each headphone of a set of headphones) such that it appears to a listener that sounds originate from sources at different positions relative to the listener, including sounds above, below and behind the listener.
[0009] For example, binaural synthesis may be used to synthesise sound to be delivered to a pair of headphones, in which different signals are sent to the left and right ears of a user. To the user, a difference between the signal received through the user’s left ear and the signal received by the user’s right ear (for example, a relative time delay) may make it seem that the sound is coming from a particular position. The use of binaural synthesis to obtain audio signals for two headphones or other devices may be described as binaural rendering.
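The interaural time-delay cue described above can be sketched as follows. This is a minimal, non-limiting illustration in Python; the head radius, far-ear gain, sample rate and the Woodworth-style delay formula are illustrative assumptions, not values taken from the disclosure:

```python
import math

SAMPLE_RATE = 44100          # samples per second (assumed)
HEAD_RADIUS = 0.0875         # metres, approximate human head radius (assumed)
SPEED_OF_SOUND = 343.0       # metres per second

def pan_binaural(mono, azimuth_rad):
    """Crude binaural cue sketch: delay and attenuate the far ear.

    azimuth_rad: 0 = straight ahead, positive = source to the right.
    Returns (left, right) sample lists of equal length.
    """
    # Woodworth-style interaural time difference approximation.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(azimuth_rad) + math.sin(abs(azimuth_rad)))
    delay = int(round(itd * SAMPLE_RATE))
    # Simple interaural level difference: attenuate the shadowed (far) ear.
    far_gain = 0.6
    far = [0.0] * delay + [s * far_gain for s in mono]
    near = list(mono) + [0.0] * delay
    if azimuth_rad >= 0:     # source on the right: left ear is the far ear
        return far, near
    return near, far
```

For example, `pan_binaural(signal, math.radians(90))` places the sound hard right: the right channel receives the signal immediately and the left channel receives a delayed, attenuated copy.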
[0010] Effects of a user’s body on sound received by the user may be simulated. For example, HRTF (head related transfer function) methods may be used to simulate the effect of a listener’s head, pinnae and shoulders on sound received from a particular direction.
[0011] Various forms of spatialisation may be used. For example, audio signals may be processed to produce spatialisation over speakers or over surround sound, for example over a 5.1 surround sound system. The audio signals may be processed so that a user listening to the signals over speakers or over surround sound perceives the sound as coming from a particular position.
[0012] There exist systems in which sound propagation is calculated by ray tracing. See, for example, Carl Schissler and Dinesh Manocha, GSound: Interactive Sound Propagation for Games, AES 41st Conference: Audio for Games, 2011.
SUMMARY
[0013] In a first aspect of the invention, there is provided a method comprising, for a scene comprising a representation of at least one object and at least one sound source: obtaining a decomposition of the at least one object, the decomposition comprising at least one geometric component; modelling at least one interaction of the at least one object and the at least one sound source using the at least one geometric component; and, in dependence on the modelling of the at least one interaction, processing an audio input associated with the at least one sound source to obtain an audio output.
[0014] By modelling interactions of objects and sound sources, an audio output may be obtained that is experienced by a user as being more realistic than an output in which such interactions are not modelled. Using a decomposition of objects may provide a method that is computationally efficient and/or accurate.
[0015] A calculation of interactions that is based on a full representation of object geometry (without decomposition into geometrical components) may be complicated and/or CPU intensive. The decomposition into geometric components may allow the calculation of interactions to be less complicated and/or less CPU intensive. Breaking down each object into smaller parts (the geometric components) may allow calculations to be performed on smaller regions than if the original object representation was used.
[0016] The audio input may comprise a monaural audio input. The processing may comprise performing binaural synthesis. The audio output may comprise binaural audio output.
[0017] In some circumstances, synthesis, for example binaural synthesis, may be computationally intensive. The method of modelling interactions using geometric components may allow such interactions to be added to the synthesis process without an excessive increase in computational load. In some applications, the computational resources used for calculating sound may be limited, and it may be particularly important that sound is processed efficiently.
[0018] Each geometric component may comprise a representation of a shape, for example a three-dimensional shape. The shape may comprise a simplified representation of the corresponding object or at least part of the corresponding object. Each geometric component may comprise a respective convex hull. Obtaining the decomposition of the at least one object may comprise performing a convex hull decomposition of each object. The convex hull decomposition method may be computationally efficient.
[0019] The representation of the at least one object may comprise a mesh representation, for example a polygonal mesh representation of each object.
[0020] The at least one interaction may comprise, for each sound source, at least one occlusion. By providing an audio output that includes the effect of occlusions, a user may experience the audio output as being more realistic than if occlusions were not modelled. By providing more realistic sound, a user experience may be improved.
[0021] For each sound source, modelling the at least one occlusion for that sound source may comprise: obtaining relative positions of the sound source, each geometric component, and a position in the scene with respect to which the processing is to be performed; and determining at least one occlusion coefficient in dependence on the relative positions.
[0022] For each sound source, modelling the at least one occlusion may comprise determining that one or more of the at least one geometric components is positioned between the sound source and the position in the scene.
[0023] Determining the at least one occlusion coefficient may comprise determining a respective occlusion coefficient for each of the geometric components that is positioned between the sound source and the position in the scene.
[0024] Determining that a geometric component is positioned between the sound source and the position in the scene may comprise determining at least one occlusion point at which the geometric component intersects a straight line between the sound source and the position in the scene.
[0025] Each occlusion coefficient may be dependent on a number of occlusion points and/or a spacing of occlusion points.
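The occlusion test of paragraphs [0022] to [0025] can be sketched as follows. This is a non-limiting illustration in Python, under the assumption that each convex hull is available as a set of half-spaces (the disclosure stores hull meshes; a half-space form can be derived from a hull's faces). Clipping the source-to-listener segment against the half-spaces, in the style of Cyrus-Beck clipping, yields the entry and exit occlusion points:

```python
def segment_hull_occlusion_points(src, lst, half_spaces):
    """Return the entry/exit points where the segment src->lst crosses a
    convex hull given as half-spaces [(normal, offset)], where a point x
    is inside the half-space when dot(normal, x) <= offset.
    Returns [] if the segment misses the hull (no occlusion).
    """
    d = [b - a for a, b in zip(src, lst)]          # segment direction
    t_enter, t_exit = 0.0, 1.0                      # parametric clip window
    for n, off in half_spaces:
        denom = sum(ni * di for ni, di in zip(n, d))
        dist = sum(ni * si for ni, si in zip(n, src)) - off
        if abs(denom) < 1e-12:
            if dist > 0:                            # parallel and outside
                return []
            continue
        t = -dist / denom
        if denom < 0:
            t_enter = max(t_enter, t)               # entering this half-space
        else:
            t_exit = min(t_exit, t)                 # leaving this half-space
        if t_enter > t_exit:
            return []                               # clip window empty: miss
    at = lambda t: [s + t * di for s, di in zip(src, d)]
    return [at(t_enter), at(t_exit)]
```

An occlusion coefficient could then be derived from the number of such point pairs across all hulls, and/or from the spacing between each entry and exit point (a proxy for how much material the ray passes through), as paragraph [0025] suggests.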
[0026] The audio input associated with the at least one sound source may comprise, for each sound source, a respective audio signal.
[0027] Processing of the audio input may comprise, for each audio signal, adjusting at least one parameter of the audio signal in dependence on the at least one occlusion coefficient for the sound source associated with that audio signal, for example at least one of an amplitude or gain of the audio signal and a frequency spectrum of the audio signal. Changing the amplitude or gain and/or frequency spectrum of audio signals for sound sources that are occluded may provide sound that is perceived by a user to be more realistic.
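The gain and spectrum adjustment just described can be sketched as follows. This is a non-limiting Python illustration; the particular gain law and one-pole low-pass filter are illustrative assumptions standing in for any suitable amplitude and frequency-spectrum adjustment:

```python
def apply_occlusion(signal, occlusion):
    """Attenuate and dull a mono signal according to an occlusion
    coefficient in [0, 1] (0 = unoccluded, 1 = fully occluded).

    The gain reduction models energy lost behind the obstacle; the
    one-pole low-pass models the stronger attenuation of high
    frequencies by intervening material.
    """
    gain = 1.0 - 0.8 * occlusion      # never fully silent (diffraction)
    alpha = 1.0 - 0.9 * occlusion     # filter coefficient: lower = duller
    out, prev = [], 0.0
    for s in signal:
        prev = prev + alpha * (gain * s - prev)   # one-pole low-pass
        out.append(prev)
    return out
```

With `occlusion = 0` the signal passes through unchanged; as the coefficient rises the output becomes quieter and duller.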
[0028] The method may further comprise selecting the at least one object in dependence on a distance between each object and a position in the scene with respect to which the processing is performed. Selecting the at least one object may comprise selecting at least one object that is within a threshold distance of the position in the scene. Considering only objects that are within a maximum distance may improve computational efficiency. By calculating interactions only for objects that are near the listener, it may be possible to calculate only interactions that provide a significant contribution to the final audio signal.
[0029] The at least one interaction may comprise, for each sound source, at least one reflection. By providing an audio output that includes the effect of reflections, a user may experience the audio output as being more realistic than if reflections were not modelled. By providing more realistic sound, a user experience may be improved.
[0030] For each sound source, modelling each reflection for the sound source may comprise determining a reflected ray, for example from the sound source to a position in the scene with respect to which the processing is to be performed. Determining a reflected ray may comprise determining a path of the reflected ray. The path of the reflected ray may comprise a portion of the path before reflection and a portion of the path after reflection.
[0031] Determining a reflected ray, for example from the sound source to the position in the scene, may comprise determining a reflected ray from the sound source to the position in the scene via a face of a geometric component.
[0032] The method may further comprise, for each geometric component, obtaining at least one further component corresponding to the geometric component. Each further component may comprise a representation of a shape, for example a three-dimensional shape. Each further component may comprise a representation of a respective one or more of the geometric component(s), for example a simplified representation of the geometric component(s). Each further component may comprise an oriented bounding box. Each further component may be substantially convex. Each further component may be calculated using a callipers method, for example a 3D vernier callipers method.
[0033] Each further component may be a simpler shape than its respective geometric component. Simpler shapes may require fewer CPU cycles to carry out calculations, for example reflection calculations. The calculation of the reflections may be more computationally efficient than may have been the case if a more complex representation of the object were used.
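An oriented bounding box of the kind described in paragraph [0032] can be sketched as follows. Note this non-limiting Python illustration uses the common PCA-based fit, not the 3D vernier callipers method named in the disclosure; it shows only the general idea of fitting a rotated box to a hull's vertices:

```python
import numpy as np

def pca_obb(points):
    """Approximate oriented bounding box of a 3D point cloud (e.g. the
    vertices of a convex hull).

    Returns (centre, axes, half_extents), where axes is a 3x3 matrix
    whose rows are the box axes in world coordinates.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    # Principal axes of the point cloud give the box orientation.
    _, _, vt = np.linalg.svd(pts - mean)
    local = (pts - mean) @ vt.T            # coordinates in the box frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    centre = mean + ((lo + hi) / 2) @ vt
    return centre, vt, (hi - lo) / 2
```

The resulting box is convex by construction, which (per paragraph [0035] below, "substantially convex") is the property the reflection calculation relies on.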
[0034] Determining a reflected ray, for example from the sound source to the position in the scene, may comprise determining a reflected ray (for example from the sound source to the position in the scene) via a face of a further component.
[0035] Determining a reflected ray from the sound source to the position in the scene via a face of the further component may comprise determining that the face of the further component faces both the sound source and the position in the scene.
[0036] The determining of each reflected ray may comprise an image-source method.
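The image-source construction can be sketched as follows. This is a non-limiting Python illustration for a single planar face, assuming a unit-length face normal; the face-membership test of the hit point (whether it lies within the face's bounds) is omitted for brevity:

```python
def image_source_reflection(src, lst, plane_point, plane_normal):
    """Image-source sketch: mirror the source across the reflecting
    plane, then intersect the line image->listener with the plane to
    find the reflection point. plane_normal must be unit length.
    Returns (reflection_point, path_length), or None if the geometry
    yields no valid reflection between source and listener.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    # Signed distance of the source from the plane.
    d = dot(sub(src, plane_point), plane_normal)
    image = [s - 2 * d * n for s, n in zip(src, plane_normal)]
    ray = sub(lst, image)
    denom = dot(ray, plane_normal)
    if abs(denom) < 1e-12:
        return None                      # ray parallel to the plane
    t = dot(sub(plane_point, image), plane_normal) / denom
    if not 0.0 < t < 1.0:
        return None                      # hit point not between the endpoints
    hit = [i + t * r for i, r in zip(image, ray)]
    # Path length source->hit->listener equals |image->listener|.
    length = sum(c * c for c in ray) ** 0.5
    return hit, length
```

The returned path length feeds directly into the time-delay processing described below for reflections.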
[0037] The method may further comprise, for each reflected ray, determining whether the reflected ray is obstructed. The method may further comprise, for any reflected ray that is determined to be obstructed, removing that reflected ray and/or determining an occlusion coefficient for that reflected ray.
[0038] The processing of the audio signal in dependence on the modelling of the at least one interaction may comprise processing the audio input in dependence on the determined at least one reflection.
[0039] The audio input may comprise, for each sound source, a respective audio signal. Determining the at least one reflection for each sound source may comprise determining at least one reflection coefficient for each sound source.
[0040] Processing the audio input may comprise, for each audio signal, adjusting at least one parameter of the audio signal in dependence on the at least one reflection coefficient for its associated sound source, for example at least one of an amplitude or gain of the audio signal and a frequency spectrum of the audio signal. Processing the audio input may comprise applying at least one time delay to each audio signal. Changing the amplitude or gain and/or frequency spectrum of audio signals for reflections and/or applying a time delay to reflections may provide sound that is perceived by a user to be more realistic.
[0041] Applying at least one time delay to each audio signal may comprise, for each reflected ray determined for the sound source associated with the audio signal, applying a time delay that is dependent on a length of the reflected ray.
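The length-dependent delay can be sketched as follows. This is a non-limiting Python illustration; the sample rate is an illustrative assumption, and the delay is expressed relative to the direct path so that the direct sound itself carries no extra delay:

```python
SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 48000      # samples per second (assumed output rate)

def delay_for_path(signal, path_length_m, direct_length_m):
    """Delay a reflection's signal by the extra distance it travels
    relative to the direct source->listener path."""
    extra = max(0.0, path_length_m - direct_length_m)
    samples = int(round(extra / SPEED_OF_SOUND * SAMPLE_RATE))
    return [0.0] * samples + list(signal)
```

A reflection whose path is 3.43 m longer than the direct path, for instance, arrives 10 ms (480 samples at 48 kHz) later.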
[0042] The method may comprise registering with a grid each object and/or bounding box and/or convex hull or other geometric component. The method may comprise using a grid based search system to find objects, convex hulls, and/or bounding boxes based on their positions on the grid.
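The grid-based registration and search just described can be sketched as follows. This is a minimal, non-limiting Python illustration using a uniform hash grid; the class and method names are illustrative assumptions:

```python
from collections import defaultdict

class SpatialGrid:
    """Uniform grid for registering objects, convex hulls, or bounding
    boxes by position and finding those near a query position."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _key(self, pos):
        # Floor division maps a world position to integer cell indices.
        return tuple(int(c // self.cell_size) for c in pos)

    def register(self, item, pos):
        self.cells[self._key(pos)].append(item)

    def nearby(self, pos):
        """Items in the query cell and its 26 neighbouring cells."""
        kx, ky, kz = self._key(pos)
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    found.extend(self.cells.get((kx + dx, ky + dy, kz + dz), []))
        return found
```

Such a grid lets the occlusion and reflection calculations consider only components near the listener, rather than testing every component in the scene.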
[0043] In a further aspect of the invention, which may be provided independently, there is provided an apparatus comprising, for a scene comprising a representation of at least one object and at least one sound source: means for obtaining a decomposition of the at least one object, the decomposition comprising at least one geometric component; means for modelling at least one interaction of the at least one object and the at least one sound source using the at least one geometric component; and, means for, in dependence on the modelling of the at least one interaction, processing an audio input associated with the at least one sound source to obtain an audio output.
[0044] In another aspect of the invention, which may be provided independently, there is provided an apparatus comprising a processor configured to, for a scene comprising a representation of at least one object and at least one sound source: obtain a decomposition of the at least one object, the decomposition comprising at least one geometric component; model at least one interaction of the at least one object and the at least one sound source using the at least one geometric component; and, in dependence on the modelling of the at least one interaction, process an audio input associated with the at least one sound source to obtain an audio output.
[0045] In a further aspect of the invention, which may be provided independently, there is provided an apparatus comprising a processing resource configured to perform a method as claimed or described herein.
[0046] The apparatus may further comprise an input device configured to receive audio input representing sound from at least one audio source. The processing resource may be configured to obtain the audio output by processing the audio input in dependence on the modelling of the at least one interaction. The apparatus may further comprise an output device configured to output the audio output.
[0047] In another aspect of the invention, which may be provided independently, there is provided a computer program product comprising computer readable instructions that are executable by a processor to perform a method as claimed or described herein.
[0048] There may also be provided an apparatus or method substantially as described herein with reference to the accompanying drawings.
[0049] Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. For example, apparatus features may be applied to method features and vice versa.
BRIEF DESCRIPTION OF DRAWINGS
[0050] Embodiments of the invention are now described, by way of non-limiting examples, and are illustrated in the following figures, in which:
[0051] FIG. 1 is a schematic diagram of an audio system according to an embodiment;
[0052] FIG. 2 is a flow chart illustrating in overview a convex hull decomposition method;
[0053] FIG. 3 is a flow chart illustrating in overview a method for calculating oriented bounding boxes;
[0054] FIG. 4 is a schematic representation of a cylinder;
[0055] FIG. 5 is a schematic representation of a convex hull of the cylinder of FIG. 4;
[0056] FIG. 6 is a schematic representation of an oriented bounding box of the convex hull of FIG. 5;
[0057] FIG. 7 is a schematic representation of an object;
[0058] FIG. 8 is a schematic representation of a decomposition of the object of FIG. 7 into a plurality of convex hulls;
[0059] FIG. 9 is a schematic representation of oriented bounding boxes of the convex hulls of FIG. 8;
[0060] FIG. 10 is a flow chart illustrating in overview the creation of a container of sound sources;
[0061] FIG. 11 is a flow chart illustrating in overview the process of an embodiment;
[0062] FIG. 12 is a schematic diagram representing a reflected ray;
[0063] FIG. 13 is a flow chart illustrating in overview the process of an embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS
[0064] An audio system 10 according to an embodiment is illustrated schematically in FIG. 1. The audio system 10 comprises a computing apparatus 12 that is configured to receive monaural audio input from an input device, for example in the form of external source or data store 14, process the audio input to obtain a binaural output comprising a left output and a right output, and to deliver the binaural output to headphones 16a, 16b. The left output is delivered to left headphone 16a and the right output is delivered to right headphone 16b. In other embodiments, the binaural output may be delivered to at least two loudspeakers. For example, the left output may be delivered to a left loudspeaker and the right output may be delivered to a right loudspeaker. In some embodiments, the monaural audio input may be generated by or stored in computing apparatus 12 rather than being received from an external source or data store 14.
[0065] In further embodiments, any audio input may be used. The audio input may be processed in any appropriate way to produce an audio output to be delivered to headphones, speakers, a surround sound system, or any other suitable audio device. The processing may be such that a user may perceive the audio output as being localized in space.
[0066] The computing apparatus 12 comprises a processor 18 for processing audio data and a memory 20 for storing data. The computing apparatus 12 also includes a hard drive and other components including RAM, ROM, a data bus, an operating system including various device drivers, and hardware devices including a graphics card. Such components are not shown in FIG. 1 for clarity.
[0067] In the embodiment of FIG. 1, computing apparatus 12 is configured to perform a convex hull decomposition of objects and to generate an oriented bounding box for each convex hull. Computing apparatus 12 is also configured to calculate reflections using the oriented bounding boxes, calculate occlusions using the convex hulls, and use the occlusions and reflections in processing audio input to obtain audio output that includes occlusion and reflection effects. In alternative embodiments, alternative geometric components may be used in place of convex hulls and bounding boxes, for example any suitable representation of shapes, such as three-dimensional shapes. The geometric components may comprise a simplified representation of a corresponding object or at least part of the corresponding object.
[0068] In other embodiments, audio system 10 may comprise a plurality of computing apparatuses. For example, a first computing apparatus may generate convex hulls and oriented bounding boxes, or other components, and a second, different computing apparatus may calculate occlusions and/or reflections and perform synthesis.
[0069] The system of FIG. 1 is configured to perform the methods of the flow charts of FIGS. 2, 3, 10, 11 and 13.
[0070] In the present embodiment, a plurality of objects and a plurality of sound sources are positioned in a scene. The scene also comprises a position with respect to which sound is calculated, which may be referred to as the position of a listener. The scene may comprise a part of or the whole of any virtual environment, for example the virtual world of a game.
[0071] The computing apparatus 12 is used to model interactions of the objects and sound sources. In the present embodiment, the interactions of the objects and sound sources comprise occlusions and reflections. The computing apparatus 12 is used for a geometry- or object-based calculation of reflections and occlusions.
[0072] In the present embodiment, each object is initially represented by a respective 3D polygonal mesh. Each polygonal mesh may comprise a plurality of vertices that may be joined by edges to form faces, in this case triangular faces. Each object may have a complex shape. For example, some objects may represent parts of a scene such as walls, while other objects may represent, for example, people or animals. Therefore, the initial polygonal mesh representation of each object may take any shape. The shape of an object may be complex. The shape of an object may comprise at least one convex portion and/or at least one concave portion. An object may be large and/or the polygonal mesh representation of an object may comprise a large number of vertices.
[0073] The plurality of objects are each decomposed into a respective set of convex hulls using convex hull decomposition, as described below with reference to the flow chart of FIG. 2. Each object’s mesh data is reduced to one or more convex hulls. If an object is non-convex, it is broken down by the convex hull decomposition into convex parts. Each convex hull may have any appropriate geometrical shape. Each convex hull may be represented by a respective convex hull mesh.
[0074] In other embodiments, the objects are each decomposed into any appropriate convex geometrical components. The convex geometrical components may or may not meet the mathematical definition of convex hulls. The geometrical components may be used to calculate occlusion, reflection, or both occlusion and reflection.
[0075] Convex hull analysis of a mesh (for example, of a polygonal mesh representing a game object) may return hulls containing new vertices which are not part of the original mesh. Convex hull analysis of a mesh may return hulls whose vertices are a subset of the original mesh vertices, or a combination of new vertices and original mesh vertices.
[0076] The resulting convex hulls are used to calculate occlusion and reflection of audio from sound sources as described below with reference to the flow chart of FIG. 11. In other embodiments, the convex hulls may be used to calculate only occlusion (and not reflection) effects or only reflection (and not occlusion) effects.
[0077] In the present embodiment, oriented bounding boxes (OBBs) are obtained from the convex hulls and are used in the calculation of reflection. In other embodiments, reflection may be calculated from the convex hulls directly. It may be important for the reflection calculation that the shapes on which the reflection calculations are performed (for example, the oriented bounding boxes) are convex.
[0078] In some embodiments, further components (which may or may not be oriented bounding boxes) are used for calculating reflections. The further components may provide a representation of the objects that is simpler than the representation provided by the geometric components. The further components may each approximate a convex hull. The further components may comprise 3D geometrical shapes. The further components may be, for example, cuboids, spheres, or general hexahedrons. The further components may be used in calculating occlusion coefficients, reflection coefficients, or any other appropriate coefficients.
[0079] In the present embodiment, if an object is very thin (i.e. if one spatial dimension of the object is of negligible size) then the object (or a convex hull that is representative of at least part of the object) may be approximated by a plane instead of an oriented bounding box. Objects which have negligible size in more than one dimension may be omitted from calculation. For example, no oriented bounding box may be calculated for any object and/or convex hull that is negligible in size in more than one dimension. Other objects are each approximated by at least one oriented bounding box.
[0080] FIG. 2 is a flow chart illustrating in overview a process of convex hull decomposition of computer-generated objects. The processor 18 obtains a geometry 30 for a first object, a geometry 32 for a second object and geometries for a further plurality of objects (only one of the further plurality, the geometry 34 for the nth object, is shown in FIG. 2). In the present embodiment, each geometry comprises a 3D polygonal mesh.
[0081] A convex hull decomposition process with respect to the first object geometry 30 is described below with reference to stages 36 to 44 of FIG. 2. A similar process is also performed on each of the remaining object geometries, including the second object geometry 32 and nth object geometry 34.
[0082] At stage 36, the processor 18 performs a convex hull decomposition of the first object geometry 30, which comprises analysing the first object geometry 30 and representing the first object as a collection of convex hulls 38. In the present embodiment, the processor 18 performs the convex hull decomposition using an HACD (Hierarchical Approximate Convex Decomposition) method, for example a method as described in Khaled Mamou and Faouzi Ghorbel, A simple and efficient approach for 3D mesh approximate convex decomposition, Proceedings of the International Conference on Image Processing, ICIP 2009, 7-10 Nov. 2009, which is incorporated by reference herein in its entirety. In other embodiments, the processor 18 may use any suitable convex hull decomposition method. In further embodiments, any suitable geometric components may be used to represent the first object, and any suitable method of obtaining the geometric components may be used.
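The hull-construction step at the heart of stage 36 can be sketched as follows. Note this non-limiting illustration computes a single convex hull in 2D, using Andrew's monotone chain algorithm, for brevity; the HACD method cited above first partitions a non-convex 3D mesh into parts and then computes the hull of each part with a 3D equivalent (e.g. quickhull):

```python
def convex_hull_2d(points):
    """Andrew's monotone chain convex hull (2D, for brevity).

    Returns the hull vertices in counter-clockwise order; interior
    points are discarded, which is the simplification that makes the
    hulls cheaper to use than the original mesh.
    """
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: ((a[0] - o[0]) * (b[1] - o[1])
                             - (a[1] - o[1]) * (b[0] - o[0]))
    lower, upper = [], []
    for p in pts:                         # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):               # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # endpoints shared; drop duplicates
```

For instance, the hull of a square's corners plus an interior point is just the four corners: the interior point contributes nothing to occlusion or reflection geometry.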
[0083] By decomposing the first object into a plurality of convex hulls, a representation of the first object is obtained that may in some circumstances be simpler than the original 3D polygonal mesh representation of first object geometry 30. In some cases, the representation of the first object that is obtained (in this case, the convex hull) may be described as a simplified representation.
[0084] In other circumstances, the convex hulls may be more complex than the mesh that they are approximating. However, in some such embodiments, fewer computational resources may be used than if the original mesh representation were used. For example, fewer computational resources may be used by approximating objects with simple shapes such as boxes.
[0085] The hull analysis may break down large meshes (which may often be non-convex) into smaller parts. Breaking down larger meshes into smaller parts may be useful for applying algorithms (for example, algorithms calculating reflection and/or occlusion) to only local sections of the map.
[0086] In the present embodiment, the coordinates of each object mesh are defined on a local set of axes which may be described as a frame of reference (FoR). The FoR often has its origin somewhere near the geometrical centre of the mesh. In some circumstances, the FoR may have its origin at a corner point.
[0087] In the present embodiment, the convex hull decomposition is performed on the object mesh data. Therefore, any convex hull derived from a game object is defined in the game object’s local FoR.
[0088] An advantage to running the hull analysis in the game object mesh’s FoR and not the real world FoR (i.e. the frame of reference of the overall scene) may be that any duplicated game objects still have the exact same mesh whether or not they are in a different real world position, rotation or scale. This means that the convex hull analysis may only need to be run once and the same output hull decomposition may be used by all of these duplicate game objects. This may result in significant performance gains during the analysis process since often game worlds are constructed using many duplicate objects, e.g. floor, wall and ceiling tiles.
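The reuse of one hull decomposition across duplicate objects relies on a per-instance local-to-world transform. As a non-limiting sketch (rotation about the vertical axis only, for brevity; a full implementation would use a 4x4 matrix or quaternion, and the function name is an illustrative assumption):

```python
import math

def local_to_world(vertex, position, yaw_rad, scale):
    """Transform a hull vertex from an object's local frame of
    reference (FoR) into the world FoR.

    One shared hull decomposition can thus serve every duplicate game
    object, each instance supplying its own position/rotation/scale.
    """
    x, y, z = (c * scale for c in vertex)         # apply uniform scale
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    xr, zr = c * x + s * z, -s * x + c * z        # rotate about the y axis
    return (xr + position[0], y + position[1], zr + position[2])
```

Each duplicate floor, wall, or ceiling tile therefore stores only its transform, not its own copy of the hull data.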
[0089] The convex hull decomposition method that is used may in some circumstances be similar to a convex hull decomposition technique that may be used for collision detection of objects. The method of convex hull decomposition in this embodiment may be less accurate than a method of convex hull decomposition that may be used in collision detection.
[0090] At stage 40, the decomposed hulls 38 resulting from the convex hull decomposition of stage 36 are displayed to a developer or designer via a user interface. In the present embodiment, a rendering of the convex hulls 38 is displayed on display screen 22. In other embodiments, any display method or device may be used.
[0091] In some circumstances, the convex hull decomposition of stage 36 may return a set of convex hulls 38 that the developer or designer considers to be incorrect. For example, the developer or designer may consider that the returned set of convex hulls 38 does not correctly represent the first object.
[0092] At stage 42, the developer or designer corrects any of the convex hulls that the developer or designer considers to be incorrect. In the present embodiment, the developer or designer may fix any errors through a graphical user interface by moving points on a displayed convex hull using an input device 24, for example a mouse or trackball. In other embodiments, any suitable method for editing the convex hulls may be used. In further embodiments, an automatic convex hull decomposition may be performed without any input from a user (for example, from a designer or developer) and stages 40 and 42 of the process of FIG. 2 may be omitted.
[0093] At stage 44, the convex hulls 38 (including any changes made to the convex hulls 38 by the designer or developer at stage 42) are stored in a file. In the present embodiment, the set of convex hulls is stored in memory 20. In other embodiments, the convex hulls may be stored in any appropriate memory. The convex hulls 38 are tagged to associate the convex hulls 38 with the original geometry (in this case, the first object geometry 30). In other embodiments, any method of associating the convex hulls with the object from which they were obtained may be used.
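The storage and tagging of stage 44 can be illustrated with a small sketch. The flat JSON file format, field names and file path below are assumptions for illustration only; the patent specifies only that the hulls are stored and tagged to associate them with the original geometry.

```python
import json
import os
import tempfile


def store_hulls(path, object_name, hulls):
    # Tag the hull set with the geometry it was obtained from (stage 44).
    record = {"source_object": object_name,  # association back to the object
              "hulls": hulls}                # each hull: a list of [x, y, z]
    with open(path, "w") as f:
        json.dump(record, f)


def load_hulls(path):
    with open(path) as f:
        return json.load(f)


path = os.path.join(tempfile.gettempdir(), "first_object.hulls.json")
store_hulls(path, "first_object",
            [[[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]])
record = load_hulls(path)
```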
[0094] The process of stages 36 to 44 is repeated for each of the objects in the scene to obtain one or more convex hulls for each of the objects. For example, the process of stages 36 to 44 is performed for the second object geometry 32 and for the nth object geometry 34. In other embodiments, only some objects in the scene may be decomposed into convex hulls and other objects may not be decomposed into convex hulls.
[0095] In the present embodiment, the process of FIG. 2 is performed offline (not in real time). In other embodiments, the process of FIG. 2 may be performed in real time.
[0096] Each object is represented by one or more convex hulls. In some circumstances, more complicated object geometries may be decomposed into a larger number of convex hulls than simpler object geometries. The number of convex hulls may be such that all important features of the original geometry are represented. In some circumstances, for example for embodiments in which the convex hulls are simpler than the original geometries, calculations performed using the convex hulls may require less computational resources than calculations performed using the original geometries.
[0097] Breaking down each object (in this case, originally represented by a mesh) into components, for example convex components, may allow reflection calculations to be performed as described below. Breaking down each object into smaller parts may allow calculations (for example, reflection calculations) to be performed on smaller regions than if the original object representation was used.
[0098] By allowing a designer or developer to adjust the convex hulls, in some cases a better representation of the object may be obtained than if the process were fully automatic.
[0099] The convex hulls obtained in FIG. 2 are used for calculation of occlusions as described below with reference to FIG. 11. However, for the calculation of reflections, it has been found that in some circumstances a further representation that is simpler than the convex hull representation may be used. (In embodiments in which the original objects are decomposed into convex geometric components that are not convex hulls, a further representation that is simpler than those convex geometric components may be used.)
[0100] In the present embodiment, the further representation of each object is a representation of the object as one or more oriented bounding boxes. In other embodiments, any further representation of the object as one or more further components may be used. The further components may be of any appropriate geometrical shape, for example cuboids, spheres or general hexahedrons.
[0101] FIG. 3 is a flow chart illustrating in overview a process of obtaining a respective oriented bounding box (OBB) for each of a plurality of stored convex hulls. Each oriented bounding box may be considered to approximate the convex hull to which it corresponds. In the present embodiment, oriented bounding boxes are obtained for all the convex hulls stored at the end of the process of FIG. 2, i.e. for all convex hulls obtained from all objects in the scene. In other embodiments, oriented bounding boxes may be obtained for a subset of the convex hulls that were stored at the end of the process of FIG. 2.
[0102] In alternative embodiments, oriented bounding boxes are obtained from convex hulls that have been obtained using a method other than the process of FIG. 2. In further embodiments, oriented bounding boxes may be obtained for each object using any suitable method, which may or may not comprise a convex hull decomposition. For example, the oriented bounding boxes may be obtained directly from the original representation of the objects, for example from a 3D polygonal mesh representation.
[0103] In the process of FIG. 3, the processor 18 obtains a first stored convex hull 50, a second stored convex hull 52 and a further plurality of stored convex hulls (only one of the further plurality, the nth stored convex hull 54, is shown in FIG. 3). The determining of an oriented bounding box for the first stored convex hull 50 is described below with reference to stages 56 to 70 of FIG. 3. A similar process to that of stages 56 to 70 is performed for each of the remaining convex hulls, including the second convex hull 52 and nth convex hull 54.
[0104] At stage 56, the processor 18 calculates and stores an oriented bounding box for the first convex hull 50. In the present embodiment, a 3D vernier callipers method is used to calculate the oriented bounding box. In other embodiments, any suitable method of determining the oriented bounding box may be used. The oriented bounding box for the first convex hull 50 may be the minimum-volume cuboid that contains the first convex hull 50.
[0105] Stage 56 comprises sub-stages 58 to 66. At sub-stage 58, the processor 18 reads the hull data for the first convex hull 50. The hull data for the first convex hull may be information about the first convex hull that has been stored in memory, for example information that has been stored in memory 20 at the end of the process of FIG. 2.
[0106] The 3D vernier callipers method may be an extension of a 2D vernier callipers method to work in 3D. In the present embodiment, the 3D vernier callipers method comprises sub-stages 60 to 64 of FIG. 3.
[0107] At sub-stage 60, the processor 18 projects the first convex hull onto a plane and performs a 2D vernier callipers method on the projection of the convex hull onto that plane. In the present embodiment, the first convex hull is projected onto an xy plane at sub-stage 60. In other embodiments, a different plane may be used.
[0108] A 2D vernier callipers method that is performed computationally may be considered analogous to the use of physical vernier callipers. In the present embodiment, the projection of the convex hull is positioned at a first angle relative to the x and y axes. A minimum-area rectangle containing the projection of the convex hull and having sides parallel to the x and y axes is determined. The sides of the rectangle that are parallel to the x axis just touch the highest and lowest points in y of the projection of the convex hull, as if a pair of opposed callipers were applied to the widest extent in y of the projection of the convex hull. The sides of the rectangle that are parallel to the y axis just touch the highest and lowest points in x of the projection of the convex hull, as if a pair of opposed callipers were applied to the widest extent in x of the projection of the convex hull.
[0109] The projection of the convex hull is then rotated so that it is positioned at a second angle relative to the x and y axes, and a further minimum-area rectangle is determined that has sides that are parallel to the x and y axes and touch the highest and lowest points in x and y of the rotated projection. The rotation of the convex hull and the determining of further minimum-area rectangles is repeated until minimum-area rectangles have been obtained for a full range of rotation of the projection of the convex hull. The processor 18 selects the angle for which the smallest minimum-area rectangle was obtained for the projection of the convex hull.
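The 2D sweep of paragraphs [0108] and [0109] can be sketched as follows. This is a hedged illustration rather than the claimed method: rotating the points by minus the angle and taking an axis-aligned bounding rectangle is equivalent to keeping the axes fixed and rotating the projection, and the uniform angular step is an assumption (the patent does not specify one).

```python
import math


def min_area_rect_angle(points_2d, step_deg=0.5):
    """Sweep candidate angles; return (angle, area) of the smallest
    axis-aligned rectangle over rotations of the projected hull."""
    best_angle, best_area = 0.0, float("inf")
    steps = int(90 / step_deg)  # a rectangle fit repeats every 90 degrees
    for i in range(steps):
        theta = math.radians(i * step_deg)
        c, s = math.cos(theta), math.sin(theta)
        # Rotate each point by -theta so the candidate rectangle is axis-aligned,
        # with sides just touching the extreme points in x and in y.
        xs = [x * c + y * s for x, y in points_2d]
        ys = [-x * s + y * c for x, y in points_2d]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_angle, best_area = theta, area
    return best_angle, best_area


# A unit square rotated by 45 degrees: the sweep should recover area ~1.0
# (the axis-aligned bounding box at 0 degrees would have area 2).
r = math.sqrt(2) / 2
square_45 = [(0, r), (r, 0), (0, -r), (-r, 0)]
angle, area = min_area_rect_angle(square_45)
```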
[0110] At sub-stage 61, the processor 18 positions the convex hull at the selected angle relative to x and y and projects the convex hull onto a plane that is perpendicular to the plane of sub-stage 60. In the present embodiment, the processor 18 projects the convex hull onto the xz plane.
[0111] At sub-stage 62, the processor 18 determines the smallest minimum-area rectangle for the projection of the convex hull in the xz plane, using the 2D vernier callipers method that is described above with reference to sub-stage 60. The processor 18 selects the angle in the xz plane for which the minimum-area rectangle is smallest.
[0112] At sub-stage 63, the processor 18 positions the convex hull at the selected angle in xz and projects the convex hull onto the remaining plane, in this case the yz plane.
[0113] At sub-stage 64, the processor 18 determines the smallest minimum-area rectangle for the projection of the convex hull in the yz plane, using the 2D vernier callipers method that is described above with reference to sub-stage 60. The processor 18 determines the angle in the yz plane for which the minimum-area rectangle is smallest.
[0114] At sub-stage 66, the processor 18 determines an oriented bounding box from the smallest minimum-area rectangles determined at sub-stages 60, 62 and 64. The oriented bounding box may be the smallest cuboid that contains the first convex hull 50. The processor 18 saves the oriented bounding box in memory, for example in memory 20.
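The three-plane procedure of sub-stages 60 to 66 can be sketched end to end: find the best rectangle angle in the xy projection, lock it in, repeat for xz and then yz, and read the oriented bounding box off the final axis-aligned extents. This is a self-contained illustration under assumptions (a uniform angle step, and a simple per-axis rotation convention); it approximates, rather than guarantees, the minimum-volume cuboid.

```python
import math


def best_angle(points_2d, step_deg=0.5):
    # 2D sweep: angle giving the smallest axis-aligned rectangle
    # over rotations of the projection (as in sub-stage 60).
    best, best_area = 0.0, float("inf")
    for i in range(int(90 / step_deg)):
        t = math.radians(i * step_deg)
        c, s = math.cos(t), math.sin(t)
        xs = [x * c + y * s for x, y in points_2d]
        ys = [-x * s + y * c for x, y in points_2d]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best, best_area = t, area
    return best


def rotate(points, axis, t):
    c, s = math.cos(t), math.sin(t)
    out = []
    for x, y, z in points:
        if axis == "z":    # re-orients the xy projection
            out.append((x * c + y * s, -x * s + y * c, z))
        elif axis == "y":  # re-orients the xz projection
            out.append((x * c + z * s, y, -x * s + z * c))
        else:              # axis == "x", re-orients the yz projection
            out.append((x, y * c + z * s, -y * s + z * c))
    return out


def obb_extents(points):
    # Sub-stages 60/61: sweep in xy, then position the hull at that angle.
    pts = rotate(points, "z", best_angle([(x, y) for x, y, z in points]))
    # Sub-stages 62/63: sweep in xz at the locked xy angle.
    pts = rotate(pts, "y", best_angle([(x, z) for x, y, z in pts]))
    # Sub-stage 64: final sweep in yz.
    pts = rotate(pts, "x", best_angle([(y, z) for x, y, z in pts]))
    # Sub-stage 66: axis-aligned extents in this frame define the box.
    xs, ys, zs = zip(*pts)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))


# A 2 x 1 x 1 box rotated 30 degrees about z: the sweep should recover
# extents close to (2, 1, 1).
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
box = [(x * c - y * s, x * s + y * c, z)
       for x in (0, 2) for y in (0, 1) for z in (0, 1)]
dx, dy, dz = obb_extents(box)
```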
[0115] At stage 68, the processor 18 registers a position, size and reference of the oriented bounding box with a grid. The size may comprise a length in each dimension. A position, size and reference of the convex hull from which the oriented bounding box was obtained is also registered with the grid.
[0116] The reference for a convex hull may comprise a unique ID representing that convex hull. The reference for an oriented bounding box may comprise a unique ID representing that oriented bounding box. The reference for a convex hull or oriented bounding box (or other geometric component or further component) may comprise a unique ID representing an object with which it is associated.
[0117] The reference to a convex hull or bounding box may comprise an actual memory address to the stored data for that convex hull or bounding box.
[0118] The grid may comprise a regular Cartesian grid of lines and/or of points. The grid may be associated with a coordinate system of the scene in which the object may be placed. The grid may be associated with a virtual world. Each position of an OBB or convex hull may be the position of the OBB or convex hull in a coordinate system of the virtual world.
[0119] The positions, sizes and references of the oriented bounding box and convex hull are provided to a grid based search system 70. The grid based search system 70 is configured to be used to find objects, convex hulls, and/or oriented bounding boxes based on their positions on the grid. An example of the use of the grid based search system 70 is described below with reference to FIG. 11.
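The registration and lookup of paragraphs [0115] to [0119] can be sketched as a uniform spatial hash: each bounding box is registered, by reference, under every grid cell its extent overlaps, and a query returns the references stored in the cell containing a position. The class name, cell size, and query shape below are assumptions for illustration.

```python
import math


class GridSearch:
    def __init__(self, cell_size=4.0):
        self.cell_size = cell_size
        self.cells = {}  # (i, j, k) -> set of references (unique IDs)

    def _cell(self, x, y, z):
        cs = self.cell_size
        return (math.floor(x / cs), math.floor(y / cs), math.floor(z / cs))

    def register(self, ref, position, size):
        # Register the reference under every cell the box extent overlaps.
        lo = self._cell(*[p - s / 2 for p, s in zip(position, size)])
        hi = self._cell(*[p + s / 2 for p, s in zip(position, size)])
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    self.cells.setdefault((i, j, k), set()).add(ref)

    def query(self, position):
        # Return the references registered in the cell containing `position`.
        return self.cells.get(self._cell(*position), set())


grid = GridSearch()
grid.register("obb:wall_01", position=(10.0, 2.0, 0.0), size=(8.0, 4.0, 1.0))
grid.register("obb:floor_01", position=(0.0, -1.0, 0.0), size=(4.0, 0.5, 4.0))
near_wall = grid.query((9.0, 2.0, 0.0))
```

Registering under every overlapped cell (rather than only the centre cell) lets a single cell lookup find large boxes whose centres lie far from the query position.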
[0120] The process of FIG. 3 is repeated to obtain an oriented bounding box for each of the plurality of convex hulls. For example, the process of FIG. 3 is used to obtain an oriented bounding box for second convex hull 52 and for nth convex hull 54. The processor 18 registers the positions, sizes and references of each of the oriented bounding boxes and convex hulls to the grid, and provides the positions, sizes and references to the grid based search system 70.
[0121] In the present embodiment, the oriented bounding boxes are calculated offline after convex hull decomposition and user corrections as described above with reference to FIG. 2. In other embodiments, oriented bounding boxes may be calculated in real time. In some embodiments, the process of FIG. 3 may be performed in real time as an initialisation process on start-up of an application, for example on start-up of a computer game. The convex hulls may have previously been obtained in an offline process, for example using the process of FIG. 2. Oriented bounding boxes and hulls may be registered to the grid.
[0122] In some embodiments, the processes of FIGS. 2 and 3 (obtaining convex hulls and obtaining oriented bounding boxes) are performed together by the same computing apparatus.
[0123] Oriented bounding boxes may be calculated in real time if needed for more dynamic geometry. In general, a plurality of objects is added to the grid on start-up. However, the grid may also deal with objects that are created and destroyed during run-time as well as objects moving in space during run-time. In the case of dynamic object creation, any hull analysis and bounding box calculations may be calculated during run time or retrieved from file.
……
……
……