

Patent: Visually tracked spatial audio


Publication Number: 20220191638

Publication Date: 2022-06-16

Applicant: Nvidia

Abstract

Apparatuses, systems, and techniques to determine head poses of users and provide audio for the users. In at least one embodiment, a head pose is determined based, at least in part, on camera frame information, and an audio signal is generated, based at least in part, on the determined head pose.

Claims

  1. A processor, comprising: one or more circuits to: determine a head pose based, at least in part, on camera frame information; and generate an audio signal based, at least in part, on the determined head pose.

  2. The processor of claim 1, wherein the one or more circuits are to determine a translational position of the head pose in relation to a display screen based, at least in part, on the camera frame information, and generate the audio signal based, at least in part, on the translational position.

  3. The processor of claim 1, wherein the one or more circuits are to determine a distance of the head pose in relation to a display screen based, at least in part, on the camera frame information, and generate the audio signal based, at least in part, on the distance.

  4. The processor of claim 1, wherein the camera frame information is from a single camera.

  5. The processor of claim 1, wherein the camera frame information is first camera frame information, and the one or more circuits are to determine a head pose change based at least in part on a second camera frame information, and generate the audio signal based, at least in part, on a magnification of the determined head pose change.

  6. The processor of claim 1, wherein the audio signal is a synthesized binaural audio signal.

  7. The processor of claim 1, wherein the one or more circuits are to determine the head pose based, at least in part, on a model that accounts for six degrees-of-freedom of head movement.

  8. The processor of claim 1, wherein the camera frame information is from a single camera and the one or more circuits are to: determine a three-dimensional translational position of the head pose in relation to a display screen based, at least in part, on the camera frame information; and generate binaural audio signals based, at least in part, on the determined three-dimensional translational position.

  9. A system, comprising: one or more processors to determine a head pose based, at least in part, on camera frame information and generate an audio signal based, at least in part, on the determined head pose; and one or more memories to store the camera frame information for processing by the one or more processors.

  10. The system of claim 9, wherein the camera frame information is from a single camera and the audio signal is a synthesized binaural audio signal.

  11. The system of claim 9, wherein the audio signal is a binaural audio signal that localizes an audio source in a three-dimensional game environment.

  12. The system of claim 9, wherein the one or more processors are to determine the head pose based, at least in part, on a three-dimensional translational position of a head of a user in relation to a display screen.

  13. The system of claim 9, wherein the one or more processors are to determine the head pose based, at least in part, on a three-dimensional translational position of a head of a user in relation to a display screen and a three-dimensional rotational orientation of the head of the user in relation to the display screen.

  14. The system of claim 9, wherein the camera frame information is from a single camera and the determined head pose includes a distance of a user’s head from a display screen.

  15. The system of claim 9, further including a display screen and a camera, wherein the camera generates the camera frame information and the one or more processors determine the head pose in relation to the display screen.

  16. The system of claim 9, further comprising a camera to generate the camera frame information based, at least in part, on capturing an image of a head of a user.

  17. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: determine a head pose based, at least in part, on camera frame information; and generate an audio signal based, at least in part, on the determined head pose.

  18. The machine-readable medium of claim 17, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to determine the head pose based, at least in part, on a three-dimensional translational position of a head of a user in relation to a display screen.

  19. The machine-readable medium of claim 17, wherein the camera frame information is from a single camera positioned to image a head of a user.

  20. The machine-readable medium of claim 17, wherein the head pose accounts for six degrees-of-freedom of head movement.

  21. The machine-readable medium of claim 17, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to determine a relationship of the head pose to an audio source in a three-dimensional virtual space, and generate the audio signal based, at least in part, on the relationship of the head pose to the audio source.

  22. The machine-readable medium of claim 17, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to adjust a volume level based, at least in part, on the head pose.

  23. The machine-readable medium of claim 17, wherein the camera frame information is from a single camera and the set of instructions, which if performed by the one or more processors, further cause the one or more processors to determine the head pose based, at least in part, on estimating two-dimensional facial landmarks from the camera frame information using a parallel processor.

  24. The machine-readable medium of claim 17, wherein the head pose is with respect to a frame in a virtual three-dimensional world aligned with a desktop display.

  25. A method, comprising: determining a head pose based, at least in part, on camera frame information; and generating an audio signal based, at least in part, on the determined head pose.

  26. The method of claim 25, wherein the camera frame information is from a single camera.

  27. The method of claim 25, wherein determining the head pose includes determining a three-dimensional translational position of a user’s head in relation to a display screen.

  28. The method of claim 25, wherein determining the head pose includes determining a three-dimensional translational position and a three-dimensional rotational orientation of a user’s head in relation to a display screen.

  29. The method of claim 25, wherein the determined head pose includes a distance of a user’s head from a display screen.

  30. The method of claim 25, wherein generating the audio signal is based, at least in part, on the determined head pose and an initial audio signal.

  31. The method of claim 25, further comprising receiving a digital signal that includes multichannel audio information at an input port of a display, wherein the camera frame information is generated by a camera integrated in the display, and generating the audio signal is based, at least in part, on the multichannel audio information.

  32. The method of claim 25, wherein generating the audio signal includes modifying audio associated with a user of a video conferencing application.

  33. A display, comprising: a display screen; a camera; and a processing system to determine a head pose based, at least in part, on camera frame information from the camera and generate an audio signal based, at least in part, on the determined head pose.

  34. The display of claim 33, wherein the processing system includes a processor to determine the head pose based, at least in part, on inferencing with a neural network.

  35. The display of claim 33, wherein the processing system is to determine the head pose based, at least in part, on a three-dimensional translational position of a head of a user in relation to the display screen.

  36. The display of claim 33, wherein the processing system includes a first processor to determine the head pose based, at least in part, on inferencing with a neural network, and a second processor to generate the audio signal based, at least in part, on the determined head pose and a multichannel audio signal.

  37. The display of claim 33, further comprising an input port to receive a digital signal that includes multichannel audio information, wherein the processing system is to generate the audio signal based, at least in part, on the determined head pose and the multichannel audio information.

  38. The display of claim 33, wherein the processing system is to determine the head pose based, at least in part, on a three-dimensional translational position of a head of a user in relation to the display screen and a three-dimensional rotational orientation of a head of the user in relation to the display screen.

  39. The display of claim 33, further comprising: an input port to receive a digital signal that includes multichannel audio information, wherein the processing system is to generate the audio signal based, at least in part, on the multichannel audio information; and an output port to provide head pose information based, at least in part, on the determined head pose.

  40. The display of claim 33, further comprising a stereo headphone output port, wherein the generated audio signal is output via the stereo headphone output port.

Description

CLAIM OF PRIORITY

[0001] This application claims the benefit of U.S. Provisional Application No. 63/126,418 titled “VISUALLY TRACKED SPACIAL AUDIO,” filed Dec. 16, 2020, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

[0002] At least one embodiment pertains to processing resources used to determine a head pose of a user and provide audio based on the head pose of the user. For example, at least one embodiment pertains to processors or computing systems used to determine a head pose of a user and provide audio based on the head pose of the user according to various novel techniques described herein.

BACKGROUND

[0003] Determining a head pose of a user and providing audio based on the head pose of the user can use significant memory, time, or computing resources. Amounts of memory, time, or computing resources used to determine a head pose of a user and provide audio based on the head pose of the user can be improved.

BRIEF DESCRIPTION OF DRAWINGS

[0004] FIG. 1 is a block diagram illustrating a system for visually tracked spatial audio, according to at least one embodiment;

[0005] FIG. 2 is a block diagram illustrating an environment of a system that determines a head pose and generates audio, according to at least one embodiment;

[0006] FIG. 3 is a diagram illustrating a display that includes an integrated head pose tracker, according to at least one embodiment;

[0007] FIG. 4 is a diagram illustrating a system with an audio engine that generates audio based, at least in part, on a determined head pose, according to at least one embodiment;

[0008] FIG. 5 is a diagram illustrating a system with a driver that generates audio based, at least in part, on a determined head pose, according to at least one embodiment;

[0009] FIG. 6 illustrates a flowchart of a technique of determining a head pose, and generating audio, according to at least one embodiment;

[0010] FIG. 7 illustrates a flowchart of a technique of transforming a head pose, according to at least one embodiment;

[0011] FIG. 8 illustrates a flowchart of a technique of rendering audio, according to at least one embodiment;

[0012] FIG. 9A illustrates inference and/or training logic, according to at least one embodiment;

[0013] FIG. 9B illustrates inference and/or training logic, according to at least one embodiment;

[0014] FIG. 10 illustrates training and deployment of a neural network, according to at least one embodiment;

[0015] FIG. 11 illustrates an example data center system, according to at least one embodiment;

[0016] FIG. 12A illustrates an example of an autonomous vehicle, according to at least one embodiment;

[0017] FIG. 12B illustrates an example of camera locations and fields of view for the autonomous vehicle of FIG. 12A, according to at least one embodiment;

[0018] FIG. 12C is a block diagram illustrating an example system architecture for the autonomous vehicle of FIG. 12A, according to at least one embodiment;

[0019] FIG. 12D is a diagram illustrating a system for communication between cloud-based server(s) and the autonomous vehicle of FIG. 12A, according to at least one embodiment;

[0020] FIG. 13 is a block diagram illustrating a computer system, according to at least one embodiment;

[0021] FIG. 14 is a block diagram illustrating a computer system, according to at least one embodiment;

[0022] FIG. 15 illustrates a computer system, according to at least one embodiment;

[0023] FIG. 16 illustrates a computer system, according to at least one embodiment;

[0024] FIG. 17A illustrates a computer system, according to at least one embodiment;

[0025] FIG. 17B illustrates a computer system, according to at least one embodiment;

[0026] FIG. 17C illustrates a computer system, according to at least one embodiment;

[0027] FIG. 17D illustrates a computer system, according to at least one embodiment;

[0028] FIGS. 17E and 17F illustrate a shared programming model, according to at least one embodiment;

[0029] FIG. 18 illustrates exemplary integrated circuits and associated graphics processors, according to at least one embodiment;

[0030] FIGS. 19A-19B illustrate exemplary integrated circuits and associated graphics processors, according to at least one embodiment;

[0031] FIGS. 20A-20B illustrate additional exemplary graphics processor logic according to at least one embodiment;

[0032] FIG. 21 illustrates a computer system, according to at least one embodiment;

[0033] FIG. 22A illustrates a parallel processor, according to at least one embodiment;

[0034] FIG. 22B illustrates a partition unit, according to at least one embodiment;

[0035] FIG. 22C illustrates a processing cluster, according to at least one embodiment;

[0036] FIG. 22D illustrates a graphics multiprocessor, according to at least one embodiment;

[0037] FIG. 23 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment;

[0038] FIG. 24 illustrates a graphics processor, according to at least one embodiment;

[0039] FIG. 25 is a block diagram illustrating a processor micro-architecture for a processor, according to at least one embodiment;

[0040] FIG. 26 illustrates a deep learning application processor, according to at least one embodiment;

[0041] FIG. 27 is a block diagram illustrating an example neuromorphic processor, according to at least one embodiment;

[0042] FIG. 28 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0043] FIG. 29 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0044] FIG. 30 illustrates at least portions of a graphics processor, according to one or more embodiments;

[0045] FIG. 31 is a block diagram of a graphics processing engine of a graphics processor in accordance with at least one embodiment;

[0046] FIG. 32 is a block diagram of at least portions of a graphics processor core, according to at least one embodiment;

[0047] FIGS. 33A-33B illustrate thread execution logic including an array of processing elements of a graphics processor core according to at least one embodiment;

[0048] FIG. 34 illustrates a parallel processing unit (“PPU”), according to at least one embodiment;

[0049] FIG. 35 illustrates a general processing cluster (“GPC”), according to at least one embodiment;

[0050] FIG. 36 illustrates a memory partition unit of a parallel processing unit (“PPU”), according to at least one embodiment;

[0051] FIG. 37 illustrates a streaming multi-processor, according to at least one embodiment;

[0052] FIG. 38 is an example data flow diagram for an advanced computing pipeline, in accordance with at least one embodiment;

[0053] FIG. 39 is a system diagram for an example system for training, adapting, instantiating and deploying machine learning models in an advanced computing pipeline, in accordance with at least one embodiment;

[0054] FIG. 40 includes an example illustration of an advanced computing pipeline 3910A for processing imaging data, in accordance with at least one embodiment;

[0055] FIG. 41A includes an example data flow diagram of a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment;

[0056] FIG. 41B includes an example data flow diagram of a virtual instrument supporting a CT scanner, in accordance with at least one embodiment;

[0057] FIG. 42A illustrates a data flow diagram for a process to train a machine learning model, in accordance with at least one embodiment; and

[0058] FIG. 42B is an example illustration of a client-server architecture to enhance annotation tools with pre-trained annotation models, in accordance with at least one embodiment.

DETAILED DESCRIPTION

[0059] FIG. 1 is a block diagram illustrating a system 100 for visually tracked spatial audio, according to at least one embodiment. In at least one embodiment, system 100 for visually tracked spatial audio includes a 3D head pose determination 104 that obtains and processes input frames 102, and a 3D audio modulation 106 that, based on outputs from 3D head pose determination 104, generates output audio 110 from input audio 108.

[0060] In at least one embodiment, 3D head pose determination 104 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, determine a 3D head pose from images of a user. In at least one embodiment, instructions are a set of instructions stored on a machine-readable medium which, if performed by one or more processors, cause the one or more processors to determine a head pose based, at least in part, on camera frame information. In at least one embodiment, a 3D head pose refers to a position and orientation of a head (e.g., a human head) in an environment, and may be expressed as a rotation and/or position with respect to one or more axes (e.g., x axis, y axis, and/or z axis) of an environment, or other metrics. In at least one embodiment, 3D head pose determination 104 is a software program executing on computer hardware, an application executing on computer hardware, or any other suitable combination of software and hardware. In at least one embodiment, 3D head pose determination 104 executes on one or more computing devices that include one or more processing devices such as graphics processing units (GPUs), central processing units (CPUs), parallel processing units (PPUs), and/or variations thereof. In at least one embodiment, 3D head pose determination 104 determines a head pose in relation to a display screen. In at least one embodiment, 3D head pose determination 104 determines head pose in relation to a frame of reference in a 3D virtual environment (e.g., by mapping head pose in relation to a display screen to a position in a 3D virtual environment). In at least one embodiment, determining a head pose is referred to as estimating a head pose.

[0061] In at least one embodiment, input frames 102 are frames (e.g., images) captured from one or more image and/or video capturing devices. In at least one embodiment, input frames 102 are image files of a bitmap image file format. In at least one embodiment, input frames 102 are compressed and/or uncompressed computer files. In at least one embodiment, input frames 102 are image files of any suitable file format, such as JPEG, TIFF, PNG, GIF, and/or variations thereof. In at least one embodiment, input frames 102 are frames captured from a web-camera device (e.g., of a user interacting with one or more computing devices). In at least one embodiment, input frames 102 are frames of a video, or images, captured from a camera. In at least one embodiment, input frames 102 are captured from a single monocular RGB camera. In at least one embodiment, input frames 102 are frames of a video of a user. In at least one embodiment, input frames 102 depict a user or user body part (e.g., user’s head). In at least one embodiment, input frames 102 are captured continuously by a camera and provided continuously (e.g., to 3D head pose determination 104) as part of a video stream from camera. In at least one embodiment, input frames 102 depict a face of a user. In at least one embodiment, input frames 102 are referred to as camera frame information.

[0062] In at least one embodiment, input frames 102 are captured from a calibrated camera with known camera intrinsics and extrinsics with respect to a desktop display utilized by a user. In at least one embodiment, a camera (e.g., a camera that captures input frames 102) is external and attached to a user’s display. In at least one embodiment, a camera (e.g., a camera that captures input frames 102) is integrated into a display forming a single hardware unit, and/or variations thereof. In at least one embodiment, camera intrinsics are estimated and utilized to remove potential existing lens distortion.

[0063] In at least one embodiment, camera intrinsics, also referred to as intrinsic parameters, refer to characteristics and/or parameters of a camera that are internal and fixed to a camera/setup of camera. In at least one embodiment, camera intrinsics are utilized to map between camera coordinates and pixel coordinates in an image frame. In at least one embodiment, camera intrinsics are estimated using at least one model that accounts for effects of camera distortion, such as radial, decentering, and thin prism distortions. In at least one embodiment, camera intrinsics include parameters such as focal length, lens distortion, and/or an optical center of camera. In at least one embodiment, a model for camera intrinsics estimation includes two steps, in which at a first step, calibration parameters are estimated using a closed-form solution based on a distortion-free camera model, and in which at a second step, estimated parameters are improved iteratively through a non-linear optimization based on camera distortions. In at least one embodiment, an objective function to be minimized is a mean-square discrepancy between observed image points and their inferred image projections computed with estimated calibration parameters. In at least one embodiment, non-distortion and distortion parameters are decoupled to suppress interactions.
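
A minimal sketch of such a two-step intrinsic estimation, assuming OpenCV's standard checkerboard workflow (the board size and file paths below are hypothetical; calibrateCamera internally performs a closed-form initialization followed by non-linear refinement of the reprojection discrepancy), might look like the following:

```python
# Minimal sketch: estimate camera intrinsics and distortion from checkerboard images.
# Board size (9x6 inner corners) and file paths are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration_frames/*.png"):  # hypothetical path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Closed-form initialization and iterative non-linear refinement happen inside calibrateCamera;
# the returned RMS value is the mean reprojection discrepancy being minimized.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort a frame so later landmark and pose estimation can assume a distortion-free model.
frame = cv2.imread("input_frame.png")  # hypothetical path
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
```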

[0064] In at least one embodiment, camera extrinsics, also referred to as extrinsic parameters, refer to characteristics and/or parameters of a camera that are external to camera and change with respect to a frame of a world environment. In at least one embodiment, camera extrinsics are utilized to define a location and orientation of a camera with respect to a world frame. In at least one embodiment, camera extrinsics include information that represents a location and/or view of camera in a three dimensional environment and/or a transformation of view of camera to another reference frame (e.g., in relation to a reference frame that includes a plane of a display screen, and that is centered at a center of display screen). In at least one embodiment, camera extrinsics are estimated using at least one model that utilizes physical hardware calibration. In at least one embodiment, a model for camera extrinsics estimation utilizes a single reflection in a spherical mirror of a known radius. In at least one embodiment, a spherical mirror reflection results in a non-perspective axial camera. In at least one embodiment, an axial nature of rays is utilized to compute an axis (e.g., direction of sphere center) and pose parameters. In at least one embodiment, a distance to a sphere center and pose parameters are obtained and correspond to one or more polynomial equations.

[0065] In at least one embodiment, input frames 102 are received or otherwise obtained by 3D head pose determination 104. In at least one embodiment, input frames 102 are provided to 3D head pose determination 104 through one or more communication networks, such as Internet. In at least one embodiment, 3D head pose determination 104 is a software program executing on a computing device, in which input frames 102 are provided to 3D head pose determination 104 via a computing device through one or more data transfer operations. In at least one embodiment, 3D head pose determination 104 executes on one or more computing devices comprising one or more image and/or video capturing devices that capture input frames 102, in which input frames 102 are provided locally (e.g., through one or more local data transfer operations) to 3D head pose determination 104.

[0066] In at least one embodiment, 3D head pose determination 104 estimates 2D facial landmarks in a camera image space based at least in part on input frames 102. In at least one embodiment, 3D head pose determination 104 estimates and/or detects 2D facial landmarks by analyzing frames of input frames 102. In at least one embodiment, given a facial image denoted by I (e.g., a frame of input frames 102), a facial landmark estimation/detection algorithm determines locations of a number, denoted by D, of landmarks, denoted by x = {x_1, y_1, x_2, y_2, …, x_D, y_D}, in which x and y correspond to image coordinates of facial landmarks.
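
As a small illustration of this layout, given D detected (x, y) landmark points (the detector itself is not shown here), the vector x can be packed as follows:

```python
# Illustrative only: pack D detected 2D facial landmarks into the vector
# x = {x_1, y_1, ..., x_D, y_D} described above; the detector is assumed to exist elsewhere.
import numpy as np

def landmarks_to_vector(points) -> np.ndarray:
    """points: array-like of shape (D, 2) holding (x, y) image coordinates."""
    return np.asarray(points, dtype=np.float32).reshape(-1)  # [x_1, y_1, ..., x_D, y_D]
```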

[0067] In at least one embodiment, 3D head pose determination 104 estimates 2D facial landmarks using at least one algorithm such as generative and/or discriminative algorithms. In at least one embodiment, generative algorithms build an object representation, and then search for a region most similar to object without taking background information into account. In at least one embodiment, discriminative methods train a binary classifier to adaptively separate object from background. In at least one embodiment, generative models include part-based generative models, such as Active Shape Models (ASM), and holistic generative models, such as Active Appearance Models (AAM) that model a shape of face and its appearance as probabilistic distributions. In at least one embodiment, discriminative models include cascaded regression models (CRMs).

[0068] In at least one embodiment, 3D head pose determination 104 prepares corresponding 2D landmark points (e.g., in a 2D camera image space) and 3D landmark points (e.g., in a 3D camera view space) from a template-based 3D face model with known dimensions. In at least one embodiment, 3D head pose determination 104 adjusts a model’s properties to a user’s real dimensions (e.g., a user’s interpupillary distance (IPD)). In at least one embodiment, 3D head pose determination 104 determines 2D facial landmarks of a user from input frames 102 to model a face and/or head of user, and adjusts model to correspond to user’s dimensions, which may be obtained from user. In at least one embodiment, a user provides dimensions of user (e.g., IPD, facial measurements) to 3D head pose determination 104 through one or more computing devices (e.g., through user input hardware such as a keyboard). In at least one embodiment, 3D head pose determination 104 obtains dimensions of a user from one or more databases or systems.

[0069] In at least one embodiment, 3D head pose determination 104 finds a transformation matrix from corresponding 3D landmark points to projected 2D landmark image points using camera intrinsics. In at least one embodiment, a transformation matrix describes a head pose of a user in a 3D camera view space. In at least one embodiment, 3D head pose determination 104 finds a transformation matrix through one or more perspective-n-point (PnP) pose estimation methods. In at least one embodiment, 3D head pose determination 104 utilizes a method that is completed in O(n) time. In at least one embodiment, 3D head pose determination 104 utilizes a method that expresses n 3D points as a weighted sum of four virtual control points. In at least one embodiment, 3D head pose determination 104 estimates coordinates of control points in a camera referential by expressing coordinates as a weighted sum of eigenvectors of a matrix (e.g., a 12×12 matrix), and solving a constant number of quadratic equations to determine right weights, in which output is utilized to initialize a Gauss-Newton scheme to increase accuracy.
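
A sketch of such a PnP step, using OpenCV's EPnP solver with Levenberg-Marquardt refinement standing in for the Gauss-Newton scheme described above (the landmark arrays and calibration data are assumed inputs), follows:

```python
# Minimal sketch: recover the head pose (rotation and translation) in camera view space
# from corresponding 3D template landmarks and detected 2D landmarks via PnP.
import cv2
import numpy as np

def estimate_head_pose(model_points_3d, image_points_2d, camera_matrix, dist_coeffs):
    """model_points_3d: (N, 3) template face landmarks scaled to the user (e.g., by IPD);
    image_points_2d: (N, 2) detected 2D landmarks in the undistorted frame."""
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d, image_points_2d, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP)         # O(n) control-point formulation
    if not ok:
        raise RuntimeError("PnP failed")
    # Iterative refinement (Levenberg-Marquardt here, standing in for Gauss-Newton).
    rvec, tvec = cv2.solvePnPRefineLM(
        model_points_3d, image_points_2d, camera_matrix, dist_coeffs, rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)           # 3x3 rotation
    T_camera_head = np.eye(4)
    T_camera_head[:3, :3] = R
    T_camera_head[:3, 3] = tvec.ravel()  # 4x4 head pose in camera view space
    return T_camera_head
```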

[0070] In at least one embodiment, 3D head pose determination 104 transforms a head pose of a user from a 3D camera view space to a display space (e.g., a space of a display utilized by user) centered at a display origin using extrinsics of a camera (e.g., camera that captured input frames 102) with respect to display. In at least one embodiment, 3D head pose determination 104 generates a transformation matrix that indicates a head aligned frame in a virtual 3D space that is aligned with a display (e.g., a desktop display utilized by a user). In at least one embodiment, 3D head pose determination 104 generates one or more data objects that indicate a 3D head pose of a user from images of user’s head (e.g., input frames 102). In at least one embodiment, input frames 102 are continuously generated, in which a 3D head pose determination 104 continuously determines and outputs a 3D head pose (e.g., through one or more transformation matrices or other data) of a user from input frames 102. In at least one embodiment, a 3D head pose in a virtual world space is utilized for spatialized audio generated from any suitable audio source in a 3D audio engine with a default or measured head-related transfer function (HRTF). In at least one embodiment, a head pose is utilized by a system for visually tracked spatial audio with one or more HRTFs to simulate acoustic interactions of sound in a 3D environment (e.g., natural sound wave propagation, attenuation, and/or interactions) for a user of 3D environment, in which user may receive or be provided with sound through one or more audio devices (e.g., headphones) associated with one or more computing devices that generate 3D environment.
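
Assuming the camera-to-display extrinsics are available as a 4x4 homogeneous transform, re-expressing the head pose in a display-centered frame reduces to a single matrix product, as in this sketch:

```python
# Minimal sketch: re-express the head pose in a display-centered frame using camera extrinsics.
# T_display_camera (the camera pose in the display frame) is assumed to come from calibration.
import numpy as np

def head_pose_in_display_frame(T_camera_head: np.ndarray,
                               T_display_camera: np.ndarray) -> np.ndarray:
    """Both inputs are 4x4 homogeneous transforms; the result maps head-frame points
    into a frame centered at the display origin."""
    return T_display_camera @ T_camera_head

# The translation column of the result gives the 3D translational position of the head
# relative to the display, and the rotation block gives its 3D orientation (together, 6 DOF).
```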

[0071] In at least one embodiment, one or more processes of 3D head pose determination 104 are performed by one or more GPUs and/or CPUs. In at least one embodiment, 3D head pose determination 104 estimates 2D facial landmarks using one or more GPUs, and performs other processes on one or more CPUs. In at least one embodiment, a single unit computing device comprises a display, camera, and processor, in which processes of 3D head pose determination 104 are performed on single unit computing device.

[0072] In at least one embodiment, 3D head pose determination 104 performs at least one or more of following operations: (1) calibration of a camera, which, in some examples, may be provided or assumed; (2) face tracking to produce 2D landmarks; (3) utilizing a face model to generate 3D landmarks; (4) utilizing IPD information to estimate a scale of a face model, or utilizing other data indicating a scale of a user’s head; (5) mapping from a camera space into a virtual 3D world space; (6) determining transformation between a display, camera, and calibrated camera, which, in some examples, may be provided or assumed; as well as various other operations. In at least one embodiment, system 100 for visually tracked spatial audio includes at least one neural network model such as a perceptron model, a radial basis network (RBN), an auto encoder (AE), Boltzmann Machine (BM), Restricted Boltzmann Machine (RBM), deep belief network (DBN), deep convolutional network (DCN), extreme learning machine (ELM), deep residual network (DRN), support vector machines (SVM), and/or variations thereof, that perform various processes (e.g., processes of 3D head pose determination 104 and/or 3D audio modulation 106).

[0073] In at least one embodiment, 3D audio modulation 106 is a collection of one or more hardware and/or software computing resources with instructions that, when executed, modulate audio in a 3D space for a user based on a head pose of the user. In at least one embodiment, 3D audio modulation 106 is a software program executing on computer hardware, an application executing on computer hardware, or any other suitable combination of software and/or hardware. In at least one embodiment, instructions are a set of instructions stored on a machine-readable medium which, if performed by one or more processors, cause the one or more processors to generate and/or modulate audio signals based, at least in part, on a determined head pose. In at least one embodiment, 3D audio modulation 106 is a software program executing on a computing device. In at least one embodiment, 3D audio modulation 106 executes on one or more computing devices that include one or more processing devices such as GPUs, CPUs, PPUs, and/or variations thereof. In at least one embodiment, 3D audio modulation 106 modulates audio with respect to a 3D location and orientation of a user in a virtual world.

[0074] In at least one embodiment, 3D audio modulation 106, based on a head pose determined by 3D head pose determination 104, generates output audio 110 from input audio 108. In at least one embodiment, input audio 108 is one or more computer files or data objects that encode audio. In at least one embodiment, input audio 108 is a digitized audio signal. In at least one embodiment, input audio 108 is an audio stream. In at least one embodiment, input audio 108 is referred to as an initial audio signal. In at least one embodiment, input audio 108 is a suitable computer file storing audio, such as WAV, M4A, FLAC, MP3, AAC, and/or variations thereof. In at least one embodiment, input audio 108 is audio that is to be output through an audio source in a 3D environment. In at least one embodiment, input audio 108 is mono-track audio captured from an audio capturing device, such as a microphone. In at least one embodiment, input audio 108 is audio output from one or more computer programs, such as a video game, video conferencing application, and/or variations thereof. In at least one embodiment, input audio 108 is not associated with any particular direction. In at least one embodiment, input audio 108 is obtained by 3D head pose determination 104 associated with or separate from input frames 102. In at least one embodiment, input audio 108 is obtained by 3D audio modulation 106.

[0075] In at least one embodiment, output audio 110 is audio output from one or more audio devices and simulates aspects of realistic or natural sound. In at least one embodiment, realistic or natural sound refers to sound output from one or more audio devices that simulates various physical properties of sound in real-world, such as sound propagation, attenuation, interactions, and/or variations thereof. In at least one embodiment, output audio 110 is one or more computer files or data objects that encode audio. In at least one embodiment, output audio 110 is a digitized audio signal (e.g., digital stereo audio signal, digital binaural audio signal). In at least one embodiment, output audio 110 is an audio stream. In at least one embodiment, output audio 110 is any suitable computer file storing audio, such as WAV, M4A, FLAC, MP3, AAC, and/or variations thereof. In at least one embodiment, output audio 110 is a stereo audio signal. In at least one embodiment, output audio 110 is a binaural audio signal. In at least one embodiment, output audio 110 is an analog audio signal (e.g., analog stereo audio signal, analog binaural audio signal).

[0076] In at least one embodiment, a system for visually tracked spatial audio (e.g., via 3D head pose determination 104 and/or 3D audio modulation 106) obtains input audio 108 that encodes audio to be output through an audio source in a 3D environment, in which 3D audio modulation 106 utilizes a head pose determined for a user to generate output audio 110, which is input audio 108 with effects that simulate effects of user’s head pose (e.g., user’s head position/orientation relative to audio source in 3D environment) and other environmental effects (e.g., objects and/or conditions of 3D environment) on input audio 108.
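
One way to sketch such head-pose-dependent spatialization, assuming a hypothetical HRIR lookup (e.g., drawn from a default or measured HRTF dataset) and the head-frame axis conventions noted in the comments, is direct convolution of the source with a left/right impulse-response pair:

```python
# Illustrative sketch: spatialize a mono source for a listener given the head pose, by
# convolving with a head-related impulse response (HRIR) pair selected from the source
# direction in the head frame. hrir_lookup is a hypothetical callable; the head-frame
# axis conventions (+z forward, +x right, +y up) are assumptions, not a specification.
import numpy as np

def render_binaural(mono: np.ndarray, source_pos_display: np.ndarray,
                    T_display_head: np.ndarray, hrir_lookup) -> np.ndarray:
    # Express the source position in the head frame (inverse of the head pose).
    T_head_display = np.linalg.inv(T_display_head)
    x, y, z = (T_head_display @ np.append(source_pos_display, 1.0))[:3]
    azimuth = np.degrees(np.arctan2(x, z))                  # left/right angle
    elevation = np.degrees(np.arctan2(y, np.hypot(x, z)))   # up/down angle

    hrir_left, hrir_right = hrir_lookup(azimuth, elevation)  # hypothetical lookup
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=0)  # (2, N) binaural signal
```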

[0077] In at least one embodiment, 3D audio modulation 106 utilizes information of a head pose in a virtual 3D space to simulate sound dynamically, which results in realistic sound and increased sound presence for a tracked user/listener. In at least one embodiment, 3D audio modulation 106 produces audio for a 3D environment with effects such as distance-based sound attenuation, sound dissipation, sound reflection (e.g., echo and reverberations), and diffraction at sound occluders (e.g., objects in a 3D environment that occlude or influence sound wave paths). In at least one embodiment, sound attenuation refers to a sound level drop of several decibels (e.g., 6 decibels) under air conditions when increasing a distance between an audio source and a listener, independent of frequency. In at least one embodiment, sound dissipation refers to an audible effect of decreasing an amplitude of sound waves due to damping by particles in atmosphere. In at least one embodiment, sound dissipation is frequency dependent.
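
As an illustrative model of the distance-based attenuation term only (a common free-field approximation, not necessarily the model used in any particular embodiment), an inverse-distance law yields roughly a 6 dB drop per doubling of the source-listener distance:

```python
# Minimal sketch: frequency-independent distance attenuation under an inverse-distance law.
# d_ref is an assumed reference distance at which the source is heard at full level.
import numpy as np

def distance_gain(distance_m: float, d_ref: float = 1.0) -> float:
    d = max(distance_m, d_ref)                    # no amplification closer than the reference
    attenuation_db = 20.0 * np.log10(d / d_ref)   # about 6 dB per doubling of distance
    return 10.0 ** (-attenuation_db / 20.0)       # linear gain to apply to the signal
```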

[0078] In at least one embodiment, 3D audio modulation 106 utilizes information of a head pose in a virtual 3D space for a user of 3D space to simulate an influence of aspects of virtual 3D space on sound received by and/or generated by user. In at least one embodiment, 3D audio modulation 106, in connection with audio for a 3D environment, simulates various audible phenomena and features such as directional sound propagation, which refers to an adjustment of a propagation of sound based on a direction of sound, which may improve speech quality when a speaker in a virtual world turns toward a user (e.g., audio of a speaking person may increase in volume when speaking person turns towards user) and realistic sound in other software such as video games, video conferencing, and/or any other suitable application. In at least one embodiment, a user’s captured voice is altered with respect to a head orientation of a user and with respect to objects near a representation of the user in a virtual environment, which may result in realistic sound for virtual environment. In at least one embodiment, sound sources in a virtual environment are muted when a user is turning away, or a user’s microphone is automatically muted when a user is turning away from a desktop to avoid background noise for other users in video conferencing scenarios. In at least one embodiment, 3D audio modulation 106 utilizes determined head pose to simulate various other audible phenomena and features.

[0079] In at least one embodiment, 3D audio modulation 106 utilizes information of a head pose to allow dynamic mapping of multi-channel surround sound formats (e.g., 5.1, 7.1, 7.2.4, and/or any other suitable multi-channel surround sound format) to stereo headphones worn by a listener such that compatibility is maintained with sound formats. In at least one embodiment, 3D audio modulation 106 utilizes a tracked head position and orientation to dynamically optimize sound for a user by adjusting a volume of different channels at runtime and down-mixing multi-channel audio to a stereo signal that is delivered to headphones of a user. In at least one embodiment, audio computation is performed on a CPU of a user’s computer. In at least one embodiment, audio computation is performed inside a display in which a tracking camera, processor, and display are integrated into a single unit. In at least one embodiment, where audio computation is performed inside of a display, display receives multi-channel or multi-object audio from an external device (e.g., desktop computer, laptop, game console, audio/video receiver, media player, and/or variations thereof).
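
A sketch of such a head-pose-aware down-mix, with assumed nominal speaker azimuths for a 5.1 layout (LFE omitted) and a simple constant-power panning rule, both illustrative rather than a specification of any embodiment, follows:

```python
# Illustrative sketch: down-mix multichannel audio to stereo with per-channel gains adjusted
# by the tracked head yaw. Channel azimuths and the panning rule are assumptions.
import numpy as np

# Nominal speaker azimuths in degrees (0 = straight ahead) for a 5.1 layout, LFE omitted.
CHANNEL_AZIMUTHS = {"FL": -30, "FR": 30, "C": 0, "SL": -110, "SR": 110}

def downmix_to_stereo(channels: dict, head_yaw_deg: float) -> np.ndarray:
    """channels maps channel name -> mono sample array (all of equal length)."""
    n = len(next(iter(channels.values())))
    left, right = np.zeros(n), np.zeros(n)
    for name, samples in channels.items():
        rel = np.radians(CHANNEL_AZIMUTHS[name] - head_yaw_deg)  # azimuth relative to the head
        # Constant-power pan, clamped so fully lateral sources feed a single ear.
        pan = np.clip(rel / 2 + np.pi / 4, 0.0, np.pi / 2)
        left += samples * np.cos(pan)
        right += samples * np.sin(pan)
    return np.stack([left, right], axis=0)
```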

[0080] In at least one embodiment, 3D audio modulation 106 performs one or more of following processes: (1) distance-based sound attenuation for a 3D environment for a user of 3D environment based on a head pose of user; (2) directional sound distribution (e.g., increased audio volume when speaking into a direction of another user in a virtual world); (3) utilizing speaker location for sound distribution simulation (e.g., sound occlusion, reverb, and/or variations thereof) to simulate different sound distributions (e.g., a speaker sounds different when speaker speaks next to a wall, as opposed to in an open room, as opposed to in a large room, and so on). In at least one embodiment, system 100 for visually tracked spatial audio is utilized for various applications, such as video conferencing. In at least one embodiment, for video conferencing, system 100 automatically mutes a microphone of a person when person is getting out of a view of a camera or is turning away. In at least one embodiment, system 100 adjusts a sensitivity and/or gain of a microphone based on an orientation of a speaker such that audio levels of speaker are consistent. In at least one embodiment, system 100 also tracks other body parts to generate and/or alter sound.

[0081] In at least one embodiment, techniques described herein provide 3-dimensional (3D) audio with respect to a user in a virtual 3D space. In at least one embodiment, a system provides 3D spatial audio effects for stationary audio sources in a 3D space. In at least one embodiment, a system provides 3D spatial audio effects for use in environments such as video conferencing and/or gaming, utilizing a monocular web camera for head tracking. In at least one embodiment, a system determines a user’s head pose and utilizes determined head pose to generate 3D spatial audio effects for user. In at least one embodiment, a user or user of a 3D environment refers to a person or other entity that interacts with one or more 3D environments using one or more computing devices, displays, and audio devices. In at least one embodiment, a user of a 3D environment interacts with 3D environment through various hardware such as a desktop computer and a display, an augmented reality (AR) headset, a virtual reality (VR) headset, and/or variations thereof. In at least one embodiment, a system allows audio sources that are fixed in a virtual 3D environment to appear stationary to a user, while supporting a full range of motion with six degrees-of-freedom (6DOF).

[0082] In at least one embodiment, a 3D environment, also referred to as a virtual environment, virtual world, virtual space, 3D space, and/or variations thereof, is generated by a computer program. In at least one embodiment, a user is associated with one or more entities in 3D environment that represents user. In at least one embodiment, a user is associated with a virtual camera in a virtual world that corresponds to a position of user (e.g., when user rotates user’s head, camera in world also rotates). In at least one embodiment, a system for visually tracked spatial audio, through one or more cameras, determines a head pose of user to represent user in 3D environment. In at least one embodiment, system, based on determined head pose, renders 3D audio for user. In at least one embodiment, 3D audio is rendered with respect to user’s position in virtual environment and user’s determined head pose to precisely simulate audio waves and interactions in virtual environment. In at least one embodiment, system registers user’s head pose in a camera space (e.g., via a webcam) and maps it to 3D environment with minimal latency.

[0083] In at least one embodiment, minimal latency refers to latency that is at or below a predefined latency time interval threshold. In at least one embodiment, a system maps a user’s head pose to a 3D environment with minimal latency such that information of user’s head pose is available and utilized for various audio effects in 3D environment within a defined time interval, which can be any suitable value, such as seconds, fractions of a second, and/or variations thereof. In at least one embodiment, a system tracks a user’s head remotely using a single red-green-blue (RGB) camera and generates 3D spatial audio effects for audio output from any suitable audio speaker device(s) (e.g., headphones, loudspeaker, sound-bar, or other sound producing device) for a user.

[0084] In at least one embodiment, a system for visually tracked spatial audio, for a 3D environment, determines a 3D location of a user’s head, which is utilized in reference to 3D locations of audio sources in 3D environment to spatialize all audio sources in reference to 3D location of user’s head. In at least one embodiment, a system for visually tracked spatial audio does not require any additional hardware, sensors, or other tracking devices, and only requires a single monocular camera. In at least one embodiment, a system for visually tracked spatial audio utilizes determined head poses to determine directional information for audio sources. In at least one embodiment, a system for visually tracked spatial audio tracks head poses based solely on video of a user’s head (e.g., frames captured by a camera facing user). In at least one embodiment, a system for visually tracked spatial audio does not require various axis tracking hardware that may be utilized in various AR and VR hardware. In at least one embodiment, a system for visually tracked spatial audio provides directional information of a user such that audio provided to user (e.g., via a computer program such as a video game or video conferencing application) is affected by directional information of user (e.g., when user’s head is turned away from an audio source, audio volume may be decreased).

[0085] In at least one embodiment, a system for visually tracked spatial audio, in connection with a computer program that generates a 3D environment for users, obtains head poses of users and utilizes a simulation engine to generate 3D audio based on obtained head poses. In at least one embodiment, head poses are determined based on camera frames of users. In at least one embodiment, simulation engine generates directional audio for each user of 3D environment. In at least one embodiment, directional audio refers to audio or sound output that is modified based on a head pose of a speaker and/or a listener (e.g., when a listener is facing a speaker, audio for listener is louder; when a listener is not facing a speaker, audio for listener is softer; and so on). In at least one embodiment, system additionally scales head poses to generate directional audio. In at least one embodiment, system utilizes information of various materials and objects of 3D environment to generate directional audio. In at least one embodiment, system allows 6DOF user movement while providing directional audio to users.
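
A minimal sketch of such directional gain, assuming positions and a forward vector expressed in a shared virtual-world frame and an arbitrary gain floor (both assumptions for illustration), can measure how directly a listener faces a talker with a dot product:

```python
# Illustrative sketch: scale a talker's level by how directly the listener faces the talker.
# The mapping from facing angle to gain, and the floor of 0.3, are assumptions.
import numpy as np

def directional_gain(listener_forward: np.ndarray, listener_pos: np.ndarray,
                     talker_pos: np.ndarray, floor: float = 0.3) -> float:
    to_talker = talker_pos - listener_pos
    to_talker = to_talker / np.linalg.norm(to_talker)
    forward = listener_forward / np.linalg.norm(listener_forward)
    facing = float(np.dot(forward, to_talker))             # 1 = facing directly, -1 = facing away
    return floor + (1.0 - floor) * (facing + 1.0) / 2.0    # map [-1, 1] onto [floor, 1]
```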

[0086] FIG. 2 is a block diagram illustrating an environment 200 of a system 202 that determines a head pose and generates audio, according to at least one embodiment. In at least one embodiment, system 202 includes a camera 204 that captures images of a user 206. In at least one embodiment, system 202 includes a display screen 208, and camera 204 captures images (e.g., input frames 102 of FIG. 1) of a head of user 206 in relation to display screen 208. In at least one embodiment, camera 204 is integrated in a display that includes display screen 208. In at least one embodiment, camera 204 is external to a display that includes display screen 208. In at least one embodiment, camera 204 is a single camera positioned to image a head of a user (e.g., while user is viewing images on display screen 208). In at least one embodiment, system 202 includes a computer 210 that includes a processor 212 and memory 214. In at least one embodiment, computer 210 is a personal computer (PC) (e.g., a desktop computer or a notebook computer). In at least one embodiment, display screen 208 is part of a display such as a desktop monitor or a notebook computer display. In at least one embodiment, camera 204 is a webcam separate from a display that includes display screen 208. In at least one embodiment, camera 204 is integrated into a display that includes display screen 208.

[0087] In at least one embodiment, computer 210 includes a head pose tracker 216 and an audio generator 218. In at least one embodiment, head pose tracker 216 performs 3D head pose determination 104 of FIG. 1 and/or audio generator 218 performs 3D audio modulation 106 of FIG. 1. In at least one embodiment, audio generator 218 generates audio signals (e.g., output audio 110 of FIG. 1) that are output to headphones 220 (e.g., via an output port, not shown for clarity, or a wireless signal such as Bluetooth). In at least one embodiment, head pose tracker 216 executes on processor 212. In at least one embodiment, camera frame information from camera 204 is stored in memory 214 before processing by head pose tracker 216. In at least one embodiment, audio generator 218 generates audio signals based, at least in part, on determined 3D head pose from head pose tracker 216 and input audio (e.g., input audio 108 of FIG. 1). In at least one embodiment, input audio used by audio generator 218 is from an application executing on computer 210 (e.g., a 3D game or a video conferencing application). In at least one embodiment, audio generator 218 is an audio engine of an application executing on computer 210. In at least one embodiment, audio generator 218 is a driver that uses audio from an audio engine of an application executing on computer 210. In at least one embodiment, at least one component shown as being in computer 210 is integrated into headphones 220 (e.g., head pose tracker 216 and/or audio generator 218). In at least one embodiment, head pose tracker 216 uses information from a sensor in addition to camera 204 to estimate head pose (e.g., an orientation sensor in headphones 220). In at least one embodiment, head pose tracker 216 uses information from only a single camera without additional sensor information to estimate head pose.

[0088] In at least one embodiment, computer 210 is in communication with system 222 and/or system 224 over a network 226. In at least one embodiment, system 222 and/or system 224 have elements shown or described with respect to system 202 and/or system 100 of FIG. 1. In at least one embodiment, input audio used by audio generator 218 is generated by system 222 and/or system 224 (e.g., by a video conferencing application). In at least one embodiment, a video conferencing application executing on computer 210 displays a first image of a user associated with system 222 and a second image of a user associated with system 224 on display screen 208. In at least one embodiment, head pose tracker 216 determines whether a head of user 206 is oriented toward first image or second image, and audio generator 218 generates audio based, at least in part, on determination by head pose tracker 216 (e.g., by changing a volume level such as increasing volume associated with user associated with first image when head of user 206 is oriented toward first image). In at least one embodiment, a multiuser application other than video conferencing is used (e.g., a multiplayer 3D game) and/or avatars of users rather than images of users themselves are shown on display screen 208. In at least one embodiment, determined pose from head pose tracker 216 is sent over network 226 to system 222 and/or system 224 so an audio generator running on system 222 and/or system 224 generates audio for users of those systems based, at least in part, on determined head pose. In at least one embodiment, audio generator 218 receives head pose information from system 222 and/or system 224 and generates audio for user 206 based, at least in part, on head pose information from system 222 and/or system 224 (e.g., by amplifying sound from a user of system 222 and/or system 224 having a head pose oriented toward a representation of user 206 on system 222 and/or system 224 respectively).
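
A sketch of this selection logic, under the assumptions that the head pose is available in a display-centered frame with the screen in the z = 0 plane, that the head-frame forward axis is as commented, and that participant tiles are known rectangles (all illustrative), follows:

```python
# Illustrative sketch: decide which on-screen participant the user is oriented toward and
# boost that participant's volume. Display-frame conventions and tile rectangles are assumed.
import numpy as np

def look_at_point_on_screen(T_display_head: np.ndarray) -> np.ndarray:
    """Intersect the head's forward axis with the display plane (z = 0 in the display frame)."""
    origin = T_display_head[:3, 3]
    forward = T_display_head[:3, :3] @ np.array([0.0, 0.0, -1.0])  # assumed head-frame forward
    t = -origin[2] / forward[2]         # assumes the head is in front of and facing the screen
    return (origin + t * forward)[:2]   # (x, y) on the display plane

def per_participant_gains(T_display_head, tiles, boost=1.5, base=1.0):
    """tiles maps participant id -> (xmin, xmax, ymin, ymax) in display coordinates."""
    x, y = look_at_point_on_screen(T_display_head)
    return {pid: (boost if xmin <= x <= xmax and ymin <= y <= ymax else base)
            for pid, (xmin, xmax, ymin, ymax) in tiles.items()}
```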

[0089] FIG. 3 is a diagram illustrating a display 300 that includes a head pose tracker 302, according to at least one embodiment. In at least one embodiment, head pose tracker 302 is integrated in display 300. In at least one embodiment, head pose tracker 302 runs on hardware such as a deep learning accelerator (DLA). In at least one embodiment, a DLA is an inferencing accelerator for a trained neural network (e.g., a chip that includes a representation of a trained neural network). In at least one embodiment, head pose tracker runs on a parallel processing unit (PPU), a graphics processing unit (GPU), or any other suitable processor. In at least one embodiment, head pose tracker 302 uses at least one neural network inferencing operation.

[0090] In at least one embodiment, head pose tracker 302 determines a head pose based, at least in part, on a model that accounts for six degrees-of-freedom (6 DOF) of head movement. In at least one embodiment, 6 DOF includes a 3D translational position of a head pose in relation to a display screen 306 and a 3D rotational orientation of head pose (e.g., in relation to display screen 306). In at least one embodiment, head pose translational position is determined in relation to a point of origin at center of display screen 306, with a first axis through point of origin that is normal to plane of display screen 306, a second axis that runs horizontally through point of origin in plane of display screen 306, and a third axis that runs vertically through point of origin in plane of display screen 306. In at least one embodiment, a distance of head pose is equivalent to distance along first axis from display screen 306.
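
As a small sketch using the same assumed display-frame convention as the earlier examples (screen normal taken as the z axis, playing the role of the first axis described here), the 6 DOF quantities and the screen distance can be read directly out of the 4x4 pose:

```python
# Minimal sketch: read the 6-DOF quantities out of a 4x4 head pose expressed in a
# display-centered frame; the choice of z as the screen-normal axis is an assumed convention.
import numpy as np

def decompose_head_pose(T_display_head: np.ndarray):
    position = T_display_head[:3, 3]    # 3D translational position relative to the screen center
    rotation = T_display_head[:3, :3]   # 3D rotational orientation (3x3 matrix)
    distance = abs(position[2])         # distance from the screen along its normal axis
    return position, rotation, distance
```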

[0091] In at least one embodiment, display 300 includes a camera 304, display screen 306, and an audio processor 308. In at least one embodiment, head pose tracker 302 and/or audio processor 308 are referred to as a processing system. In at least one embodiment, display screen 306 is a liquid crystal display (LCD) screen, a light emitting diode (LED) display screen, an organic LED (OLED) display screen, or any other suitable type of display screen. In at least one embodiment, camera 304 is integrated into display 300 and is calibrated at a factory (e.g., precise factory calibration information related to camera 304 intrinsic and/or extrinsic information is stored in head pose tracker 302 for later use and/or camera 304 is calibrated to output camera frame information after correction for calibration such that camera frame information does not require additional correction).

[0092] In at least one embodiment, head pose tracker 302 performs 3D head pose determination 104 of FIG. 1 and/or audio processor 308 performs 3D audio modulation 106 of FIG. 1. In at least one embodiment, camera 304 generates input frames 102 of FIG. 1. In at least one embodiment, display 300 includes an input port (e.g., a High-Definition Multimedia Interface (HDMI) port or any other suitable input port), not shown for clarity, to receive display and/or audio signals from a device such as a computer 310. In at least one embodiment, a digital signal that includes multichannel audio information is received at input port. In at least one embodiment, display 300 receives input audio 108 of FIG. 1 via input port.

[0093] In at least one embodiment, computer 310 is a personal computer (PC) or a gaming console. In at least one embodiment, display 300 includes Extended Display Identification Data (EDID) 312 (e.g., stored in a serial programmable read-only memory (PROM), an electrically erasable PROM (EEPROM), or any other suitable storage device). In at least one embodiment, EDID 312 includes information that indicates display 300 is capable of decoding multichannel audio input (e.g., 7.1 surround sound). In at least one embodiment, display 300 includes one or more auxiliary (AUX) registers 314. In at least one embodiment, display 300 includes an output port (e.g., a stereo headphone output port), not shown for clarity. In at least one embodiment, audio processor 308 generates audio signals (e.g., output audio 110 of FIG. 1) that are output via output port to stereo headphones 316. In at least one embodiment, integrating camera 304 and head pose tracker 302 in display 300 provides for precise factory calibration and low latency head pose estimation.

[0094] FIG. 4 is a diagram illustrating a system 400 with an audio engine 402 that generates audio based, at least in part, on a determined head pose, according to at least one embodiment. In at least one embodiment, system 400 includes a camera 404. In at least one embodiment, camera 404 captures images (e.g., input frames 102 of FIG. 1) of a head of user in relation to a display screen, not shown for clarity. In at least one embodiment, system 400 includes a head pose tracker 406. In at least one embodiment, head pose tracker 406 is a DLA head pose tracker. In at least one embodiment, audio engine 402 generates binaurally rendered audio signals based, at least in part, on head pose determined by head pose tracker 406. In at least one embodiment, audio engine 402 renders binaural audio signals based, at least in part, on one or more audio input signals from an application with spatial audio running on a computer 410 (e.g., a PC, gaming console, or any other suitable computing device). In at least one embodiment, rendering of audio signals by audio engine 402 is referred to as generating audio signals. In at least one embodiment, audio driver 408 generates audio signals (e.g., output audio 110 of FIG. 1) that are output to stereo headphones 412 (e.g., via an output port, not shown for clarity).

[0095] FIG. 5 is a diagram illustrating a system 500 with an audio driver 508 that generates audio (e.g., output audio 110 of FIG. 1) based, at least in part, on a determined head pose, according to at least one embodiment. In at least one embodiment, system 500 includes a camera 504. In at least one embodiment, camera 504 captures images (e.g., input frames 102 of FIG. 1) of a head of user in relation to a display screen, not shown for clarity. In at least one embodiment, system 500 includes a head pose tracker 506. In at least one embodiment, head pose tracker 506 is a DLA head pose tracker. In at least one embodiment, a computer 510 (e.g., a PC, gaming console, or any other suitable computing device) executes an application with spatial audio. In at least one embodiment, application with spatial audio executing on computer 510 includes audio engine 502. In at least one embodiment, an audio processing object (APO) 512 performs one or more audio processing techniques and/or effects (e.g., speaker virtualization, binaural rendering, or any other suitable audio processing technique) based, at least in part, on head pose information generated by head pose tracker 506. In at least one embodiment, audio driver 508 generates an audio signal that is output to headphones 514 (e.g., via an output port, not shown for clarity, or a wireless communications link such as Bluetooth). In at least one embodiment, audio driver 508 generates audio signal based, at least in part, on an output of audio engine 502 (e.g., a multichannel surround sound signal such as 7.1 audio) and an output of APO 512. In at least one embodiment, APO 512 and/or audio driver 508 execute on computer 510.

[0096] FIG. 6 illustrates a flowchart of a technique 600 of determining a head pose, and generating audio, according to at least one embodiment. In at least one embodiment, technique 600 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein.

[0097] In at least one embodiment, at a block 602, technique 600 includes calibrating a camera (e.g., camera 204 of FIG. 2, camera 304 of FIG. 3, camera 404 of FIG. 4, or camera 504 of FIG. 5). In at least one embodiment, calibrating camera at block 602 is performed once during a calibration at factory (e.g., of integrated camera 304 after manufacture of display 300 of FIG. 3). In at least one embodiment, calibrating camera at block 602 is performed by a head pose tracker (e.g., head pose tracker 216 of FIG. 2, head pose tracker 406 of FIG. 4, or head pose tracker 506 of FIG. 5) before determining a head pose if a camera is newly associated with a display screen, or if a position and/or orientation of a camera is changed in relation to a display screen. In at least one embodiment, calibrating camera at block 602 is performed as described with respect to FIG. 1. In at least one embodiment, calibrating camera includes using information that corresponds to camera intrinsic characteristics to remove potential existing lens distortion. In at least one embodiment, calibrating camera includes using information that corresponds to extrinsic camera characteristics used to express camera space coordinates in a virtual 3D world space. In at least one embodiment, extrinsic camera characteristics are estimated using a spherical mirror based calibration technique, or by using a technique that uses physical hardware calibration.
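
One hedged sketch of how stored calibration data might be applied to incoming camera frames is shown below; OpenCV is an assumed implementation choice, and the intrinsic and distortion values are placeholders rather than values from any described embodiment.

```python
# Hypothetical sketch of applying stored calibration data (block 602 output) to
# correct a camera frame before head pose estimation. OpenCV is an assumed
# implementation choice; the intrinsic values below are placeholders.
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0.0, 640.0],   # fx, 0, cx
                          [0.0, 800.0, 360.0],   # 0, fy, cy
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([0.05, -0.12, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_frame(frame: np.ndarray) -> np.ndarray:
    # Remove lens distortion so later landmark positions are metrically consistent.
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```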

[0098] In at least one embodiment, at a block 604, technique 600 includes registering a head pose. In at least one embodiment, registering a head pose includes determining a head pose (e.g., 3D head pose determination 104 of FIG. 1, and/or determining a head pose with head pose tracker 216 of FIG. 2, head pose tracker 302 of FIG. 3, head pose tracker 406 of FIG. 4, or head pose tracker 506 of FIG. 5). In at least one embodiment, determining head pose is based, at least in part, on calibration information from camera calibration performed at block 602. In at least one embodiment, registering head pose at block 604 is based, at least in part, on camera frame information. In at least one embodiment, registering head pose at block 604 uses camera frame information from only a single monocular camera (e.g., a webcam). In at least one embodiment, determining a head pose at block 604 includes determining a translational position and/or an orientation of a head pose in relation to a display screen. In at least one embodiment, determining a head pose at block 604 includes determining a translational position and/or an orientation of a head pose in relation to a frame of reference in a 3D virtual environment (e.g., by mapping translational position and/or orientation of head pose in relation to display screen to 3D virtual environment). In at least one embodiment, determining head pose at block 604 includes determining a distance of head pose (e.g., from a display screen). In at least one embodiment, determined distance is a part of determined translational position along an axis normal to a plane of display screen. In at least one embodiment, determining head pose at block 604 is based, at least in part, on a model that accounts for six degrees-of-freedom (6 DOF) of head movement. In at least one embodiment, determining head pose at block 604 includes determining a head pose change (e.g., from a determined first head pose and a determined second head pose). In at least one embodiment, determining head pose at block 604 includes determining a modified head pose change (e.g., a magnified head pose change obtained by multiplying a head pose change by a predetermined scaling value). In at least one embodiment, determining head pose at block 604 includes determining head pose based, at least in part, on inferencing with a neural network (e.g., using camera frame information as an input to a trained convolutional neural network).
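
A minimal sketch of magnifying a determined head pose change by a predetermined scaling value, as described above, might look as follows; the function name and the default scale are illustrative assumptions.

```python
# Illustrative sketch (names assumed): magnifying a head pose change by a
# predetermined scaling value before it is passed to audio rendering.
import numpy as np

def magnified_pose_change(first_translation: np.ndarray,
                          second_translation: np.ndarray,
                          scale: float = 2.0) -> np.ndarray:
    # Scale the translational change; a scale > 1 exaggerates small head motions.
    delta = second_translation - first_translation
    return first_translation + scale * delta
```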

[0099] In at least one embodiment, at a block 606, technique 600 includes rendering audio. In at least one embodiment, rendering audio is based, at least in part, on head pose determined at block 604. In at least one embodiment, rendering audio includes generating an audio signal (e.g., output audio 110 of FIG. 1, and/or audio signal generated by audio generator 218 of FIG. 2, audio processor 308 of FIG. 3, audio driver 408 of FIG. 4, or audio driver 508 of FIG. 5). In at least one embodiment, rendering audio at block 606 includes generating spatialized audio based, at least in part, on head pose determined at block 604. In at least one embodiment, generating spatialized audio includes generating spatialized audio from an audio source from a 3D audio engine (e.g., an audio engine for a 3D game) using a head-related transfer function (e.g., a default or measured head-related transfer function). In at least one embodiment, rendering audio at block 606 includes generating an audio signal based, at least in part, on a determined head pose (e.g., distance, translational position, and/or orientation) and an input audio signal (e.g., a multichannel audio signal, audio generated by an audio engine of a 3D game virtual environment, audio from other users of a video conferencing application, or any other suitable audio input source). In at least one embodiment, generated audio signal is a synthesized binaural audio signal. In at least one embodiment, generating audio at block 606 includes generating an audio signal based, at least in part, on a magnification of a determined head pose change (e.g., magnifying a degree to which audio effects such as volume changes are applied based on a user turning away from a camera). In at least one embodiment, generating audio at block 606 includes determining a relationship of head pose to an audio source in a three-dimensional virtual space, and generating audio signal based, at least in part, on relationship of head pose to audio source. In at least one embodiment, at a block 608, technique 600 includes performing other actions such as providing rendered audio to an output port as audio signals, obtaining a head pose update, and/or providing audio signals to a system or application.
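
As a simplified stand-in for the rendering described at block 606 (not the described renderer), the sketch below derives per-ear gains from the relationship of a determined head pose to a virtual audio source, as a crude proxy for full head-related-transfer-function rendering; all names and conventions are assumptions.

```python
# Simplified stand-in (not the described renderer): per-ear gains from the
# relationship of the head pose to a virtual source.
import numpy as np

def render_block(mono_block: np.ndarray,
                 source_pos: np.ndarray,          # source position, display/world frame
                 head_translation: np.ndarray,    # (3,) head position
                 head_rotation: np.ndarray) -> np.ndarray:  # (3, 3) head orientation
    # Express the source in head-centered coordinates.
    rel = head_rotation.T @ (source_pos - head_translation)
    distance = max(float(np.linalg.norm(rel)), 0.1)
    gain = 1.0 / distance                      # inverse-distance attenuation
    azimuth = np.arctan2(rel[0], rel[2])       # left/right angle (assumed axes)
    pan = 0.5 * (1.0 + np.sin(azimuth))        # 0 = hard left, 1 = hard right
    left = gain * (1.0 - pan) * mono_block
    right = gain * pan * mono_block
    return np.stack([left, right], axis=0)     # (2, n) stereo block
```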

[0100] In at least one embodiment, technique 600 is used to make 3D spatial audio sources that should be fixed in a virtual 3D space seem stationary in space when a user moves their head, for use cases such as video conferencing and gaming. In at least one embodiment, technique 600 allows audio sources that should be fixed in a virtual 3D space to appear stationary to a user, while supporting a full range of motion of user’s head with six degrees-of-freedom. In at least one embodiment, technique 600 includes estimating head pose using a single monocular RGB camera stream. In at least one embodiment, any camera already being used for video conferencing and/or game streaming is sufficient to generate camera frame information used to determine head pose. In at least one embodiment, estimated head pose includes head orientation and head position in a 3D world space.

[0101] In at least one embodiment, technique 600 provides advantages over legacy techniques that use additional hardware (e.g., a head-mounted orientation sensor and/or headphones with an integrated inertial measurement unit (IMU), or a head-worn VR/AR display to provide external tracking) by using passive tracking (e.g., with a single webcam) that functions with any type of headphones. In at least one embodiment, technique 600 also provides advantages over legacy IMU-based approaches that do not accurately provide positional tracking data, by providing positional head pose tracking data with a full six degrees-of-freedom using camera frame information that is used to generate spatial audio. In at least one embodiment, technique 600 provides advantages over legacy approaches that are not able to generate sound modulation effects that require distance information and translational head motion.

[0102] FIG. 7 illustrates a flowchart of a technique 700 of transforming a head pose, according to at least one embodiment. In at least one embodiment, technique 700 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein. In at least one embodiment, technique 700 is performed by 3D head pose determination 104 of FIG. 1, head pose tracker 216 of FIG. 2, head pose tracker 302 of FIG. 3, head pose tracker 406 of FIG. 4, and/or head pose tracker 506 of FIG. 5. In at least one embodiment, technique 700 corresponds to registering a head pose at block 604 of FIG. 6.

[0103] In at least one embodiment, at a block 702, technique 700 includes estimating two-dimensional facial landmarks. In at least one embodiment, estimating two-dimensional facial landmarks at block 702 is performed in a camera image space. In at least one embodiment, estimating 2D facial landmarks at block 702 includes uploading acquired camera frames (e.g., input frames 102 of FIG. 1) to a parallel processor (e.g., a graphics processing unit (GPU) or parallel processing unit (PPU)), which performs landmark detection. In at least one embodiment, estimating 2D facial landmarks at block 702 includes performing one or more inferencing operations using a neural network.
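
A hedged sketch of running 2D facial landmark inference on a parallel processor is shown below; the landmark network itself is hypothetical, and PyTorch is an assumed framework choice rather than one named by the description.

```python
# Hedged sketch of block 702: 2D landmark inference on a parallel processor.
# The landmark network is hypothetical; PyTorch is an assumed framework choice.
import torch

def estimate_landmarks_2d(frame_tensor: torch.Tensor,
                          landmark_net: torch.nn.Module) -> torch.Tensor:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    landmark_net = landmark_net.to(device).eval()
    with torch.no_grad():
        # Input: (1, 3, H, W) image tensor uploaded to the device; output:
        # landmark coordinates in camera image space (shape depends on the
        # assumed network).
        return landmark_net(frame_tensor.to(device)).cpu()
```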

[0104] In at least one embodiment, at a block 704, technique 700 includes preparing landmark points. In at least one embodiment, preparing landmark points at block 704 includes preparing corresponding 2D landmark points (e.g., in a 2D camera image space from block 702) and 3D landmark points (e.g., in a 3D camera view space). In at least one embodiment, preparing corresponding 2D and 3D landmark points is performed using a template-based 3D face model with known dimensions. In at least one embodiment, preparing landmark points at block 704 includes adjusting 3D face model properties to correspond to user’s real dimensions (e.g., by using user’s interpupillary distance), and using adjusted 3D face model to prepare landmark points.
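
A minimal sketch of adjusting a template 3D face model to a user's interpupillary distance, as described for block 704, might look as follows; uniform scaling is an illustrative simplification.

```python
# Illustrative sketch of block 704 (names assumed): scaling a template 3D face
# model so its interpupillary distance (IPD) matches the user's measured IPD.
import numpy as np

def personalize_face_model(template_points_3d: np.ndarray,  # (N, 3) template landmarks
                           template_ipd: float,
                           user_ipd: float) -> np.ndarray:
    # Uniform scaling about the model origin; a real system may adjust more properties.
    return template_points_3d * (user_ipd / template_ipd)
```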

[0105] In at least one embodiment, at a block 706, technique 700 includes generating a transformation matrix. In at least one embodiment, generating a transformation matrix at block 706 includes determining a transformation matrix from corresponding 3D points to projected 2D image points from block 704. In at least one embodiment, determining transformation matrix is performed using a perspective-n-point pose estimation approach using camera intrinsics information. In at least one embodiment, transformation matrix generated at block 706 describes a head pose in a 3D camera view space.
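
One way block 706 might be realized is with a perspective-n-point solver such as OpenCV's, which is an assumed implementation choice; the sketch below recovers a 4x4 head transformation matrix in 3D camera view space from the 2D/3D landmark correspondences.

```python
# Hedged sketch of block 706 using OpenCV's perspective-n-point solver (an
# assumed implementation choice): recover a head transformation matrix in the
# 3D camera view space from 2D/3D landmark correspondences.
import cv2
import numpy as np

def head_pose_in_camera_space(model_points_3d: np.ndarray,   # (N, 3) face model landmarks
                              image_points_2d: np.ndarray,   # (N, 2) detected landmarks
                              camera_matrix: np.ndarray,
                              dist_coeffs: np.ndarray) -> np.ndarray:
    ok, rvec, tvec = cv2.solvePnP(model_points_3d, image_points_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)          # 3x3 rotation from rotation vector
    transform = np.eye(4)                      # 4x4 homogeneous transformation matrix
    transform[:3, :3] = rotation
    transform[:3, 3] = tvec.ravel()
    return transform
```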

[0106] In at least one embodiment, at a block 708, technique 700 includes transforming a head pose. In at least one embodiment, transforming a head pose at block 708 includes transforming head pose from a 3D camera view space (e.g., as described by transformation matrix from block 706) to a display space centered at a display origin. In at least one embodiment, display origin is considered to be a center of a display screen (e.g., display screen 208 of FIG. 2, display screen 306 of FIG. 3). In at least one embodiment, display origin is considered to be a different point with respect to a display screen. In at least one embodiment, transforming head pose at block 708 uses known extrinsics of a camera with respect to a display (e.g., extrinsics of camera 204 with respect to display screen 208 of FIG. 2, or of camera 304 with respect to display screen 306 of FIG. 3). In at least one embodiment, transforming a head pose at block 708 is referred to as determining a head pose. In at least one embodiment, transforming head pose at block 708 generates a transformation matrix that expresses a head aligned frame in a virtual 3D world that is aligned with a desktop display. In at least one embodiment, at a block 710, technique 700 includes performing other actions such as providing information corresponding to transformed head pose to an audio engine, audio driver, and/or audio processor.
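
A minimal sketch of block 708, assuming the camera-to-display extrinsics are available as a 4x4 homogeneous matrix, is shown below; the composition order reflects one common convention and is an assumption.

```python
# Illustrative sketch of block 708 (conventions assumed): re-express the head
# pose in a display-centered space using known camera-to-display extrinsics.
import numpy as np

def to_display_space(head_in_camera: np.ndarray,                 # 4x4, from block 706
                     camera_to_display: np.ndarray) -> np.ndarray:  # 4x4 extrinsics
    # Composing the extrinsics with the camera-space pose yields the head pose
    # relative to the display origin (e.g., the center of the display screen).
    return camera_to_display @ head_in_camera
```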

[0107] FIG. 8 illustrates a flowchart of a technique 800 of rendering audio, according to at least one embodiment. In at least one embodiment, technique 800 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein. In at least one embodiment, technique 800 is performed by 3D audio modulation 106 of FIG. 1, audio generator 218 of FIG. 2, audio processor 308 of FIG. 3, audio driver 408 of FIG. 4, and/or audio driver 508 of FIG. 5. In at least one embodiment, technique 800 corresponds to rendering audio at block 606 of FIG. 6.

[0108] In at least one embodiment, at a block 802, technique 800 includes generating spatialized audio. In at least one embodiment, generating spatialized audio includes generating spatialized audio based, at least in part, on a determined head pose (e.g., from 3D head pose determination 104 of FIG. 1, head pose tracker 216 of FIG. 2, head pose tracker 302 of FIG. 3, head pose tracker 406 of FIG. 4, or head pose tracker 506 of FIG. 5). In at least one embodiment, generating spatialized audio at block 802 includes generating spatialized audio using a head-related transfer function based, at least in part, on determined head pose. In at least one embodiment, generating spatialized audio at block 802 includes generating spatialized audio based, at least in part, on a head pose of a user in relation to a display screen. In at least one embodiment, generating spatialized audio at block 802 includes generating spatialized audio based, at least in part, on mapping a determined head pose of a user to a virtual 3D world space, and generating spatialized audio based, at least in part, on mapped head pose in virtual 3D world space.
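
A hedged sketch of head-related-transfer-function style spatialization is shown below, implemented as convolution with a direction-dependent head-related impulse response (HRIR) pair; the HRIR selection is left hypothetical, and SciPy is an assumed dependency.

```python
# Hedged sketch of block 802: binaural spatialization by convolving a mono
# source with a direction-dependent HRIR pair. The HRIR lookup is hypothetical;
# SciPy is an assumed dependency.
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono: np.ndarray,
               hrir_left: np.ndarray,
               hrir_right: np.ndarray) -> np.ndarray:
    # The HRIR pair would be selected (or interpolated) for the source direction
    # relative to the determined head pose.
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=0)
```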

[0109] In at least one embodiment, at a block 804, technique 800 includes modulating audio. In at least one embodiment, modulating audio includes modulating audio with respect to a 3D location and orientation of a user in a virtual world. In at least one embodiment, modulating audio at block 804 includes mapping determined head pose to a virtual 3D world space, and simulating sound dynamically. In at least one embodiment, modulating audio at block 804 includes attenuating sound based on distance in 3D world space, simulating sound dissipation, and/or generating sound reflections (e.g., echoes and/or reverberations). In at least one embodiment, modulating audio at block 804 includes simulating diffraction at sound occluders in 3D world space.

[0110] In at least one embodiment, attenuating sound at block 804 includes attenuating sound in a manner that simulates sound attenuation under typical air conditions (e.g., a sound level drop of 6 decibels (dB) when doubling a distance between an audio source and a listener independent of frequency). In at least one embodiment, attenuating sound at block 804 includes attenuating sound in a different manner (e.g., attenuating sound in a manner that simulates sound attenuation in a fluid such as water, or in other simulated atmospheric conditions such as a simulated planet with a different atmospheric composition than present on Earth). In at least one embodiment, dissipating sound at block 804 refers to generating an audible effect of decreasing amplitude of sound waves due to damping by particles in an atmosphere. In at least one embodiment, dissipating sound at block 804 is frequency dependent.
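
A worked sketch of the 6 dB-per-doubling behavior described above, assuming frequency-independent inverse-distance attenuation, is shown below; the reference distance is an illustrative parameter.

```python
# Worked example of 6 dB-per-doubling attenuation, assuming a frequency-
# independent inverse-distance model (an illustrative simplification).
import numpy as np

def distance_gain(distance: float, reference_distance: float = 1.0) -> float:
    # Linear gain; -20*log10(d/d_ref) dB drops about 6.02 dB per doubling.
    d = max(distance, reference_distance)
    return reference_distance / d

# e.g., doubling the distance from 1 m to 2 m:
# 20 * np.log10(distance_gain(2.0)) is approximately -6.02 dB
```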

[0111] In at least one embodiment, modulating sound at block 804 includes simulating an influence of a virtual world on sound received by and/or generated by a user based, at least in part, on a determined head pose (e.g., as mapped to a virtual 3D world space). In at least one embodiment, simulating an influence of a virtual world and/or various audible phenomena at block 804 includes propagating sound directionally. In at least one embodiment, propagating sound directionally improves speech quality when a speaker in a virtual world turns toward user, which adds realism to games and/or video conferencing. In at least one embodiment, simulating an influence of a virtual world at block 804 includes altering user’s captured voice (e.g., from a microphone) with respect to determined head orientation (e.g., as mapped to 3D virtual environment) and with respect to nearby objects of environment to add realism. In at least one embodiment, modulating sound at block 804 includes muting sound sources when a user is turning away. In at least one embodiment, modulating sound at block 804 includes muting a user’s microphone when user is turning away (e.g., from a screen or desktop). In at least one embodiment, muting microphone when user is turning away reduces unwanted background noise for other users in video conferencing scenarios. In at least one embodiment, at a block 806, technique 800 includes performing other actions such as obtaining a head pose update, and/or providing modulated audio signals to a system, application, and/or output port.
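
A minimal sketch of orientation-based muting, assuming the head's forward axis aligns with the screen normal when the user faces the screen, might look as follows; the threshold is an illustrative assumption.

```python
# Illustrative sketch (thresholds and axis conventions assumed): mute a user's
# microphone when the head pose indicates the user has turned away.
import numpy as np

def is_turned_away(head_rotation: np.ndarray,      # (3, 3), head frame -> display frame
                   threshold_deg: float = 60.0) -> bool:
    # Assumed convention: the head's +Z axis points toward the screen normal
    # when the user faces the screen; a large angle means "turned away".
    forward = head_rotation @ np.array([0.0, 0.0, 1.0])
    screen_normal = np.array([0.0, 0.0, 1.0])
    angle = np.degrees(np.arccos(np.clip(forward @ screen_normal, -1.0, 1.0)))
    return angle > threshold_deg
```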

Inference and Training Logic

[0112] FIG. 9A illustrates inference and/or training logic 915 used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 915 are provided below in conjunction with FIGS. 9A and/or 9B.

[0113] In at least one embodiment, inference and/or training logic 915 may include, without limitation, code and/or data storage 901 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 915 may include, or be coupled to, code and/or data storage 901 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 901 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 901 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.

[0114] In at least one embodiment, any portion of code and/or data storage 901 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 901 may be cache memory, dynamic random access memory (“DRAM”), static random access memory (“SRAM”), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 901 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0115] In at least one embodiment, inference and/or training logic 915 may include, without limitation, a code and/or data storage 905 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 905 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, training logic 915 may include, or be coupled to, code and/or data storage 905 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)).

[0116] In at least one embodiment, code, such as graph code, causes the loading of weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, any portion of code and/or data storage 905 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 905 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 905 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 905 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0117] In at least one embodiment, code and/or data storage 901 and code and/or data storage 905 may be separate storage structures. In at least one embodiment, code and/or data storage 901 and code and/or data storage 905 may be a combined storage structure. In at least one embodiment, code and/or data storage 901 and code and/or data storage 905 may be partially combined and partially separate. In at least one embodiment, any portion of code and/or data storage 901 and code and/or data storage 905 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.

[0118] In at least one embodiment, inference and/or training logic 915 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 910, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 920 that are functions of input/output and/or weight parameter data stored in code and/or data storage 901 and/or code and/or data storage 905. In at least one embodiment, activations stored in activation storage 920 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 910 in response to performing instructions or other code, wherein weight values stored in code and/or data storage 905 and/or code and/or data storage 901 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 905 or code and/or data storage 901 or another storage on or off-chip.

[0119] In at least one embodiment, ALU(s) 910 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 910 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 910 may be included within a processor’s execution units or otherwise within a bank of ALUs accessible by a processor’s execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, code and/or data storage 901, code and/or data storage 905, and activation storage 920 may share a processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 920 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor’s fetch, decode, scheduling, execution, retirement and/or other logical circuits.

[0120] In at least one embodiment, activation storage 920 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage 920 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, a choice of whether activation storage 920 is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.

[0121] In at least one embodiment, inference and/or training logic 915 illustrated in FIG. 9A may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 915 illustrated in FIG. 9A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”).

[0122] FIG. 9B illustrates inference and/or training logic 915, according to at least one embodiment. In at least one embodiment, inference and/or training logic 915 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 915 illustrated in FIG. 9B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 915 illustrated in FIG. 9B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 915 includes, without limitation, code and/or data storage 901 and code and/or data storage 905, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 9B, each of code and/or data storage 901 and code and/or data storage 905 is associated with a dedicated computational resource, such as computational hardware 902 and computational hardware 906, respectively. In at least one embodiment, each of computational hardware 902 and computational hardware 906 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 901 and code and/or data storage 905, respectively, a result of which is stored in activation storage 920.

[0123] In at least one embodiment, each of code and/or data storage 901 and 905 and corresponding computational hardware 902 and 906, respectively, correspond to different layers of a neural network, such that resulting activation from one storage/computational pair 901/902 of code and/or data storage 901 and computational hardware 902 is provided as an input to a next storage/computational pair 905/906 of code and/or data storage 905 and computational hardware 906, in order to mirror a conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 901/902 and 905/906 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 901/902 and 905/906 may be included in inference and/or training logic 915.

[0124] In at least one embodiment, at least one component shown or described with respect to FIG. 9A and/or FIG. 9B is utilized to implement techniques and/or functions described in connection with FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIGS. 9A-9B are utilized to implement a system for visually tracked spatial audio. In at least one embodiment, one or more systems depicted in FIGS. 9A-9B are utilized to implement one or more systems or processes such as those described in connection with one or more of FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIGS. 9A-9B are utilized to determine a head pose of a user and provide spatial audio for user based, at least in part, on head pose. In at least one embodiment, inference and/or training logic 915 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., 3D head pose determination 104 and/or 3D audio modulation 106), FIG. 2 (e.g., head pose tracker 216), FIG. 3 (e.g., head pose tracker 302), FIG. 4 (e.g., head pose tracker 406) and/or FIG. 5 (e.g., head pose tracker 506). In at least one embodiment, inference and/or training logic 915 performs at least one inferencing operation to determine a head pose using a neural network (e.g., based at least in part on using input frames 102 of FIG. 1, input frames from camera 204 of FIG. 2, input frames from camera 304 of FIG. 3, input frames from camera 404 of FIG. 4, and/or input frames from camera 504 of FIG. 5) as described with respect to one or more of FIGS. 1-8.

Neural Network Training and Deployment

[0125] FIG. 10 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 1006 is trained using a training dataset 1002. In at least one embodiment, training framework 1004 is a PyTorch framework, whereas in other embodiments, training framework 1004 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 1004 trains an untrained neural network 1006 and enables it to be trained using processing resources described herein to generate a trained neural network 1008. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.

[0126] In at least one embodiment, untrained neural network 1006 is trained using supervised learning, wherein training dataset 1002 includes an input paired with a desired output for an input, or where training dataset 1002 includes input having a known output and an output of neural network 1006 is manually graded. In at least one embodiment, untrained neural network 1006 is trained in a supervised manner and processes inputs from training dataset 1002 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 1006. In at least one embodiment, training framework 1004 adjusts weights that control untrained neural network 1006. In at least one embodiment, training framework 1004 includes tools to monitor how well untrained neural network 1006 is converging towards a model, such as trained neural network 1008, suitable for generating correct answers, such as in result 1014, based on input data such as a new dataset 1012. In at least one embodiment, training framework 1004 trains untrained neural network 1006 repeatedly while adjusting weights to refine an output of untrained neural network 1006 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 1004 trains untrained neural network 1006 until untrained neural network 1006 achieves a desired accuracy. In at least one embodiment, trained neural network 1008 can then be deployed to implement any number of machine learning operations.
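
A minimal supervised-training sketch using PyTorch (one possible choice of training framework 1004) with a stochastic gradient descent optimizer is shown below; the model, data, and hyperparameters are placeholders, not details from any described embodiment.

```python
# Minimal supervised-training sketch with PyTorch (an assumed framework choice);
# model, data, and hyperparameters below are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 4, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent

for epoch in range(10):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)   # compare outputs to desired outputs
        loss.backward()                          # propagate errors back through the network
        optimizer.step()                         # adjust weights
```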

[0127] In at least one embodiment, untrained neural network 1006 is trained using unsupervised learning, wherein untrained neural network 1006 attempts to train itself using unlabeled data. In at least one embodiment, in unsupervised learning, training dataset 1002 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 1006 can learn groupings within training dataset 1002 and can determine how individual inputs are related to training dataset 1002. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 1008 capable of performing operations useful in reducing dimensionality of new dataset 1012. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new dataset 1012 that deviate from normal patterns of new dataset 1012.

[0128] In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 1002 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 1004 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 1008 to adapt to new dataset 1012 without forgetting knowledge instilled within trained neural network 1008 during initial training.

[0129] In at least one embodiment, at least one component shown or described with respect to FIG. 10 is utilized to implement techniques and/or functions described in connection with FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 10 are utilized to implement a system for visually tracked spatial audio. In at least one embodiment, one or more systems depicted in FIG. 10 are utilized to implement one or more systems or processes such as those described in connection with one or more of FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 10 are utilized to determine a head pose of a user and provide spatial audio for user based, at least in part, on head pose. In at least one embodiment, trained neural network 1008 is used to perform and/or run at least one aspect described with respect to FIG. 1 (e.g., 3D head pose determination 104 and/or 3D audio modulation 106), FIG. 2 (e.g., head pose tracker 216), FIG. 3 (e.g., head pose tracker 302), FIG. 4 (e.g., head pose tracker 406) and/or FIG. 5 (e.g., head pose tracker 506). In at least one embodiment, trained neural network 1008 is used to perform at least one inferencing operation to determine a head pose (e.g., based at least in part on using input frames 102 of FIG. 1, input frames from camera 204 of FIG. 2, input frames from camera 304 of FIG. 3, input frames from camera 404 of FIG. 4, and/or input frames from camera 504 of FIG. 5) as described with respect to one or more of FIGS. 1-8.

Data Center

[0130] FIG. 11 illustrates an example data center 1100, in which at least one embodiment may be used. In at least one embodiment, data center 1100 includes a data center infrastructure layer 1110, a framework layer 1120, a software layer 1130 and an application layer 1140.

[0131] In at least one embodiment, as shown in FIG. 11, data center infrastructure layer 1110 may include a resource orchestrator 1112, grouped computing resources 1114, and node computing resources (“node C.R.s”) 1116(1)-1116(N), where “N” represents a positive integer (which may be a different integer “N” than used in other figures). In at least one embodiment, node C.R.s 1116(1)-1116(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 1118(1)-1118(N) (e.g., dynamic random access memory, solid state storage or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 1116(1)-1116(N) may be a server having one or more of above-mentioned computing resources.

[0132] In at least one embodiment, grouped computing resources 1114 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 1114 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.

[0133] In at least one embodiment, resource orchestrator 1112 may configure or otherwise control one or more node C.R.s 1116(1)-1116(N) and/or grouped computing resources 1114. In at least one embodiment, resource orchestrator 1112 may include a software design infrastructure (“SDI”) management entity for data center 1100. In at least one embodiment, resource orchestrator 1112 may include hardware, software or some combination thereof.

[0134] In at least one embodiment, as shown in FIG. 11, framework layer 1120 includes a job scheduler 1122, a configuration manager 1124, a resource manager 1126 and a distributed file system 1128. In at least one embodiment, framework layer 1120 may include a framework to support software 1132 of software layer 1130 and/or one or more application(s) 1142 of application layer 1140. In at least one embodiment, software 1132 or application(s) 1142 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 1120 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system 1128 for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler 1122 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 1100. In at least one embodiment, configuration manager 1124 may be capable of configuring different layers such as software layer 1130 and framework layer 1120 including Spark and distributed file system 1128 for supporting large-scale data processing. In at least one embodiment, resource manager 1126 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 1128 and job scheduler 1122. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources 1114 at data center infrastructure layer 1110. In at least one embodiment, resource manager 1126 may coordinate with resource orchestrator 1112 to manage these mapped or allocated computing resources.

[0135] In at least one embodiment, software 1132 included in software layer 1130 may include software used by at least portions of node C.R.s 1116(1)-1116(N), grouped computing resources 1114, and/or distributed file system 1128 of framework layer 1120. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.

[0136] In at least one embodiment, application(s) 1142 included in application layer 1140 may include one or more types of applications used by at least portions of node C.R.s 1116(1)-1116(N), grouped computing resources 1114, and/or distributed file system 1128 of framework layer 1120. In at least one embodiment, one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute application, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.

[0137] In at least one embodiment, any of configuration manager 1124, resource manager 1126, and resource orchestrator 1112 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 1100 from making possibly bad configuration decisions and may help avoid underutilized and/or poorly performing portions of a data center.

[0138] In at least one embodiment, data center 1100 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 1100. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 1100 by using weight parameters calculated through one or more training techniques described herein.

[0139] In at least one embodiment, data center may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources. Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.

[0140] Inference and/or training logic 915 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 915 are provided herein in conjunction with FIGS. 9A and/or 9B. In at least one embodiment, inference and/or training logic 915 may be used in system FIG. 11 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0141] In at least one embodiment, at least one component shown or described with respect to FIG. 11 is utilized to implement techniques and/or functions described in connection with FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 11 are utilized to implement a system for visually tracked spatial audio. In at least one embodiment, one or more systems depicted in FIG. 11 are utilized to implement one or more systems or processes such as those described in connection with one or more of FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 11 are utilized to determine a head pose of a user and provide spatial audio for user based, at least in part, on head pose. In at least one embodiment, inference and/or training logic 915 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., 3D head pose determination 104 and/or 3D audio modulation 106), FIG. 2 (e.g., head pose tracker 216), FIG. 3 (e.g., head pose tracker 302), FIG. 4 (e.g., head pose tracker 406) and/or FIG. 5 (e.g., head pose tracker 506). In at least one embodiment, inference and/or training logic 915 performs at least one inferencing operation to determine a head pose using a neural network (e.g., based at least in part on using input frames 102 of FIG. 1, input frames from camera 204 of FIG. 2, input frames from camera 304 of FIG. 3, input frames from camera 404 of FIG. 4, and/or input frames from camera 504 of FIG. 5) as described with respect to one or more of FIGS. 1-8.

Autonomous Vehicle

[0142] FIG. 12A illustrates an example of an autonomous vehicle 1200, according to at least one embodiment. In at least one embodiment, autonomous vehicle 1200 (alternatively referred to herein as “vehicle 1200”) may be, without limitation, a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, vehicle 1200 may be a semi-tractor-trailer truck used for hauling cargo. In at least one embodiment, vehicle 1200 may be an airplane, robotic vehicle, or other kind of vehicle.

[0143] Autonomous vehicles may be described in terms of automation levels, defined by National Highway Traffic Safety Administration (“NHTSA”), a division of US Department of Transportation, and Society of Automotive Engineers (“SAE”) “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (e.g., Standard No. J3016-201806, published on Jun. 15, 2018, Standard No. J3016-201609, published on Sep. 30, 2016, and previous and future versions of this standard). In at least one embodiment, vehicle 1200 may be capable of functionality in accordance with one or more of Level 1 through Level 5 of autonomous driving levels. For example, in at least one embodiment, vehicle 1200 may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on embodiment.

[0144] In at least one embodiment, vehicle 1200 may include, without limitation, components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. In at least one embodiment, vehicle 1200 may include, without limitation, a propulsion system 1250, such as an internal combustion engine, hybrid electric power plant, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system 1250 may be connected to a drive train of vehicle 1200, which may include, without limitation, a transmission, to enable propulsion of vehicle 1200. In at least one embodiment, propulsion system 1250 may be controlled in response to receiving signals from a throttle/accelerator(s) 1252.

[0145] In at least one embodiment, a steering system 1254, which may include, without limitation, a steering wheel, is used to steer vehicle 1200 (e.g., along a desired path or route) when propulsion system 1250 is operating (e.g., when vehicle 1200 is in motion). In at least one embodiment, steering system 1254 may receive signals from steering actuator(s) 1256. In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality. In at least one embodiment, a brake sensor system 1246 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 1248 and/or brake sensors.

[0146] In at least one embodiment, controller(s) 1236, which may include, without limitation, one or more system on chips (“SoCs”) (not shown in FIG. 12A) and/or graphics processing unit(s) (“GPU(s)”), provide signals (e.g., representative of commands) to one or more components and/or systems of vehicle 1200. For instance, in at least one embodiment, controller(s) 1236 may send signals to operate vehicle brakes via brake actuator(s) 1248, to operate steering system 1254 via steering actuator(s) 1256, to operate propulsion system 1250 via throttle/accelerator(s) 1252. In at least one embodiment, controller(s) 1236 may include one or more onboard (e.g., integrated) computing devices that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving vehicle 1200. In at least one embodiment, controller(s) 1236 may include a first controller for autonomous driving functions, a second controller for functional safety functions, a third controller for artificial intelligence functionality (e.g., computer vision), a fourth controller for infotainment functionality, a fifth controller for redundancy in emergency conditions, and/or other controllers. In at least one embodiment, a single controller may handle two or more of above functionalities, two or more controllers may handle a single functionality, and/or any combination thereof.

[0147] In at least one embodiment, controller(s) 1236 provide signals for controlling one or more components and/or systems of vehicle 1200 in response to sensor data received from one or more sensors (e.g., sensor inputs). In at least one embodiment, sensor data may be received from, for example and without limitation, global navigation satellite systems (“GNSS”) sensor(s) 1258 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1260, ultrasonic sensor(s) 1262, LIDAR sensor(s) 1264, inertial measurement unit (“IMU”) sensor(s) 1266 (e.g., accelerometer(s), gyroscope(s), a magnetic compass or magnetic compasses, magnetometer(s), etc.), microphone(s) 1296, stereo camera(s) 1268, wide-view camera(s) 1270 (e.g., fisheye cameras), infrared camera(s) 1272, surround camera(s) 1274 (e.g., 360 degree cameras), long-range cameras (not shown in FIG. 12A), mid-range camera(s) (not shown in FIG. 12A), speed sensor(s) 1244 (e.g., for measuring speed of vehicle 1200), vibration sensor(s) 1242, steering sensor(s) 1240, brake sensor(s) (e.g., as part of brake sensor system 1246), and/or other sensor types.

[0148] In at least one embodiment, one or more of controller(s) 1236 may receive inputs (e.g., represented by input data) from an instrument cluster 1232 of vehicle 1200 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (“HMI”) display 1234, an audible annunciator, a loudspeaker, and/or via other components of vehicle 1200. In at least one embodiment, outputs may include information such as vehicle velocity, speed, time, map data (e.g., a High Definition map (not shown in FIG. 12A)), location data (e.g., vehicle’s 1200 location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by controller(s) 1236, etc. For example, in at least one embodiment, HMI display 1234 may display information about presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.).

[0149] In at least one embodiment, vehicle 1200 further includes a network interface 1224 which may use wireless antenna(s) 1226 and/or modem(s) to communicate over one or more networks. For example, in at least one embodiment, network interface 1224 may be capable of communication over Long-Term Evolution (“LTE”), Wideband Code Division Multiple Access (“WCDMA”), Universal Mobile Telecommunications System (“UMTS”), Global System for Mobile communication (“GSM”), IMT-CDMA Multi-Carrier (“CDMA2000”) networks, etc. In at least one embodiment, wireless antenna(s) 1226 may also enable communication between objects in environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth Low Energy (“LE”), Z-Wave, ZigBee, etc., and/or low power wide-area network(s) (“LPWANs”), such as LoRaWAN, SigFox, etc. protocols.

[0150] Inference and/or training logic 915 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 915 are provided herein in conjunction with FIGS. 9A and/or 9B. In at least one embodiment, inference and/or training logic 915 may be used in system FIG. 12A for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0151] In at least one embodiment, at least one component shown or described with respect to FIG. 12A is utilized to implement techniques and/or functions described in connection with FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 12A are utilized to implement a system for visually tracked spatial audio. In at least one embodiment, one or more systems depicted in FIG. 12A are utilized to implement one or more systems or processes such as those described in connection with one or more of FIGS. 1-8. In at least one embodiment, techniques and/or functions described in connection with FIGS. 1-8 are used in an in-vehicle entertainment system with respect to a vehicle shown or described in connection with FIG. 12A. In at least one embodiment, one or more systems depicted in FIG. 12A are utilized to determine a head pose of a user and provide spatial audio for the user based, at least in part, on the head pose. In at least one embodiment, inference and/or training logic 915 of vehicle 1200 (shown with respect to FIG. 12C as part of CPU(s) 1206 and GPU(s) 1208) includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., 3D head pose determination 104 and/or 3D audio modulation 106), FIG. 2 (e.g., head pose tracker 216), FIG. 3 (e.g., head pose tracker 302), FIG. 4 (e.g., head pose tracker 406) and/or FIG. 5 (e.g., head pose tracker 506). In at least one embodiment, inference and/or training logic 915 performs at least one inferencing operation to determine a head pose using a neural network (e.g., based at least in part on using input frames 102 of FIG. 1, input frames from camera 204 of FIG. 2, input frames from camera 304 of FIG. 3, input frames from camera 404 of FIG. 4, and/or input frames from camera 504 of FIG. 5) as described with respect to one or more of FIGS. 1-8.
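As a concrete illustration of camera-based head pose determination of the kind referenced above, the following is a minimal sketch of one conventional approach: fitting a generic 3D face model to detected 2D facial landmarks with a perspective-n-point solve. The landmark set, model coordinates, and camera intrinsics are illustrative assumptions, not the method of the embodiments.

```python
# Illustrative sketch only: one conventional way to estimate a 6-DoF head pose
# from a single camera frame (2D facial landmarks + a generic 3D face model).
# The model points, intrinsics guess, and landmark order are assumptions.
import numpy as np
import cv2

# Generic 3D reference points on a face model (millimeters, model coordinates).
MODEL_POINTS = np.array([
    [0.0,    0.0,    0.0],    # nose tip
    [0.0,  -63.6,  -12.5],    # chin
    [-43.3,  32.7,  -26.0],   # left eye outer corner
    [43.3,   32.7,  -26.0],   # right eye outer corner
    [-28.9, -28.9,  -24.1],   # left mouth corner
    [28.9,  -28.9,  -24.1],   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points, frame_size):
    """Return (rotation_vector, translation_vector) of the head in camera space.

    image_points: (6, 2) float array of detected 2D landmarks matching MODEL_POINTS.
    frame_size:   (width, height) of the camera frame in pixels.
    """
    w, h = frame_size
    focal = w  # crude focal-length guess when intrinsics are unknown
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else (None, None)
```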

[0152] FIG. 12B illustrates an example of camera locations and fields of view for autonomous vehicle 1200 of FIG. 12A, according to at least one embodiment. In at least one embodiment, cameras and respective fields of view are one example embodiment and are not intended to be limiting. For instance, in at least one embodiment, additional and/or alternative cameras may be included and/or cameras may be located at different locations on vehicle 1200.

[0153] In at least one embodiment, camera types for cameras may include, but are not limited to, digital cameras that may be adapted for use with components and/or systems of vehicle 1200. In at least one embodiment, camera(s) may operate at automotive safety integrity level ("ASIL") B and/or at another ASIL. In at least one embodiment, camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on embodiment. In at least one embodiment, cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In at least one embodiment, a color filter array may include a red clear clear clear ("RCCC") color filter array, a red clear clear blue ("RCCB") color filter array, a red blue green clear ("RBGC") color filter array, a Foveon X3 color filter array, a Bayer sensor ("RGGB") color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity.

[0154] In at least one embodiment, one or more of camera(s) may be used to perform advanced driver assistance systems (“ADAS”) functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a Multi-Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control. In at least one embodiment, one or more of camera(s) (e.g., all cameras) may record and provide image data (e.g., video) simultaneously.

[0155] In at least one embodiment, one or more camera may be mounted in a mounting assembly, such as a custom designed (three-dimensional (“3D”) printed) assembly, in order to cut out stray light and reflections from within vehicle 1200 (e.g., reflections from dashboard reflected in windshield mirrors) which may interfere with camera image data capture abilities. With reference to wing-mirror mounting assemblies, in at least one embodiment, wing-mirror assemblies may be custom 3D printed so that a camera mounting plate matches a shape of a wing-mirror. In at least one embodiment, camera(s) may be integrated into wing-mirrors. In at least one embodiment, for side-view cameras, camera(s) may also be integrated within four pillars at each corner of a cabin.

[0156] In at least one embodiment, cameras with a field of view that include portions of an environment in front of vehicle 1200 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well as aid in, with help of one or more of controller(s) 1236 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining preferred vehicle paths. In at least one embodiment, front-facing cameras may be used to perform many similar ADAS functions as LIDAR, including, without limitation, emergency braking, pedestrian detection, and collision avoidance. In at least one embodiment, front-facing cameras may also be used for ADAS functions and systems including, without limitation, Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or other functions such as traffic sign recognition.

[0157] In at least one embodiment, a variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a CMOS (“complementary metal oxide semiconductor”) color imager. In at least one embodiment, a wide-view camera 1270 may be used to perceive objects coming into view from a periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera 1270 is illustrated in FIG. 12B, in other embodiments, there may be any number (including zero) wide-view cameras on vehicle 1200. In at least one embodiment, any number of long-range camera(s) 1298 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. In at least one embodiment, long-range camera(s) 1298 may also be used for object detection and classification, as well as basic object tracking.

[0158] In at least one embodiment, any number of stereo camera(s) 1268 may also be included in a front-facing configuration. In at least one embodiment, one or more of stereo camera(s) 1268 may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (“FPGA”) and a multi-core micro-processor with an integrated Controller Area Network (“CAN”) or Ethernet interface on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of an environment of vehicle 1200, including a distance estimate for all points in an image. In at least one embodiment, one or more of stereo camera(s) 1268 may include, without limitation, compact stereo vision sensor(s) that may include, without limitation, two camera lenses (one each on left and right) and an image processing chip that may measure distance from vehicle 1200 to target object and use generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo camera(s) 1268 may be used in addition to, or alternatively from, those described herein.

[0159] In at least one embodiment, cameras with a field of view that include portions of environment to sides of vehicle 1200 (e.g., side-view cameras) may be used for surround view, providing information used to create and update an occupancy grid, as well as to generate side impact collision warnings. For example, in at least one embodiment, surround camera(s) 1274 (e.g., four surround cameras as illustrated in FIG. 12B) could be positioned on vehicle 1200. In at least one embodiment, surround camera(s) 1274 may include, without limitation, any number and combination of wide-view cameras, fisheye camera(s), 360 degree camera(s), and/or similar cameras. For instance, in at least one embodiment, four fisheye cameras may be positioned on a front, a rear, and sides of vehicle 1200. In at least one embodiment, vehicle 1200 may use three surround camera(s) 1274 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround-view camera.

[0160] In at least one embodiment, cameras with a field of view that include portions of an environment behind vehicle 1200 (e.g., rear-view cameras) may be used for parking assistance, surround view, rear collision warnings, and creating and updating an occupancy grid. In at least one embodiment, a wide variety of cameras may be used including, but not limited to, cameras that are also suitable as a front-facing camera(s) (e.g., long-range cameras 1298 and/or mid-range camera(s) 1276, stereo camera(s) 1268), infrared camera(s) 1272, etc., as described herein.

[0161] Inference and/or training logic 915 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 915 are provided herein in conjunction with FIGS. 9A and/or 9B. In at least one embodiment, inference and/or training logic 915 may be used in system FIG. 12B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

[0162] In at least one embodiment, at least one component shown or described with respect to FIG. 12B is utilized to implement techniques and/or functions described in connection with FIGS. 1-8. In at least one embodiment, one or more systems depicted in FIG. 12B are utilized to implement a system for visually tracked spatial audio. In at least one embodiment, one or more systems depicted in FIG. 12B are utilized to implement one or more systems or processes such as those described in connection with one or more of FIGS. 1-8. In at least one embodiment, techniques and/or functions described in connection with FIGS. 1-8 are used in an in-vehicle entertainment system with respect to a vehicle shown or described in connection with FIG. 12B. In at least one embodiment, one or more systems depicted in FIG. 12B are utilized to determine a head pose of a user and provide spatial audio for the user based, at least in part, on the head pose. In at least one embodiment, inference and/or training logic 915 of vehicle 1200 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., 3D head pose determination 104 and/or 3D audio modulation 106), FIG. 2 (e.g., head pose tracker 216), FIG. 3 (e.g., head pose tracker 302), FIG. 4 (e.g., head pose tracker 406) and/or FIG. 5 (e.g., head pose tracker 506). In at least one embodiment, inference and/or training logic 915 performs at least one inferencing operation to determine a head pose using a neural network (e.g., based at least in part on using input frames 102 of FIG. 1, input frames from camera 204 of FIG. 2, input frames from camera 304 of FIG. 3, input frames from camera 404 of FIG. 4, and/or input frames from camera 504 of FIG. 5) as described with respect to one or more of FIGS. 1-8.
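The following is a minimal sketch of how a tracked head pose might be turned into per-ear delays and gains for a virtual sound source, using a simple spherical-head approximation; it is offered as illustration only and is not the 3D audio modulation of the embodiments.

```python
# Illustrative sketch: derive simple per-ear delays and gains for a virtual
# sound source from a tracked head position and yaw. Toy spherical-head model;
# the ear-placement convention and constants below are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, approximate

def binaural_params(head_pos, head_yaw, source_pos):
    """head_pos, source_pos: 3-vectors in meters; head_yaw: radians about the vertical axis."""
    head_pos = np.asarray(head_pos, dtype=float)
    source_pos = np.asarray(source_pos, dtype=float)
    # Unit vector toward the right ear under this sketch's convention
    # (pitch and roll are ignored for brevity).
    right = np.array([np.cos(head_yaw), np.sin(head_yaw), 0.0])
    left_ear = head_pos - HEAD_RADIUS * right
    right_ear = head_pos + HEAD_RADIUS * right
    d_l = np.linalg.norm(source_pos - left_ear)
    d_r = np.linalg.norm(source_pos - right_ear)
    # Interaural time difference and inverse-distance level difference.
    delay_l, delay_r = d_l / SPEED_OF_SOUND, d_r / SPEED_OF_SOUND
    gain_l, gain_r = 1.0 / max(d_l, 1e-3), 1.0 / max(d_r, 1e-3)
    return (delay_l, gain_l), (delay_r, gain_r)
```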

[0163] FIG. 12C is a block diagram illustrating an example system architecture for autonomous vehicle 1200 of FIG. 12A, according to at least one embodiment. In at least one embodiment, each of components, features, and systems of vehicle 1200 in FIG. 12C is illustrated as being connected via a bus 1202. In at least one embodiment, bus 1202 may include, without limitation, a CAN data interface (alternatively referred to herein as a “CAN bus”). In at least one embodiment, a CAN may be a network inside vehicle 1200 used to aid in control of various features and functionality of vehicle 1200, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. In at least one embodiment, bus 1202 may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). In at least one embodiment, bus 1202 may be read to find steering wheel angle, ground speed, engine revolutions per minute (“RPMs”), button positions, and/or other vehicle status indicators. In at least one embodiment, bus 1202 may be a CAN bus that is ASIL B compliant.
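As an illustration of reading signals such as a steering wheel angle from a CAN bus, the following sketch uses the python-can library (assuming python-can 4.x with a SocketCAN channel); the arbitration ID and signal scaling are hypothetical, since real encodings are vehicle-specific.

```python
# Illustrative sketch: reading frames from a CAN bus with the python-can
# library. The arbitration ID and signal scaling below are hypothetical;
# real IDs and encodings are defined by a vehicle's signal database.
import can

STEERING_ANGLE_ID = 0x25  # hypothetical CAN ID for a steering-angle frame

def read_steering_angle(channel="can0"):
    # Assumes python-can 4.x; older versions use bustype= instead of interface=.
    with can.interface.Bus(channel=channel, interface="socketcan") as bus:
        while True:
            msg = bus.recv(timeout=1.0)
            if msg is None:
                continue
            if msg.arbitration_id == STEERING_ANGLE_ID:
                # Hypothetical encoding: signed 16-bit value, 0.1 degree per bit.
                raw = int.from_bytes(msg.data[0:2], "big", signed=True)
                return raw * 0.1
```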

[0164] In at least one embodiment, in addition to, or alternatively from CAN, FlexRay and/or Ethernet protocols may be used. In at least one embodiment, there may be any number of busses forming bus 1202, which may include, without limitation, zero or more CAN busses, zero or more FlexRay busses, zero or more Ethernet busses, and/or zero or more other types of busses using different protocols. In at least one embodiment, two or more busses may be used to perform different functions, and/or may be used for redundancy. For example, a first bus may be used for collision avoidance functionality and a second bus may be used for actuation control. In at least one embodiment, each bus of bus 1202 may communicate with any of components of vehicle 1200, and two or more busses of bus 1202 may communicate with corresponding components. In at least one embodiment, each of any number of system(s) on chip(s) ("SoC(s)") 1204 (such as SoC 1204(A) and SoC 1204(B)), each of controller(s) 1236, and/or each computer within vehicle may have access to same input data (e.g., inputs from sensors of vehicle 1200), and may be connected to a common bus, such as a CAN bus.

[0165] In at least one embodiment, vehicle 1200 may include one or more controller(s) 1236, such as those described herein with respect to FIG. 12A. In at least one embodiment, controller(s) 1236 may be used for a variety of functions. In at least one embodiment, controller(s) 1236 may be coupled to any of various other components and systems of vehicle 1200, and may be used for control of vehicle 1200, artificial intelligence of vehicle 1200, infotainment for vehicle 1200, and/or other functions.

[0166] In at least one embodiment, vehicle 1200 may include any number of SoCs 1204. In at least one embodiment, each of SoCs 1204 may include, without limitation, central processing units (“CPU(s)”) 1206, graphics processing units (“GPU(s)”) 1208, processor(s) 1210, cache(s) 1212, accelerator(s) 1214, data store(s) 1216, and/or other components and features not illustrated. In at least one embodiment, SoC(s) 1204 may be used to control vehicle 1200 in a variety of platforms and systems. For example, in at least one embodiment, SoC(s) 1204 may be combined in a system (e.g., system of vehicle 1200) with a High Definition (“HD”) map 1222 which may obtain map refreshes and/or updates via network interface 1224 from one or more servers (not shown in FIG. 12C).

[0167] In at least one embodiment, CPU(s) 1206 may include a CPU cluster or CPU complex (alternatively referred to herein as a “CCPLEX”). In at least one embodiment, CPU(s) 1206 may include multiple cores and/or level two (“L2”) caches. For instance, in at least one embodiment, CPU(s) 1206 may include eight cores in a coherent multi-processor configuration. In at least one embodiment, CPU(s) 1206 may include four dual-core clusters where each cluster has a dedicated L2 cache (e.g., a 2 megabyte (MB) L2 cache). In at least one embodiment, CPU(s) 1206 (e.g., CCPLEX) may be configured to support simultaneous cluster operations enabling any combination of clusters of CPU(s) 1206 to be active at any given time.

[0168] In at least one embodiment, one or more of CPU(s) 1206 may implement power management capabilities that include, without limitation, one or more of following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when such core is not actively executing instructions due to execution of Wait for Interrupt ("WFI")/Wait for Event ("WFE") instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated. In at least one embodiment, CPU(s) 1206 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and hardware/microcode determines the best power state to enter for a core, cluster, and CCPLEX. In at least one embodiment, processing cores may support simplified power state entry sequences in software with work offloaded to microcode.
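The following sketch illustrates the kind of policy described above, selecting the deepest allowed power state whose exit latency fits an expected wakeup time; the state names, latencies, and power figures are assumptions, not the enhanced algorithm itself.

```python
# Illustrative sketch: given the allowed power states and an expected wakeup
# time, pick the deepest state whose exit latency still meets the deadline.
POWER_STATES = [
    # (name, exit_latency_us, relative_power) -- placeholder values
    ("active",      0,   1.00),
    ("clock_gated", 10,  0.60),
    ("power_gated", 200, 0.10),
]

def select_power_state(allowed, expected_wakeup_us):
    candidates = [s for s in POWER_STATES
                  if s[0] in allowed and s[1] <= expected_wakeup_us]
    # Deepest state = lowest relative power among states that can wake in time.
    return min(candidates, key=lambda s: s[2])[0] if candidates else "active"

# Example: with a 50 microsecond expected idle window, clock gating is chosen
# because power gating could not wake up in time.
assert select_power_state({"clock_gated", "power_gated"}, 50) == "clock_gated"
```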

[0169] In at least one embodiment, GPU(s) 1208 may include an integrated GPU (alternatively referred to herein as an “iGPU”). In at least one embodiment, GPU(s) 1208 may be programmable and may be efficient for parallel workloads. In at least one embodiment, GPU(s) 1208 may use an enhanced tensor instruction set. In at least one embodiment, GPU(s) 1208 may include one or more streaming microprocessors, where each streaming microprocessor may include a level one (“L1”) cache (e.g., an L1 cache with at least 96 KB storage capacity), and two or more streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity). In at least one embodiment, GPU(s) 1208 may include at least eight streaming microprocessors. In at least one embodiment, GPU(s) 1208 may use compute application programming interface(s) (API(s)). In at least one embodiment, GPU(s) 1208 may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA’s CUDA model).

[0170] In at least one embodiment, one or more of GPU(s) 1208 may be power-optimized for best performance in automotive and embedded use cases. For example, in at least one embodiment, GPU(s) 1208 could be fabricated on Fin field-effect transistor ("FinFET") circuitry. In at least one embodiment, each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 FP32 cores and 32 FP64 cores could be partitioned into four processing blocks. In at least one embodiment, each processing block could be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA Tensor cores for deep learning matrix arithmetic, a level zero ("L0") instruction cache, a warp scheduler, a dispatch unit, and/or a 64 KB register file. In at least one embodiment, streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. In at least one embodiment, streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads. In at least one embodiment, streaming microprocessors may include a combined L1 data cache and shared memory unit in order to improve performance while simplifying programming.

[0171] In at least one embodiment, one or more of GPU(s) 1208 may include a high bandwidth memory (“HBM”) and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth. In at least one embodiment, in addition to, or alternatively from, HBM memory, a synchronous graphics random-access memory (“SGRAM”) may be used, such as a graphics double data rate type five synchronous random-access memory (“GDDR5”).

[0172] In at least one embodiment, GPU(s) 1208 may include unified memory technology. In at least one embodiment, address translation services ("ATS") support may be used to allow GPU(s) 1208 to access CPU(s) 1206 page tables directly. In at least one embodiment, when a memory management unit ("MMU") of a GPU of GPU(s) 1208 experiences a miss, an address translation request may be transmitted to CPU(s) 1206. In response, a CPU of CPU(s) 1206 may look in its page tables for a virtual-to-physical mapping for an address and transmit a translation back to GPU(s) 1208, in at least one embodiment. In at least one embodiment, unified memory technology may allow a single unified virtual address space for memory of both CPU(s) 1206 and GPU(s) 1208, thereby simplifying GPU(s) 1208 programming and porting of applications to GPU(s) 1208.
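The following conceptual sketch models the ATS flow described above, in which a GPU-side MMU miss triggers a translation request against CPU page tables; it is a software analogy under assumed class names, not a hardware description.

```python
# Conceptual sketch (not hardware-accurate): a GPU-side MMU that, on a miss,
# requests an address translation from CPU page tables and caches the returned
# virtual-to-physical mapping, mirroring the ATS flow described above.
class CpuPageTables:
    def __init__(self, mappings):
        self._mappings = dict(mappings)  # virtual page -> physical page

    def translate(self, virtual_page):
        return self._mappings.get(virtual_page)

class GpuMmu:
    def __init__(self, cpu_page_tables):
        self._tlb = {}                   # cached translations
        self._cpu = cpu_page_tables

    def translate(self, virtual_page):
        if virtual_page in self._tlb:    # hit: use cached mapping
            return self._tlb[virtual_page]
        physical = self._cpu.translate(virtual_page)   # ATS-style request
        if physical is None:
            raise KeyError(f"no mapping for page {virtual_page:#x}")
        self._tlb[virtual_page] = physical             # cache the translation
        return physical

mmu = GpuMmu(CpuPageTables({0x1000: 0x8000}))
assert mmu.translate(0x1000) == 0x8000
```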

[0173] In at least one embodiment, GPU(s) 1208 may include any number of access counters that may keep track of frequency of access of GPU(s) 1208 to memory of other processors. In at least one embodiment, access counter(s) may help ensure that memory pages are moved to physical memory of a processor that is accessing pages most frequently, thereby improving efficiency for memory ranges shared between processors.

[0174] In at least one embodiment, one or more of SoC(s) 1204 may include any number of cache(s) 1212, including those described herein. For example, in at least one embodiment, cache(s) 1212 could include a level three ("L3") cache that is available to both CPU(s) 1206 and GPU(s) 1208 (e.g., that is connected to CPU(s) 1206 and GPU(s) 1208). In at least one embodiment, cache(s) 1212 may include a write-back cache that may keep track of states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.). In at least one embodiment, an L3 cache may include 4 MB of memory or more, depending on embodiment, although smaller cache sizes may be used.

[0175] In at least one embodiment, one or more of SoC(s) 1204 may include one or more accelerator(s) 1214 (e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, SoC(s) 1204 may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4 MB of SRAM) may enable a hardware acceleration cluster to accelerate neural networks and other calculations. In at least one embodiment, a hardware acceleration cluster may be used to complement GPU(s) 1208 and to off-load some of tasks of GPU(s) 1208 (e.g., to free up more cycles of GPU(s) 1208 for performing other tasks). In at least one embodiment, accelerator(s) 1214 could be used for targeted workloads (e.g., perception, convolutional neural networks ("CNNs"), recurrent neural networks ("RNNs"), etc.) that are stable enough to be amenable to acceleration. In at least one embodiment, a CNN may include region-based or regional convolutional neural networks ("RCNNs") and Fast RCNNs (e.g., as used for object detection) or another type of CNN.

[0176] In at least one embodiment, accelerator(s) 1214 (e.g., hardware acceleration cluster) may include one or more deep learning accelerator ("DLA"). In at least one embodiment, DLA(s) may include, without limitation, one or more Tensor processing units ("TPUs") that may be configured to provide an additional ten trillion operations per second for deep learning applications and inferencing. In at least one embodiment, TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.). In at least one embodiment, DLA(s) may further be optimized for a specific set of neural network types and floating point operations, as well as inferencing. In at least one embodiment, design of DLA(s) may provide more performance per millimeter than a typical general-purpose GPU, and typically vastly exceeds performance of a CPU. In at least one embodiment, TPU(s) may perform several functions, including a single-instance convolution function, supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.
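As an illustration of a single-instance convolution with INT8 features and weights of the kind mentioned above, the following numpy sketch performs an int8 convolution with int32 accumulation and requantization; the scale factor and tensor shapes are illustrative, and this is not the accelerator's datapath.

```python
# Illustrative sketch of an INT8 convolution of the kind a DLA/TPU accelerates:
# int8 features and weights, int32 accumulation, then requantization to int8.
import numpy as np

def conv2d_int8(feature, weight, out_scale=0.05):
    """feature: (H, W) int8, weight: (kH, kW) int8 -> (H-kH+1, W-kW+1) int8."""
    kh, kw = weight.shape
    h, w = feature.shape
    acc = np.zeros((h - kh + 1, w - kw + 1), dtype=np.int32)
    for i in range(kh):
        for j in range(kw):
            acc += feature[i:i + acc.shape[0], j:j + acc.shape[1]].astype(np.int32) \
                   * np.int32(weight[i, j])
    # Requantize the int32 accumulator back to int8 output.
    return np.clip(np.round(acc * out_scale), -128, 127).astype(np.int8)

feat = np.random.randint(-128, 128, (8, 8), dtype=np.int8)
kern = np.random.randint(-128, 128, (3, 3), dtype=np.int8)
print(conv2d_int8(feat, kern).shape)  # (6, 6)
```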

[0177] In at least one embodiment, DLA(s) may perform any function of GPU(s) 1208, and by using an inference accelerator, for example, a designer may target either DLA(s) or GPU(s) 1208 for any function. For example, in at least one embodiment, a designer may focus processing of CNNs and floating point operations on DLA(s) and leave other functions to GPU(s) 1208 and/or accelerator(s) 1214.

[0178] In at least one embodiment, accelerator(s) 1214 may include programmable vision accelerator (“PVA”), which may alternatively be referred to herein as a computer vision accelerator. In at least one embodiment, PVA may be designed and configured to accelerate computer vision algorithms for advanced driver assistance system (“ADAS”) 1238, autonomous driving, augmented reality (“AR”) applications, and/or virtual reality (“VR”) applications. In at least one embodiment, PVA may provide a balance between performance and flexibility. For example, in at least one embodiment, each PVA may include, for example and without limitation, any number of reduced instruction set computer (“RISC”) cores, direct memory access (“DMA”), and/or any number of vector processors.

[0179] In at least one embodiment, RISC cores may interact with image sensors (e.g., image sensors of any cameras described herein), image signal processor(s), etc. In at least one embodiment, each RISC core may include any amount of memory. In at least one embodiment, RISC cores may use any of a number of protocols, depending on embodiment. In at least one embodiment, RISC cores may execute a real-time operating system (“RTOS”). In at least one embodiment, RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits (“ASICs”), and/or memory devices. For example, in at least one embodiment, RISC cores could include an instruction cache and/or a tightly coupled RAM.

[0180] In at least one embodiment, DMA may enable components of PVA to access system memory independently of CPU(s) 1206. In at least one embodiment, DMA may support any number of features used to provide optimization to a PVA including, but not limited to, supporting multi-dimensional addressing and/or circular addressing. In at least one embodiment, DMA may support up to six or more dimensions of addressing, which may include, without limitation, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.

[0181] In at least one embodiment, vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. In at least one embodiment, a PVA may include a PVA core and two vector processing subsystem partitions. In at least one embodiment, a PVA core may include a processor subsystem, DMA engine(s) (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, a vector processing subsystem may operate as a primary processing engine of a PVA, and may include a vector processing unit (“VPU”), an instruction cache, and/or vector memory (e.g., “VMEM”). In at least one embodiment, VPU core may include a digital signal processor such as, for example, a single instruction, multiple data (“SIMD”), very long instruction word (“VLIW”) digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may enhance throughput and speed.

[0182] In at least one embodiment, each of vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in at least one embodiment, each of vector processors may be configured to execute independently of other vector processors. In at least one embodiment, vector processors that are included in a particular PVA may be configured to employ data parallelism. For instance, in at least one embodiment, plurality of vector processors included in a single PVA may execute a common computer vision algorithm, but on different regions of an image. In at least one embodiment, vector processors included in a particular PVA may simultaneously execute different computer vision algorithms, on one image, or even execute different algorithms on sequential images or portions of an image. In at least one embodiment, among other things, any number of PVAs may be included in hardware acceleration cluster and any number of vector processors may be included in each PVA. In at least one embodiment, PVA may include additional error correcting code (“ECC”) memory, to enhance overall system safety.
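The following sketch illustrates the data-parallel pattern described above, applying one common vision kernel to different regions of the same image; a thread pool stands in for a PVA's vector processors, and the gradient kernel is only a placeholder workload.

```python
# Illustrative sketch of data parallelism: one common kernel applied to
# different regions of the same image, with a thread pool standing in for
# multiple vector processors.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gradient_x(region):
    """A simple horizontal-gradient kernel as the shared computer vision work."""
    return np.abs(np.diff(region.astype(np.int32), axis=1))

def process_in_stripes(image, num_workers=4):
    stripes = np.array_split(image, num_workers, axis=0)   # one region per worker
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(gradient_x, stripes))
    return np.vstack(results)

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(process_in_stripes(image).shape)  # (480, 639)
```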

[0183] In at least one embodiment, accelerator(s) 1214 may include a computer vision network on-chip and static random-access memory (“SRAM”), for providing a high-bandwidth, low latency SRAM for accelerator(s) 1214. In at least one embodiment, on-chip memory may include at least 4 MB SRAM, comprising, for example and without limitation, eight field-configurable memory blocks, that may be accessible by both a PVA and a DLA. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus (“APB”) interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, a PVA and a DLA may access memory via a backbone that provides a PVA and a DLA with high-speed access to memory. In at least one embodiment, a backbone may include a computer vision network on-chip that interconnects a PVA and a DLA to memory (e.g., using APB).

[0184] In at least one embodiment, a computer vision network on-chip may include an interface that determines, before transmission of any control signal/address/data, that both a PVA and a DLA provide ready and valid signals. In at least one embodiment, an interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. In at least one embodiment, an interface may comply with International Organization for Standardization (“ISO”) 26262 or International Electrotechnical Commission (“IEC”) 61508 standards, although other standards and protocols may be used.

[0185] In at least one embodiment, one or more of SoC(s) 1204 may include a real-time ray-tracing hardware accelerator. In at least one embodiment, real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses.

[0186] In at least one embodiment, accelerator(s) 1214 can have a wide array of uses for autonomous driving. In at least one embodiment, a PVA may be used for key processing stages in ADAS and autonomous vehicles. In at least one embodiment, a PVA’s capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, a PVA performs well on semi-dense or dense regular computation, even on small data sets, which might require predictable run-times with low latency and low power. In at least one embodiment, such as in vehicle 1200, PVAs might be designed to run classic computer vision algorithms, as they can be efficient at object detection and operating on integer math.

[0187] For example, according to at least one embodiment of technology, a PVA is used to perform computer stereo vision. In at least one embodiment, a semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for Level 3-5 autonomous driving use motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). In at least one embodiment, a PVA may perform computer stereo vision functions on inputs from two monocular cameras.
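As an illustration, the following sketch computes a disparity map with OpenCV's semi-global block matcher, a readily available stand-in for the semi-global matching mentioned above; the parameters are illustrative and do not reflect a PVA implementation.

```python
# Illustrative sketch: computing a disparity map from two rectified grayscale
# frames with OpenCV's semi-global block matcher. Parameters are illustrative.
import cv2

def disparity_map(left_gray, right_gray):
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,    # must be a multiple of 16
        blockSize=5,
        P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
        P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
        uniquenessRatio=10,
    )
    # compute() returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype("float32") / 16.0
```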

[0188] In at least one embodiment, a PVA may be used to perform dense optical flow. For example, in at least one embodiment, a PVA could process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide processed RADAR data. In at least one embodiment, a PVA is used for time of flight depth processing, by processing raw time of flight data to provide processed time of flight data, for example.
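As an illustration of FFT-based processing of raw RADAR data in the spirit of the 4D Fast Fourier Transform mentioned above, the following numpy sketch transforms a raw data cube; the cube layout (samples x chirps x RX x TX) and sizes are assumptions.

```python
# Illustrative sketch: FFT processing of a raw RADAR data cube with numpy.
import numpy as np

def radar_fft(raw_cube):
    """raw_cube: complex array shaped (samples, chirps, rx, tx)."""
    spectrum = np.fft.fftn(raw_cube, axes=(0, 1, 2, 3))
    return np.abs(spectrum)   # magnitude for downstream detection stages

cube = np.random.randn(256, 64, 4, 3) + 1j * np.random.randn(256, 64, 4, 3)
print(radar_fft(cube).shape)  # (256, 64, 4, 3)
```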

[0189] In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. In at least one embodiment, a DLA may run a neural network for regressing confidence value. In at least one embodiment, neural network may take as its input at least some subset of parameters, such as bounding box dimensions, ground plane estimate obtained (e.g., from another subsystem), output from IMU sensor(s) 1266 that correlates with vehicle 1200 orientation, distance, 3D location estimates of object obtained from neural network and/or other sensors (e.g., LIDAR sensor(s) 1264 or RADAR sensor(s) 1260), among others.
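The following sketch illustrates the thresholding logic described above, in which only detections whose regressed confidence exceeds a threshold are treated as true positives eligible to trigger emergency braking; the threshold value and detection format are illustrative.

```python
# Illustrative sketch of confidence thresholding for AEB triggers: only
# detections above a confidence threshold are treated as true positives.
AEB_CONFIDENCE_THRESHOLD = 0.9   # illustrative value

def aeb_triggers(detections, threshold=AEB_CONFIDENCE_THRESHOLD):
    """detections: iterable of dicts like {'label': str, 'confidence': float}."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"label": "pedestrian", "confidence": 0.97},
    {"label": "pedestrian", "confidence": 0.42},   # likely a false positive
]
print(aeb_triggers(detections))  # only the 0.97 detection survives
```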

[0190] In at least one embodiment, one or more of SoC(s) 1204 may include data store(s) 1216 (e.g., memory). In at least one embodiment, data store(s) 1216 may be on-chip memory of SoC(s) 1204, which may store neural networks to be executed on GPU(s) 1208 and/or a DLA. In at least one embodiment, data store(s) 1216 may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. In at least one embodiment, data store(s) 1216 may comprise L2 or L3 cache(s).

[0191] In at least one embodiment, one or more of SoC(s) 1204 may include any number of processor(s) 1210 (e.g., embedded processors). In at least one embodiment, processor(s) 1210 may include a boot and power management processor that may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. In at least one embodiment, a boot and power management processor may be a part of a boot sequence of SoC(s) 1204 and may provide runtime power management services. In at least one embodiment, a boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC(s) 1204 thermals and temperature sensors, and/or management of SoC(s) 1204 power states. In at least one embodiment, each temperature sensor may be implemented as a ring-oscillator whose output frequency is proportional to temperature, and SoC(s) 1204 may use ring-oscillators to detect temperatures of CPU(s) 1206, GPU(s) 1208, and/or accelerator(s) 1214. In at least one embodiment, if temperatures are determined to exceed a threshold, then a boot and power management processor may enter a temperature fault routine and put SoC(s) 1204 into a lower power state and/or put vehicle 1200 into a chauffeur to safe stop mode (e.g., bring vehicle 1200 to a safe stop).
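The following sketch illustrates the thermal supervision described above, mapping a ring-oscillator frequency to a temperature estimate and entering a lower power state past a threshold; the linear model and constants are placeholders.

```python
# Illustrative sketch of thermal supervision: a ring oscillator's output
# frequency is mapped to a temperature estimate and, past a threshold, a fault
# routine lowers the power state. Constants below are placeholders.
TEMP_THRESHOLD_C = 95.0

def estimate_temp_c(ring_osc_hz, hz_per_degree=1.0e4, base_hz=1.0e6, base_temp_c=25.0):
    # Assumed linear model: output frequency varies with temperature.
    return base_temp_c + (ring_osc_hz - base_hz) / hz_per_degree

def thermal_supervisor(ring_osc_hz, enter_low_power_state):
    temp = estimate_temp_c(ring_osc_hz)
    if temp > TEMP_THRESHOLD_C:
        enter_low_power_state()   # temperature fault routine
    return temp

thermal_supervisor(1.9e6, lambda: print("entering lower power state"))
```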

[0192] In at least one embodiment, processor(s) 1210 may further include a set of embedded processors that may serve as an audio processing engine which may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, and a broad and flexible range of audio I/O interfaces. In at least one embodiment, an audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM.

[0193] In at least one embodiment, processor(s) 1210 may further include an always-on processor engine that may provide necessary hardware features to support low power sensor management and wake use cases. In at least one embodiment, an always-on processor engine may include, without limitation, a processor core, a tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic.

[0194] In at least one embodiment, processor(s) 1210 may further include a safety cluster engine that includes, without limitation, a dedicated processor subsystem to handle safety management for automotive applications. In at least one embodiment, a safety cluster engine may include, without limitation, two or more processor cores, a tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic. In a safety mode, two or more cores may operate, in at least one embodiment, in a lockstep mode and function as a single core with comparison logic to detect any differences between their operations. In at least one embodiment, processor(s) 1210 may further include a real-time camera engine that may include, without limitation, a dedicated processor subsystem for handling real-time camera management. In at least one embodiment, processor(s) 1210 may further include a high-dynamic range signal processor that may include, without limitation, an image signal processor that is a hardware engine that is part of a camera processing pipeline.

[0195] In at least one embodiment, processor(s) 1210 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce a final image for a player window. In at least one embodiment, a video image compositor may perform lens distortion correction on wide-view camera(s) 1270, surround camera(s) 1274, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC 1204, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change a vehicle’s destination, activate or change a vehicle’s infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to a driver when a vehicle is operating in an autonomous mode and are disabled otherwise.

[0196] In at least one embodiment, a video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. For example, in at least one embodiment, where motion occurs in a video, noise reduction weights spatial information appropriately, decreasing weights of information provided by adjacent frames. In at least one embodiment, where an image or portion of an image does not include motion, temporal noise reduction performed by video image compositor may use information from a previous image to reduce noise in a current image.
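The following sketch illustrates motion-adaptive temporal noise reduction as described above, reducing the weight given to the previous frame wherever frame-to-frame motion is detected; the thresholds and blend weights are illustrative.

```python
# Illustrative sketch: blend the previous frame into the current one, but
# reduce its weight where frame-to-frame motion is detected.
import numpy as np

def temporal_denoise(current, previous, base_weight=0.5, motion_threshold=12):
    current = current.astype(np.float32)
    previous = previous.astype(np.float32)
    motion = np.abs(current - previous)                      # crude motion cue
    weight = np.where(motion > motion_threshold, 0.1, base_weight)
    blended = (1.0 - weight) * current + weight * previous
    return np.clip(blended, 0, 255).astype(np.uint8)
```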

[0197] In at least one embodiment, a video image compositor may also be configured to perform stereo rectification on input stereo lens frames. In at least one embodiment, a video image compositor may further be used for user interface composition when an operating system desktop is in use, and GPU(s) 1208 are not required to continuously render new surfaces. In at least one embodiment, when GPU(s) 1208 are powered on and active doing 3D rendering, a video image compositor may be used to offload GPU(s) 1208 to improve performance and responsiveness.

[0198] In at least one embodiment, one or more SoC of SoC(s) 1204 may further include a mobile industry processor interface (“MIPI”) camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for a camera and related pixel input functions. In at least one embodiment, one or more of SoC(s) 1204 may further include an input/output controller(s) that may be controlled by software and may be used for receiving I/O signals that are uncommitted to a specific role.

[0199] In at least one embodiment, one or more SoC of SoC(s) 1204 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio encoders/decoders ("codecs"), power management, and/or other devices. In at least one embodiment, SoC(s) 1204 may be used to process data from cameras (e.g., connected over Gigabit Multimedia Serial Link and Ethernet channels), sensors (e.g., LIDAR sensor(s) 1264, RADAR sensor(s) 1260, etc. that may be connected over Ethernet channels), data from bus 1202 (e.g., speed of vehicle 1200, steering wheel position, etc.), data from GNSS sensor(s) 1258 (e.g., connected over an Ethernet bus or a CAN bus), etc. In at least one embodiment, one or more SoC of SoC(s) 1204 may further include dedicated high-performance mass storage controllers that may include their own DMA engines, and that may be used to free CPU(s) 1206 from routine data management tasks.

[0200] In at least one embodiment, SoC(s) 1204 may be an end-to-end platform with a flexible architecture that spans automation Levels 3-5, thereby providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, and provides a platform for a flexible, reliable driving software stack, along with deep learning tools. In at least one embodiment, SoC(s) 1204 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, in at least one embodiment, accelerator(s) 1214, when combined with CPU(s) 1206, GPU(s) 1208, and data store(s) 1216, may provide for a fast, efficient platform for Level 3-5 autonomous vehicles.

[0201] In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using a high-level programming language, such as C, to execute a wide variety of processing algorithms across a wide variety of visual data. However, in at least one embodiment, CPUs are oftentimes unable to meet performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In at least one embodiment, many CPUs are unable to execute complex object detection algorithms in real-time, which is used in in-vehicle ADAS applications and in practical Level 3-5 autonomous vehicles.

[0202] Embodiments described herein allow for multiple neural networks to be performed simultaneously and/or sequentially, and for results to be combined together to enable Level 3-5 autonomous driving functionality. For example, in at least one embodiment, a CNN executing on a DLA or a discrete GPU (e.g., GPU(s) 1220) may include text and word recognition, allowing reading and understanding of traffic signs, including signs for which a neural network has not been specifically trained. In at least one embodiment, a DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of a sign, and to pass that semantic understanding to path planning modules running on a CPU Complex.

[0203] In at least one embodiment, multiple neural networks may be run simultaneously, as for Level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign stating “Caution: flashing lights indicate icy conditions,” along with an electric light, may be independently or collectively interpreted by several neural networks. In at least one embodiment, such warning sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), text “flashing lights indicate icy conditions” may be interpreted by a second deployed neural network, which informs a vehicle’s path planning software (preferably executing on a CPU Complex) that when flashing lights are detected, icy conditions exist. In at least one embodiment, a flashing light may be identified by operating a third deployed neural network over multiple frames, informing a vehicle’s path-planning software of a presence (or an absence) of flashing lights. In at least one embodiment, all three neural networks may run simultaneously, such as within a DLA and/or on GPU(s) 1208.

[0204] In at least one embodiment, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify presence of an authorized driver and/or owner of vehicle 1200. In at least one embodiment, an always-on sensor processing engine may be used to unlock a vehicle when an owner approaches a driver door and turns on lights, and, in a security mode, to disable such vehicle when an owner leaves such vehicle. In this way, SoC(s) 1204 provide for security against theft and/or carjacking.

[0205] In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from microphones 1296 to detect and identify emergency vehicle sirens. In at least one embodiment, SoC(s) 1204 use a CNN for classifying environmental and urban sounds, as well as classifying visual data. In at least one embodiment, a CNN running on a DLA is trained to identify a relative closing speed of an emergency vehicle (e.g., by using a Doppler effect). In at least one embodiment, a CNN may also be trained to identify emergency vehicles specific to a local area in which a vehicle is operating, as identified by GNSS sensor(s) 1258. In at least one embodiment, when operating in Europe, a CNN will seek to detect European sirens, and when in North America, a CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing a vehicle, pulling over to a side of a road, parking a vehicle, and/or idling a vehicle, with assistance of ultrasonic sensor(s) 1262, until emergency vehicles pass.
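As context for the Doppler-based closing-speed estimate mentioned above, the following sketch applies the classical Doppler relation for a moving source, f_obs = f_src * c / (c - v); it is textbook physics offered for illustration, not the internals of the CNN.

```python
# Illustrative sketch: estimate a relative closing speed from an observed
# shift in siren pitch using the classical Doppler relation for a moving source.
SPEED_OF_SOUND = 343.0  # m/s

def closing_speed(f_source_hz, f_observed_hz, c=SPEED_OF_SOUND):
    """Positive result: source approaching; negative: receding."""
    return c * (1.0 - f_source_hz / f_observed_hz)

# A 960 Hz siren heard at 1000 Hz implies roughly 13.7 m/s of closing speed.
print(round(closing_speed(960.0, 1000.0), 1))
```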

[0206] In at least one embodiment, vehicle 1200 may include CPU(s) 1218 (e.g., discrete CPU(s), or dCPU(s)), that may be coupled to SoC(s) 1204 via a high-speed interconnect (e.g., PCIe). In at least one embodiment, CPU(s) 1218 may include an X86 processor, for example. CPU(s) 1218 may be used to perform any of a variety of functions, including arbitrating potentially inconsistent results between ADAS sensors and SoC(s) 1204, and/or monitoring status and health of controller(s) 1236 and/or an infotainment system on a chip (“infotainment SoC”) 1230, for example.

[0207] In at least one embodiment, vehicle 1200 may include GPU(s) 1220 (e.g., discrete GPU(s), or dGPU(s)), that may be coupled to SoC(s) 1204 via a high-speed interconnect (e.g., NVIDIA’s NVLINK channel). In at least one embodiment, GPU(s) 1220 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on input (e.g., sensor data) from sensors of a vehicle 1200.

[0208] In at least one embodiment, vehicle 1200 may further include network interface 1224 which may include, without limitation, wireless antenna(s) 1226 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, network interface 1224 may be used to enable wireless connectivity to Internet cloud services (e.g., with server(s) and/or other network devices), with other vehicles, and/or with computing devices (e.g., client devices of passengers). In at least one embodiment, to communicate with other vehicles, a direct link may be established between vehicle 1200 and another vehicle and/or an indirect link may be established (e.g., across networks and over the Internet). In at least one embodiment, direct links may be provided using a vehicle-to-vehicle communication link. In at least one embodiment, a vehicle-to-vehicle communication link may provide vehicle 1200 information about vehicles in proximity to vehicle 1200 (e.g., vehicles in front of, on a side of, and/or behind vehicle 1200). In at least one embodiment, such aforementioned functionality may be part of a cooperative adaptive cruise control functionality of vehicle 1200.

[0209] In at least one embodiment, network interface 1224 may include an SoC that provides modulation and demodulation functionality and enables controller(s) 1236 to communicate over wireless networks. In at least one embodiment, network interface 1224 may include a radio frequency front-end for up-conversion from baseband to radio frequency, and down conversion from radio frequency to baseband. In at least one embodiment, frequency conversions may be performed in any technically feasible fashion. For example, frequency conversions could be performed through well-known processes, and/or using super-heterodyne processes. In at least one embodiment, radio frequency front end functionality may be provided by a separate chip. In at least one embodiment, network interfaces may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.

[0210] In at least one embodiment, vehicle 1200 may further include data store(s) 1228 which may include, without limitation, off-chip (e.g., off SoC(s) 1204) storage. In at least one embodiment, data store(s) 1228 may include, without limitation, one or more storage elements including RAM, SRAM, dynamic random-access memory (“DRAM”), video random-access memory (“VRAM”), flash memory, hard disks, and/or other components and/or devices that may store at least one bit of data.

[0211] In at least one embodiment, vehicle 1200 may further include GNSS sensor(s) 1258 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions. In at least one embodiment, any number of GNSS sensor(s) 1258 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet-to-Serial (e.g., RS-232) bridge.

[0212] In at least one embodiment, vehicle 1200 may further include RADAR sensor(s) 1260. In at least one embodiment, RADAR sensor(s) 1260 may be used by vehicle 1200 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, RADAR functional safety levels may be ASIL B. In at least one embodiment, RADAR sensor(s) 1260 may use a CAN bus and/or bus 1202 (e.g., to transmit data generated by RADAR sensor(s) 1260) for control and to access object tracking data, with access to Ethernet channels to access raw data in some examples. In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, RADAR sensor(s) 1260 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more sensor of RADAR sensor(s) 1260 is a Pulse Doppler RADAR sensor.

[0213] In at least one embodiment, RADAR sensor(s) 1260 may include different configurations, such as long-range with narrow field of view, short-range with wide field of view, short-range side coverage, etc. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250 m (meter) range. In at least one embodiment, RADAR sensor(s) 1260 may help in distinguishing between static and moving objects, and may be used by ADAS system 1238 for emergency brake assist and forward collision warning. In at least one embodiment, sensor(s) 1260 included in a long-range RADAR system may include, without limitation, monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennae and a high-speed CAN and FlexRay interface. In at least one embodiment, with six antennae, a central four antennae may create a focused beam pattern, designed to record vehicle's 1200 surroundings at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, another two antennae may expand field of view, making it possible to quickly detect vehicles entering or leaving a lane of vehicle 1200.

[0214] In at least one embodiment, mid-range RADAR systems may include, as an example, a range of up to 160 m (front) or 80 m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, short-range RADAR systems may include, without limitation, any number of RADAR sensor(s) 1260 designed to be installed at both ends of a rear bumper. When installed at both ends of a rear bumper, in at least one embodiment, a RADAR sensor system may create two beams that constantly monitor blind spots in a rear direction and next to a vehicle. In at least one embodiment, short-range RADAR systems may be used in ADAS system 1238 for blind spot detection and/or lane change assist.

[0215] In at least one embodiment, vehicle 1200 may further include ultrasonic sensor(s) 1262. In at least one embodiment, ultrasonic sensor(s) 1262, which may be positioned at a front, a back, and/or side location of vehicle 1200, may be used for parking assist and/or to create and update an occupancy grid. In at least one embodiment, a wide variety of ultrasonic sensor(s) 1262 may be used, and different ultrasonic sensor(s) 1262 may be used for different ranges of detection (e.g., 2.5 m, 4 m). In at least one embodiment, ultrasonic sensor(s) 1262 may operate at functional safety levels of ASIL B.

[0216] In at least one embodiment, vehicle 1200 may include LIDAR sensor(s) 1264. In at least one embodiment, LIDAR sensor(s) 1264 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. In at least one embodiment, LIDAR sensor(s) 1264 may operate at functional safety level ASIL B. In at least one embodiment, vehicle 1200 may include multiple LIDAR sensors 1264 (e.g., two, four, six, etc.) that may use an Ethernet channel (e.g., to provide data to a Gigabit Ethernet switch).

[0217] In at least one embodiment, LIDAR sensor(s) 1264 may be capable of providing a list of objects and their distances for a 360-degree field of view. In at least one embodiment, commercially available LIDAR sensor(s) 1264 may have an advertised range of approximately 100 m, with an accuracy of 2 cm to 3 cm, and with support for a 100 Mbps Ethernet connection, for example. In at least one embodiment, one or more non-protruding LIDAR sensors may be used. In such an embodiment, LIDAR sensor(s) 1264 may include a small device that may be embedded into a front, a rear, a side, and/or a corner location of vehicle 1200. In at least one embodiment, LIDAR sensor(s) 1264, in such an embodiment, may provide up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200 m range even for low-reflectivity objects. In at least one embodiment, front-mounted LIDAR sensor(s) 1264 may be configured for a horizontal field of view between 45 degrees and 135 degrees.

[0218] In at least one embodiment, LIDAR technologies, such as 3D flash LIDAR, may also be used. In at least one embodiment, 3D flash LIDAR uses a flash of a laser as a transmission source, to illuminate surroundings of vehicle 1200 up to approximately 200 m. In at least one embodiment, a flash LIDAR unit includes, without limitation, a receptor, which records laser pulse transit time and reflected light on each pixel, which in turn corresponds to a range from vehicle 1200 to objects. In at least one embodiment, flash LIDAR may allow for highly accurate and distortion-free images of surroundings to be generated with every laser flash. In at least one embodiment, four flash LIDAR sensors may be deployed, one at each side of vehicle 1200. In at least one embodiment, 3D flash LIDAR systems include, without limitation, a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, a flash LIDAR device may use a 5 nanosecond class I (eye-safe) laser pulse per frame and may capture reflected laser light as a 3D range point cloud and co-registered intensity data.
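
As a non-limiting illustration of the relationship between recorded transit time and range, the round trip of the laser pulse gives range = c * t / 2; the function name below is an illustrative assumption.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_transit_time(transit_time_s: float) -> float:
    """Convert a recorded round-trip laser pulse transit time to range in meters.

    The pulse travels out to the object and back, so the one-way range is
    half the distance light covers during the measured transit time.
    """
    return SPEED_OF_LIGHT_M_S * transit_time_s / 2.0

# A roughly 1.33 microsecond round trip corresponds to about 200 m,
# the approximate illumination range cited above.
print(range_from_transit_time(1.334e-6))  # ~200 m
```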

[0219] In at least one embodiment, vehicle 1200 may further include IMU sensor(s) 1266. In at least one embodiment, IMU sensor(s) 1266 may be located at a center of a rear axle of vehicle 1200. In at least one embodiment, IMU sensor(s) 1266 may include, for example and without limitation, accelerometer(s), magnetometer(s), gyroscope(s), magnetic compass(es), and/or other sensor types. In at least one embodiment, such as in six-axis applications, IMU sensor(s) 1266 may include, without limitation, accelerometers and gyroscopes. In at least one embodiment, such as in nine-axis applications, IMU sensor(s) 1266 may include, without limitation, accelerometers, gyroscopes, and magnetometers.

[0220] In at least one embodiment, IMU sensor(s) 1266 may be implemented as a miniature, high performance GPS-Aided Inertial Navigation System (“GPS/INS”) that combines micro-electro-mechanical systems (“MEMS”) inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, IMU sensor(s) 1266 may enable vehicle 1200 to estimate its heading without requiring input from a magnetic sensor by directly observing and correlating changes in velocity from a GPS to IMU sensor(s) 1266. In at least one embodiment, IMU sensor(s) 1266 and GNSS sensor(s) 1258 may be combined in a single integrated unit.
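
As a non-limiting, greatly simplified illustration of how Kalman filtering can blend an inertial prediction with a GPS fix (this is not the GPS/INS implementation referenced above), the sketch below runs a one-dimensional constant-velocity filter; all noise values and rates are arbitrary tuning assumptions.

```python
import numpy as np

# Minimal 1-D Kalman filter: state = [position, velocity].
# IMU acceleration drives the prediction; a GPS position fix corrects it.
dt = 0.01                                  # 100 Hz IMU update (assumed)
x = np.array([0.0, 0.0])                   # initial position (m), velocity (m/s)
P = np.eye(2)                              # state covariance
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
B = np.array([0.5 * dt**2, dt])            # how acceleration enters the state
Q = np.diag([1e-4, 1e-3])                  # process noise (tuning assumption)
H = np.array([[1.0, 0.0]])                 # GPS observes position only
R = np.array([[2.0**2]])                   # ~2 m GPS standard deviation (assumption)

def predict(accel_mps2: float):
    """Propagate the state with one IMU acceleration sample."""
    global x, P
    x = F @ x + B * accel_mps2
    P = F @ P @ F.T + Q

def correct(gps_position_m: float):
    """Fuse a GPS position fix into the predicted state."""
    global x, P
    y = np.array([gps_position_m]) - H @ x   # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

for _ in range(100):      # 1 s of IMU propagation at a constant 0.5 m/s^2
    predict(0.5)
correct(0.3)              # one GPS fix; the state now blends both sources
print(x)
```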

[0221] In at least one embodiment, vehicle 1200 may include microphone(s) 1296 placed in and/or around vehicle 1200. In at least one embodiment, microphone(s) 1296 may be used for emergency vehicle detection and identification, among other things.

[0222] In at least one embodiment, vehicle 1200 may further include any number of camera types, including stereo camera(s) 1268, wide-view camera(s) 1270, infrared camera(s) 1272, surround camera(s) 1274, long-range camera(s) 1298, mid-range camera(s) 1276, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of vehicle 1200. In at least one embodiment, which types of cameras are used depends on vehicle 1200. In at least one embodiment, any combination of camera types may be used to provide necessary coverage around vehicle 1200. In at least one embodiment, a number of cameras deployed may differ depending on embodiment. For example, in at least one embodiment, vehicle 1200 could include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link (“GMSL”) and/or Gigabit Ethernet communications. In at least one embodiment, each camera may be as described in more detail previously herein with respect to FIG. 12A and FIG. 12B.

[0223] In at least one embodiment, vehicle 1200 may further include vibration sensor(s) 1242. In at least one embodiment, vibration sensor(s) 1242 may measure vibrations of components of vehicle 1200, such as axle(s). For example, in at least one embodiment, changes in vibrations may indicate a change in road surfaces. In at least one embodiment, when two or more vibration sensors 1242 are used, differences between vibrations may be used to determine friction or slippage of a road surface (e.g., when a difference in vibration is measured between a power-driven axle and a freely rotating axle).
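
As a non-limiting illustration of comparing vibrations between a power-driven axle and a freely rotating axle, the sketch below uses the relative RMS difference of two vibration traces as a rough slip indicator; the metric, thresholds, and function name are assumptions for illustration.

```python
import numpy as np

def slip_indicator(driven_axle_vibration, free_axle_vibration):
    """Return a dimensionless indicator of slip from two vibration traces.

    Simplified rationale: a powered axle that is slipping tends to show
    vibration energy that diverges from the freely rolling axle on the same
    road surface, so the relative RMS difference is used as a rough cue.
    Thresholds and filtering would be tuned per vehicle in practice.
    """
    rms_driven = np.sqrt(np.mean(np.square(driven_axle_vibration)))
    rms_free = np.sqrt(np.mean(np.square(free_axle_vibration)))
    return abs(rms_driven - rms_free) / max(rms_free, 1e-9)

# Example with synthetic accelerometer traces (arbitrary units).
rng = np.random.default_rng(0)
free = rng.normal(0.0, 1.0, 1000)
driven = rng.normal(0.0, 1.6, 1000)          # elevated vibration on the powered axle
print(slip_indicator(driven, free))          # a value well above 0 suggests slip/low friction
```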

[0224] In at least one embodiment, vehicle 1200 may include ADAS system 1238. In at least one embodiment, ADAS system 1238 may include, without limitation, an SoC, in some examples. In at least one embodiment, ADAS system 1238 may include, without limitation, any number and combination of an autonomous/adaptive/automatic cruise control (“ACC”) system, a cooperative adaptive cruise control (“CACC”) system, a forward crash warning (“FCW”) system, an automatic emergency braking (“AEB”) system, a lane departure warning (“LDW”) system, a lane keep assist (“LKA”) system, a blind spot warning (“BSW”) system, a rear cross-traffic warning (“RCTW”) system, a collision warning (“CW”) system, a lane centering (“LC”) system, and/or other systems, features, and/or functionality.

[0225] In at least one embodiment, ACC system may use RADAR sensor(s) 1260, LIDAR sensor(s) 1264, and/or any number of camera(s). In at least one embodiment, ACC system may include a longitudinal ACC system and/or a lateral ACC system. In at least one embodiment, a longitudinal ACC system monitors and controls distance to another vehicle immediately ahead of vehicle 1200 and automatically adjusts speed of vehicle 1200 to maintain a safe distance from vehicles ahead. In at least one embodiment, a lateral ACC system performs distance keeping, and advises vehicle 1200 to change lanes when necessary. In at least one embodiment, a lateral ACC is related to other ADAS applications, such as LC and CW.
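
As a non-limiting illustration of longitudinal ACC behavior, the sketch below regulates toward a constant time gap behind a lead vehicle and never exceeds the driver's set speed; the time gap and gains are arbitrary illustrative values, not taken from this disclosure.

```python
def acc_speed_command(ego_speed_mps, lead_distance_m, lead_speed_mps,
                      set_speed_mps, time_gap_s=1.8, k_gap=0.3, k_rel=0.8):
    """Small constant-time-gap longitudinal ACC sketch.

    Returns a commanded speed: follow the driver's set speed when the road is
    clear, otherwise regulate toward a distance of time_gap_s * ego_speed.
    """
    desired_gap_m = time_gap_s * ego_speed_mps
    gap_error_m = lead_distance_m - desired_gap_m
    relative_speed = lead_speed_mps - ego_speed_mps
    follow_speed = ego_speed_mps + k_gap * gap_error_m + k_rel * relative_speed
    return min(set_speed_mps, follow_speed)

# Ego at 25 m/s, lead vehicle 30 m ahead doing 22 m/s, driver set speed 30 m/s:
print(acc_speed_command(25.0, 30.0, 22.0, 30.0))   # command drops below set speed
```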

[0226] In at least one embodiment, a CACC system uses information from other vehicles that may be received via network interface 1224 and/or wireless antenna(s) 1226 via a wireless link, or indirectly, over a network connection (e.g., over the Internet). In at least one embodiment, direct links may be provided by a vehicle-to-vehicle (“V2V”) communication link, while indirect links may be provided by an infrastructure-to-vehicle (“I2V”) communication link. In general, V2V communication provides information about immediately preceding vehicles (e.g., vehicles immediately ahead of and in same lane as vehicle 1200), while I2V communication provides information about traffic further ahead. In at least one embodiment, a CACC system may include either or both I2V and V2V information sources. In at least one embodiment, given information of vehicles ahead of vehicle 1200, a CACC system may be more reliable and has the potential to improve traffic flow smoothness and reduce congestion on a road.

[0227] In at least one embodiment, an FCW system is designed to alert a driver to a hazard, so that such driver may take corrective action. In at least one embodiment, an FCW system uses a front-facing camera and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an FCW system may provide a warning, such as in the form of a sound, visual warning, vibration, and/or a quick brake pulse.
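
As a non-limiting illustration of one common FCW criterion, the sketch below warns when the estimated time-to-collision falls below a threshold; the threshold value and function name are illustrative assumptions.

```python
def forward_collision_warning(range_m, closing_speed_mps, ttc_threshold_s=2.5):
    """Warn when the estimated time-to-collision drops below a threshold.

    closing_speed_mps is positive when the gap to the object ahead is shrinking.
    Production systems typically scale the threshold with speed, driver reaction
    models, and sensor confidence; 2.5 s is only an illustrative value.
    """
    if closing_speed_mps <= 0.0:
        return False                    # gap is steady or opening; no hazard
    time_to_collision_s = range_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s

print(forward_collision_warning(range_m=20.0, closing_speed_mps=10.0))  # True, TTC = 2.0 s
```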

[0228] In at least one embodiment, an AEB system detects an impending forward collision with another vehicle or other object, and may automatically apply brakes if a driver does not take corrective action within a specified time or distance parameter. In at least one embodiment, AEB system may use front-facing camera(s) and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when an AEB system detects a hazard, it will typically first alert a driver to take corrective action to avoid collision and, if that driver does not take corrective action, that AEB system may automatically apply brakes in an effort to prevent, or at least mitigate, an impact of a predicted collision. In at least one embodiment, an AEB system may include techniques such as dynamic brake support and/or crash imminent braking.
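
As a non-limiting illustration of the staged behavior described above (warn first, brake if the driver does not react), the sketch below maps a time-to-collision estimate and driver braking state to an action; thresholds and action names are illustrative assumptions.

```python
def aeb_action(ttc_s, driver_braking: bool, warn_ttc_s=2.0, brake_ttc_s=1.2):
    """Staged AEB decision sketch: warn first, brake only if the driver does not react.

    Real systems also gate on object class, detection confidence, and vehicle
    dynamics before commanding the brakes; the thresholds here are arbitrary.
    """
    if ttc_s >= warn_ttc_s:
        return "no_action"
    if ttc_s >= brake_ttc_s or driver_braking:
        return "warn_driver"           # hazard detected; give the driver a chance to act
    return "autonomous_brake"          # driver did not respond in time; apply brakes

print(aeb_action(ttc_s=1.5, driver_braking=False))  # warn_driver
print(aeb_action(ttc_s=0.8, driver_braking=False))  # autonomous_brake
```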

[0229] In at least one embodiment, an LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert a driver when vehicle 1200 crosses lane markings. In at least one embodiment, an LDW system does not activate when a driver indicates an intentional lane departure, such as by activating a turn signal. In at least one embodiment, an LDW system may use front-side facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an LKA system is a variation of an LDW system. In at least one embodiment, an LKA system provides steering input or braking to correct vehicle 1200 if vehicle 1200 starts to exit its lane.
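
As a non-limiting illustration of how an LDW warning and an LKA steering correction can both be suppressed during an intentional, signaled lane change, the sketch below combines the two behaviors; the offsets, gain, and return structure are illustrative assumptions.

```python
def lane_keep_response(lateral_offset_m, turn_signal_on: bool,
                       warn_offset_m=0.2, k_steer=0.5):
    """Combined LDW/LKA sketch.

    lateral_offset_m: signed distance of the vehicle beyond the lane marking
    (zero or negative means still inside the lane).  An intentional departure,
    signaled by the turn indicator, suppresses both the warning and the
    corrective steering, as described above.
    """
    if turn_signal_on or lateral_offset_m <= 0.0:
        return {"warn": False, "steer_correction": 0.0}
    warn = lateral_offset_m > warn_offset_m
    steer_correction = -k_steer * lateral_offset_m   # steer back toward the lane center
    return {"warn": warn, "steer_correction": steer_correction}

print(lane_keep_response(0.35, turn_signal_on=False))  # warning + corrective steering
print(lane_keep_response(0.35, turn_signal_on=True))   # suppressed: intentional lane change
```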

[0230] In at least one embodiment, a BSW system detects and warns a driver of vehicles in an automobile’s blind spot. In at least one embodiment, a BSW system may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. In at least one embodiment, a BSW system may provide an additional warning when a driver uses a turn signal. In at least one embodiment, a BSW system may use rear-side facing camera(s) and/or RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.
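
As a non-limiting illustration of escalating a blind spot alert when a driver signals a lane change toward an occupied blind spot, consider the following sketch; the alert levels and their names are illustrative assumptions.

```python
def blind_spot_warning(object_in_blind_spot: bool, turn_signal_on: bool):
    """BSW escalation sketch: passive alert normally, stronger alert when the
    driver signals a lane change while the blind spot is occupied."""
    if not object_in_blind_spot:
        return "none"
    return "strong_alert" if turn_signal_on else "visual_alert"

print(blind_spot_warning(True, False))  # visual_alert (e.g., a mirror icon)
print(blind_spot_warning(True, True))   # strong_alert (e.g., audible and tactile)
```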

[0231] In at least one embodiment, an RCTW system may provide visual, audible, and/or tactile notification when an object is detected outside a rear-camera range when vehicle 1200 is backing up. In at least one embodiment, an RCTW system includes an AEB system to ensure that vehicle brakes are applied to avoid a crash. In at least one embodiment, an RCTW system may use one or more rear-facing RADAR sensor(s) 1260, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.

