Sony Patent | Systems and methods for adjusting one or more parameters of a gpu

Publication Number: 20210158597

Publication Date: 2021-05-27

Applicant: Sony

Abstract

A method for adjusting complexity of content rendered by a graphical processing unit (GPU) is described. The method includes processing, by the GPU, an image frame for a scene of a game. The method further includes tracking one or more metrics regarding the processing of the image frame during the processing of the image frame. During the processing of the image frame, the method includes sending a quality adjuster signal (QAS) to a shader associated with a game engine. The QAS is generated based on the one or more metrics associated with the processing by the GPU. During the processing of the image frame, the method includes adjusting, by the shader, one or more shader parameters upon receipt of the QAS, wherein said adjusting the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU.

Claims

  1. A method for adjusting complexity of content rendered by a graphical processing unit (GPU), comprising: processing, by the GPU, an image frame for a scene of a game; tracking one or more metrics regarding the processing of the image frame during the processing of the image frame; during the processing of the image frame, sending a quality adjuster signal (QAS) to a shader associated with a game engine, wherein the QAS is generated based on the one or more metrics associated with the processing by the GPU; and during the processing of the image frame, adjusting, by the shader, one or more shader parameters upon receipt of the QAS, wherein said adjusting the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU.

  2. The method of claim 1, wherein the one or more metrics are tracked to identify an amount of power consumed by the GPU during said processing of the image frame, the method further comprising: analyzing the amount of power consumed by the GPU during said processing the image frame to determine whether the amount of power exceeds a predetermined threshold level, wherein the quality adjuster signal is sent to the shader upon determining that the amount of power exceeds the predetermined threshold level.

  3. The method of claim 1, wherein the one or more shader parameters include a number of times for which ray tracing is performed during said processing of the image frame, or a resolution of the image frame during said processing of the image frame, or a number of virtual objects that are rendered during said processing of the image frame, or an order of priority in which the virtual objects are rendered during said processing of the image frame, or a combination of two or more thereof.

  4. The method of claim 3, wherein the order of priority is based on distances at which the virtual objects are to be rendered, wherein one of the virtual objects that is to be rendered at a closer distance has a greater priority than another one of the virtual objects that is to be rendered at a farther distance.

  5. The method of claim 1, wherein said adjusting the one or more shader parameters includes decreasing the level of complexity by: decreasing a number of times for which ray tracing is performed during said processing upon determining that an amount of power consumed by the GPU is greater than a predetermined level; or decreasing a resolution of the image frame upon determining that the amount of power consumed by the GPU is greater than the predetermined level; or decreasing a number of virtual objects that are rendered within the image frame upon determining that the amount of power consumed by the GPU is greater than the predetermined level; or prioritizing rendering of the virtual objects upon determining that the amount of power consumed by the GPU is greater than the predetermined level; or a combination of two or more thereof.

  6. The method of claim 1, wherein the one or more metrics are tracked to identify an amount of latency of transferring a packet between a node and a client device during said processing of the image frame, the method further comprising: analyzing the amount of latency during said processing of the image frame to determine whether the amount of latency exceeds a predetermined threshold level, wherein the quality adjuster signal is sent to the shader upon determining that the amount of latency exceeds the predetermined threshold level.

  7. The method of claim 1, wherein said adjusting the one or more shader parameters includes decreasing the level of complexity by: decreasing a number of times for which ray tracing is performed within a portion of the image frame during said processing upon determining that an amount of latency of transferring a packet between a node and a client device is greater than a predetermined level; or decreasing a resolution of the portion of the image frame upon determining that the amount of latency is greater than the predetermined level; or decreasing a number of virtual objects that are rendered within the image frame upon determining that the amount of latency is greater than the predetermined level; or prioritizing rendering of the virtual objects within the image frame upon determining that the amount of latency is greater than the predetermined level; or a combination of two or more thereof.

  8. The method of claim 1, wherein the one or more metrics are tracked to identify an amount of temperature of the GPU during said processing of the image frame, the method further comprising: analyzing the amount of temperature during said processing of the image frame to determine whether the amount of temperature exceeds a predetermined threshold level, wherein the quality adjuster signal is sent to the shader upon determining that the amount of temperature exceeds the predetermined threshold level.

  9. The method of claim 1, wherein said adjusting the one or more shader parameters includes decreasing the level of complexity by: decreasing a number of times for which ray tracing is performed within a portion of the image frame during said processing upon determining that an amount of temperature of the GPU during said processing is greater than a predetermined level; or decreasing a resolution of the portion of the image frame during said processing of the image frame upon determining that the amount of temperature is greater than the predetermined level; or decreasing a number of virtual objects that are rendered within the image frame during said processing of the image frame upon determining that the amount of temperature is greater than the predetermined level; or prioritizing rendering of the virtual objects within the image frame upon determining that the amount of temperature is greater than the predetermined level; or a combination of two or more thereof.

  10. The method of claim 1, wherein said adjusting the one or more shader parameters includes decreasing the level of complexity by: decreasing a number of times for which ray tracing is performed within a portion of the image frame during said processing upon determining that the quality adjuster signal is to be generated based on a user input; or decreasing a resolution of the portion of the image frame upon determining that the quality adjuster signal is to be generated; or decreasing a number of virtual objects that are rendered within the image frame upon determining that the quality adjuster signal is to be generated; or prioritizing rendering of the virtual objects within the image frame upon determining that the quality adjuster signal is to be generated; or a combination of two or more thereof.

  11. The method of claim 1, wherein said processing the image frame includes generating the image frame, wherein the image frame includes the one or more shader parameters, wherein the image frame includes a plurality of pixels, wherein each of the pixels is provided color, shading, and texturing during said processing of the image frame.

  12. The method of claim 1, wherein the one or more metrics are tracked to determine whether a user input is received via a computer network during said processing of the image frame, the method further comprising: analyzing the user input to determine whether the quality adjuster signal is to be generated.

  13. The method of claim 1, wherein the QAS is generated by a processor coupled to the GPU, wherein said sending the QAS is performed by the processor.

  14. A server for adjusting complexity of content rendered by a graphical processing unit (GPU), comprising: the GPU configured to execute a shader to process an image frame for a scene of a game, wherein the GPU is configured to track one or more metrics regarding the processing of the image frame; and a processing unit coupled to the GPU, wherein the processing unit is configured to generate and send a quality adjuster signal (QAS) to the shader associated with a game engine, wherein the QAS is sent to the shader while the image frame is being processed, wherein the QAS is generated based on the one or more metrics associated with the processing by the GPU, and wherein the shader is configured to adjust one or more shader parameters upon receipt of the QAS, wherein the adjustment of the one or more shader parameters occurs while the image frame is being processed, wherein the adjustment of the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU.

  15. The server of claim 14, wherein to process the image frame, the GPU is configured to generate the image frame, wherein the image frame includes a plurality of pixels, wherein each of the pixels is provided color, shading, and texturing during the processing of the image frame, wherein when the image frame is being processed, the processing unit is configured to: track the one or more metrics to identify an amount of power consumed by the GPU; analyze the amount of power consumed by the GPU to determine whether the amount of power exceeds a predetermined threshold level, wherein the processing unit is configured to send the quality adjuster signal to the shader upon determining that the amount of power exceeds the predetermined threshold level.

  16. The server of claim 14, wherein the one or more shader parameters include a number of times for which ray tracing is performed during said processing of the image frame, or a resolution of the image frame, or a number of virtual objects that are rendered during said processing of the image frame, or an order of priority in which the virtual objects are rendered during said processing of the image frame, or a combination of two or more thereof.

  17. The server of claim 14, wherein when the image frame is being processed by the GPU, to adjust the one or more shader parameters, the shader is configured to decrease the level of complexity by: decreasing a number of times for which ray tracing is performed within a portion of the image frame; or decreasing a resolution of the portion of the image frame; or decreasing a number of virtual objects that are rendered within the image frame; or prioritizing rendering of a plurality of virtual objects within the image frame; or a combination of two or more thereof.

  18. A system for adjusting complexity of content rendered by a graphical processing unit (GPU), comprising: a server node including: the GPU configured to process an image frame for a scene of a game, wherein the GPU is configured to track one or more metrics regarding the processing of the image frame, wherein the GPU is configured to execute a shader; and a processing unit coupled to the GPU, wherein the processing unit is configured to generate and send a quality adjuster signal (QAS) to the shader associated with a game engine, wherein the QAS is sent to the shader while the image frame is being processed, wherein the QAS is generated based on the one or more metrics associated with the processing by the GPU, and wherein the shader is configured to adjust one or more shader parameters upon receipt of the QAS, wherein the adjustment of the one or more shader parameters occurs while the image frame is being processed, wherein the adjustment of the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU; and a client device configured to communicate with the server node via a computer network.

  19. The system of claim 18, wherein the processing unit is configured to determine whether a user input is received via the computer network during the processing of the image frame, wherein the processing unit is configured to analyze the user input to determine whether the quality adjuster signal is to be generated.

  20. The system of claim 18, wherein to adjust the one or more shader parameters when the image frame is being processed by the GPU, the shader is configured to decrease the level of complexity by: decreasing a number of times for which ray tracing is performed for a portion of the image frame; or decreasing a resolution of the portion within the image frame; or decreasing a number of virtual objects that are rendered within the image frame; or prioritizing rendering of a plurality of virtual objects within the image frame; or a combination of two or more thereof.

Description

FIELD

[0001] The present disclosure relates to systems and methods for adjusting one or more parameters of a graphical processing unit (GPU).

BACKGROUND

[0002] Many of today’s games and simulations facilitate multiple players simultaneously participating in the same instance of a game. The multiplayer aspect of such games provides an enriched gaming experience, where players may communicate, collaborate, compete against one another, and/or otherwise interact with and affect each other and their shared collective gaming environment. The players in a multiplayer game are connected via a network, such as a local area network (LAN) or a wide area network (WAN).

[0003] Multiplayer gaming has created a need to accommodate a large number of players in a networked multiplayer game, while maintaining a quality gaming experience for each player.

SUMMARY

[0004] Embodiments of the present disclosure provide systems and methods for adjusting one or more parameters of a graphical processing unit (GPU).

[0005] Other aspects of the present disclosure will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of embodiments described in the present disclosure.

[0006] In an embodiment, the one or more parameters of the GPU are adjusted during an operation of rendering an image frame. For example, details of the image frame are reduced during the rendering instead of after rendering the image frame.

[0007] In one embodiment, the systems and methods described herein provide a shader data parameter, including a quality parameter. A shader chooses a value of the shader data parameter during execution of the shader. For example, a ray iteration count or a level of detail of a virtual object, such as virtual grass or virtual foliage, is modified by the shader based on how busy a graphical processing unit (GPU) is. The GPU executes the shader. As another example, the level of detail is determined based on a distance of the virtual object in a virtual scene. To illustrate, when the virtual object is far away, along a depth dimension, in the virtual scene, the level of detail is less compared to when the virtual object is closer in the depth dimension. A quality parameter matrix having multiple values of multiple shader data parameters is generated, and the values are applied during a rendering operation performed by execution of the shader by the GPU.
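The parameter selection described in this paragraph can be pictured with a small sketch. This is a hypothetical illustration in Python, not the patented implementation; the function name, the normalization of both inputs, and the specific formula are assumptions made for clarity.

```python
def choose_level_of_detail(gpu_busy_fraction, object_depth):
    """Pick a detail level for a virtual object (e.g., virtual grass or foliage).

    Hypothetical sketch: both inputs are assumed normalized to [0, 1], where
    object_depth = 1.0 means the object is farthest along the depth dimension.
    """
    # Less detail (e.g., a lower ray iteration count) when the GPU is busy.
    base_detail = 1.0 - gpu_busy_fraction
    # Distant objects receive less detail than near ones.
    depth_factor = 1.0 - object_depth
    return base_detail * depth_factor

# Example: an idle GPU and a nearby object yield full detail.
print(choose_level_of_detail(0.0, 0.0))  # 1.0
```

A real shader would pick from discrete quality levels rather than a continuous product, but the sketch shows the two signals the paragraph combines: GPU load and object distance.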

[0008] In an embodiment, a method for adjusting complexity of content rendered by the GPU is described. The method includes processing, by the GPU, an image frame for a scene of a game. The method further includes tracking one or more metrics regarding the processing of the image frame during the processing of the image frame. During the processing of the image frame, the method includes sending a quality adjuster signal (QAS) to a shader associated with a game engine. The QAS is generated based on the one or more metrics associated with the processing by the GPU. During the processing of the image frame, the method includes adjusting, by the shader, one or more shader parameters upon receipt of the QAS, wherein said adjusting the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU.

[0009] In an embodiment, a server for adjusting complexity of content rendered by the GPU is described. The GPU executes a shader to process an image frame for a scene of a game. The GPU tracks one or more metrics regarding the processing of the image frame. The server includes a processing unit coupled to the GPU. The processing unit generates and sends the QAS to the shader associated with the game engine. The QAS is sent to the shader while the image frame is being processed and is generated based on the one or more metrics associated with the processing by the GPU. The shader adjusts one or more shader parameters upon receipt of the QAS. The adjustment of the one or more shader parameters occurs while the image frame is being processed and the adjustment of the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU.

[0010] In one embodiment, a system for adjusting complexity of content rendered by the GPU is described. The system includes a server node. The server node includes the GPU. The GPU processes an image frame for a scene of a game. The GPU tracks one or more metrics regarding the processing of the image frame. The GPU executes the shader. The server node further includes a processing unit coupled to the GPU. The processing unit generates and sends the QAS to the shader associated with the game engine. The QAS is sent to the shader while the image frame is being processed. The QAS is generated based on the one or more metrics associated with the processing by the GPU. The shader adjusts one or more shader parameters upon receipt of the QAS. The adjustment of the one or more shader parameters occurs while the image frame is being processed and the adjustment of the one or more shader parameters changes a level of complexity of the image frame being processed by the GPU. The system includes a client device that communicates with the server node via a computer network.

[0011] Some advantages of the systems and methods described herein include accounting for network latency, temperature of the GPU, power associated with the GPU, and reception of a user input during rendering of an image frame to save time in rendering the image frame. For example, if it is determined that there is an increase in the network latency during rendering of the image frame, complexity of the image frame is reduced to increase a rate at which the image frame is rendered. This increase in the rate of rendering the image frame accounts for, such as negates an effect of, the network latency. As another example, upon determining that the power associated with the GPU, or the temperature of the GPU, or both are high, the complexity during rendering of the image frame is reduced. The reduction in the complexity reduces the temperature of the GPU and the power associated with the GPU. The reduction in the power also reduces the temperature of the GPU.
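The threshold checks described above can be sketched as a single predicate over the tracked metrics. This is a minimal illustration; the metric names and the threshold values are assumptions, not values from the disclosure.

```python
# Illustrative predetermined threshold levels (assumed units and values).
THRESHOLDS = {"latency_ms": 50.0, "power_watts": 180.0, "temperature_c": 85.0}

def should_send_qas(metrics):
    """Return True if any tracked metric exceeds its predetermined threshold.

    `metrics` maps metric names to the values tracked during processing of
    the image frame; a missing metric is treated as not exceeded.
    """
    return any(metrics.get(name, 0.0) > limit for name, limit in THRESHOLDS.items())

# Power exceeds its threshold, so a quality adjuster signal would be sent.
print(should_send_qas({"latency_ms": 20.0, "power_watts": 200.0}))  # True
```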

[0012] Also, accounting for the temperature of the GPU and the power associated with the GPU increases a life cycle of the GPU. Increasing or maintaining a load of the GPU when the GPU is consuming a high amount of power or when the temperature of the GPU is high reduces the life cycle. By reducing the complexity, the temperature of the GPU and the power associated with the GPU are reduced to increase the life cycle of the GPU.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] Various embodiments of the present disclosure are best understood by reference to the following description taken in conjunction with the accompanying drawings in which:

[0014] FIG. 1 is a diagram of an embodiment of a system to illustrate a change in a parameter value of a shader based on processing power usage of a graphical processing unit (GPU).

[0015] FIG. 2 is a diagram of an embodiment of a system to illustrate an adjustment of values of one or more parameters of the shader based on network latency.

[0016] FIG. 3 is a diagram of an embodiment of a system to illustrate generation of a quality adjuster signal (QAS) for adjusting the one or more parameters based on an amount of power consumed by the GPU.

[0017] FIG. 4 is a diagram of an embodiment of a system to illustrate generation of the QAS based on a temperature of the GPU.

[0018] FIG. 5 is a diagram of an embodiment of the system of FIG. 2 to illustrate generation of the QAS based on a latency of a user input received from a client device.

[0019] FIG. 6A is a diagram of an embodiment of a table to illustrate examples of the parameters.

[0020] FIG. 6B is a diagram of an embodiment of a mapping between a metric associated with the GPU and one of the parameters.

[0021] FIG. 6C is a diagram of an embodiment of a method to illustrate that details of a virtual object in an image frame are modified during execution of a rendering operation by the GPU.

[0022] FIG. 6D is a diagram illustrating embodiments of image frames to illustrate that values of one or more of the parameters are adjusted in different portions of the image frames.

[0023] FIG. 7 is a diagram of an embodiment of a system to illustrate use of multiple nodes for gaming.

[0024] FIG. 8 is an embodiment of a flow diagram conceptually illustrating various operations which are performed for streaming a cloud video game to a client device.

[0025] FIG. 9 is a block diagram of an embodiment of a game console that is compatible for interfacing with a display device of the client device and is capable of communicating via a computer network with a game server.

[0026] FIG. 10 is a diagram of an embodiment of a head-mounted display (HMD).

[0027] FIG. 11 illustrates an embodiment of an Information Service Provider (INSP) architecture.

DETAILED DESCRIPTION

[0028] Systems and methods for adjusting one or more parameters of a graphical processing unit (GPU) are described. It should be noted that various embodiments of the present disclosure are practiced without some or all of the specific details provided herein. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure various embodiments of the present disclosure.

[0029] FIG. 1 is a diagram of an embodiment of a system 100 to illustrate a change in a parameter value of a shader A based on processing power usage of a GPU A. The system 100 includes a memory device system A, a central processing unit (CPU) A, the GPU A, a video encoder A, and a shader component A. Examples of a memory device system, as described herein, include one or more memory devices that are coupled to each other. To illustrate, a memory device is a device from which data is read or to which the data is written. The memory device can be a read-only memory (ROM) device, or a random access memory (RAM) device, or a combination thereof. To further illustrate, the memory device includes a flash memory, a cache, or a redundant array of independent disks (RAID). As an example, a CPU, as used herein, is an electronic circuit that carries out or executes multiple instructions of a computer program, such as a game engine, by performing arithmetic, logic, control, and input/output (I/O) functions that are specified by the instructions. To further illustrate, the CPU executes instructions to determine variables, such as a position, an orientation, a size, and a location of a virtual object in a virtual scene of a game. Examples of the CPU include a processor, a microprocessor, a microcontroller, an application specific integrated circuit (ASIC), and a programmable logic device (PLD), and these terms are used herein interchangeably. Examples of the virtual object, as used herein, include a virtual gun, a virtual user, a virtual weapon, a virtual vehicle, virtual sports equipment, a virtual background object, and one or more virtual blades of grass. The virtual background object is sometimes referred to herein as game content context. Examples of the game content context include the virtual object, such as the virtual blades of grass and a virtual bush. Sometimes, the terms game content context and game content are used herein interchangeably. 
An example of a GPU, as used herein, includes an electronic circuit, such as a processor or an ASIC or a PLD or a microcontroller, that applies a rendering program to generate image frames for output to a display device for display of images on the display device.

[0030] Examples of a video encoder, as used herein, include a compressor that compresses multiple image frames to output one or more encoded image frames. To illustrate, the video encoder receives raw image frames to output I, P, and B frames. As another illustration, the video encoder applies an H.264 standard to compress multiple image frames. An example of an image frame is a still image. A rate at which the multiple image frames, such as multiple images, are being rendered is sometimes referred to herein as a frame rate. As an example, a frame rate is measured as a number of image frames rendered per second, or frames per second (fps).

[0031] The memory device system A, the CPU A, the GPU A, and the video encoder A are coupled to each other via a communication medium A. To illustrate, the shader component A, the memory device system A, the CPU A, the GPU A, and the video encoder A are components of a system-on-chip (SoC). As another illustration, the shader component A, the memory device system A, the CPU A, the GPU A, and the video encoder A are components of a node, such as a server or a game console, of a server system. Examples of a communication medium, as used herein, include a wired or wireless communication medium. Examples of the wired communication medium include a bus, a cable, and silicon. Examples of the wireless communication medium include a medium for transferring data via a wireless transfer protocol, such as Bluetooth.TM. or Wi-Fi.TM..

[0032] Examples of a shader component, as used herein, include a processor, or an ASIC, or a PLD, or a controller, or a computer program, or a portion of a computer program. As an example, the shader component A is located outside the GPU A and is coupled via the communication medium A to the GPU A. As another example, the shader component A is a part of the GPU A. To further illustrate, the shader component A is a component within the GPU A or is a portion of a computer program that is executed by the GPU A. As another example, the shader component A is a component or a portion of the shader A.

[0033] The memory device system A stores a game engine A and a shader A. Examples of a game engine, as described herein, include a game computer program, a computer program for generating a virtual reality (VR) scene, a computer program for generating an augmented reality (AR) scene, a physics software program for applying laws of physics for generating the VR scene or the AR scene, or a rendering computer program for applying a rendering operation for generating the VR scene or the AR scene, or a combination of two or more thereof. To illustrate, the laws of physics are applied for collision detection or collision response. Examples of the virtual scene include the VR scene and the AR scene. As an example, the virtual scene includes one or more virtual objects and one or more virtual backgrounds. To illustrate, the virtual scene includes multiple virtual users, multiple virtual trees, multiple virtual weapons held by the virtual users, a virtual sky, and a virtual plane of a video game.

[0034] Examples of a shader, as used herein, include a computer program used for rendering multiple raw image frames. To illustrate, the shader is executed by a GPU to produce designated levels of intensity, texture, or color, or a combination of two or more thereof, for all pixels within an image frame. As another illustration, the shader A is a portion of the game engine A. It should be noted that in an embodiment, the terms intensity, brightness, and shade are used herein interchangeably. A texture, in one embodiment, of pixels of an image frame is an arrangement of color or intensities in the image frame.

[0035] The shader component A includes a shader parameter adjuster A and a quality matrix A. As an example, a shader parameter adjuster, as used herein, of a shader component is a portion of a computer program of the shader component. As another example, the shader parameter adjuster of the shader component is a part of a processor, an ASIC, or a PLD that implements the shader component. As another example, the shader component A is a part of a computer program of the shader A.

[0036] A quality matrix, as used herein, is a matrix that includes multiple values of one or more parameters, such as P1, P2, and P3. An example of the parameter P1 is an amount of detail within an image frame. To illustrate, an amount of detail within the image frame is less when the virtual object is represented by a first number of pixels, such as forty or fifty, within the image frame. The amount of detail is less compared to another amount of detail within the same image frame when the virtual object is represented by a second number of pixels, such as 100 or 200 pixels. The first number of pixels is less than the second number of pixels. Both the first and the second number of pixels occupy the same amount of display area on a display screen of a client device. As another illustration, an amount of detail within an image frame that has 10 blades of grass is less than another amount of detail within an image frame that has 20 blades of grass. The 20 blades of grass occupy a larger number of pixels of the image frame than the 10 blades of grass but occupy the same amount of display area on a display screen of a client device as that occupied by the 10 blades of grass. A display area of a display screen is measured as a product of a width of the display area and a length of the display area. An example of the parameter P2 is a distance at which the virtual object is represented within the virtual scene of an image frame. To illustrate, the parameter P2 is a distance, in a depth dimension, at which the virtual object is represented in the virtual scene of the image frame. An example of the parameter P3 is a number of ray iterations that are performed during the rendering operation to render at least a portion of an image frame. The portion includes the virtual object being rendered. Ray iteration is sometimes referred to herein as ray tracing. 
During each ray iteration, a color and an intensity of a pixel within an image frame are calculated by the shader component A. For multiple ray iterations, the color and the intensity of the pixel within the image frame are calculated multiple times. The rendering operation is performed by the GPU A to process, such as generate or output, one or more image frames, which are further described below.
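One way to picture the quality matrix is as a small table in which each row holds one candidate setting of the parameters P1, P2, and P3 described above. The rows and the numeric values below are illustrative assumptions only.

```python
# Each row: (P1 amount of detail, in pixels representing the virtual object;
#            P2 depth distance of the object in the virtual scene;
#            P3 ray iteration count). Values are illustrative.
QUALITY_MATRIX = [
    (200, 1.0, 4),  # high quality: many pixels, near object, many ray iterations
    (100, 2.0, 2),  # medium quality
    (40,  4.0, 1),  # low quality: few pixels, single ray pass
]

def select_row(quality_level):
    """Pick one row of parameter values for the rendering pass (0 = highest)."""
    quality_level = max(0, min(quality_level, len(QUALITY_MATRIX) - 1))
    return QUALITY_MATRIX[quality_level]

print(select_row(2))  # (40, 4.0, 1)
```

The shader would apply the selected row's values during the rendering operation, dropping to a lower row when a quality adjuster signal arrives.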

[0037] The CPU A includes a processing unit A. A processing unit, as used herein, includes a processor, or an ASIC, or a PLD, or a controller. To illustrate, a processing unit within a CPU is a part of a PLD or an ASIC or a controller that executes functions described herein as being performed by the processing unit.

[0038] During a play of a game, the CPU A executes the game engine A to determine variables, such as a shape, a size, a position, and an orientation, of a model of the virtual object for an image frame having the virtual scene. The CPU A sends an indication of completion of the determination of the variables via the communication medium A to the GPU A. Upon receiving the indication of the completion, the GPU A executes the rendering operation to determine a color and shade for each pixel of the image frame.

[0039] During a time period in which the GPU A executes the rendering operation for generating, such as rendering, the image frame, the processing unit A determines how busy the GPU A is. For example, once the GPU A starts execution of the rendering operation, the GPU A sends a signal to the processing unit A. Upon receiving the signal, the processing unit A determines that the rendering operation has started and starts to determine how busy the GPU A is.

[0040] As an example, to determine how busy the GPU A is, the processing unit A sends a request to the GPU A to obtain a frame rate of generation of image frames, such as image frames 102A, 102B, and 102C, by the GPU A. The image frames 102A-102C are unencoded image frames, such as raw image frames, for the game. The request is sent via the communication medium A to the GPU A. The GPU A, upon receiving the request, provides the frame rate via the communication medium A to the CPU A. The processing unit A determines that the frame rate is greater than a predetermined frame rate to further determine that the GPU A is busy. As an illustration, the GPU A is busy generating image frames for multiple games. As another illustration, the GPU A is busy generating the image frames 102A-102C for the game. Continuing with the example, when the GPU A is busy, the GPU A is using a large amount of processing power, e.g., greater than a predetermined amount, of the GPU A. The predetermined amount of power is an example of a predetermined power threshold level. On the other hand, the processing unit A determines that the frame rate is less than the predetermined frame rate to further determine that the GPU A is not busy. When the GPU A is not busy, the GPU A is using a small amount of processing power, e.g., less than the predetermined amount of power, of the GPU A. An amount of processing power of the GPU A is an example of power consumed by the GPU A.
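The busy-determination of paragraph [0040] reduces to a comparison against a predetermined frame rate. A minimal sketch follows; the threshold value and function name are assumptions for illustration, not values from the disclosure.

```python
# Assumed predetermined frame rate (frames per second); the disclosure
# does not specify a numeric value.
PREDETERMINED_FRAME_RATE = 60.0

def gpu_is_busy(measured_frame_rate: float,
                threshold: float = PREDETERMINED_FRAME_RATE) -> bool:
    """Per paragraph [0040], the processing unit treats a measured frame
    rate greater than the predetermined frame rate as an indication that
    the GPU is busy, e.g., generating image frames for multiple games."""
    return measured_frame_rate > threshold
```

When this check returns true, the processing unit generates the QAS described in paragraph [0041]; otherwise no QAS is generated.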

[0041] Upon determining that the GPU A is busy, the processing unit A generates and sends a quality adjuster signal (QAS) via the communication medium A to the shader parameter adjuster A. The QAS is sent from the processing unit A to the shader component A while the GPU A is processing, such as rendering or generating, the image frames 102A-102C. For example, during a time period in which the image frame 102A is being rendered, the processing unit A sends the QAS to the shader component A.

[0042] Upon receiving the QAS, the shader parameter adjuster A applies content rules to adjust values of one or more of the parameters P1-P3 of one or more of the image frames 102A-102C being generated by the GPU A. For example, the shader parameter adjuster A reduces a resolution, such as an amount of detail, of the virtual object in the image frame 102A, or increases a distance, in the depth dimension, of the virtual object within the image frame 102A so that a lower number of pixels of the virtual object occupies the image frame 102A, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frame 102A, or a combination thereof. It should be noted that when the resolution of the virtual object in the image frame 102A is decreased, a resolution of the image frame 102A decreases. As another example, the shader parameter adjuster A reduces an amount of detail of the virtual object in the image frame 102A to a preset level of detail, or increases a distance, in the depth dimension, of the virtual object within the image frame 102A to a preset distance, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frame 102A to a preset number of ray iterations, or a combination thereof. The preset level of detail, the preset distance, and the preset number of ray iterations are stored in the memory device system A.

[0043] As yet another example, the shader parameter adjuster A reduces an amount of detail of the virtual object in the image frames 102A-102C having the virtual object, or increases a distance, in the depth dimension, of the virtual object within the image frames, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frames 102A-102C, or a combination thereof. As still another example, the shader parameter adjuster A reduces an amount of detail of the virtual object in the image frames 102A-102C having the virtual object to the preset level of detail, or increases a distance, in the depth dimension, of the virtual object within the image frames 102A-102C to the preset distance, or decreases a number of ray iterations that are used to generate the color and intensity of pixels representing the virtual object in the image frames to the preset number of ray iterations, or a combination thereof. As yet another example, the shader parameter adjuster A adjusts the values of one or more of the parameters P1-P3 while a portion, such as pixels or blocks, of the image frame 102A is generated and a remaining portion, such as remaining pixels or remaining blocks, of the image frame 102A is not generated. The shader parameter adjuster A provides the adjusted values of one or more of the parameters P1-P3 to the shader A via the communication medium A.
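The adjustments of paragraphs [0042] and [0043], in which each parameter is moved toward a stored preset upon receipt of the QAS, can be sketched as follows. This is a minimal illustrative sketch; the preset values, key names, and function name are assumptions, not taken from the disclosure.

```python
# Assumed presets corresponding to the preset level of detail, preset
# depth distance, and preset number of ray iterations stored in the
# memory device system A (numeric values are illustrative).
PRESET_DETAIL = 50
PRESET_DISTANCE = 4.0
PRESET_ITERATIONS = 1

def apply_qas(params: dict) -> dict:
    """On receipt of the QAS: reduce the amount of detail (P1), increase
    the depth distance (P2), and decrease the ray iterations (P3),
    clamping each at its preset so repeated QAS receipts are safe."""
    return {
        "P1_detail": min(params["P1_detail"], PRESET_DETAIL),
        "P2_distance": max(params["P2_distance"], PRESET_DISTANCE),
        "P3_iterations": min(params["P3_iterations"], PRESET_ITERATIONS),
    }
```

Clamping at the presets reflects the "to a preset level" variants in paragraph [0042]; an implementation could instead step the values gradually, which the disclosure also permits.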

[0044] Upon receiving the adjusted values of one or more of the parameters P1-P3, the shader A outputs the image frames, such as the image frames 102A-102C, having the adjusted values of the parameters P1-P3. It should be noted that the one or more parameters are adjusted before completion of generation of the image frames 102A-102C. The image frames 102A-102C are unencoded image frames. The shader A sends the image frames 102A-102C to the video encoder A via the communication medium A. The video encoder A encodes, such as compresses, the image frames 102A-102C to output encoded image frames, such as encoded image frames 104A and 104B. On the other hand, upon determining that the GPU A is not busy, the processing unit A does not generate the QAS and none of the parameters P1-P3 are adjusted by the shader parameter adjuster A.

[0045] In one embodiment, the values of the parameters P1-P3 are not adjusted by the shader parameter adjuster A before the image frames 102A-102C are rendered, such as generated or produced or output. For example, the CPU A does not send an instruction to the GPU A to adjust values of the parameters P1-P3 before the rendering operation is performed by the GPU A for generation of the image frames 102A-102C.

[0046] In one embodiment, the values of the parameters P1-P3 are not adjusted by the shader parameter adjuster A after the image frames 102A-102C are rendered. For example, the shader parameter adjuster A does not adjust values of the image frame 102A after the image frame 102A is rendered to re-render the image frame 102A.

[0047] In one embodiment, a shader, as described herein, is a portion of a game engine. For example, the shader A is a portion of the game engine A.

[0048] In an embodiment, the functions described herein as being performed by a processing unit, such as the processing unit A, are performed by a shader component, such as the shader component A.

[0049] In one embodiment, the functions described herein as being performed by a CPU are not performed by a processing unit within the CPU but are performed by another processor, such as a main processing unit, of the CPU.

[0050] In an embodiment, the GPU A includes a frame rate counter that counts a number of images being generated by the GPU A during a predetermined time period, such as every second or every two seconds.

[0051] In one embodiment, the shader parameter adjuster A applies the content rules to determine which of multiple virtual objects in the virtual scene is more complex compared to the remaining virtual objects. For example, the shader parameter adjuster A determines that virtual vehicle tracks in the virtual scene have a higher level of detail compared to virtual blades of grass in the virtual scene. Upon determining that the virtual vehicle tracks are more detailed, the shader parameter adjuster A reduces the level of detail of the virtual vehicle tracks without reducing a level of detail of the blades of grass. An example of the higher level of detail includes consuming a greater number of pixels in the virtual scene of the image frame 102A than that consumed by the virtual blades of grass. Another example of the higher level of detail includes having a higher number of parts or portions or components within the image frame 102A. To illustrate, the virtual vehicle tracks have railway sleepers, railway tracks, railway fasteners, and railway ballasts, and the virtual blades of grass have blades.

[0052] As another example, the shader parameter adjuster A initially reduces the level of detail of the virtual vehicle tracks and then reduces a level of detail of the blades of grass. As yet another example, the shader parameter adjuster A reduces the level of detail of the virtual vehicle tracks to less than a first predetermined level and reduces the level of detail of the virtual blades of grass to less than a second predetermined level. The first predetermined level is lower than, the same as, or greater than the second predetermined level.

[0053] FIG. 2 is a diagram of an embodiment of a system 200 to illustrate an adjustment of values of the parameters P1-P3 based on network latency. The system 200 includes the memory device system A, the communication medium A, the CPU A, the GPU A, a transmit (TX) unit A, a streaming engine A, a computer network 202, multiple client devices A and B, a receive (RX) unit A, and the shader component A. The transmit unit A includes the video encoder A and multiple frame buffers. One of the frame buffers stores the image frames 102A-102C and another one of the frame buffers stores the encoded image frames 104A and 104B. The frame buffers are coupled to the video encoder A. The receive unit A includes a decoder A and multiple frame buffers. The decoder A is coupled to the frame buffers of the receive unit A. An example of a decoder, as used herein, includes a component that is used to decode, such as decompress, encoded image frames to output unencoded image frames. To illustrate, the decoder applies the H.264 standard to convert the P, I, and B image frames into raw image frames.

[0054] Examples of a streaming engine, as used herein, include a network interface controller, such as a network interface card that applies an Internet communication protocol, such as Transmission Control Protocol/Internet Protocol (TCP/IP). The receive unit A is coupled to the communication medium A. Also, the streaming engine A is coupled to the communication medium A. Examples of the computer network 202 include a wide area network (WAN), such as the Internet, or a local area network (LAN), such as an intranet, or a combination thereof. A client device, as used herein, is a device that is operated by a user to gain access to a game that is executed using the game engine A. Examples of a client device, as used herein, include a computer, a cell phone, a tablet, a smart phone, a smart television, a game console, and a head-mounted display (HMD), etc. An HMD, as used herein, is a display device that is worn by a user to view a virtual scene, such as the VR scene or the AR scene. The VR scene or the AR scene is generated upon execution of the game engine A.

[0055] The client device A is operated by a user A and the client device B is operated by another user B. The user A is assigned a user account A by a user account server, further described below. Similarly, the user B is assigned another user account B by the user account server. The user A logs into the user account A to access the game from one or multiple nodes, which are further described below. Similarly, the user B logs into the user account B to access the game from the one or multiple nodes.

[0056] During the rendering operation in which the frames 102A-102C are being rendered, the processing unit A sends a signal to the streaming engine A via the communication medium A to send a predetermined number of test packets, such as one or two packets, with an instruction to the client device A to send back the predetermined number of test packets to the processing unit A. When the signal is sent to the streaming engine A, the processing unit A initiates a counter or a timer to count the network latency, such as an amount of time taken for the predetermined number of packets to be sent from the streaming engine A to the client device A and to be received by the streaming engine A from the client device A. For example, upon receiving the signal from the GPU A indicating that the rendering operation is being performed to generate one or more of the image frames 102A-102C, the processing unit A sends the signal to the streaming engine A to send the predetermined number of test packets with the instruction to the client device A to send back the predetermined number of test packets. Upon receiving the signal from the processing unit A, the streaming engine A generates the predetermined number of test packets, applies the Internet protocol to embed the instruction within the test packets, and sends the test packets via the computer network 202 to the client device A.

[0057] The client device A is one for which the image frames 102A-102C are being rendered. For example, in response to receiving an input signal from the client device A via the computer network 202, the CPU A determines the variables for the virtual object in the virtual scene. Moreover, based on the variables determined by the CPU A, the GPU A determines the color and intensity of the virtual object in the virtual scene. As another example, a request to play the game for which the virtual object is rendered by the GPU A is received from the client device A via the computer network 202.

[0058] Upon receiving the predetermined number of test packets having the instruction, the client device A parses the test packets to obtain the instruction. The client device A applies the Internet protocol to the instruction to generate the predetermined number of return test packets, and sends the return test packets via the computer network 202 to the streaming engine A. Upon receiving the predetermined number of return test packets, the streaming engine A applies the Internet protocol to parse the return test packets to obtain the instruction. The streaming engine A sends the instruction via the receive unit A to the processing unit A. Upon receiving the instruction, the processing unit A turns off the counter or timer that was initiated to determine the network latency in transferring the predetermined number of test packets from the streaming engine A via the computer network 202 to the client device A and from the client device A via the computer network 202 to the streaming engine A. As an example, the network latency is an amount of time between the operation of sending of the predetermined number of test packets by the streaming engine A to the client device A and the operation of receipt of the predetermined number of return test packets by the streaming engine A.

[0059] The processing unit A determines whether the network latency is greater than a predetermined amount of network latency, which is stored in the memory device system A. The predetermined amount of network latency is an example of a predetermined network latency threshold level. Upon determining that the network latency is greater than the predetermined amount of network latency, such as a threshold Th, the processing unit A generates the QAS and sends the QAS via the communication medium A to the shader component A. In response to receiving the QAS, the shader parameter adjuster A of the shader component A adjusts, in the manner described herein, the values of one or more of the parameters P1, P2, and P3 and provides the adjusted values to the shader A via the communication medium A for generating the image frames 102A-102C. On the other hand, upon determining that the network latency is not greater than the predetermined amount of network latency, the processing unit A does not generate the QAS.
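The round-trip measurement of paragraphs [0056]-[0059], in which a timer is started when the test packets are sent and stopped when the return test packets arrive, can be sketched as follows. The callable `send_and_echo` is a hypothetical stand-in for the streaming engine A / client device A exchange; it and the threshold handling are illustrative assumptions, not APIs from the disclosure.

```python
import time

def measure_network_latency(send_and_echo) -> float:
    """Start a timer when the predetermined number of test packets is
    sent and stop it when the return test packets are received; the
    elapsed time is the network latency."""
    start = time.monotonic()
    send_and_echo()  # send test packets and block until the echoes return
    return time.monotonic() - start

def should_send_qas(latency: float, threshold: float) -> bool:
    """Per paragraph [0059], the QAS is generated only when the measured
    latency exceeds the predetermined network latency threshold Th."""
    return latency > threshold
```

In practice the exchange would traverse the computer network 202; here a local callable stands in so the timing logic can be exercised in isolation.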

[0060] The video encoder A receives the image frames 102A-102C output from the shader A and encodes the image frames to generate the encoded image frames 104A and 104B. The streaming engine A receives the encoded image frames 104A and 104B from the video encoder A, applies the Internet protocol to the encoded image frames 104A and 104B to generate one or more packets, and transfers the one or more packets via the computer network 202 to the client device A. The client device A includes a video decoder for decoding the encoded image frames 104A and 104B to generate the image frames 102A-102C. The client device A displays the image frames 102A-102C on the display device of the client device A. Examples of the display device, as used herein, include a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display.

[0061] FIG. 3 is a diagram of an embodiment of a system 300 to illustrate generation of the QAS based on an amount of power consumed by the GPU A. The system 300 includes the memory device system A, the communication medium A, a power measurement device (PMD) A, the GPU A, a power supply A, the CPU A, and the shader component A. Examples of a power measurement device, as used herein, include a voltage sensor, a current sensor, or a power sensor. The current sensor includes one or more current shunt resistors to measure current. The power supply A and the PMD A are coupled to the communication medium A. Moreover, the PMD A is coupled to an input of the GPU A to measure power at the input of the GPU A. For example, the PMD A is coupled to a power input pin of the GPU A.

[0062] During the time period in which the rendering operation is performed by the GPU A to generate one or more of the image frames 102A-102C, the PMD A measures an amount of power consumed by the GPU A at the input of the GPU A, such as power draw of the GPU A, to output power measurement data. As an example, the PMD A includes a sampler or an analog-to-digital converter (ADC) to convert analog values of the amount of power measured into digital values of the power measurement data.

[0063] The power measurement data is transferred via the communication medium A to the processing unit A. The processing unit A receives the power measurement data and determines whether the power measurement data has values that are below a predetermined power threshold, which is another example of the predetermined power threshold level. For example, while the rendering operation is being performed to generate the image frame 102A, the processing unit A determines that the power measurement data is greater than the predetermined power threshold. The predetermined power threshold is stored in the memory device system A.

[0064] Upon determining that the power measurement data is below the predetermined power threshold, the processing unit A does not generate the QAS signal. On the other hand, upon determining the power measurement data is greater than the predetermined power threshold, the processing unit A generates the QAS signal. The QAS signal is generated while or during a time period in which one or more of the frames 102A-102C are being generated by execution of the rendering operation. The processing unit A sends the QAS via the communication medium A to the shader parameter adjuster A. Upon receiving the QAS, the shader parameter adjuster A adjusts values of the one or more parameters P1-P3 of one or more of the image frames 102A-102C in the manner described herein. The adjusted values are sent from the shader parameter adjuster A via the communication medium A to the shader A to render one or more of the image frames 102A-102C having the adjusted values in the manner described herein.
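The power-based decision of paragraphs [0063]-[0064] compares sampled power draw against the predetermined power threshold. The sketch below is illustrative; the function name and the idea of iterating over a sequence of digitized samples (the power measurement data output by the PMD A's ADC) are assumptions.

```python
def qas_from_power(power_samples, threshold_watts: float) -> bool:
    """Per paragraph [0064]: generate the QAS during rendering if the
    power measurement data exceeds the predetermined power threshold;
    otherwise do not generate the QAS."""
    return any(sample > threshold_watts for sample in power_samples)
```

When this returns true while a frame is being rendered, the processing unit A would send the QAS to the shader parameter adjuster A as described above.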

[0065] In one embodiment, multiple power measurement devices are coupled to an input of the GPU A to measure power that is consumed by the GPU A.

[0066] FIG. 4 is a diagram of an embodiment of a system 400 to illustrate generation of the QAS based on a temperature of the GPU A. The system 400 includes the memory device system A, the communication medium A, the GPU A, the CPU A, a temperature sensor A, and the shader component A. Examples of a temperature sensor, as used herein, include a thermocouple, a thermistor, a resistance temperature detector, and a semiconductor sensor. The temperature sensor A is coupled to the GPU A and to the communication medium A. For example, the temperature sensor A is attached to a surface of a semiconductor chip of the GPU A.

[0067] During generation of one or more of the frames 102A-102C by the GPU A, the temperature sensor A senses a temperature of the GPU A to output temperature data and provides the temperature data via the communication medium A to the processing unit A. As an example, the temperature sensor A includes a sampler or an ADC to convert analog temperature signals into digital data of temperature of the GPU A to provide the temperature data. The processing unit A receives the temperature data and determines whether the temperature of the GPU A exceeds a predetermined temperature threshold TMP THR, which is stored in the memory device system A. For example, the processing unit A includes a temperature analyzer to determine whether the temperature of the GPU A exceeds the predetermined temperature threshold TMP THR. The temperature threshold TMP THR is an example of a predetermined temperature threshold level. Upon determining that the temperature of the GPU A exceeds the predetermined temperature threshold, the processing unit A generates the QAS signal and sends the QAS signal to the shader parameter adjuster A. Upon receiving the QAS, the shader parameter adjuster A adjusts values of the one or more parameters P1-P3 of one or more of the image frames 102A-102C in the manner described herein and provides the adjusted values via the communication medium A to the shader A. Upon receiving the adjusted values, the shader A generates one or more frames having the adjusted values of one or more of the parameters P1-P3. On the other hand, upon determining that the temperature of the GPU A does not exceed, e.g., is not greater than, the predetermined temperature threshold, the processing unit A does not generate the QAS signal.
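The frame-rate, latency, power, and temperature triggers of FIGS. 1-4 all follow the same pattern: generate the QAS when a tracked metric exceeds its predetermined threshold level. A combined sketch, with assumed metric names and an assumed dictionary interface not taken from the disclosure:

```python
def qas_needed(metrics: dict, thresholds: dict) -> bool:
    """Emit the QAS during rendering if any tracked metric (e.g., frame
    rate, network latency, power draw, or GPU temperature) exceeds its
    predetermined threshold level; metrics without a reading are skipped."""
    return any(metrics[name] > thresholds[name]
               for name in thresholds if name in metrics)
```

A single processing unit could run this check once per rendered frame, which matches the disclosure's requirement that the QAS be generated while the image frame is still being processed.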

