

Patent: Split rendering between a head-mounted display (HMD) and a host computer


Publication Number: 20210185294

Publication Date: 2021-06-17

Applicant: Valve Corporation

Abstract

A rendering workload for an individual frame can be split between a head-mounted display (HMD) and a host computer that is executing an application. To split a rendering workload for a frame, the HMD may send head tracking data to the host computer, and the head tracking data may be used by the host computer to generate pixel data associated with the frame and extra data in addition to the pixel data. The extra data can include, without limitation, pose data, depth data, motion vector data, and/or extra pixel data. The HMD may receive the pixel data and at least some of the extra data, determine an updated pose for the HMD, and apply re-projection adjustments to the pixel data based on the updated pose and the received extra data to obtain modified pixel data, which is used to present an image on the display panel(s) of the HMD.

Claims

  1. A head-mounted display (HMD) comprising: one or more display panels having an array of light emitting elements; a head tracking system; a processor; and memory storing computer-executable instructions that, when executed by the processor, cause the HMD to: send, to a host computer that is communicatively coupled to the HMD, first head tracking data generated by the head tracking system; receive, from the host computer, and based at least in part on the first head tracking data, pixel data associated with a first frame and extra data in addition to the pixel data, the extra data including: pose data indicative of a predicted pose of the HMD that was used by an application executing on the host computer to generate the pixel data; and depth data associated with the first frame; determine, based at least in part on second head tracking data generated by the head tracking system, an updated pose that the HMD will be in at a time at which the light emitting elements will illuminate for the first frame; apply, based at least in part on the depth data and a comparison between the predicted pose and the updated pose, re-projection adjustments to the pixel data to obtain modified pixel data associated with the first frame; and present a first image on the one or more display panels based at least in part on the modified pixel data.

  2. The HMD of claim 1, wherein: the host computer is wirelessly coupled to the HMD; the first head tracking data is sent wirelessly to the host computer; and the pixel data and the extra data are received wirelessly from the host computer.

  3. The HMD of claim 1, wherein the pixel data includes pixel values corresponding to an array of pixels of the one or more display panels, and the computer-executable instructions, when executed by the processor, further cause the HMD to: classify, based at least in part on the depth data, a first subset of the pixel values as foreground pixels and a second subset of the pixel values as background pixels, wherein applying the re-projection adjustments to the pixel data based at least in part on the depth data comprises: modifying the first subset of the pixel values; and refraining from modifying the second subset of the pixel values.

  4. The HMD of claim 1, wherein: the pixel data includes pixel values corresponding to an array of pixels of the one or more display panels; the extra data further includes extra pixel data that includes extra pixel values outside of a boundary of the array of pixels of the one or more display panels; and applying the re-projection adjustments to the pixel data comprises replacing at least some of the pixel values with at least some of the extra pixel values.

  5. The HMD of claim 1, wherein: the extra data further includes motion vector data that was generated by the host computer based at least in part on the first head tracking data; and applying the re-projection adjustments to the pixel data is based at least in part on the motion vector data.

  6. The HMD of claim 1, wherein the computer-executable instructions, when executed by the processor, further cause the HMD to: receive, from at least one handheld controller that is communicatively coupled to the HMD, hand tracking data; and modify, based at least in part on the hand tracking data, the pixel data to include one or more virtual hands overlaid on a scene represented by the pixel data to obtain the modified pixel data associated with the first frame.

  7. A method implemented by a head-mounted display (HMD) that includes one or more display panels having an array of light emitting elements, the method comprising: sending, to a host computer, first head tracking data generated by a head tracking system of the HMD; receiving, from the host computer, and based at least in part on the first head tracking data, pixel data associated with a first frame and extra data, the extra data including: pose data indicative of a predicted pose of the HMD that was used by an application executing on the host computer to generate the pixel data; and depth data associated with the first frame; determining, based at least in part on second head tracking data generated by the head tracking system, an updated pose that the HMD will be in at a time at which the light emitting elements will illuminate for the first frame; applying, based at least in part on the depth data and a comparison between the predicted pose and the updated pose, re-projection adjustments to the pixel data to obtain modified pixel data associated with the first frame; and presenting a first image on the one or more display panels based at least in part on the modified pixel data.

  8. The method of claim 7, wherein: the host computer is wirelessly coupled to the HMD; the first head tracking data is sent wirelessly to the host computer; and the pixel data and the extra data are received wirelessly from the host computer.

  9. The method of claim 7, wherein the pixel data includes pixel values corresponding to an array of pixels of the one or more display panels, the method further comprising: classifying, based at least in part on the depth data, a first subset of the pixel values as foreground pixels and a second subset of the pixel values as background pixels, wherein the applying of the re-projection adjustments to the pixel data comprises: modifying the first subset of the pixel values; and refraining from modifying the second subset of the pixel values.

  10. The method of claim 7, wherein: the pixel data includes pixel values corresponding to an array of pixels of the one or more display panels; the extra data further includes extra pixel data that includes extra pixel values outside of a boundary of the array of pixels of the one or more display panels; and the applying of the re-projection adjustments to the pixel data comprises replacing at least some of the pixel values with at least some of the extra pixel values.

  11. The method of claim 7, wherein: the extra data further includes motion vector data that was generated based at least in part on the first head tracking data; and the applying of the re-projection adjustments to the pixel data is based at least in part on the motion vector data.

  12. The method of claim 7, further comprising: receiving, from at least one handheld controller that is communicatively coupled to the HMD, hand tracking data; and modifying, based at least in part on the hand tracking data, the pixel data to include one or more virtual hands overlaid on a scene represented by the pixel data to obtain the modified pixel data associated with the first frame.

  13. A host computer comprising: a processor; and memory storing computer-executable instructions that, when executed by the processor, cause the host computer to: receive, from a head-mounted display (HMD), first head tracking data generated by a head tracking system of the HMD; determine a predicted illumination time representing a time at which light emitting elements of one or more display panels of the HMD will illuminate for a first frame of a series of frames; determine, based at least in part on the first head tracking data, a predicted pose that the HMD will be in at the predicted illumination time; provide pose data indicative of the predicted pose to an application for rendering the first frame, the application executing on the host computer; obtain, from the application, pixel data associated with the first frame; generate motion vector data based at least in part on the first head tracking data and second head tracking data generated by the head tracking system, the second head tracking data having been received from the HMD prior to the first head tracking data; and send, to the HMD, the pixel data and extra data, the extra data including at least the pose data and the motion vector data.

  14. The host computer of claim 13, wherein: the HMD is wirelessly coupled to the host computer; the first head tracking data is received wirelessly from the HMD; and the pixel data and the extra data are sent wirelessly to the HMD.

  15. The host computer of claim 13, wherein the computer-executable instructions, when executed by the processor, further cause the host computer to: receive, from the application, depth data associated with the first frame, wherein the extra data further includes the depth data.

  16. The host computer of claim 13, wherein the pixel data includes pixel values corresponding to an array of pixels of the one or more display panels of the HMD, and the computer-executable instructions, when executed by the processor, further cause the host computer to: receive, from the application, extra pixel data that includes extra pixel values outside of a boundary of the array of pixels of the one or more display panels of the HMD, wherein the extra data further includes the extra pixel data.

  17. The host computer of claim 16, wherein the computer-executable instructions, when executed by the processor, further cause the host computer to instruct the application to generate the pixel data at a first resolution and to generate the extra pixel data at a second resolution lower than the first resolution.

  18. The host computer of claim 16, wherein the computer-executable instructions, when executed by the processor, further cause the host computer to instruct the application to generate the extra pixel data based at least in part on the first head tracking data indicating an amount of movement of the HMD that is greater than a threshold amount of movement.

  19. The host computer of claim 16, wherein the computer-executable instructions, when executed by the processor, further cause the host computer to instruct the application to render a number of the extra pixel values in the extra pixel data based at least in part on an amount of movement of the HMD indicated by the first head tracking data.

  20. The host computer of claim 16, wherein the computer-executable instructions, when executed by the processor, further cause the host computer to: instruct the application to generate the extra pixel data based at least in part on at least one of the motion vector data or predictive data generated by the application.

Description

BACKGROUND

[0001] Virtual reality (VR) systems are used both within and outside of the video game industry. A conventional VR system setup includes a VR headset that is physically tethered to a host computer via a wired data connection. In this conventional setup, the host computer executes a graphics-based application, such as a video game, where most, if not all, of the graphics rendering operations are handled by the host computer, and the VR headset simply displays the pixel data received from the host computer. This setup leverages the high-computing capacity of the host computer and the low latency of the wired connection to display high-quality imagery on a lightweight VR headset that functions much like a “thin-client” device in terms of the headset’s graphics processing capabilities. However, because such VR headsets are physically connected to the host computer, the user’s mobility is limited while using the VR headset. Furthermore, both setup and teardown of such a VR system are more difficult than they need to be due to the requirement of connecting and disconnecting cables.

[0002] On the opposite end of the spectrum, all-in-one (or standalone) VR headsets perform the entirety of the graphics processing operations to display imagery, without the aid of a separate machine. While standalone VR headsets provide a user with greater mobility because they do not have to be tethered to a host computer, manufacturing an all-in-one VR headset that is both comfortable and capable of rendering high-quality graphics can be challenging. For example, standalone VR headsets that are tasked with performing computationally-intensive, high-power-consuming graphics-processing operations to render high-quality graphics tend to get hot very quickly, and they also tend to be cumbersome and/or heavy, making them uncomfortable to wear for long periods of time. To alleviate these drawbacks, some standalone VR headsets trade quality for comfort by using lower-quality graphics processing components that render graphics at lower resolution, lower dynamic range, and/or with a limited set of only basic textures, which makes the graphics processing operations onboard the headset less computationally-intensive, allowing for a lighter-weight headset that does not get too hot and is therefore more comfortable to wear. However, users who wish to experience high-quality graphics in VR are left dissatisfied with today’s standalone VR headsets, which are unable to provide both quality and comfort.

[0003] Provided herein are technical solutions to improve and enhance these and other systems.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The detailed description is described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.

[0005] FIG. 1 is a diagram illustrating an example technique for splitting a rendering workload for a frame between a head-mounted display (HMD) and a host computer, in accordance with embodiments disclosed herein.

[0006] FIG. 2 is a diagram illustrating two example timelines showing respective rendering workloads for individual frames, the respective rendering workloads being split between a host computer and a HMD, in accordance with embodiments disclosed herein.

[0007] FIG. 3 illustrates a flow diagram of an example process for rendering a frame by splitting the rendering workload for the frame between a HMD and a host computer, in accordance with embodiments disclosed herein.

[0008] FIG. 4 illustrates a flow diagram of an example process for applying re-projection adjustments on a HMD based on motion vector data generated by a host computer, in accordance with embodiments disclosed herein.

[0009] FIG. 5 illustrates a flow diagram of an example process for applying re-projection adjustments based on extra pixel data generated by an application executing on a host computer, in accordance with embodiments disclosed herein.

[0010] FIG. 6 illustrates a flow diagram of an example process for applying re-projection adjustments based on depth data generated by an application executing on a host computer, in accordance with embodiments disclosed herein.

[0011] FIG. 7 illustrates a flow diagram of an example process for an HMD to receive hand tracking data directly from a handheld controller, and overlaying a virtual hand(s) on an application-rendered scene using the hand tracking data, in accordance with embodiments disclosed herein.

[0012] FIGS. 8A and 8B illustrate two alternative setups of a system that splits a rendering workload for a frame between a HMD and a host computer, in accordance with embodiments disclosed herein.

[0013] FIG. 9 illustrates example components of a wearable device, such as a HMD (e.g., a VR headset), and a host computer, in which the techniques disclosed herein can be implemented.

DETAILED DESCRIPTION

[0014] A head-mounted display (HMD) may be worn by a user for purposes of immersing the user in a virtual reality (VR) environment or an augmented reality (AR) environment. One or more display panels of the HMD present images based on data generated by an application (e.g., a video game). The application executes on a host computer that is communicatively coupled to the HMD, and the application generates pixel data for individual frames of a series of frames. The pixel data is sent to the HMD to present images that are viewed by a user through the optics included in the HMD, making the user perceive the images as if the user was immersed in a VR or AR environment.

[0015] Described herein are, among other things, techniques and systems for splitting a rendering workload for an individual frame between the HMD and the host computer such that the host computer performs a first portion of the rendering workload and the HMD performs a second portion of the rendering workload. For a given frame, the HMD is configured to send head tracking data to the host computer, and the host computer is configured to use the head tracking data to generate the pixel data for the frame and extra data in addition to the pixel data. The extra data can include, without limitation, pose data, depth data, motion vector data, parallax occlusion data, and/or extra pixel data. For example, the host computer may use the head tracking data to generate pose data indicative of a predicted pose that the HMD will be in at a time at which light emitting elements of the display panel(s) of the HMD will illuminate for the frame. The host computer may additionally, or alternatively, instruct the application to generate depth data and/or extra pixel data based at least in part on the pose data. The host computer may also generate motion vector data based at least in part on the head tracking data and/or movement within the scene being rendered. Some or all of this extra data may be sent from the host computer to the HMD, and the HMD may use at least some of the extra data it receives for purposes of modifying the pixel data, such as by applying re-projection adjustments to the pixel data. “Re-projection” is a technique used to compensate for slight inaccuracies in an original pose prediction of the HMD and/or to compensate for the application failing to make frame rate, which has the same effect as an original pose prediction that is slightly inaccurate. For example, a re-projected frame can be generated using pixel data from an application-rendered frame by transforming (e.g., through rotation and re-projection calculations) the application-rendered frame in a way that accounts for an updated prediction of the pose of the HMD. Accordingly, the modified pixel data obtained from applying the re-projection adjustments (and possibly other adjustments) may be used to present an image(s) on the display panel(s) of the HMD for the given frame, and this process may iterate for a series of frames.
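
As a rough illustration of the re-projection step (not taken from the patent), a rotation-only re-projection can be expressed as a homography built from the delta between the original pose prediction and the updated one. The sketch below assumes a pinhole intrinsics matrix `K` and 3×3 rotation matrices for the two poses; the exact warp direction depends on coordinate conventions:

```python
import numpy as np

def reprojection_homography(K, R_predicted, R_updated):
    """Homography that warps an image rendered at the predicted head
    orientation so it better matches the updated orientation.

    K           -- 3x3 intrinsics of the virtual camera (assumed pinhole)
    R_predicted -- 3x3 rotation the application rendered with
    R_updated   -- 3x3 rotation from the latest pose prediction
    """
    # Corrective rotation between the two pose predictions.
    R_delta = R_updated @ R_predicted.T
    # For a pure rotation, re-projection reduces to K * R_delta^T * K^-1;
    # each output pixel is then sampled from the source image through it.
    return K @ R_delta.T @ np.linalg.inv(K)
```

Depth data and motion vectors would extend this correction beyond a pure rotation, which is presumably why they are sent as part of the extra data.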

[0016] In some embodiments, the extra data (besides the pixel data) that is generated, sent, and/or utilized for rendering frames may vary frame-to-frame. For example, the host computer may dynamically determine, for individual frames, the type and/or extent of extra data that is to be generated as part of the first portion of the rendering workload and/or the type and/or extent of the extra data that is to be sent to the HMD. Meanwhile, the HMD may dynamically determine, for individual frames, the type and/or extent of the extra data received from the host computer to utilize as part of the second portion of the rendering workload.

[0017] Splitting the rendering workload for a given frame between a host computer and a HMD allows for implementing a system where the host computer and the HMD can be wirelessly connected to each other, something that is currently impracticable with today’s high-latency wireless communication protocols and HMDs that are fully reliant on a host computer. Splitting the rendering workload, in turn, allows for providing a high-quality VR or AR experience on a HMD that is also comfortable to wear for long periods of time because the high-computing capacity of the host computer can still be leveraged in the system disclosed herein. Furthermore, the HMD disclosed herein can be, and can remain, physically untethered from the host computer, providing a user with greater mobility, as compared to a tethered HMD, in that the user is better able to walk around a space while wearing the HMD, without concern for accidentally unplugging the HMD or the like. Given user demand for high-fidelity, high-resolution VR graphics, a wireless VR system that meets these demands will tend to be subjected to higher latencies in data transfer over a wireless communication link due to the greater amount of data that is transferred wirelessly. This means that a pose prediction of the HMD used by the application to render a given frame is made farther in advance in the system disclosed herein, as compared to the pose prediction for a conventional physically-tethered HMD that can avail itself of the higher data transfer rate of a wired connection. A pose prediction that is made farther in advance of the illumination time for the frame means there is more error in the pose prediction, as compared to the later-in-time pose prediction for a physically-tethered HMD, which, in turn, means that the HMD disclosed herein is tasked with performing computationally-intensive graphics processing operations in order to modify the pixel data received from the host computer (e.g., to correct for errors in the pixel data received from the host computer) so that a suitable image(s) is displayed on the HMD. In general, the HMD, armed with extra data received from the host computer, is in a better position to account for a relatively lower data transfer rate over the wireless communication link between the host computer and the HMD in order to modify the received pixel data in a way that improves the quality of the resulting image(s) presented on the display panel(s) of the HMD. In addition, the split rendering techniques and systems described herein can allow for a different rendering frequency (or frame rate) on each of the host computer and the HMD.

[0018] Accordingly, the disclosed HMD is configured to perform a portion of the rendering workload for a given frame, which allows data to be transferred wirelessly between the host computer and the HMD notwithstanding the relatively higher latency of the wireless connection, as compared to the relatively low-latency wired connection of today’s HMDs. The HMD can compensate for the higher latency of the wireless communication link using graphics-processing logic onboard the HMD that is used to correct for errors in the data generated by the host computer. In addition, this onboard graphics-processing logic allows the HMD to be used as a standalone device, perhaps in limited use scenarios. For example, the HMD disclosed herein can be used in standalone mode to play video games that render more basic graphics in their imagery, thereby requiring less computationally-intensive graphics processing operations to render frames. As another example, the HMD disclosed herein can be used in standalone mode to playback movies and/or video clips on the HMD, all without relying on the host computer. When a user of the HMD disclosed herein wishes to play a video game with richer graphics, however, the user may operate the HMD in connected mode to leverage the additional graphics processing capacity of the host computer by connecting the HMD thereto, either over a wired or wireless communication link. A wired communication link may still be utilized by users who wish to play video games with richer graphics for long periods of time by leveraging the additional power capacity of the host computer (e.g., so the HMD does not run out of battery power). As compared to today’s all-in-one systems, for example, a user can benefit from a high-fidelity graphics experience that is provided by a connected host computer along with the increased mobility that is enabled by virtue of an available wireless connection between the host computer and the HMD.

[0019] Also disclosed herein are non-transitory computer-readable media storing computer-executable instructions to implement the techniques and processes disclosed herein. Although the techniques and systems disclosed herein are discussed, by way of example, in the context of video game applications, and specifically VR gaming applications, it is to be appreciated that the techniques and systems described herein may provide benefits with other applications, including, without limitation, non-VR applications (e.g., AR applications), and/or non-gaming applications, such as industrial machine applications, defense applications, robotics applications, and the like.

[0020] FIG. 1 is a diagram illustrating an example technique for splitting a rendering workload 100 for a frame between a head-mounted display (HMD) and a host computer. FIG. 1 depicts a head-mounted display (HMD) 102 worn by a user 104, as well as a host computer(s) 106. FIG. 1 depicts example implementations of a host computer 106 in the form of a laptop 106(1) carried in a backpack, for example, or a personal computer (PC) 106(N), which may be situated in the user’s 104 household, for example. It is to be appreciated, however, that these exemplary types of host computers 106 are non-limiting to the present disclosure. For example, the host computer 106 can be implemented as any type and/or any number of computing devices, including, without limitation, a PC, a laptop computer, a desktop computer, a portable digital assistant (PDA), a mobile phone, a tablet computer, a set-top box, a game console, a server computer, a wearable computer (e.g., a smart watch, etc.), or any other electronic device that can transmit/receive data. The host computer 106 may be collocated in the same environment as the HMD 102, such as a household of the user 104 wearing the HMD 102. Alternatively, the host computer 106 may be remotely located with respect to the HMD 102, such as a host computer 106 in the form of a server computer that is located in a remote geographical location with respect to the geographical location of the HMD 102. In a remote host computer 106 implementation, the host computer 106 may be communicatively coupled to the HMD 102 via a wide-area network, such as the Internet. In a local host computer 106 implementation, the host computer 106 may be collocated in an environment (e.g., a household) with the HMD 102, whereby the host computer 106 and the HMD 102 may be communicatively coupled together either directly or over a local area network (LAN) via intermediary network devices.

[0021] As shown in FIG. 1, for a given frame, the host computer 106 is configured to perform a first partial rendering workload 100(1) (e.g., a first portion of the rendering workload 100 for a given frame), and the HMD 102 is configured to perform a second partial rendering workload 100(2) (e.g., a second portion of the rendering workload 100 for the given frame). In this manner, the HMD 102 and the host computer 106 are communicatively coupled together and are configured to work together in a collaborative fashion to render a given frame by generating pixel data that is ultimately used to present a corresponding image(s) on a display panel(s) 108 of the HMD 102.

[0022] The HMD 102 in the example of FIG. 1 may include a single display panel 108 or multiple display panels 108, such as a left display panel and a right display panel of a stereo pair of display panels. The one or more display panels 108 of the HMD 102 may be used to present a series of image frames (herein referred to as “frames”) that are viewable by the user 104 wearing the HMD 102. It is to be appreciated that the HMD 102 may include any number of display panels 108 (e.g., more than two display panels, a pair of display panels, or a single display panel). Hence, the terminology “display panel,” as used in the singular herein, may refer to either display panel 108 of a pair of display panels of a two-panel HMD 102, or it may refer to a single display panel 108 of a HMD 102 with any number of display panels (e.g., a single-panel HMD 102 or a multi-panel HMD 102). In a two-panel HMD 102, a stereo frame buffer may render, for instance, 2160×1200 pixels on both display panels of the HMD 102 (e.g., 1080×1200 pixels per display panel).

[0023] The display panel(s) 108 of the HMD 102 may utilize any suitable type of display technology, such as an emissive display that utilizes light emitting elements (e.g., light emitting diodes (LEDs)) to emit light during presentation of frames on the display panel(s) 108. As an example, display panel(s) 108 of the HMD 102 may comprise liquid crystal displays (LCDs), organic light emitting diode (OLED) displays, inorganic light emitting diode (ILED) displays, or any other suitable type of display technology for HMD applications.

[0024] The display panel(s) 108 of the HMD 102 may operate at any suitable refresh rate, such as a 90 Hertz (Hz) refresh rate, which can be a fixed refresh rate or a variable refresh rate that dynamically varies over a range of refresh rates. The “refresh rate” of a display is the number of times per second the display redraws the screen. The number of frames displayed per second may be limited by the refresh rate of the display, if using a fixed refresh rate. Thus, a series of frames may be processed (e.g., rendered) and displayed as images on the display such that a single frame of the series of frames is displayed with every screen refresh. That is, in order to present a series of images on the display panel(s) 108, the display panel(s) 108 may transition from frame-to-frame, in the series of frames, at the refresh rate of the display, illuminating the pixels at every screen refresh. In some embodiments, the frame rate can be throttled and/or the application can fail to hit the target frame rate, and phantom frames (based on re-projection) can be inserted between application-rendered frames.
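
To make the timing concrete, here is a minimal sketch using the 90 Hz figure from the text (the function name and the pass/fail framing are illustrative):

```python
refresh_rate_hz = 90.0                       # fixed refresh rate, per the example
frame_period_ms = 1000.0 / refresh_rate_hz   # ~11.11 ms between screen refreshes

def app_made_frame_rate(render_time_ms: float) -> bool:
    """If the application takes longer than one refresh period to render a
    frame, the compositor can insert a re-projected 'phantom' frame instead
    of waiting for the late frame."""
    return render_time_ms <= frame_period_ms
```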

[0025] The display system of the HMD 102 may implement any suitable type of display driving scheme, such as a global flashing type of display driving scheme, a rolling band type of display driving scheme, or any other suitable type of display driving scheme. In a global flashing type of display driving scheme, the array of light emitting elements of the display illuminate simultaneously at every screen refresh, thereby flashing globally at the refresh rate. In a rolling band type of display driving scheme, individual subsets of the light emitting elements of the display can be illuminated independently and sequentially in a rolling band of illumination during an illumination time period. These types of display driving schemes may be enabled by the light emitting elements being individually addressable. If the array of pixels and the array of light emitting elements on the display panel(s) 108 are arranged in rows and columns (but not necessarily with a one-pixel per one-light emitting element correspondence), individual rows and/or individual columns of light emitting elements may be addressed in sequence, and/or individual groups of contiguous rows and/or individual groups of contiguous columns of light emitting elements may be addressed in sequence for a rolling band type of display driving scheme.
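
A minimal sketch of the rolling band scheme described above, assuming rows of individually addressable light emitting elements and a hypothetical `illuminate_rows` callback (neither is specified by the patent):

```python
def drive_rolling_band(num_rows: int, num_bands: int, illuminate_rows) -> None:
    """Illuminate contiguous groups of rows one after another (a 'rolling
    band'), rather than flashing the whole panel at once."""
    rows_per_band = num_rows // num_bands
    for band in range(num_bands):
        first = band * rows_per_band
        # The last band absorbs any remainder rows.
        last = num_rows - 1 if band == num_bands - 1 else first + rows_per_band - 1
        illuminate_rows(first, last)  # each band lights up in sequence

# A global flashing scheme is the degenerate case with a single band:
# drive_rolling_band(1200, 1, illuminate_rows)
```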

[0026] In general, as used herein, “illuminating a pixel” means illuminating the light emitting element that corresponds to that pixel. For example, an LCD illuminates a light emitting element of a backlight to illuminate the corresponding pixel(s) of the display. Furthermore, as used herein, a “subset of pixels” may comprise an individual pixel or multiple pixels (e.g., a group of pixels). In order to drive the display panel(s) 108, the HMD 102 may include, among other things, a display controller(s), display driver circuitry, and similar electronics for driving the display panel(s) 108. Display driver circuitry may be coupled to the array of light emitting elements of the display panel(s) 108 via conductive paths, such as metal traces, on a flexible printed circuit. In an example, a display controller(s) may be communicatively coupled to the display driver circuitry and configured to provide signals, information, and/or data to the display driver circuitry. The signals, information, and/or data received by the display driver circuitry may cause the display driver circuitry to illuminate the light emitting elements in a particular way. That is, the display controller(s) may determine which light emitting element(s) is/are to be illuminated, when the element(s) is/are to illuminate, and the level of light output that is to be emitted by the light emitting element(s), and may communicate the appropriate signals, information, and/or data to the display driver circuitry in order to accomplish that objective.

[0027] In the illustrated implementation, the HMD 102 includes one or more processors 110 and memory 112 (e.g., computer-readable media 112). In some implementations, the processor(s) 110 may include a central processing unit (CPU)(s), a graphics processing unit (GPU)(s) 114, both CPU(s) and GPU(s) 114, a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. Additionally, each of the processor(s) 110 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.

[0028] The memory 112 may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The memory 112 may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor(s) 110 to execute instructions stored on the memory 112. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s) 110.

[0029] In general, the HMD 102 may include logic (e.g., software, hardware, and/or firmware, etc.) that is configured to implement the techniques, functionality, and/or operations described herein. The computer-readable media 112 can include various modules, such as instructions, datastores, and so forth, which may be configured to execute on the processor(s) 110 for carrying out the techniques, functionality, and/or operations described herein. An example functional module in the form of a compositor 116 is shown as being stored in the computer-readable media 112 and executable on the processor(s) 110, although the same functionality may alternatively be implemented in hardware, firmware, or as a system on a chip (SOC), and/or other logic. Furthermore, additional or different functional modules may be stored in the computer-readable media 112 and executable on the processor(s) 110. The compositor 116 is configured to modify pixel data received from the host computer 106 as part of the second partial rendering workload 100(2), and to output the modified pixel data to a frame buffer (e.g., a stereo frame buffer) so that a corresponding image(s) can be presented on the display panel(s) 108 of the HMD 102.

[0030] The HMD 102 may further include a head tracking system 118 and a communications interface(s) 120. The head tracking system 118 may leverage one or more sensors (e.g., infrared (IR) light sensors mounted on the HMD 102) and one or more tracking beacon(s) (e.g., IR light emitters collocated in the environment with the HMD 102) to track head motion or movement, including head rotation, of the user 104. This example head tracking system 118 is non-limiting, and other types of head tracking systems 118 (e.g., camera-based, inertial measurement unit (IMU)-based, etc.) can be utilized. The head tracking system 118 is configured to generate head tracking data 122, which can be sent, via the communications interface(s) 120, to the host computer 106 during runtime, as frames are being rendered.
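
The patent does not specify a wire format for the head tracking data 122; a hypothetical per-sample record might look like the following (all field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class HeadTrackingSample:
    """One head tracking reading sent from the HMD to the host computer.
    At the 1000 Hz rate mentioned later in the text, one such record would
    be produced per millisecond."""
    timestamp_us: int   # when the sensors were read, in microseconds
    position: tuple     # (x, y, z) head position, e.g., in meters
    orientation: tuple  # (x, y, z, w) unit quaternion for head rotation
```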

[0031] The communications interface(s) 120 of the HMD 102 may include wired and/or wireless components (e.g., chips, ports, etc.) to facilitate wired and/or wireless data transmission/reception to/from the host computer 106, either directly or via one or more intermediate devices, such as a wireless access point (WAP). For example, the communications interface(s) 120 may include a wireless unit coupled to an antenna to facilitate a wireless connection with the host computer 106 and/or another device(s). Such a wireless unit may implement one or more of various wireless technologies, such as Wi-Fi, Bluetooth, radio frequency (RF), and so on. The communications interface(s) 120 may further include one or more physical ports to facilitate a wired connection with the host computer 106 and/or another device(s) (e.g., a plug-in network device that communicates with other wireless networks).

[0032] In the illustrated implementation, the host computer 106 includes one or more processors 124 and memory 126 (e.g., computer-readable media 126). In some implementations, the processor(s) 124 may include a CPU(s), a GPU(s) 128, both CPU(s) and GPU(s) 128, a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include FPGAs, ASICs, ASSPs, SOCs, CPLDs, etc. Additionally, each of the processor(s) 124 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.

[0033] The memory 126 may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The memory 126 may be implemented as CRSM, which may be any available physical media accessible by the processor(s) 124 to execute instructions stored on the memory 126. In one basic implementation, CRSM may include RAM and Flash memory. In other implementations, CRSM may include, but is not limited to, ROM, EEPROM, or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s) 124.

[0034] In general, the host computer 106 may include logic (e.g., software, hardware, and/or firmware, etc.) that is configured to implement the techniques, functionality, and/or operations described herein. The computer-readable media 126 can include various modules, such as instructions, datastores, and so forth, which may be configured to execute on the processor(s) 124 for carrying out the techniques, functionality, and/or operations described herein. Example functional modules in the form of applications 130, such as a video game 130(1), and a render component 132 are shown as being stored in the computer-readable media 126 and executable on the processor(s) 124. In some embodiments, the functionality of the render component 132 may alternatively be implemented in hardware, firmware, or as a system on a chip (SOC), and/or other logic. Furthermore, additional or different functional modules may be stored in the computer-readable media 126 and executable on the processor(s) 124.

[0035] The host computer 106 may further include a communications interface(s) 134, which may include wired and/or wireless components (e.g., chips, ports, etc.) to facilitate wired and/or wireless data transmission/reception to/from the HMD 102, either directly or via one or more intermediate devices, such as a WAP. For example, the communications interface(s) 134 may include a wireless unit coupled to an antenna to facilitate a wireless connection with the HMD 102 and/or another device(s). Such a wireless unit may implement one or more of various wireless technologies, such as Wi-Fi, Bluetooth, RF, and so on. The communications interface(s) 134 may further include one or more physical ports to facilitate a wired connection with the HMD 102 and/or another device(s) (e.g., a plug-in network device that communicates with other wireless networks).

[0036] It is to be appreciated that the HMD 102 may represent a VR headset for use in VR systems, such as for use with a VR gaming system, in which case the video game 130(1) may represent a VR video game 130(1). However, the HMD 102 may additionally, or alternatively, be implemented as an AR headset for use in AR applications, or a headset that is usable for VR and/or AR applications that are not game-related (e.g., industrial applications). In AR, a user 104 sees virtual objects overlaid on a real-world environment, whereas, in VR, the user 104 does not typically see a real-world environment, but is fully immersed in a virtual environment, as perceived via the display panel(s) 108 and the optics (e.g., lenses) of the HMD 102. It is to be appreciated that, in some VR systems, pass-through imagery of the real-world environment of the user 104 may be displayed in conjunction with virtual imagery to create an augmented VR environment in a VR system, whereby the VR environment is augmented with real-world imagery (e.g., overlaid on a virtual world). Examples described herein pertain primarily to a VR-based HMD 102, but it is to be appreciated that the HMD 102 is not limited to implementation in VR applications.

[0037] In general, the application(s) 130 executing on the host computer 106 can be a graphics-based application(s) 130 (e.g., a video game 130(1)). An application 130 is configured to generate pixel data for a series of frames, and the pixel data is ultimately used to present corresponding images on the display panel(s) 108 of the HMD 102. During runtime, for a given frame, the render component 132 may determine a predicted “illumination time” for the frame. This predicted “illumination time” for the frame represents a time at which light emitting elements of the display panel(s) 108 of the HMD 102 will illuminate for the frame. This prediction can account for, among other things, the inherent latency of a wireless communication link between the host computer 106 and the HMD 102, as well as a predicted render time and/or a known scan-out time of the pixels from the frame buffer(s). In other words, the prediction may be different for a wireless communication link than it is for a wired communication link. For instance, the render component 132 may, for a wired communication link, predict an illumination time that is a first amount of time in the future (e.g., about 22 milliseconds in the future), whereas the render component 132 may, for a wireless communication link, predict an illumination time that is a second, greater amount of time in the future (e.g., about 44 milliseconds in the future), due to the inherent differences in latency when transferring data over a wired connection versus a wireless connection.
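
A minimal sketch of the illumination-time prediction under the figures quoted above; the additive term breakdown is an assumption, since the text only names link latency, render time, and scan-out time as contributing factors:

```python
def predict_illumination_time_ms(now_ms: float,
                                 link_latency_ms: float,
                                 render_time_ms: float,
                                 scan_out_time_ms: float) -> float:
    """Estimate when the HMD's light emitting elements will illuminate for
    the frame that is about to be rendered."""
    return now_ms + render_time_ms + link_latency_ms + scan_out_time_ms

# Per the text, the total lead time might come to ~22 ms over a wired link
# and ~44 ms over a wireless link.
```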

[0038] The host computer 106 may also receive, from the HMD 102, the head tracking data 122 (e.g., first head tracking data 122) generated by the head tracking system 118 of the HMD 102. This head tracking data 122 may be generated and/or sent at any suitable frequency, such as a frequency corresponding to the target frame rate and/or the refresh rate of the HMD 102, or a different (e.g., faster) frequency, such as 1000 Hz (or 1 sensor reading every 1 millisecond). The render component 132 is configured to determine a predicted pose that the HMD 102 will be in at the predicted illumination time based at least in part on the head tracking data 122. The render component 132 may then provide pose data indicative of the predicted pose to the executing application 130 for rendering the frame (e.g., generating pixel data for the frame) based on the predicted pose, and the render component 132 may obtain, from the application 130, pixel data 136 associated with the frame. This pixel data 136 may correspond to an array of pixels of the display panel(s) 108 of the HMD 102. For example, the pixel data 136 output by the application 130 based on the pose data may include a two-dimensional array of per-pixel values (e.g., color values) for the array of pixels on the display panel(s) 108 of the HMD 102. In an illustrative example, a stereo pair of display panels 108 may include an array of 2160×1200 pixels on both display panels of the HMD 102 (e.g., 1080×1200 pixels per display panel). In this illustrative example, the pixel data 136 may include 2160×1200 pixel values (or 2,592,000 pixel values). In some embodiments, the pixel data 136 may include data for each pixel that is represented by a single set of color and alpha values (e.g., one color value for a red channel, one color value for a green channel, one color value for a blue channel, and one or more values for one or more alpha channels).
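
The sizing arithmetic from this paragraph, assuming one byte per channel (the channel count beyond red, green, blue, and one alpha channel is an assumption):

```python
width, height = 2160, 1200         # stereo pair: 1080x1200 per display panel
pixel_values = width * height      # 2,592,000 per-pixel entries, as stated
channels = 4                       # red, green, blue, one alpha channel
bytes_per_frame = pixel_values * channels  # at 8 bits per channel

print(bytes_per_frame)             # 10,368,000 bytes, ~10 MB uncompressed
```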

[0039] Logic of the host computer 106 may also generate extra data 138 besides (or, in addition to) the pixel data 136, and at least some of this extra data 138 may be sent to the HMD 102 to aid the HMD 102 in the second partial rendering workload 100(2). For example, the extra data 138 can be packaged with the pixel data 136 and sent to the HMD 102, and at least some of the extra data 138 can be used by logic of the HMD 102 to modify the pixel data 136 for purposes of presenting an image(s) corresponding to the frame on the display panel(s) 108 of the HMD 102. The extra data 138 can include, without limitation, the pose data generated by the render component 132, depth data, motion vector data, parallax occlusion data, and/or extra pixel data. For example, in providing the pose data to the executing application 130 for rendering the frame, the render component 132 may further instruct the application 130 to generate depth data (e.g., Z-buffer data) for the frame and/or extra pixel data (sometimes referred to herein as “out-of-bounds pixel data” or “additional pixel data”), and, in response, the render component 132 may obtain, from the application 130, the depth data and/or the extra pixel data associated with the frame. Additionally, or alternatively, the render component 132 may generate motion vector data based at least in part on the head tracking data 122 received from the HMD 102. For example, motion vector data can be generated based on a comparison of head tracking data generated at two different points in time (e.g., a comparison of head tracking data separated by a few milliseconds). Logic of the HMD 102 (e.g., the compositor 116) can utilize some or all of the extra data 138 for purposes of modifying the pixel data 136 to correct for errors in the pose prediction made ahead of time by the render component 132, which accounted for the inherent latency of the wireless connection between the host computer 106 and the HMD 102. For example, the compositor 116 may apply re-projection adjustments based at least in part on the extra data 138 received from the host computer 106. Other adjustments made by the compositor 116 as part of the second partial rendering workload 100(2) may include, without limitation, adjustments for geometric distortion, chromatic aberration, re-projection, and the like. Ways in which the extra data 138 can be utilized as part of the second partial rendering workload 100(2) are described in more detail below with reference to the following figures.
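
One way to picture the per-frame package sent from the host to the HMD is as a record with optional fields, since paragraph [0016] notes that the extra data 138 can vary frame-to-frame. Everything below is an illustrative assumption, not a format from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FramePayload:
    """Per-frame data sent from the host computer 106 to the HMD 102."""
    pixel_data: bytes                           # application-rendered frame
    pose_data: bytes                            # predicted pose used for rendering
    depth_data: Optional[bytes] = None          # Z-buffer data from the application
    motion_vector_data: Optional[bytes] = None  # e.g., from head tracking deltas
    extra_pixel_data: Optional[bytes] = None    # out-of-bounds pixels for re-projection
```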

[0040] FIG. 2 is a diagram illustrating two example timelines 200(1) and 200(2) showing respective rendering workloads for individual frames, the respective rendering workloads being split between a host computer 106 and a HMD 102, in accordance with embodiments disclosed herein. The example of FIG. 2 depicts three example frames (frame “F”, frame “F+1”, and frame “F+2”) with respect to the first timeline 200(1) associated with the host computer 106. This first timeline 200(1) illustrates how the frames can be rendered in series by an executing application 130 on the host computer 106 using a GPU(s) 128 of the host computer 106. Here, the application 130 renders frame F as part of a first partial rendering workload 100(1)(a), then frame F+1 as part of a second partial rendering workload 100(1)(b), and then frame F+2 as part of a third partial rendering workload 100(1)(c), in sequence, from left to right on the first timeline 200(1). The ellipses on the first timeline 200(1) indicate that this may continue for any number of frames as the application 130 continues to execute. The first timeline 200(1) also implies, by the vertical lines oriented orthogonally to the horizontal timeline 200(1), that the application 130 is targeting a frame rate (e.g., a frame rate of 90 Hz, where the vertical lines would be separated by about 11.11 milliseconds). In the example of FIG. 2, the application 130 executing on the host computer 106 happens to be hitting the target frame rate over the series of three example frames, but this may not always be the case, as the application 130 may, in some instances (e.g., for scenes with a high number of moving objects or complex textures), take longer than the allotted time to render a given frame. This scenario is sometimes referred to as the application 130 failing to hit the target frame rate.

[0041] The second timeline 200(2) in FIG. 2, which is associated with the HMD 102, shows the partial rendering workloads 100(2)(a), 100(2)(b), and 100(2)(c) of the compositor 116 of the HMD 102 for the individual frames. An individual rendering workload 100(2) of the HMD’s 102 compositor 116 for a given frame may represent adjustments that are applied to the pixel data 136 generated by the application 130 executing on the host computer 106 before a final image(s) is presented on the display panel(s) 108 of the HMD 102. Such adjustments may include, without limitation, adjustments for geometric distortion, chromatic aberration, re-projection, and the like, which are applied to the pixel data 136 received from the host computer 106 before rendering a final image(s) on the HMD 102. At least some of these adjustments may utilize the extra data 138 received from the host computer 106, such as the pose data, depth data, extra pixel data, parallax occlusion data, and/or motion vector data, as described herein. Accordingly, the frames that are shown in FIG. 2 are meant to represent “actual” frames in the sense that they are output from the application 130, which may represent a video game application 130(1), or any other type of graphics-based application. By contrast, if the application 130 failed to hit the target frame rate for a given frame, or if the frame rate was throttled to a lower rate than the refresh rate of the display panel(s) 108 of the HMD 102, the compositor 116 of the HMD 102 may use the previously-received pixel data 136 for a preceding frame to generate a “phantom” frame (e.g., using re-projection) based on the pose prediction of the preceding frame and an updated pose prediction made by the HMD 102. In any case, the result of the partial rendering workloads 100(2) is the generation of modified pixel data that may be output to a frame buffer (e.g., a stereo frame buffer). This distinction between an “actual” frame and a “phantom” frame is not meant to imply that an actual frame is not adjusted on the HMD 102, and, in this sense, the frames generated on the HMD side are all effectively synthesized (i.e., not the same as the original frames output by the application 130 executing on the host computer 106).
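
A sketch of the compositor's actual-versus-phantom decision described above; `reproject`, the payload shape, and the callback names are placeholders, and a real implementation would also handle partial or late data:

```python
def compose_next_frame(received, previous, predict_updated_pose, reproject):
    """Produce modified pixel data for the next refresh.

    received -- newest payload from the host, or None if the application
                missed frame rate or packets were dropped
    previous -- payload that produced the preceding frame
    """
    updated_pose = predict_updated_pose()  # from the latest head tracking data
    if received is not None:
        # "Actual" frame: correct errors in the host's earlier pose prediction.
        return reproject(received.pixel_data, received.pose_data, updated_pose)
    # "Phantom" frame: re-project the preceding frame's pixel data using its
    # original pose prediction and the updated pose.
    return reproject(previous.pixel_data, previous.pose_data, updated_pose)
```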

[0042] The second timeline 200(2) of FIG. 2 also shows a scan-out time 202(a), 202(b), and 202(c) for each frame, as well as an illumination time 204(a), 204(b), and 204(c) for each frame. During the scan-out time 202 for a given frame, subsets of pixel values (of the modified pixel data) are scanned out to the display panel(s) 108 via a display port (e.g., a high-definition multimedia interface (HDMI)), and during the illumination time 204 for the given frame, the light emitting elements of the display panel(s) 108 are illuminated to cause the pixels of the display panel(s) 108 to illuminate. FIG. 2 illustrates an example of a global flashing type of display driving scheme, which may be used with LCD panels to simultaneously emit light from the light emitting elements of the display panel(s) 108 at the refresh rate of the HMD 102. In an illustrative example, if the HMD 102 is operating at a 90 Hz refresh rate, the illumination time 204 for each frame may be separated by roughly 11.11 milliseconds.

[0043] It is to be appreciated that, although FIG. 2 depicts that the respective rendering cycles of the host computer 106 and the HMD 102 appear to be synchronized (which they can be), the techniques and systems described herein do not require synchronization of frames between the two devices. In general, the compositor 116 of the HMD 102 may start its rendering workload 100(2) for a given frame as soon as the data (e.g., the pixel data 136 and the extra data 138) is received from the host computer 106, and/or as soon as the HMD 102 determines that the application 130 of the host computer 106 may have missed a frame or that packets may have been dropped in transit, etc. Due to varying conditions of the wireless communications link, the processing loads on the respective devices, and/or other factors, the respective rendering cycles of the host computer 106 and the HMD 102 may at times be out-of-synch/unsynchronized relative to each other. Accordingly, while the host computer 106 and the HMD 102 are configured to work together in a collaborative fashion by splitting the rendering workload for a given frame into partial workloads performed on the respective devices, one can appreciate that the devices may operate independently of one another to perform their respective portions of the workload.

[0044] The processes described herein are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof (i.e., logic). In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes.

……
……
……
