

Patent: Systems And Methods For Detection And/Or Correction Of Pixel Luminosity And/Or Chrominance Response Variation In Displays

Publication Number: 20190180668

Publication Date: 2019-06-13

Applicants: Valve

Abstract

Methods and systems are disclosed for measuring pixel-by-pixel luminosity and/or chrominance variations on a display, encoding and/or storing the measurements as a set of global and/or pixel-by-pixel correction factors, and/or digitally manipulating imagery with the inverse effect as the measured variations, such that the appearance of visual artifacts caused by the variations is reduced. These methods and systems may be used, for example, as part of the production process for virtual reality headsets, as well as in other applications that make high-fidelity use of displays exhibiting such artifacts (e.g., cell phones, watches, augmented reality displays, and the like).

BACKGROUND

Technical Field

[0001] The disclosure relates generally to video display technology, and more specifically to systems and methods for measuring pixel-by-pixel energy emission variations on a display, encoding and storing these measurements as a set of global and per-pixel correction factors, and/or digitally manipulating imagery with the inverse effect as the measured variations, such that the appearance of artifacts caused by such variations is reduced.

Description of the Related Art

[0002] Certain display technologies exhibit luminosity and/or colorimetric (gamma) energy emission responses which vary from pixel to pixel. Such variations are sometimes referred to as “mura defects,” “mura variations,” or simply “mura,” although the terminology and its precise meaning are not known to be standardized in the display industry.

[0003] For example, on Liquid Crystal Displays (“LCDs”), the backlight may exhibit spatial variations across the display which are visible to users. As another example, on Organic Light Emitting Diode (“OLED”) displays, adjacent pixels may exhibit substantially different color responses. These effects are particularly noticeable in regions of constant color and smooth gradients, where the region may appear “noisy” to an observer. This artifact is particularly objectionable on head mounted displays (“HMDs”), sometimes appearing as a “dirty window” through which the viewer is looking.

[0004] Various subjective/manual and objective/photoelectronic methods (sometimes generally known as “mura correction” techniques) are known in the art to address these variations to various extents. However, it is desirable to address the current limitations in this art according to aspects of the present invention.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0005] By way of example, reference will now be made to the accompanying drawings, which are not to scale.

[0006] FIG. 1 is an exemplary diagram of a computing device that may be used to implement aspects of certain embodiments of the present invention.

[0007] FIG. 2A is a grayscale version of a photograph depicting an exemplary all-green raw image sent to a display.

[0008] FIG. 2B is a grayscale version of a photograph depicting the exemplary all-green raw image sent to a display of FIG. 2A, as displayed to an observer, and uncorrected according to exemplary embodiments of the present invention.

[0009] FIG. 2C is a photograph depicting exemplary pixel-by-pixel correction factors according to aspects of the present invention.

[0010] FIG. 2D is a grayscale version of a photograph depicting pre-corrected imagery according to aspects of the present invention, corresponding to the image shown in FIG. 2B, as sent to an exemplary display.

[0011] FIG. 2E is a grayscale version of a photograph depicting an exemplary final image shown to an observer, according to aspects of the present invention, corresponding to the image depicted in FIG. 2D.

[0012] FIG. 3 is a grayscale version of a photograph depicting an exemplary image capture on a display panel of a constant green image with resolution sufficient to achieve an energy estimate for each sub-pixel according to aspects of the present invention.

[0013] FIG. 4 is a zoomed-in grayscale version of a photograph (approximate zoom factor=1000) of a portion of the image depicted in FIG. 3, with only the green channel illuminated, comprising a 5-by-5 pixel region with visible sub-pixels.

[0014] FIGS. 5A and 5B are photographs depicting aspects of an exemplary image capture system and configuration according to aspects of the present invention.

[0015] FIG. 6 depicts two exemplary display panels (610, 620) being driven by customized electronics (630) according to aspects of the present invention to simulate a head-mounted-display configuration.

[0016] FIG. 7 depicts a grid pattern shown on a display panel under test for use during calibration and to facilitate solving for geometric lens eccentricities according to aspects of the present invention.

[0017] FIG. 8 is a grayscale version of a photograph depicting a captured image according to aspects of the present invention, after dark field subtraction and lens undistortion steps used in certain embodiments.

[0018] FIG. 9 is a grayscale version of a photograph depicting corner-detection steps in a captured image according to aspects of the present invention.

[0019] FIG. 10 is a grayscale version of a photograph depicting an exemplary 32-by-32 pixel inset area in a captured image of a display panel under test after rectilinear alignment according to aspects of the present invention.

[0020] FIG. 11 graphically depicts pixel-by-pixel energy emission in a portion of an exemplary display panel under test according to aspects of the present invention.

DETAILED DESCRIPTION

[0021] Those of ordinary skill in the art will realize that the following description of the present invention is illustrative only and not in any way limiting. Other embodiments of the invention will readily suggest themselves to such skilled persons, having the benefit of this disclosure, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. Reference will now be made in detail to specific implementations of the present invention as illustrated in the accompanying drawings. The same reference numbers will be used throughout the drawings and the following description to refer to the same or like parts.

[0022] The data structures and code described in this detailed description are typically stored on a computer readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs) and DVDs (digital versatile discs or digital video discs), and computer instruction signals embodied in a transmission medium (with or without a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communications network, such as the Internet.

[0023] FIG. 1 is an exemplary diagram of a computing device 100 that may be used to implement aspects of certain embodiments of the present invention. Computing device 100 may include a bus 101, one or more processors 105, a main memory 110, a read-only memory (ROM) 115, a storage device 120, one or more input devices 125, one or more output devices 130, and a communication interface 135. Bus 101 may include one or more conductors that permit communication among the components of computing device 100. Processor 105 may include any type of conventional processor, microprocessor, or processing logic that interprets and executes instructions. Main memory 110 may include a random-access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 105. ROM 115 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 105. Storage device 120 may include a magnetic and/or optical recording medium and its corresponding drive. Input device(s) 125 may include one or more conventional mechanisms that permit a user to input information to computing device 100, such as a keyboard, a mouse, a pen, a stylus, handwriting recognition, voice recognition, biometric mechanisms, and the like. Output device(s) 130 may include one or more conventional mechanisms that output information to the user, including a display, a projector, an A/V receiver, a printer, a speaker, and the like. Communication interface 135 may include any transceiver-like mechanism that enables computing device/server 100 to communicate with other devices and/or systems. Computing device 100 may perform operations based on software instructions that may be read into memory 110 from another computer-readable medium, such as data storage device 120, or from another device via communication interface 135. The software instructions contained in memory 110 cause processor 105 to perform processes that will be described later. Alternatively, hard-wired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the present invention. Thus, various implementations are not limited to any specific combination of hardware circuitry and software.

[0024] In certain embodiments, memory 110 may include without limitation high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include without limitation non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 110 may optionally include one or more storage devices remotely located from the processor(s) 105. Memory 110, or one or more of the storage devices (e.g., one or more non-volatile storage devices) in memory 110, may include a computer readable storage medium. In certain embodiments, memory 110 or the computer readable storage medium of memory 110 may store one or more of the following programs, modules and data structures: an operating system that includes procedures for handling various basic system services and for performing hardware dependent tasks; a network communication module that is used for connecting computing device 100 to other computers via the one or more communication network interfaces and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; and a client application that may permit a user to interact with computing device 100.

[0025] Certain text and/or figures in this specification may refer to or describe flow charts illustrating methods and systems. It will be understood that each block of these flow charts, and combinations of blocks in these flow charts, may be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions that execute on the computer or other programmable apparatus create structures for implementing the functions specified in the flow chart block or blocks. These computer program instructions may also be stored in computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in computer-readable memory produce an article of manufacture including instruction structures that implement the function specified in the flow chart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flow chart block or blocks.

[0026] Accordingly, blocks of the flow charts support combinations of structures for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flow charts, and combinations of blocks in the flow charts, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.

[0027] For example, any number of computer programming languages, such as C, C++, C# (C Sharp), Perl, Ada, Python, Pascal, SmallTalk, FORTRAN, assembly language, and the like, may be used to implement aspects of the present invention. Further, various programming approaches such as procedural, object-oriented or artificial intelligence techniques may be employed, depending on the requirements of each particular implementation. Compiler programs and/or virtual machine programs executed by computer systems generally translate higher level programming languages to generate sets of machine instructions that may be executed by one or more processors to perform a programmed function or set of functions.

[0028] In the descriptions set forth herein, certain embodiments are described in terms of particular data structures, preferred and optional enforcements, preferred control flows, and examples. Other and further applications of the described methods, as would be understood after review of this application by those with ordinary skill in the art, are within the scope of the invention.

[0029] The term “machine-readable medium” should be understood to include any structure that participates in providing data that may be read by an element of a computer system. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory such as devices based on flash memory (such as solid-state drives, or SSDs). Volatile media include dynamic random access memory (DRAM) and/or static random access memory (SRAM). Transmission media include cables, wires, and fibers, including the wires that comprise a system bus coupled to a processor. Common forms of machine-readable media include, for example and without limitation, a floppy disk, a flexible disk, a hard disk, a solid-state drive, a magnetic tape, any other magnetic medium, a CD-ROM, a DVD, or any other optical medium.

[0030] In certain embodiments, methods according to aspects of the present invention comprise three steps (each step is described in more detail after the following introductory list):

[0031] 1) A technique for display measurement: This approach requires accurate estimation of the energy emitted by each sub-pixel of the display. The specific images captured are targeted to the known deficiencies of the display technology in conjunction with the correction model being utilized.

[0032] 2) A technique for applying measurements: For each display panel, based on the appropriate correction model, a set of global and per-pixel correction factors is computed. Two general approaches to computing the correction factors are described, although combinations and/or variations of these may be implemented without departing from the scope of the invention: an iterative approach, and a non-iterative approach.

[0033] 3) Real-time imagery processing: Images are processed in real-time, using the correction factors computed in the second step, above, to reduce the appearance of the visual artifacts caused by the measured pixel-by-pixel energy emission variations from step one, above.

Display Measurement

[0034] Due to the typically high number of sub-pixel elements in a display (usually more than a million), generating accurate energy estimates for each sub-pixel may comprise a relatively complex task.

[0035] In certain embodiments, step one is to image each color channel individually (e.g., red, green, blue) to reduce the number of emissive elements being imaged.

[0036] Super-sampling the panel under test using the imaging sensor is also required in certain embodiments, as an exact sub-pixel alignment between camera sensor elements and emissive display elements is typically impossible. One factor that makes such a 1:1 sub-pixel measurement impractical is that cameras typically use rectangular rasters and Bayer patterns for color reproduction, while display panels often use alternative (e.g., non-rectangular) patterns such as PenTile mappings.

[0037] In certain embodiments, it has been observed that accurate display measurements can be created by using twenty-five or more photosites on the camera sensor for each sub-pixel in the display. Additional camera photosites per sub-pixel yield better results in certain embodiments.

[0038] FIG. 2A is a grayscale version of a photograph (200A) depicting an exemplary all-green raw image sent to a display.

[0039] FIG. 2B is a grayscale version of a photograph (200B) depicting the exemplary all-green raw image sent to a display of FIG. 2A, as displayed to an observer, and uncorrected according to exemplary embodiments of the present invention.

[0040] FIG. 2C is a photograph (200C) depicting exemplary pixel-by-pixel correction factors according to aspects of the present invention.

[0041] FIG. 2D is a grayscale version of a photograph (200D) depicting pre-corrected imagery according to aspects of the present invention, corresponding to the image shown in FIG. 2B, as sent to an exemplary display.

[0042] FIG. 2E is a grayscale version of a photograph (200E) depicting an exemplary final image shown to an observer, according to aspects of the present invention, corresponding to the image depicted in FIG. 2D.

[0043] FIG. 3 is a grayscale version of a photograph (300) depicting an exemplary image capture on a display panel (320) of a constant green image with resolution sufficient to achieve an energy estimate for each sub-pixel according to aspects of the present invention.

[0044] FIG. 4 is a zoomed-in grayscale version of a photograph (400) (approximate zoom factor=1000) of a portion of the image depicted in FIG. 3, with only the green channel illuminated, comprising a 5-by-5 pixel region with visible sub-pixels.

[0045] If the camera used does not have sufficient resolution to maintain this sampling density across the full panel, sub-regions may be imaged in certain embodiments and the resulting data sets may then be smoothly blended.

[0046] Alternatively, for some display applications such as HMDs, it is not always necessary to image the full visual field. For example, measuring and correcting only the central field of view is often sufficient in certain embodiments, provided the correction layer smoothly blends to no correction at the periphery of the corrected area (rather than the alternative of cutting off correction abruptly). This may be accomplished in certain embodiments by smoothly blending the per-pixel correction factors (described in more detail later) with a null value towards the periphery.
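
For illustration, the following is a minimal numpy sketch of such a peripheral blend, assuming an additive per-pixel delta map and a circular corrected region; the radii, the smoothstep falloff, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def feather_correction(ppd, corrected_radius, feather_width):
    """Blend a per-pixel correction map smoothly toward a null value (zero)
    outside a central circular region, rather than cutting off abruptly."""
    h, w = ppd.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2.0, yy - h / 2.0)       # distance from panel center
    t = np.clip((r - corrected_radius) / feather_width, 0.0, 1.0)
    weight = 1.0 - t * t * (3.0 - 2.0 * t)         # smoothstep: 1 inside, 0 outside
    return ppd * weight
```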

[0047] FIGS. 5A and 5B are photographs depicting aspects of an exemplary image capture system and configuration according to aspects of the present invention.

[0048] In one exemplary display measurement system embodiment (as shown in FIGS. 5A and 5B), the following equipment may be used: a Canon 5Ds digital SLR camera, a 180 mm macro photography lens (510), and a rigid macro stand. Drive electronics are also included (630, shown in FIG. 6), which drive the displays (610, 620) in a manner that matches HMD usage (i.e., low persistence, 90 Hz or 120 Hz frame rate). In display production environments, measurements are typically taken in a dust-free, light-blocking enclosure in certain embodiments.

[0049] In order to accurately predict the placement of each of millions of sub-pixels, the imaging system (lens) must be spatially calibrated beyond the sub-pixel level in certain embodiments. This calibration typically depends upon factors such as the camera and lens model and the focus and f-stop settings in use.

[0050] Prior to taking a color measurement in certain embodiments, geometric lens eccentricities are accounted for by placing a known grid pattern on the display. This is a common technique among ordinarily skilled artisans in the field of computer vision, although the precision requirements of certain implementations of the present invention go beyond typical uses. Post-calibration, the geometric accuracy of the lens and imaging system must be correct beyond the sub-pixel level of the imaging device in certain embodiments. That is, for a five-by-five per sub-pixel imaging of the display raster in such embodiments, the overall geometric distortion must be much less than one output pixel, equivalent to less than one-fifth of the spacing between display sub-pixels.

[0051] FIG. 7 depicts a grid pattern (710) shown on a display panel under test for use during calibration and to facilitate solving for geometric lens eccentricities according to aspects of the present invention.
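
The following OpenCV sketch shows one plausible form of this calibration step; the patent does not name a library, and the checkerboard-style grid with a 15-by-9 inner-corner layout is an assumption for illustration.

```python
import cv2
import numpy as np

def calibrate_from_grid(img, cols=15, rows=9):
    """Solve for lens intrinsics and distortion from a grid displayed on the
    panel (cf. FIG. 7). A checkerboard-style grid is an assumption; the patent
    only requires a known grid. Sub-pixel corner refinement matters because
    residual distortion must stay well below one display sub-pixel."""
    found, corners = cv2.findChessboardCorners(img, (cols, rows))
    if not found:
        raise RuntimeError("grid not detected")
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    objp = np.zeros((rows * cols, 3), np.float32)   # ideal grid on z = 0 plane
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)
    rms, K, dist, _, _ = cv2.calibrateCamera(
        [objp], [corners], img.shape[::-1], None, None)
    return K, dist, rms
```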

[0052] Next, in certain embodiments a black image is captured to determine the dark field response of the camera.

[0053] Finally, an image suitable for characterizing the per-pixel response is displayed. In certain embodiments, this is typically a monochrome image of constant color.

[0054] All images are captured using camera raw processing in certain embodiments, which preserves their photometric linearity.

[0055] The dark field is then subtracted from the captured image in certain embodiments, and the result is then unwarped by the lens solution. FIG. 8 is a grayscale version of a photograph depicting a captured image according to aspects of the present invention (800), after the dark field subtraction and lens undistortion steps used in certain embodiments.
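
A short sketch of these two steps, assuming OpenCV and the intrinsics and distortion terms (`K`, `dist`) solved from the grid capture above; the function and variable names are illustrative, not from the patent.

```python
import cv2
import numpy as np

def dark_subtract_and_unwarp(capture, dark, K, dist):
    """Dark-field subtraction followed by lens unwarp. `capture` and `dark`
    are photometrically linear camera-raw frames; K/dist come from the grid
    calibration sketched above."""
    linear = np.clip(capture.astype(np.float32) - dark.astype(np.float32),
                     0.0, None)                   # remove sensor dark response
    return cv2.undistort(linear, K, dist)         # unwarp by the lens solution
```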

[0056] A deconvolution kernel may be applied in certain embodiments, which removes local flares in the imaging chain. This flare compensation can be validated using an image which measures the point-spread function (“PSF”). Typically, a single pixel is illuminated in an otherwise constant-valued region to compute this value.
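
As one concrete, hedged realization of this step, Fourier-domain Wiener deconvolution can serve as the flare-removing kernel, given a PSF measured as described; the regularization constant `k` is an assumed value, and the patent does not specify a particular deconvolution method.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Divide out the imaging-chain PSF in the Fourier domain, with
    regularization constant `k` to suppress noise amplification."""
    image = np.asarray(image, dtype=np.float64)
    psf = np.asarray(psf, dtype=np.float64)
    psf_pad = np.zeros_like(image)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf / psf.sum()            # normalize kernel energy
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)      # Wiener filter
    return np.real(np.fft.ifft2(F))
```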

[0057] In certain embodiments, the pixel corners for the captured rectangular area are detected, and a four-corner perspective warp creates an axis-aligned representation, where each sub-pixel has a consistent size and alignment. FIG. 9 is a photograph depicting corner-detection steps in a captured image (900) according to aspects of the present invention.
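
A sketch of the four-corner rectification using OpenCV follows; the panel dimensions, the five-photosites-per-sub-pixel factor, and the corner ordering (top-left, top-right, bottom-right, bottom-left) are illustrative assumptions.

```python
import cv2
import numpy as np

def rectify_panel(undistorted, corners_px, panel_w=1080, panel_h=1200, box=5):
    """Four-corner perspective warp to an axis-aligned raster in which each
    display sub-pixel maps to a constant `box` x `box` photosite area.
    `corners_px` holds the four detected panel corners (TL, TR, BR, BL)."""
    dst = np.float32([[0, 0], [panel_w * box, 0],
                      [panel_w * box, panel_h * box], [0, panel_h * box]])
    M = cv2.getPerspectiveTransform(np.float32(corners_px), dst)
    return cv2.warpPerspective(undistorted, M, (panel_w * box, panel_h * box))
```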

[0058] FIG. 10 is a grayscale version of a photograph (1000) depicting an exemplary 32-by-32 pixel inset area in a captured image of a display panel under test after rectilinear alignment according to aspects of the present invention.

[0059] Each sub-pixel is centered in each box in certain embodiments, allowing for accurate energy estimation, where each box is the area integrated for each sub-pixel. Each sub-pixel typically has a different intensity, as shown in FIG. 11; this is the effect that is measured and/or corrected in whole or in part according to aspects of the present invention.

[0060] Finally, according to certain embodiments, the energy for each pixel is calculated by summing all values in each pixel area.
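
With the aligned capture laid out as a constant-size box of photosites per sub-pixel, this summation reduces to a reshape-and-sum, as in the minimal sketch below (the 5x sampling factor is illustrative).

```python
import numpy as np

def sub_pixel_energy(aligned, box=5):
    """Sum photosite values over each sub-pixel's axis-aligned box to get
    one energy estimate per display sub-pixel."""
    h, w = aligned.shape
    return aligned.reshape(h // box, box, w // box, box).sum(axis=(1, 3))
```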

[0061] FIG. 11 graphically depicts pixel-by-pixel energy emission in a portion of an exemplary display panel under test (1100) according to aspects of the present invention.

[0062] This process is typically highly sensitive to dust landing on the panel during image acquisition. If dust or fibers land on the display, they will absorb and/or scatter some light, so the overlapping pixels will be incorrectly measured as dim. When compensation is applied, these pixels will have strong positive gain factors applied and will stand out as objectionable “overbright” pixels. To compensate for dust, multiple images of the panel may be taken in certain embodiments, with a blast of air (or other cleaning process) effected between each image capture. The energy estimates are computed individually for each captured image, and then merged using the max() operator for each pixel. As dust and other particulates can typically only make pixels dimmer (not brighter) during capture, as long as the dust moves between subsequent captures, its impact may be removed.
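
The merge step reduces to a per-sub-pixel maximum across the repeated captures, as in this sketch:

```python
import numpy as np

def merge_dust_robust(energy_maps):
    """Per-sub-pixel maximum across repeated captures (with an air blast or
    other cleaning between each). Dust only dims pixels, so as long as it
    moves between captures, the max recovers the true energy."""
    return np.maximum.reduce(list(energy_maps))
```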

[0063] Summary of capture process in certain embodiments:

[0064] Align and lock camera to proper panel position, including focus and exposure.

[0065] Capture grid pattern (monochrome) and solve for lens geometric characteristics.

[0066] Display all-black image to capture dark field.

[0067] Display target flat-field colors (monochrome) and capture pixels in photometric linear data (camera raw).

[0068] Subtract dark field.

[0069] Apply deconvolution to account for imaging system PSF.

[0070] Unwarp by lens solution.

[0071] Detect pixel corners of the visual field, for all four corners.

[0072] Use four-corner perspective warp to synthesize an idealized, axis-aligned rectilinear grid. Each sub-pixel in the display should correspond to a known, axis-aligned box of constant size in the aligned output image.

[0073] Sum energy in each box corresponding to a sub-pixel.

[0074] For estimates robust to dust, repeat N times with a cleaning/air blast between captures. Merge captures using the maximum value estimate for each sub-pixel across all captures.

Correction Factor Modeling

[0075] For each display panel, based on the correction model, a set of global and per-pixel correction factors may be computed in certain embodiments. Iterative and non-iterative approaches to computing the correction factors may be implemented, as well as variations and/or combinations of these approaches, depending on the particular requirements of each implementation.

Non-Iterative Approach

[0076] The following model may be used as the starting point, which accounts for more than 90% of the mura effect in OLED panels. (For other display technologies, alternative formulations may be employed to compactly represent the artifact, as known to those of ordinary skill in the art).

CCV(x,y)=ICV(x,y)+PPD(x,y)

where:

[0077] CCV: Corrected code value in the device native gamma encoding

[0078] ICV: Input code value in the device native gamma encoding

[0079] PPD: per-pixel delta

[0080] (x,y) denotes that the quantity varies as a function of the output pixel location (x,y) in display space.
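
At playback time this model is a per-pixel add and clamp in the display’s native encoding. A minimal sketch for one 8-bit color channel follows; the rounding and clamping behavior is an assumption, as the patent does not specify it.

```python
import numpy as np

def correct_frame(icv, ppd):
    """Apply CCV = ICV + PPD for one 8-bit color channel in the display's
    native gamma encoding, clamped to the valid code range."""
    ccv = icv.astype(np.float32) + ppd
    return np.clip(np.rint(ccv), 0, 255).astype(np.uint8)
```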

[0081] During final playback (applying the correction factors in real-time to novel imagery), it may be convenient in certain embodiments to encode the per-pixel delta so as to maximize the coding space, by pulling the min and max values out into global constants (see the quantization sketch following the definitions below).

PPD(x,y)=PPV(x,y)*CG+CO

CCV(x,y)=ICV(x,y)+PPD(x,y)

where:

[0082] CCV: Corrected code value in the device native gamma encoding

[0083] ICV: Input code value in the device native gamma encoding

[0084] PPV: per-pixel value encoded with limited bits

[0085] PPD: per-pixel delta

[0086] CG: correction gain

[0087] CO: correction offset

[0088] Gain/offset: global values used to interpret per-pixel deltas
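
A sketch of this encoding under the stated model, assuming an 8-bit per-pixel store (the bit depth is illustrative; the patent only specifies pulling the min and max values into global constants):

```python
import numpy as np

def encode_ppd(ppd, bits=8):
    """Quantize per-pixel deltas to `bits`-bit PPV values plus global
    gain/offset (CG, CO), using the full coding space between the min
    and max deltas."""
    levels = (1 << bits) - 1
    co = float(ppd.min())                              # CO: correction offset
    cg = max((float(ppd.max()) - co) / levels, 1e-12)  # CG: correction gain
    ppv = np.rint((ppd - co) / cg).astype(np.uint16)   # PPV: limited-bit value
    return ppv, cg, co

def decode_ppd(ppv, cg, co):
    """Recover the per-pixel delta: PPD = PPV * CG + CO."""
    return ppv.astype(np.float32) * cg + co
```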

[0089] It should be noted that despite the simplicity of the above mathematical formulation (i.e., adding a constant per-pixel delta value in the device’s native coding space), it is counterintuitive that a display technology would behave in this manner. A more intuitive model for the relevant behavior would be a per-pixel correction factor with a linear gain operation (e.g., cause one pixel to emit 20% more light, cause another to be 5% dimmer, etc.); however, when formulated in this intuitive manner, the amount of gain varies as a function of the input code value. After significant investigation and experimentation, it was determined that these higher order terms canceled out, resulting in the simplified formulation used in certain embodiments that is described herein.

[0090] It has been determined according to aspects of the present invention that the mura effect can be cancelled out for OLED displays with an additive offset applied in the device’s gamma encoding.

Computing the Per-Pixel Deltas

[0091] As an additive offset is modeled, a representative code value is selected and the energy estimate is measured, per pixel, for a flat-field image. Specifically for the case of OLEDs in certain embodiments, code value 51 (out of 255) may be selected. This value is dim enough that a fixed additive offset has a high signal-to-noise ratio, but it is bright enough that exposure times are not prohibitive. Of course, different implementations may be better suited to different representative code values.

PPD=TCV-pow(LPE(x,y)/LPELA(x,y)*pow(TCV,DG),1.0/DG)

where:

[0092] PPD: per-pixel delta

[0093] TCV: target code value (the value sent to the display during measurement)

[0094] LPE: linear pixel energy

[0095] LPELA: linear pixel energy, local-area average (local energy average for the surrounding neighborhood, often center/Gaussian weighted)

[0096] DG: display gamma (typically a constant = 2.2)

[0097] The above equation models the question: assuming an idealized gamma transfer function for the display, “pow(x, gamma)”, what input code value reproduces the linear light we have measured? Dividing the linear pixel energy by a local average allows computing how each pixel compares to an ideal in a manner robust to global lens capture effects. The size of the local average window is tailored to the display technology being measured in certain embodiments.
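
Putting the pieces together, the following is a hedged numpy/scipy sketch of the per-pixel delta computation, with a Gaussian-weighted local average standing in for LPELA; the window size `sigma` is an assumption and would be tailored to the display in practice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def per_pixel_deltas(lpe, tcv=51.0, dg=2.2, sigma=8.0):
    """PPD = TCV - pow(LPE/LPELA * pow(TCV, DG), 1/DG), per the formula
    above, with a Gaussian local average as LPELA."""
    lpela = gaussian_filter(lpe, sigma)           # local-area energy average
    ratio = lpe / np.maximum(lpela, 1e-12)        # robust to capture falloff
    return tcv - (ratio * tcv ** dg) ** (1.0 / dg)
```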

[0098] One may also replace the sub-expression “pow(x, gamma)” in certain embodiments with a more accurate display gamma characterization:

PPD=TCV-inv_display_response(LPE(x,y)/LPELA(x,y)*display_response(TCV))

[0099] Alternative formulations also exist, based on different mathematical assumptions about the display response, to compute the per-pixel delta corrections from the measured energy estimates.

[0100] Assuming a locally linear and symmetric display response:

PPD=log (LPE(x,y)/LPELA(x,y))*display_response_constant

[0101] Assuming an anti-symmetric display response (where dim pixels must be driven with proportionally larger gain to make up the difference in response):

PPD=pow(LPELA(x,y)/LPE(x,y)*pow(TCV,DG),1.0/DG)-TCV

[0102] All equations listed above yield similar, though not identical, correction factors. Other formulations are known to exist which also approximate the per-pixel deltas, though with decreasing accuracy when modeling OLED technology. In general, the preferred technique is the one that minimizes the mura appearance, post-correction, as judged by a human observer.

Iterative Approach

[0103] While a single capture can correct for more than 90% of the effect, there are still lingering inaccuracies that can be accounted for in certain embodiments. In the iterative approach, the constant per-pixel delta is first solved for as stated above. The process may then be augmented in certain embodiments by sending a corrected flat-field image to the display and recording the residual uncorrected deltas. This residual is measured for multiple input code values in certain embodiments, and the per-pixel residuals are then calculated and applied by interpolating the recorded data sets (see the interpolation sketch following the definitions below).

CCV(x,y)=ICV(x,y)+PPD(x,y)+PPR(ICV, x,y)

where:

[0104] CCV: Corrected code value in the device native gamma encoding

[0105] ICV: Input code value in the device native gamma encoding

[0106] PPD: per-pixel delta

[0107] PPR: per-pixel residual, which is a function of the input code value
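
A sketch of how the recorded residual data sets might be interpolated per pixel at playback; the stack layout and all names are illustrative, not from the patent.

```python
import numpy as np

def apply_residual(icv, residual_stack, measured_cvs):
    """Interpolate per-pixel residuals recorded at a few input code values
    (`measured_cvs`, ascending) to each pixel's actual input code value.
    `residual_stack` has shape (len(measured_cvs), H, W)."""
    cvs = np.asarray(measured_cvs, dtype=np.float32)
    icv_f = icv.astype(np.float32)
    idx = np.clip(np.searchsorted(cvs, icv_f) - 1, 0, len(cvs) - 2)
    lo, hi = cvs[idx], cvs[idx + 1]
    t = np.clip((icv_f - lo) / (hi - lo), 0.0, 1.0)
    yy, xx = np.indices(icv.shape)
    r_lo = residual_stack[idx, yy, xx]
    r_hi = residual_stack[idx + 1, yy, xx]
    return r_lo + t * (r_hi - r_lo)               # PPR(ICV, x, y)
```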

[0108] As the per-pixel residuals are much smaller than the per-pixel deltas, multiple residuals can be efficiently stored in a similar amount of space to the original per-pixel factor.

[0109] Over the lifetime of a display panel, the mura artifacts often change in intensity. This may be accounted for in certain embodiments by manipulating the correction gain factor to apply more, or less, of the correction as needed.

[0110] While display output is quantized to integral output values of light (such as the 256 steps yielded by an 8-bit input), the per-pixel intensity variation may be modeled in certain embodiments at a greater degree of precision. By storing the per-pixel deltas with greater precision than the display, it is possible to globally recreate output luminance values with greater precision than the number of steps in the input (i.e., each individual pixel may only have 256 addressable steps, but local regions on average may have many more discrete output levels in certain embodiments).

[0111] Leveraging per-pixel display intensity variation to reduce banding artifacts is an interesting transmission technology in its own right, independent of the mura display artifact. For example, in a system with high bit-precision image synthesis, a “mura-free” high bit-precision display, but a low bit-depth transmission link, one may introduce artificial pixel variation in the display to reduce the appearance of banding.

[0112] Synthetic pixel variation patterns can be created which have more compact representations and lower sampling discrepancy than the natural mura seen on OLED displays. One formulation is to use a tileable noise pattern, with uniform sampling over a luma domain of +/-0.5 code values. The noise tiling uses jittered stratified sampling or blue noise in certain embodiments, such that pixel values are unlikely to have an offset similar to their neighbors. By making the transmission source aware of the display’s pixel variation algorithm, the appropriate per-pixel quantization may be applied such that the appearance of banding is reduced.
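
A minimal sketch of one such pattern, using jittered stratified sampling over the +/-0.5 code-value luma domain; the tile size and RNG seeding are assumptions.

```python
import numpy as np

def jittered_noise_tile(size=64, seed=0):
    """Build one tile of offsets uniform over [-0.5, 0.5): stratify the range
    into size*size bins, jitter within each bin, then shuffle bins across
    pixel positions so neighbors rarely share similar offsets."""
    rng = np.random.default_rng(seed)
    n = size * size
    strata = (np.arange(n) + rng.random(n)) / n - 0.5  # one sample per stratum
    rng.shuffle(strata)
    return strata.reshape(size, size)
```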

[0113] A tileable noise pattern may also be created that varies over time to further reduce banding artifacts, though in such a system the image synthesis in certain embodiments needs to encode and transmit which frame of noise to apply to the pixel variation.

[0114] Another advancement in certain embodiments is to bias the uniform sampling as a function of code value, such that clipped values are not introduced. For an exemplary 8-bit transmission link, code value 0 uses uniform random biases in the range [0.0, 1.0]; for an intermediate code value (128), [-0.5, 0.5] is selected; and for code value 255, [-1.0, 0.0] is used.
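
These three anchor points are consistent with linearly sliding a unit-width window as a function of code value, as in the sketch below; the linear interpolation between the stated anchors is an inference for illustration, not stated in the patent.

```python
import numpy as np  # kept for consistency with the other sketches

def biased_offset_range(code_value, bits=8):
    """Offset range that never clips when added to `code_value`:
    [0, 1] at code 0, about [-0.5, 0.5] at mid-range, [-1, 0] at full
    scale, interpolated linearly between those anchors."""
    top = (1 << bits) - 1
    lo = -code_value / top
    return lo, lo + 1.0
```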

[0115] In certain HMD-related embodiments, mura correction processing in accordance with aspects of the present invention is performed host-side on the graphics processing unit (“GPU”). However, depending on the requirements of each particular implementation, such processing may be effected in silicon, in the headset itself, on a tether, or in the display panel electronics, for example. Such alternative implementations may provide greater image compressibility, which is important in situations involving limited link bandwidths (e.g., wireless systems).

[0116] While the above description contains many specifics and certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art, as mentioned above. The invention includes any combination or sub-combination of the elements from the different species and/or embodiments disclosed herein.

[0117] The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, including U.S. Provisional App. No. 62/207,091 filed on Aug. 19, 2015 and U.S. application Ser. No. 15/239,982 filed on Aug. 18, 2016, are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.

[0118] These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
