Google Patent | Augmented reality microscope for pathology with overlay of quantitative biomarker data

Publication Number: 20210018742

Publication Date: 2021-01-21

Applicant: Google

Abstract

A microscope of the type used by a pathologist to view slides containing biological samples such as tissue or blood is provided with the projection of enhancements to the field of view, such as a heatmap, border, or annotations, or quantitative biomarker data, substantially in real time as the slide is moved to new locations or changes in magnification or focus occur. The enhancements assist the pathologist in characterizing or classifying the sample, such as being positive for the presence of cancer cells or pathogens.

Claims

  1. A method for assisting a user in review of a slide containing a biological sample with a microscope having an eyepiece, comprising the steps of: (a) capturing, with a camera, a digital image of a view of the sample as seen through the eyepiece of the microscope, (b) using a first machine learning pattern recognizer to identify one or more areas of interest in the sample from the image captured by the camera, and a second machine learning pattern recognizer trained to identify individual cells, and (c) superimposing an enhancement to the view of the sample as seen through the eyepiece of the microscope as an overlay, wherein the enhancement is based upon the identified areas of interest in the sample and further comprises quantitative data associated with the areas of interest, (d) wherein, when the sample is moved relative to the microscope optics or when a magnification or focus of the microscope changes, a new digital image of a new view of the sample is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new view of the sample as seen through the eyepiece in substantial real time.

  2. The method of claim 1, wherein the one or more areas of interest comprise cells positive for expression of a protein and wherein the quantitative data comprises a percent of the cells in the view as being positive for such protein expression.

  3. The method of claim 2, wherein the protein comprises Ki-67, P53, Estrogen Receptor (ER) or Progesterone Receptor (PR).

  4. The method of claim 1, wherein the one or more areas of interest comprise individual microorganism cells and the quantitative data comprises a count of the number of microorganism cells in the view.

  5. The method of claim 1, wherein the one or more areas of interest comprise individual cells undergoing mitosis and wherein the quantitative data comprises a count of the number of cells in the view undergoing mitosis.

  6. The method of claim 1, wherein the areas of interest comprise tumor cells and wherein the quantitative data comprises an area measurement of the tumor cells, either absolute or relative area within a defined region in the sample.

  7. The method of claim 1, further comprising the step of providing on a workstation associated with the microscope a graphical display providing access to tools to customize the presentation of the enhancement on the field of view.

  8. The method of claim 1, wherein the quantitative data comprises a measurement.

  9. The method of claim 8, wherein the measurement comprises an area measurement and wherein the areas of interest comprise prostate tissue with specific Gleason grades.

  10. The method of claim 1, wherein the quantitative data comprises a count of the number of areas of interest in the view.

  11. A system assisting a user in review of a slide containing a biological sample, comprising: a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera configured to capture digital images of a view of the sample as seen through the eyepiece of the microscope, a compute unit comprising a machine learning pattern recognizer configured to receive the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on a digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the view of the sample as seen through the eyepiece of the microscope, wherein the enhancement is based upon the regions of interest in the sample; and one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view; wherein the compute unit implements a first machine learning pattern recognizer trained to identify individual cells within the view and a second machine learning pattern recognizer trained to identify individual cells within the view which are positive for expression of a protein; and wherein the enhancement further comprises a display of quantitative data relating to the cells which are positive for the expression of the protein.

  12. The system of claim 11, wherein the protein comprises Ki-67, P53, Estrogen Receptor (ER) or Progesterone Receptor (PR).

  13. A system assisting a user in review of a slide containing a biological sample, comprising: a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera configured to capture digital images of a view of the sample as seen through the eyepiece of the microscope, a compute unit comprising a machine learning pattern recognizer configured to receive the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on a digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the view of the sample as seen through the eyepiece of the microscope, wherein the enhancement is based upon the regions of interest in the sample; and one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view; wherein the compute unit implements a machine learning pattern recognizer trained to identify individual cells which are undergoing mitosis; and wherein the enhancement further comprises a display of quantitative data relating to the cells which are undergoing mitosis.

  14. A system assisting a user in review of a slide containing a biological sample, comprising: a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera configured to capture digital images of a view of the sample as seen through the eyepiece of the microscope, a compute unit comprising a machine learning pattern recognizer configured to receive the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on a digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the view of the sample as seen through the eyepiece of the microscope, wherein the enhancement is based upon the regions of interest in the sample; and one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view; wherein the compute unit implements one or more machine learning pattern recognizers trained to identify individual tumor cells or areas of tumor cells which are classified in accordance with specific Gleason grades, and wherein the enhancement further comprises a display of quantitative area data relating to the tumor cells or areas of tumor cells which are classified in accordance with specific Gleason grades.

  15. The system of claim 11, further comprising a workstation associated with the microscope having a display providing tools for a user of the workstation to draw an annotation on an image of the view, and wherein the annotation is saved along with the image of the view in a computer memory.

  16. The system of claim 11, further comprising a workstation associated with the microscope having a display, wherein the display provides access to tools to customize the presentation of the enhancement on the field of view.

  17. The system of claim 13, further comprising a workstation associated with the microscope having a display providing tools for a user of the workstation to draw an annotation on an image of the view, and wherein the annotation is saved along with the image of the view in a computer memory.

  18. The system of claim 13, further comprising a workstation associated with the microscope having a display, wherein the display provides access to tools to customize the presentation of the enhancement on the field of view.

  19. The system of claim 14, further comprising a workstation associated with the microscope having a display providing tools for a user of the workstation to draw an annotation on an image of the view, and wherein the annotation is saved along with the image of the view in a computer memory.

  20. The system of claim 14, further comprising a workstation associated with the microscope having a display, wherein the display provides access to tools to customize the presentation of the enhancement on the field of view.

Description

[0001] This application claims priority benefits of U.S. Provisional application Ser. No. 62/656,557 filed Apr. 12, 2018.

FIELD

[0002] This disclosure relates to the field of pathology and more particularly to an improved microscope system and method for assisting a pathologist in classifying biological samples such as blood or tissue, e.g., as containing cancer cells or containing a pathological agent such as plasmodium protozoa or tuberculosis bacteria.

BACKGROUND

[0003] In order to characterize or classify a biological sample such as tissue, the sample is placed on a microscope slide and a pathologist views it under magnification with a microscope. The sample may be stained with agents such as hematoxylin and eosin (H&E) to make features of potential interest in the sample more readily seen. Alternatively, the sample may be stained and scanned with a high resolution digital scanner, and the pathologist views magnified images of the sample on a screen of a workstation or computer.

[0004] For example, the assessment of lymph nodes for metastasis is central to the staging of many types of solid tumors, including breast cancer. The process requires highly skilled pathologists and is fairly time-consuming and error-prone, especially for nodes that are negative for cancer or have small foci of cancer. The current standard of care involves examination of digital slides of node biopsies that have been stained with hematoxylin and eosin. However, there are several limitations inherent in manual reads, including reader fatigue and intra- and inter-grader reliability, that negatively impact the sensitivity of the process. Accurate review and assessment of lymph node biopsy slides is important because the presence of tumor cells in the lymph node tissue may warrant new or more aggressive treatment for the cancer and improve the patient's chances of survival.

[0005] The prior art includes descriptions of the adaptation of deep learning techniques and trained neural networks to the context of digital tissue images in order to improve cancer diagnosis, characterization and/or staging. Pertinent background art includes the following articles: G. Litjens et al., Deep learning as a tool for increasing accuracy and efficiency of histopathological diagnosis, Scientific Reports 6:26286 (May 2016); D. Wang et al., Deep Learning for Identifying Metastatic Breast Cancer, arXiv:1606.05718v1 (June 2016); A. Madabhushi et al., Image analysis and machine learning in digital pathology: Challenges and opportunities, Medical Image Analysis 33, p. 170-175 (2016); A. Schaumberg et al., H&E-stained Whole Slide Deep Learning Predicts SPOP Mutation State in Prostate Cancer, bioRxiv preprint http://www.biorxiv.org/content/early/2016/07/17/064279. Additional prior art of interest includes Quinn et al., Deep Convolutional Neural Networks for Microscopy-based Point of Care Diagnostics, Proceedings of International Conference on Machine Learning for Health Care 2016.

[0006] The art has described several examples of augmenting the field of view of a microscope to aid in surgery. See U.S. patent application publication 2016/0183779 and published PCT application WO 2016/130424A1. See also Watson et al., Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images, Journal of Biomedical Optics, vol. 20 (10) October 2015.

SUMMARY

[0007] A method is disclosed for assisting a user in review of a slide containing a biological sample with a microscope having an eyepiece. The method includes steps of (a) capturing, with a camera, a digital image of a view of the sample as seen through the eyepiece of the microscope, (b) using a first machine learning pattern recognizer to identify one or more areas of interest in the sample from the image captured by the camera, and a second machine learning pattern recognizer trained to identify individual cells, and (c) superimposing an enhancement to the view of the sample as seen through the eyepiece of the microscope as an overlay, wherein the enhancement is based upon the identified areas of interest in the sample and further comprises quantitative data associated with the areas of interest. The method includes step (d), wherein when the sample is moved relative to the microscope optics or when a magnification or focus of the microscope changes, a new digital image of a new view of the sample is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new view of the sample as seen through the eyepiece in substantial real time, whereby the enhancement assists the user in classifying or characterizing the biological sample.

[0008] In one embodiment the one or more areas of interest comprise cells positive for expression of a protein, and the quantitative data takes the form of a percentage of the cells in the view that are positive for such protein expression. Examples of the protein are Ki-67, P53, and Progesterone Receptor (PR). As another example, the one or more areas of interest can take the form of individual microorganism cells, and the quantitative data comprises a count of the number of microorganism cells in the view. As another example, the one or more areas of interest take the form of individual cells undergoing mitosis, and the quantitative data is a count of the number of cells in the view undergoing mitosis. As another example, the areas of interest are tumor cells, and the quantitative data is an area measurement of the tumor cells, either absolute or relative area within a defined region in the sample.
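
The percent-positive quantitation described above can be sketched as follows, assuming the two pattern recognizers each emit a list of cell detections for the current field of view. The function and argument names are illustrative, not part of the disclosure:

```python
def percent_positive(all_cells, positive_cells):
    """Percentage of detected cells that are positive for a biomarker.

    all_cells: detections from a recognizer trained to find every cell
    in the view.
    positive_cells: detections from a recognizer trained to find only
    biomarker-positive cells (e.g., Ki-67-stained nuclei).
    Names are hypothetical; any detection representation works, since
    only the counts are used.
    """
    if not all_cells:
        return 0.0
    return 100.0 * len(positive_cells) / len(all_cells)
```

For example, 98 positive detections out of 200 total cells in the field of view would be reported as a 49% positivity rate, matching the style of overlay shown later in FIG. 19.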

[0009] In one possible embodiment, the quantitative data comprises a measurement, e.g., a distance measurement. As another example the measurement is an area measurement. In one specific example, the areas of interest are prostate tissue with specific Gleason grades and the quantitative measurement is relative or absolute area measurements of tumor regions having specific Gleason grades, e.g., Grade 3, Grade 4 etc.
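
A minimal sketch of the area quantitation, assuming the grading model emits a per-pixel label mask for the field of view and the microscope's microns-per-pixel scale is known at the current magnification. All names and the label convention are hypothetical:

```python
def area_by_grade(grade_mask, microns_per_pixel):
    """Absolute (mm^2) and relative area of each predicted Gleason grade.

    grade_mask: 2-D list of integer labels, one per pixel
    (0 = benign, 3/4/5 = Gleason grade predicted by the model).
    microns_per_pixel: physical scale of the captured image at the
    current objective magnification.
    """
    pixel_area_mm2 = (microns_per_pixel / 1000.0) ** 2
    counts = {}
    for row in grade_mask:
        for label in row:
            if label > 0:  # tumor pixel
                counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    return {
        grade: {
            "area_mm2": px * pixel_area_mm2,
            "fraction_of_tumor": px / total,
        }
        for grade, px in counts.items()
    }
```

The relative figures correspond to the "relative area" quantitation mentioned above; the absolute figures depend on knowing the optical scale for the objective lens in use.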

[0010] As another example, the quantitative data can take the form of a count of the number of areas of interest in the view. For example, the machine learning model identifies individual microorganism cells in the view and displays a count of the number of such cells.
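
The count-style quantitation reduces to a tally over the model's detections in the current view. The sketch below assumes each detection carries a confidence score and applies an illustrative threshold; both the names and the threshold value are assumptions, not from the disclosure:

```python
def count_detections(detection_scores, threshold=0.5):
    """Count of areas of interest (e.g., microorganisms or mitotic
    figures) in the field of view, keeping only detections whose
    confidence meets an illustrative threshold."""
    return sum(1 for score in detection_scores if score >= threshold)
```

A count like "4 mitoses per high power field" (FIG. 14B) would be this tally computed over the detections of the mitosis recognizer for one field of view.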

[0011] In another aspect of this disclosure, a system is disclosed for assisting a user in review of a slide containing a biological sample. The system includes a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera configured to capture digital images of a view of the sample as seen through the eyepiece of the microscope, and a compute unit comprising a machine learning pattern recognizer configured to receive the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on a digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the view of the sample as seen through the eyepiece of the microscope, wherein the enhancement is based upon the regions of interest in the sample. The system further includes one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view.

[0012] In one configuration the compute unit implements a first machine learning pattern recognizer trained to identify individual cells within the view and a second machine learning pattern recognizer trained to identify individual cells within the view which are positive for expression of a protein. The enhancement takes the form of a display of quantitative data relating to the cells which are positive for the expression of the protein. The protein can comprise Ki-67, P53, or Progesterone Receptor (PR).

[0013] In another configuration, the system includes a workstation associated with the microscope having a display providing tools for a user of the workstation to draw an annotation on an image of the view, and wherein the annotation is saved along with the image of the view in a computer memory. The workstation may further include a graphical display providing access to tools to customize the presentation of the enhancement on the field of view.

[0014] In another configuration, the compute unit implements a machine learning pattern recognizer trained to identify individual cells which are undergoing mitosis. The enhancement includes a display of quantitative data relating to the cells which are undergoing mitosis.

[0015] In another configuration, the compute unit implements one or more machine learning pattern recognizers trained to identify individual tumor cells or areas of tumor cells which are classified in accordance with specific Gleason grades (e.g., Grade 3, Grade 4, etc.). The enhancement takes the form of a display of quantitative area data relating to the tumor cells or areas of tumor cells which are classified in accordance with the specific Gleason grades.

[0016] As used in this document, the term “biological sample” is intended to be defined broadly to encompass blood or blood components, tissue or fragments thereof from plants or animals, sputum, stool, urine or other bodily substances, as well as water, soil or food samples potentially containing pathogens.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] FIG. 1 is a schematic diagram of an augmented reality microscope system for pathology, which is shown in conjunction with an optional connected pathologist workstation.

[0018] FIG. 2A is an illustration of the field of view of a microscope showing a breast cancer specimen at a given magnification level, for example 10×. FIG. 2B is an illustration of an augmented view seen by the pathologist using the microscope of FIG. 1, with an enhancement in the form of a "heat map" superimposed on the field of view in registry with cells in the sample which are likely to be cancerous. The superimposing of the heat map in FIG. 2B assists the pathologist in characterizing the sample because it directs their attention to areas of interest that are particularly likely to be cancerous.

[0019] If the pathologist were to change microscope objective lenses in order to zoom in on the heat map area of FIG. 2B (e.g., change to a 40× lens), a new field of view of the sample would be seen through the microscope eyepiece, a new image captured, and in substantial real time (e.g., within a second or two) a new heat map would be overlaid on the field of view (not shown) to further aid the pathologist's investigation of the sample.

[0020] FIG. 3A is an illustration of the field of view of a microscope showing a prostate cancer specimen at a given magnification level, for example 10×. FIG. 3B is an illustration of an augmented view seen by the pathologist using the microscope of FIG. 1, with an enhancement in the form of an outline superimposed on the field of view circumscribing cells in the sample which are likely to be cancerous. The enhancement further includes a text box providing annotations, in this example Gleason score grading and tumor size data. The superimposing of the outline and annotations in FIG. 3B assists the pathologist in characterizing the sample because it directs their attention to areas of interest that are particularly likely to be cancerous and provides proposed scores for the sample. If the pathologist were to change focal plane position or depth (i.e., adjust the focus of the microscope) in order to probe the area of interest within the outline at different depths, a new field of view of the sample would be seen through the microscope eyepiece and captured by the camera, and in substantial real time (e.g., within a second or two) a new enhancement (not shown), e.g., outline and annotation text box, would be overlaid on the field of view to further aid the pathologist's investigation of the sample.

[0021] FIG. 4A is an illustration of the field of view through the microscope of a blood sample at low magnification. FIG. 4B shows the field of view of FIG. 4A but with an enhancement in the form of rectangles identifying malaria parasites (plasmodium) present in the sample overlaid on the field of view to assist the pathologist in characterizing the sample.

[0022] FIG. 5 is a more detailed block diagram of the compute unit of FIG. 1.

[0023] FIG. 6 is a flow chart showing the work flow of the system of FIG. 1.

[0024] FIG. 7 is a chart showing a color code or scale for interpreting an enhancement in the form of a heat map.

[0025] FIG. 8 is an illustration of a machine learning pattern recognizer in the form of an ensemble of independent deep convolutional neural networks which are pre-trained on a set of microscope slide images. Each member of the ensemble is trained at a particular magnification level.
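
The per-magnification ensemble of FIG. 8 could be dispatched as in the following sketch, where each member is a model trained at one objective power and the member nearest the objective currently in use handles inference. This is a hypothetical minimal interface, not the patented implementation:

```python
class MagnificationEnsemble:
    """Route inference to the ensemble member trained at the
    magnification closest to the objective lens currently in use."""

    def __init__(self, members):
        # members: dict mapping training magnification (e.g., 5, 10,
        # 20, 40) to a callable model taking an image
        self.members = members

    def infer(self, image, magnification):
        # pick the member whose training magnification is nearest
        best = min(self.members, key=lambda m: abs(m - magnification))
        return self.members[best](image)
```

Switching from the 10× to the 40× objective would thus automatically route new camera frames to the member trained on 40× slide images.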

[0026] FIG. 9 is an illustration of a set of portable computer storage media, each of which is loaded with code, parameters, and associated data representing an ensemble of independent deep convolutional neural networks trained on a set of microscope slide images for a particular application, such as detection of breast cancer in breast tissue, detection and characterization of cancer cells in prostate tissue, etc. A user of the system of FIG. 1 who wants to augment the capability of the microscope system can obtain one or more of the media of FIG. 9 and load the associated ensemble of deep convolutional neural networks into the local compute unit of FIGS. 1 and 5. Alternatively, additional ensembles of deep convolutional neural networks could be downloaded from a remote data store over a network interface in the compute unit.

[0027] FIG. 10 is an optics diagram of a module for projecting an enhancement or overlay into the field of view of an eyepiece of the microscope of FIG. 1.

[0028] FIG. 11A is an illustration of a deep learning algorithm development and model training method.

[0029] FIG. 11B is an illustration of an application of the deep learning algorithm performing inference.

[0030] FIG. 11C is an illustration of a software pipeline showing the sequence of operations performed on a series of images over time.

[0031] FIG. 12A is a series of lymph node field of view images showing the superposition of an augmented reality enhancement in the field of view in the form of an area or border highlighted to show tumor cells in a specimen.

[0032] FIG. 12B is a series of prostate field of view images showing the superposition of an augmented reality enhancement in the field of view in the form of an area or border highlighted to show tumor cells in a specimen.

[0033] FIG. 13 shows several sample fields of view of lymph node specimens for metastasis detection, the first column being the augmented reality image with the overlay in the form of an outline, the second column being the heat map generated by the neural network, the third column showing the whole slide H&E image and the fourth column showing the whole slide IHC image.

[0034] FIGS. 14A-14F are illustrations of examples of fields of view of a microscope with an overlay or enhancement in the form of quantitative measurements. FIG. 14A shows a tissue sample with a border and a measurement of "100% PR" meaning all the cells within the border are positive for expression of the progesterone receptor protein. FIG. 14B shows an enhancement in the form of circles drawn around cells undergoing mitosis and a quantitative report of the number of cells per high power field: "4 mitoses per high power field." FIG. 14C shows a measurement or ruler showing the dimension (200 μm) of a cluster of cells. FIG. 14D shows a biological sample with dark points indicating areas where the machine learning model identified the presence of individual microorganisms, in this case Helicobacter pylori. FIG. 14E shows a biological sample with a circle indicating the area where the machine learning model identified the presence of an individual microorganism, in this case a mycobacterium. FIG. 14F shows an overlay in the form of regions predicted positive for prostate cancer and a percentage of the specimen having tumor involvement (70% tumor involvement).

[0035] FIG. 15 is an illustration of a workstation (which could take the form of a general purpose computing device, or tablet computer) showing an interface for drawing annotations manually on an image in the field of view, and a separate pane for providing access to tools to customize and control the rendering of annotations in the field of view.

[0036] FIG. 16 is an example of an annotation superimposed on the field of view in the form of an outline surrounding areas determined by the machine learning model to be cancer/tumor cells, and a text block providing a quantitative result: “23% tumor involvement.”

[0037] FIG. 17 is an example of an overlay in the form of a circle indicating the detection of a mycobacterium (e.g., tuberculosis) by a machine learning model in the field of view.

[0038] FIG. 18 is an image of a field of view obtained by the camera in the microscope along with the display of an overlay in the form of biomarker quantitation, in this example the percentage of cells positive for expression of the protein Ki-67 (98% in this example).

[0039] FIG. 19 is another example of an image of a field of view obtained by the camera in the microscope along with the display of an overlay in the form of biomarker quantitation, in this example the percentage of cells positive for expression of the protein P53 (49% in this example).

DETAILED DESCRIPTION

[0040] FIG. 1 is a schematic diagram of an augmented reality microscope system 100 for pathology, which is shown in conjunction with an optional connected pathologist workstation 140. The system 100 includes a conventional pathologist microscope 102 which includes an eyepiece 104 (optionally a second eyepiece in the case of a stereoscopic microscope). A stage 110 supports a slide 114 containing a biological sample. An illumination source 112 projects light through the sample. A microscope objective lens 108 directs an image of the sample as indicated by the arrow 106 to an optics module 120. Additional lenses 108A and 108B are provided in the microscope for providing different levels of magnification. A focus adjustment knob 160 allows the user to change the depth of focus of the lens 108.

[0041] The microscope includes an optics module 120 which incorporates a component, such as a semitransparent mirror 122 or beam combiner/splitter for overlaying an enhancement onto the field of view through the eyepiece. The optics module 120 allows the pathologist to see the field of view of the microscope as he would in a conventional microscope, and, on demand or automatically, see an enhancement (heat map, boundary or outline, annotations, etc.) as an overlay on the field of view which is projected into the field of view by an augmented reality (AR) display generation unit 128 and lens 130. The image generated by the display unit 128 is combined with the microscope field of view by the semitransparent mirror 122. As an alternative to the semitransparent mirror, a liquid crystal display (LCD) could be placed in the optical path that uses a transmissive negative image to project the enhancement into the optical path.

[0042] The optics module 120 can take a variety of different forms, and various nomenclature is used in the art to describe such a module. For example, it is referred to as a "projection unit", "image injection module" or "optical see-through display technology." Literature describing such units includes US patent application publication 2016/0183779 (see description of FIGS. 1, 11, 12, 13) and published PCT application WO 2016/130424A1 (see description of FIGS. 2, 3, 4A-4C); Watson et al., Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images, Journal of Biomedical Optics, vol. 20 (10) October 2015; Edwards et al., Augmentation of Reality Using an Operating Microscope, J. Image Guided Surgery, Vol. 1 no. 3 (1995); Edwards et al., Stereo augmented reality in the surgical microscope, Medicine Meets Virtual Reality (1997), J. D. Westwood et al. (eds.), IOS Press, p. 102.

[0043] The semi-transparent mirror 122 directs the field of view of the microscope to both the eyepiece 104 and also to a digital camera 124. A lens for the camera is not shown but is conventional. The camera may take the form of a high resolution (e.g., 16 megapixel) video camera operating at, say, 10 or 30 frames per second. The digital camera captures magnified images of the sample as seen through the eyepiece of the microscope. Digital images captured by the camera are supplied to a compute unit 126. The compute unit 126 will be described in more detail in FIG. 5. Alternatively, the camera may take the form of an ultra-high resolution digital camera such as the APS-H-size (approx. 29.2×20.2 mm) 250 megapixel CMOS sensor developed by Canon and announced in September 2015.

[0044] Briefly, the compute unit 126 includes a machine learning pattern recognizer which receives the images from the camera. The machine learning pattern recognizer may take the form of a deep convolutional neural network which is trained on a set of microscope slide images of the same type as the biological specimen under examination. Additionally, the pattern recognizer will preferably take the form of an ensemble of pattern recognizers, each trained on a set of slides at a different level of magnification, e.g., 5×, 10×, 20×, 40×. The pattern recognizer is trained to identify regions of interest in an image (e.g., cancerous cells or tissue, pathogens such as viruses or bacteria, eggs from parasites, etc.) in biological samples of the type currently placed on the stage. The pattern recognizer recognizes regions of interest on the image captured by the camera 124. The compute unit 126 generates data representing an enhancement to the view of the sample as seen by the user, which is generated and projected by the AR display unit 128 and combined with the eyepiece field of view by the semitransparent mirror 122.

[0045] The essentially continuous capture of images by the camera 124, rapid performance of inference on the images by the pattern recognizer, and generation and projection of enhancements as overlays onto the field of view enable the system 100 of FIG. 1 to continue to provide enhancements to the field of view and assist the pathologist in characterizing or classifying the specimen in substantial real time as the operator navigates around the slide (e.g., by use of a motor 116 driving the stage), changes magnification by switching to a different objective lens 108A or 108B, or changes depth of focus by operating the focus knob 160. This is a substantial advance in the art and improvement over conventional pathology using a microscope.

[0046] By “substantial real time,” we mean that an enhancement or overlay is projected onto the field of view within 10 seconds of changing magnification, changing depth of focus, or navigating and then stopping at a new location on the slide. In practice, as explained below, with the optional use of inference accelerators, we expect that in most cases the new overlay can be generated and projected onto the field of view within a matter of a second or two or even a fraction of a second of a change in focus, change in magnification, or change in slide position.

[0047] In summary then, a method is disclosed of assisting a user (e.g., pathologist) in review of a slide 114 containing a biological sample with a microscope 102 having an eyepiece 104. The method includes a step of capturing with a camera 124 a digital image of the sample as seen by the user through the eyepiece of the microscope, using a machine learning pattern recognizer (200, FIG. 5, FIG. 8) to identify areas of interest in the sample from the image captured by the camera 124, and superimposing an enhancement to the view of the sample as seen by the user through the eyepiece of the microscope as an overlay. As the user moves the sample relative to the microscope optics or changes magnification or focus of the microscope, a new image is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is overlaid onto the new view of the sample as seen through the eyepiece in substantial real time. The overlaid enhancement assists the user in classifying the biological sample.

[0048] FIG. 2A is an illustration of the field of view 150 of a microscope showing a breast cancer specimen 152 at a given magnification level, for example 10×. FIG. 2A shows the field of view with no enhancement, as would be the case with a prior art microscope. FIG. 2B is an illustration of an augmented view seen by the pathologist using the microscope of FIG. 1, with an enhancement 154 in the form of a “heat map” superimposed on the field of view in registry with cells in the sample which are likely to be cancerous. The “heat map” is a set of pixels representing tissue likely to be cancerous which are colored in accordance with the code of FIG. 7 to highlight areas (e.g., in red) which have a high probability of containing cancerous cells. The superimposing of the heat map 154 in FIG. 2B assists the pathologist in characterizing the sample because it directs their attention to areas of interest that are particularly likely to be cancerous. If the pathologist were to change microscope objective lenses (e.g., select lens 108A in FIG. 1) in order to zoom in on the heat map area 154 of FIG. 2B (e.g., change to a 40× lens), a new field of view of the sample would be seen through the microscope eyepiece and directed to the camera. The camera 124 captures a new image, and in substantial real time (e.g., within a second or two) a new heat map 154 (not shown) would be generated and overlaid on the field of view to further aid the pathologist’s investigation of the sample at the higher magnification.

[0049] In one possible configuration, the microscope 102 includes a capability to identify which microscope objective lens is currently in position to image the sample, e.g., with a switch or by user instruction to microscope electronics controlling the operation of the turret containing the lenses. This identification is passed to the compute unit 126 using simple electronics so that the correct machine learning pattern recognition module in an ensemble of pattern recognizers (see FIG. 8 below) is tasked to perform inference on the new field of view image. In this respect, the microscope may include the automated objective identification features of PCT application serial no. PCT/US2019/012674, filed Jan. 8, 2019, the content of which is incorporated by reference herein.

[0050] FIG. 3A is an illustration of the field of view 150 of a microscope showing a prostate cancer specimen at a given magnification level, for example 10×, as it would be in a conventional microscope without the capability of this disclosure. FIG. 3B is an illustration of an augmented field of view 150 seen by the pathologist using the microscope of FIG. 1, with an enhancement in the form of an outline 156 superimposed on the field of view circumscribing cells in the sample which are likely to be cancerous. The enhancement further includes a text box 158 providing annotations, in this example Gleason score grading and size measurements. In this particular example, the annotations are that 87 percent of the cells within the outline are Gleason grade 3 score, 13 percent of the cells are Gleason grade 4 score, and the tumor composed of cells of Gleason grade 4 score has a diameter of 0.12 µm.

[0051] Another possible enhancement is a confidence score that the cells of the sample are cancerous. For example, the enhancement could take the form of a probability or confidence score, such as 85% confidence that the cells in the outline are Gleason grade 3, and 15% confidence that the cells in the outline are Gleason grade 4. Additionally, the measurement (0.12 µm) could be the diameter of the whole outlined region.
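The annotation text of the text box 158 could be assembled from per-cell model outputs roughly as follows. This is a hedged sketch with hypothetical inputs (a list of predicted Gleason grades and a measured diameter), not the patent's actual annotation logic.

```python
from collections import Counter

def format_annotation(cell_grades, tumor_diameter_um):
    """Build an annotation string like text box 158 from the
    Gleason grade predicted for each cell by a pattern recognizer."""
    counts = Counter(cell_grades)
    total = sum(counts.values())
    parts = [
        f"{100 * counts[g] // total}% Gleason grade {g}"
        for g in sorted(counts)
    ]
    parts.append(f"tumor diameter: {tumor_diameter_um} um")
    return "; ".join(parts)
```

For the example of FIG. 3B, `format_annotation([3] * 87 + [4] * 13, 0.12)` yields the 87%/13% grade breakdown and the size measurement in one string.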

[0052] The superimposing of the outline and annotations of FIG. 3B assists the pathologist in characterizing the sample because it directs their attention to areas of interest that are particularly likely to be cancerous and provides proposed scores for the sample. If the pathologist were to change depth of focus of the microscope in order to probe the area of interest within the outline 156, a new field of view of the sample would be seen through the microscope eyepiece and captured by the camera 124, and in substantial real time (e.g., within a second or two) a new enhancement, e.g., outline and annotation text box, would be overlaid on the field of view (not shown) to further aid the pathologist’s investigation of the sample. The system of FIG. 1 optionally includes the ability for the pathologist to turn on or off the enhancement projections, e.g., by providing controls for the system on the attached workstation 140 of FIG. 1, providing a simple user interface on the compute unit 126, or by a foot switch that turns on and off the AR display unit 128.

[0053] FIG. 4A is a hypothetical illustration of the field of view 150 through the microscope of a blood sample at low magnification, as it would be seen in a conventional microscope. The view includes various blood cells (red and white blood cells) and components such as platelets. FIG. 4B shows the same field of view as FIG. 4A but with an enhancement in the form of rectangles 156 identifying malaria parasites (plasmodium) present in the sample, overlaid on the field of view to assist the pathologist in characterizing the sample, in this case as positive for malaria.

[0054] Table 1 below lists optical characteristics of a typical microscope for pathology and the digital resolution of a camera 124 which could be used in FIG. 1.

TABLE 1

  Objective   Field of View   Digital resolution   Used for
              (diameter)      (µm per pixel)*
  4×          4.5 mm          3.5                  Low power (screening)
  10×         1.8 mm          1.4                  Low power (tissue morphology)
  20×         0.9 mm          0.7                  Medium power
  40×         0.45 mm         0.35                 High power (cellular detail)
  100×        0.18 mm         0.14                 Special purpose (cytology, e.g.
                                                   malaria); needs special optics
                                                   with oil immersion

  *based on a 16 MP camera

[0055] FIG. 5 is a block diagram of one possible form of the compute unit 126 of FIG. 1. Essentially, in one possible configuration the compute unit is a special purpose computer system designed to perform the required tasks of the system of FIG. 1, including performing inference on captured images, generation of digital data for overlays for the field of view, optional inference acceleration to perform the inference operations sufficiently quickly to enable substantial real time display of enhancements, as well as the capability to load additional machine learning models (pattern recognizers) to support additional pathology tasks.

[0056] In FIG. 5, the compute unit includes a deep convolutional neural network pattern recognizer 200 in the form of a memory 202 storing processing instructions and parameters for the neural network and a central processing unit 204 for performance of inference on a captured image. The module may also include a graphics card 206 for generating overlay digital data (e.g., heat maps, annotations, outlines, etc.) based on the inference results from the pattern recognizer 200. A memory 212 includes processing instructions for selecting the appropriate machine learning model based on the current magnification level, for coordinating sharing of the image of the field of view with a remote workstation 140 (FIG. 1), and for other tasks as explained herein. The compute unit may also include an inference accelerator 214 to speed up the performance of inference on captured images. The compute unit further includes various interfaces to other components of the system, including an interface, not shown, to receive the digital images from the camera, such as a USB port, an interface (e.g., network cable port or HDMI port) 208 to send digital display data to the AR display unit 128, an interface (e.g., network cable port) 216 to the workstation 140, and an interface 210 (e.g., SD card reader) enabling the compute unit to receive and download portable media containing additional pattern recognizers (see FIG. 9) to expand the capability of the system to perform pattern recognition and overlay generation for different pathology applications. A high speed bus 220 or network connects the modules in the compute unit 126. In practice, additional hard disk drives, processors, or other components may be present in the compute unit, the details of which are not particularly important.

[0057] In another possible configuration, the compute unit 126 could take the form of a general purpose computer (e.g., PC) augmented with the pattern recognizer(s) and accelerator, and graphics processing modules as shown in FIG. 5. The personal computer has an interface to the camera (e.g., a USB port receiving the digital image data from the camera), an interface to the AR projection unit, such as an HDMI port, and a network interface to enable downloading of additional pattern recognizers and/or communicate with a remote workstation as shown in FIG. 1.

[0058] In use, assuming multiple different pattern recognizers are loaded into the compute unit, an automatic specimen type detector or manual selector switches between the specimen dependent pattern recognition models (e.g., prostate cancer vs. breast cancer vs. malaria detection), and based on that the proper machine learning pattern recognizer or model is chosen. Movement of the slide to a new location (e.g., by use of a motor 116 driving the stage) or switching to another microscope objective 108 (i.e., magnification) triggers an update of the enhancement, as explained previously. Optionally, if only the magnification is changed, an ensemble of different models operating at different magnification levels (see FIG. 8) performs inference on the specimen and inference results could be combined on the same position of the slide. Further details on how this operation could be performed are described in the pending PCT application entitled “Method and System for Assisting Pathologist Identification of Tumor Cells in Magnified Tissue Images”, serial no. PCT/US17/019051, filed Feb. 23, 2017, the content of which is incorporated by reference herein. Another option is that the compute unit could know the current magnification from the microscope by means of simple electronic communication from the microscope to the compute unit. The microscope monitors which lens is placed by the user into the optical path and communicates the selection to the compute unit.
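The selection step above amounts to a lookup keyed by specimen type and current objective magnification. A minimal sketch follows; the registry contents and model names are hypothetical placeholders, not the patent's actual models.

```python
# Hypothetical registry of loaded pattern recognizers, keyed by
# specimen type and objective magnification (cf. the ensemble of FIG. 8).
MODEL_REGISTRY = {
    ("prostate", 10): "prostate_10x_model",
    ("prostate", 40): "prostate_40x_model",
    ("breast", 10): "breast_10x_model",
    ("malaria", 100): "malaria_100x_model",
}

def select_model(specimen_type: str, magnification: int) -> str:
    """Choose the pattern recognizer for the specimen type (from an
    automatic detector or manual selector) and the objective lens
    reported by the microscope electronics."""
    try:
        return MODEL_REGISTRY[(specimen_type, magnification)]
    except KeyError:
        raise ValueError(
            f"no model loaded for {specimen_type} at {magnification}x"
        ) from None
```

When the microscope reports a lens change, the compute unit would simply repeat this lookup with the new magnification and dispatch the next captured frame to the selected model.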

[0059] Deep convolutional neural network pattern recognizers, of the type used in the compute unit of FIG. 5 shown at 200, are widely known in the art of pattern recognition and machine vision, and therefore a detailed description thereof is omitted for the sake of brevity. The Google Inception-v3 deep convolutional neural network architecture, upon which the present pattern recognizers are based, is described in the scientific literature. See the following references, the content of which is incorporated by reference herein: C. Szegedy et al., Going Deeper with Convolutions, arXiv:1409.4842 [cs.CV] (September 2014); C. Szegedy et al., Rethinking the Inception Architecture for Computer Vision, arXiv:1512.00567 [cs.CV] (December 2015); see also US patent application of C. Szegedy et al., “Processing Images Using Deep Neural Networks”, Ser. No. 14/839,452 filed Aug. 28, 2015. A fourth generation, known as Inception-v4, is considered an alternative architecture for the pattern recognizers 306. See C. Szegedy et al., Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, arXiv:1602.07261 [cs.CV] (February 2016). See also U.S. patent application of C. Vanhoucke, “Image Classification Neural Networks”, Ser. No. 15/395,530 filed Dec. 30, 2016. The description of the convolutional neural networks in these papers and patent applications is incorporated by reference herein.

[0060] Additional literature describing deep neural network pattern recognizers includes the following: G. Litjens, et al., Deep learning as a tool for increasing accuracy and efficiency of histopathological diagnosis, www.nature.com/scientificreports 6:26286 (May 2016); D. Wang et al., Deep Learning for Identifying Metastatic Breast Cancer, arXiv:1606.05718v1 (June 2016); A. Madabhushi et al., Image analysis and machine learning in digital pathology: Challenges and opportunities, Medical Image Analysis 33 p. 170-175 (2016); A. Schaumberg, et al., H&E-stained Whole Slide Deep Learning Predicts SPOP Mutation State in Prostate Cancer, bioRxiv preprint, https://www.biorxiv.org/content/early/2016/07/17/064279.

[0061] Training slides for the deep neural network pattern recognizer 200 can be generated from scratch by whole slide scanning of a set of slides of the type of samples of interest. For example, slide images for training can be obtained from the Naval Medical Center in San Diego, Calif. (NMCSD) and from publicly available sources such as the CAMELYON16 challenge and The Cancer Genome Atlas (TCGA). Alternatively, they could be generated from a set of images of different slides captured by the camera of FIG. 1.

[0062] Digital whole slide scanners and systems for staining slides are known in the art. Such devices and related systems are available from Aperio Technologies, Hamamatsu Photonics, Philips, Ventana Medical Systems, Inc., and others. The digital whole slide image can be obtained at a first magnification level (e.g., 40×), which is customary. The image can be upsampled or downsampled to obtain training images at other magnifications. Alternatively, the training slides can be scanned multiple times at different magnifications, for example at each magnification level offered by conventional manually-operated microscopes.
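The downsampling route can be approximated by factor-of-two block averaging, under the simplifying assumption that each halving of resolution corresponds to one step down in magnification (40× → 20× → 10×). This is an illustrative sketch, not the resampling actually used by whole slide scanners.

```python
import numpy as np

def downsample_2x(image: np.ndarray) -> np.ndarray:
    """Average 2x2 pixel blocks: one magnification step down (e.g., 40x -> 20x)."""
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    img = image[:h, :w].astype(np.float32)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def magnification_pyramid(scan_40x: np.ndarray) -> dict:
    """Derive 20x and 10x training images from a single 40x whole-slide scan."""
    levels = {40: scan_40x}
    levels[20] = downsample_2x(levels[40])
    levels[10] = downsample_2x(levels[20])
    return levels
```

Each level of the resulting pyramid could then serve as training data for the pattern recognizer in the ensemble that handles the corresponding objective.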

[0063] Inference Speed

[0064] In some implementations it may be possible to perform inference on a digital image that is the entire field of view of the microscope. In other situations, it may be desirable to perform inference on only a portion of the image, such as several 299×299 rectangular patches of pixels located about the center of the field of view, or on some larger portion of the field of view.

[0065] Using an Inception v3-based model with a 299×299 pixel input size and a 16 MP camera, dense coverage of the circular optical field of view (2700 pixels in diameter) requires ~120 patch inferences. If inference is run only for the center third of each patch (increasing inference granularity, and using the other two-thirds as context), it will require ~1200 inference calls. Additional inference calls might be required if one adds rotations and flips, or ensembling.

[0066] Table 2 lists the number of inference calls and inference times using conventional state of the art graphics processing units and inference accelerators.

TABLE 2

  Configuration                     # inference calls   Inference time   Inference time
                                    for FoV             (GPU)*           (accelerator)**
  Dense coverage, Inception V3      120                 0.8 sec          2 msec
  (baseline)
  Dense coverage, inference on      1200                8 sec            0.02 sec
  center third (stride 1/3)
  8 rotations and flips             9600                64 sec           0.17 sec
  Ensembling (5 models)             48000               320 sec          0.85 sec

  *assuming 150 inferences per second, Inception-v3
  **assuming 56,000 inferences per second with an inference accelerator system
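The inference times in Table 2 follow directly from the call counts and the assumed throughputs (150 inferences per second on a GPU, 56,000 per second with an accelerator). The arithmetic can be checked with a few lines:

```python
GPU_RATE = 150        # inferences per second, Inception-v3 on a GPU (Table 2)
ACCEL_RATE = 56_000   # inferences per second with an inference accelerator

def inference_time(num_calls: int, rate: float) -> float:
    """Wall-clock seconds to run num_calls inferences at the given rate."""
    return num_calls / rate

# Call counts per field of view from Table 2: dense coverage (120),
# center-third stride (1200), 8 rotations/flips (9600), 5-model ensemble (48000).
for calls in (120, 1200, 9600, 48000):
    print(f"{calls:6d} calls: GPU {inference_time(calls, GPU_RATE):6.2f} s, "
          f"accelerator {inference_time(calls, ACCEL_RATE):.3f} s")
```

For example, 120 calls take 120/150 = 0.8 sec on the GPU but only about 2 msec on the accelerator, reproducing the first row of the table.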

[0067] Assuming the camera 124 operates at 30 frames per second (fps), dense coverage with a reasonable combination of rotations, flips, and ensembling is possible for a seamless, substantially real time experience (see the accelerator column of Table 2).

[0068] Inference Accelerator (214, FIG. 5)

[0069] Inference accelerators, also known as artificial intelligence (AI) accelerators, are an emerging class of microprocessors or coprocessors which are designed to speed up the process of performing inference on input data sets for pattern recognition. These systems currently take the form of a combination of custom application-specific integrated circuit chips (ASICs), field programmable gate arrays (FPGAs), graphics processing units (GPUs), and general purpose computing units. In some applications of the system of FIG. 1 it may be desirable to include an inference accelerator in the compute unit 126, as shown in FIG. 5. Inference accelerators are described in the art, see Jonathon Ross, et al., U.S. patent application publication 2016/0342891 entitled “Neural Network Processor”, and are currently available on the market, such as the Nvidia™ Tesla™ P40 and P4 GPU accelerators and the Intel™ Deep Learning Inference Accelerator.

[0070] In a simple implementation, the system of FIG. 1 could just use a USB camera output plugged into a standard PC (compute unit 126) which performs the pattern recognition and outputs the overlay graphic (enhancement) via a graphics card output interface (e.g., HDMI) to the AR display device. The inference can be done by a graphics processing unit (GPU) in the standard PC. In this configuration, an on-device inference accelerator would be optional and not necessary. In the event that the need arises for faster inference, the computer could be augmented later on with an off-the-shelf inference accelerator as a plug-in module.

[0071] Generation of Enhancement

[0072] The generation of the enhancement to project onto the field of view can be performed as follows:

[0073] 1) the machine learning pattern recognizer 200 in the compute unit 126 runs model inference on the field of view to create a tumor probability per region (using cancer detection as an example here).

[0074] 2a) heatmap: the tumor probability for each image patch in the field of view is translated into a color value (e.g. RGB), and those color values are stitched together to create a heatmap. This task can be performed by the graphics card 206.

[0075] 2b) polygon outline: the tumor probabilities are thresholded at a certain score (e.g., probability>50%), and the boundary of the remaining region (or regions, if there are several disconnected regions) forms the polygon outline. Again this task can be performed by the graphics card 206.

[0076] 3) the digital image data from step 2a or 2b is translated into an image on a display by the AR display unit 128, which is then projected into the optical path by lens 130 and the semi-transparent mirror 122.
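Steps 1-2 above can be sketched as follows. This is a minimal illustration only: it uses a simple red-channel color mapping in place of the actual color code of FIG. 7, and marks boundary patches of the thresholded region rather than tracing a true polygon.

```python
import numpy as np

def heatmap_from_probabilities(probs: np.ndarray) -> np.ndarray:
    """Step 2a: map per-patch tumor probabilities to RGB color values
    (here, simply scaling the red channel with probability)."""
    rgb = np.zeros(probs.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (255 * probs).astype(np.uint8)  # red = tumor probability
    return rgb

def outline_mask(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Step 2b: threshold the probabilities and mark the boundary
    patches of the remaining region(s) as the outline."""
    region = probs > threshold
    # A boundary patch is inside the region but has at least one
    # 4-neighbor outside it.
    padded = np.pad(region, 1, mode="constant")
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return region & ~interior
```

The resulting heatmap pixels or outline mask would then be handed to the AR display unit for projection into the optical path (step 3).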
