
Microsoft Patent | 360 Degree Camera

Patent: 360 Degree Camera

Publication Number: 20200296268

Publication Date: 2020-09-17

Applicants: Microsoft

Abstract

A camera system including a first imaging sensor having a first imaging surface with a first diagonal length, a first lens arranged to guide a first image formation light flux toward the first imaging surface with the first image formation light flux having at the first imaging surface a width equal to or greater than the first diagonal length, a second imaging sensor having a second imaging surface with a second diagonal length, and a second lens arranged to guide a second image formation light flux toward the second imaging surface with the second image formation light flux having at the second imaging surface a width equal to or greater than the second diagonal length. The first lens and the second lens are oriented in opposing directions, and the first imaging sensor, the first lens, the second imaging sensor, and the second lens are each mounted partially within an enclosure.

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority from pending U.S. Provisional Patent Application Ser. No. 62/819,589 (filed on Mar. 16, 2019 and entitled “Two-Camera 360-Degree Camera for Conference Rooms”) and 62/822,288 (filed on Mar. 22, 2019 and entitled “Two-Camera 360-Degree Camera for Conference Rooms”), each of which is incorporated by reference herein in its entirety.

BACKGROUND

[0002] 360-degree cameras such as the Microsoft RoundTable™ and the Polycom CX5500 Unified Conference Station have been shown to be practical commercial solutions for capturing video for teleconferencing scenarios. Since their initial release, new and/or more advanced teleconferencing capabilities, such as person identification, active speaker identification, and high accuracy video-assisted speech recognition, have been developed using video processing techniques. However, the single camera solutions often used to capture video effective for those video processing techniques have an insufficient vertical field of view (VFOV) in various common real-world teleconferencing environments and scenarios (as observed with single sensor wide angle fisheye lens designs), such as when a human participant is seated at a low height or is standing, and/or have too limited a depth of field (as observed with catadioptric designs). Although a suitable VFOV is provided by the Microsoft RoundTable™ and the Polycom CX5500 Unified Conference Station, the cost of these 360-degree cameras impedes their adoption and, as a result, users are unable to take full advantage of the above teleconferencing capabilities.

SUMMARY

[0003] A camera system, in accordance with a first aspect of this disclosure, includes an enclosure, a first imaging sensor having a first imaging surface with a first diagonal length in a first direction between two opposing corners of the first imaging surface, and a first lens arranged to guide a first image formation light flux toward the first imaging surface with the first image formation light flux having at the first imaging surface a first width in the first direction that is equal to or greater than the first diagonal length of the first imaging sensor. The camera system further includes a second imaging sensor, different than the first imaging sensor, having a second imaging surface with a second diagonal length in a second direction between two opposing corners of the second imaging surface, and a second lens arranged to guide a second image formation light flux toward the second imaging surface with the second image formation light flux having at the second imaging surface a second width in the second direction that is equal to or greater than the second diagonal length of the second imaging sensor. Additionally, the first imaging sensor, the first lens, the second imaging sensor, and the second lens are each mounted at least partially within the enclosure, and the first lens and the second lens are oriented in opposing directions.

[0004] A method of obtaining a 360-degree composite image frame, in accordance with a second aspect of this disclosure, includes receiving first image data for a first image frame captured by a first imaging sensor having a first imaging surface with a first diagonal length in a first direction between two opposing corners of the first imaging surface, wherein a first lens is arranged to guide a first image formation light flux toward the first imaging surface with the first image formation light flux having at the first imaging surface a first width in the first direction that is equal to or greater than the first diagonal length of the first imaging sensor. The method further includes receiving second image data for a second image frame captured by a second imaging sensor, different than the first imaging sensor, having a second imaging surface with a second diagonal length in a second direction between two opposing corners of the second imaging surface, wherein a second lens is arranged to guide a second image formation light flux toward the second imaging surface with the second image formation light flux having at the second imaging surface a second width in the second direction that is equal to or greater than the second diagonal length of the second imaging sensor. Additionally, the method includes generating the 360-degree composite image frame based on the first image data for the first image frame and the second image data for the second image frame. Also, the first imaging sensor, the first lens, the second imaging sensor, and the second lens are each mounted at least partially within an enclosure, the first lens and the second lens are oriented in opposing directions, and the 360-degree composite image frame has a 360 degree horizontal field of view.

[0005] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The drawing figures depict one or more implementations in accord with the present teachings, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements. Furthermore, it should be understood that the drawings are not necessarily to scale.

[0007] FIGS. 1A-1D illustrate an example implementation of a first panoramic camera configured to use two image sensors and respective lenses to produce a 360-degree image. FIG. 1A shows a front top isometric view of the first panoramic camera. FIG. 1B shows a rear top isometric view of the first panoramic camera. FIG. 1C shows a right side view of the first panoramic camera. FIG. 1D shows a top plan view of the first panoramic camera.

[0008] FIGS. 1E-1F illustrate an example implementation of an integrated camera system incorporating the first panoramic camera shown in FIGS. 1A-1D. FIG. 1E shows a front top isometric view of the integrated camera system. FIG. 1F shows a top plan view of the integrated camera system.

[0009] FIG. 2A shows a vertical cross section view of an example implementation of a second panoramic camera in which first and second image sensors are arranged vertically.

[0010] FIG. 2B shows a vertical cross section view of an example implementation of the second panoramic camera.

[0011] FIG. 3A shows a horizontal cross section view of an example implementation of a third panoramic camera in which the first and second image sensors are instead arranged horizontally. FIG. 3B shows a horizontal cross section view of an example implementation of the third panoramic camera.

[0012] FIGS. 4A-4D illustrate various configurations for the image circle shown in FIG. 2A or FIG. 2B for the second panoramic camera and in FIG. 3A or FIG. 3B for the third panoramic camera in relation to the first image sensor, in which the first image sensor has an aspect ratio of 16:9.

[0013] FIGS. 5A-5D illustrate various configurations for the image circle shown in FIG. 2A or FIG. 2B for the second panoramic camera and in FIG. 3A or FIG. 3B for the third panoramic camera in relation to the first image sensor, in which the first image sensor has an aspect ratio of 2:1. FIGS. 5E-5H show various example images corresponding to FIGS. 5A, 5B, and 5D.

[0014] FIGS. 6A-6F show examples in which the second panoramic camera shown in FIG. 2A or FIG. 2B and/or the third panoramic camera shown in FIG. 3A or FIG. 3B is/are configured to mechanically shift the first image sensor to effect corresponding shifts in VFOV obtained by the first image sensor. In FIGS. 6A and 6B, the first image sensor is in a first sensor position corresponding to the position of the first image sensor in FIGS. 1C, 2A, 2B, 3A, 3B, 4D, and 5D. In FIGS. 6C and 6D, the first image sensor has been shifted in a positive lateral direction to a second sensor position, resulting in a downward VFOV shift. In FIGS. 6E and 6F, the first image sensor has been shifted in a negative lateral direction to a third sensor position, resulting in an upward VFOV shift.

[0015] FIGS. 7A and 7B illustrate examples in which the VFOV changes shown in FIGS. 6C-6F are performed by the second panoramic camera shown in FIG. 2A or FIG. 2B. In FIG. 7A, the first image sensor and the second image sensor have been shifted together in the positive lateral direction as shown in FIGS. 6C and 6D. In FIG. 7B, the first image sensor and the second image sensor have been shifted together in the negative lateral direction as shown in FIGS. 6E and 6F.

[0016] FIGS. 8A and 8B illustrate examples in which the VFOV changes shown in FIGS. 6C-6F are performed by the third panoramic camera shown in FIG. 3A or FIG. 3B. In FIG. 8A, the first image sensor and the second image sensor have been shifted together in the positive lateral direction as shown in FIGS. 6C and 6D. In FIG. 8B, the first image sensor and the second image sensor have been shifted together in the negative lateral direction as shown in FIGS. 6E and 6F.

[0017] FIG. 9 is a block diagram illustrating an example 360-degree camera system;

[0018] FIG. 10 is a block diagram of an example computing device, which may be used to provide implementations of the mechanisms described herein; and

[0019] FIG. 11 is a block diagram illustrating components of an example machine configured to read instructions from a machine-readable medium.

DETAILED DESCRIPTION

[0020] In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.

[0021] There are various known options for 360-degree cameras suitable for teleconferencing use, but each has associated shortcomings. For some designs, a single sensor fisheye lens has been used. Although this is a simple and low-cost design approach, it fails to deliver a suitable vertical field of view (VFOV) with a suitable resolution. There have also been various single sensor catadioptric designs; for example, designs produced by Realtime Immersion, Inc. of Westborough, Mass., US. Although this approach can provide the VFOV not available from single sensor fisheye lens designs, it fails to provide a suitable depth of field spanning 0.5 m to 10 m, as discussed in S. Baker and S. K. Nayar, “A theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision, 35(2):175-196, 1999. Some designs utilize more than two sensors and a mirror or prism arrangement providing a small common center of projection (a camera’s center of projection is the point about which the camera can be rotated without inducing parallax) that can reduce image stitching errors to less than one pixel. U.S. Pat. No. 7,495,694 (entitled “Omni-directional camera with calibration and up look angle improvements” and issued on Feb. 24, 2009), which is incorporated by reference herein in its entirety, describes, among other things, examples of designs using five sensors. Although such designs offer suitable performance, they are costly to manufacture, and a lower cost approach is desirable.

[0022] There have been various attempts at stitching images from wide baseline cameras. Although the hardware design is simpler than designs such as the above-mentioned arrangement of more than two sensors and a mirror or prism arrangement, there are significant unsolved image processing issues. To create a seamless panorama, a depth needs to be correctly estimated for each pixel, which still is an open (unsolved) problem in computer vision. Approximations are computationally expensive and fail to generate frames at suitably high resolutions in real time (as illustrated by F. Perazzi, A. Sorkine-Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson, M. Gross, Panoramic Video from Unstructured Camera Arrays, Computer Graphics Forum (Proc. Eurographics 2015), Vol. 34, No. 2, May 2015, Zurich, Switzerland), and in addition will likely fail to fulfill stitching performance and subjective user experience requirements.

[0023] Another design approach involves the use of two sensors and a mirror or prism to provide a compact optical arrangement for capturing 360-degree images. It is significantly less costly to manufacture these designs than the above-mentioned arrangement of more than two sensors and a mirror or prism arrangement. U.S. Pat. No. 10,151,905 (entitled “Image capture system and imaging optical system” and issued on Dec. 11, 2018), which is incorporated by reference herein in its entirety, illustrates an example of this design approach. This application describes various designs and methods that overcome shortcomings encountered with previous implementations of this design approach. For example, designs described in this application offer a significantly higher image resolution.

[0024] FIGS. 1A-1D illustrate an example implementation of a first panoramic camera 100 configured to use two image sensors and respective lenses 120a and 120b to produce a 360-degree image. FIG. 1A shows a front top isometric view of the first panoramic camera 100. The first panoramic camera 100 includes an enclosure 110 (which may be referred to as a “housing”) and a first lens 120a with an optical axis along the X axis and having an object surface 122a (“external surface” or “first surface”) extending from and raised relative to a front side 112a (“front surface”) of the enclosure 110 of the first panoramic camera 100. FIG. 1B shows a rear top isometric view of the first panoramic camera 100. The first panoramic camera 100 includes a second lens 120b with an optical axis along the X axis and having an object surface 122b extending from and raised relative to a rear side (“rear surface”) of the enclosure 110 of the first panoramic camera 100. As shown in FIGS. 1A and 1B, the first lens 120a and the second lens 120b are mounted partially within the enclosure 110 with the lenses 120a and 120b protruding from and raised relative to an outer surface of the enclosure 110.

[0025] FIG. 1C shows a right side view of the first panoramic camera 100. The first lens 120a has a VFOV θ130a, and the second lens 120b has a VFOV θ130b. A first image sensor corresponding to the first lens 120a has a VFOV θ140a, and a second image sensor corresponding to the second lens 120b has a VFOV θ140b. In this example, the VFOVs θ130a and θ130b are 220 degrees and the VFOVs θ140a and θ140b are about 60 degrees, although in some examples other values may be used. In some implementations, the first lens 120a and the second lens 120b are anamorphic lenses having a higher focal length and/or a lower field of view in the vertical direction than in the horizontal direction, allowing for a selected reduction in the VFOVs θ140a and θ140b to increase the angular pixel density in the vertical direction in comparison to non-anamorphic lenses with equal focal lengths and fields of view in the horizontal and vertical directions. An anamorphic lens may be implemented by incorporating cylindrical lens elements. In some examples, computer-assisted lens design and manufacturing techniques can be used to implement a wide-angle anamorphic lens with specified anamorphosis and/or distortion characteristics in selected portions of the field of view, such as via “panomorph” lens design tools available from ImmerVision of Montreal, Quebec, CA. In some examples, the VFOVs θ130a and θ130b may be different. In some examples, the VFOVs θ140a and θ140b may be different. For implementations using two lenses, the term “wide-angle lens” refers to a lens having an HFOV of at least 180 degrees.
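The angular pixel density benefit described above can be approximated with simple arithmetic. The sketch below is illustrative only: the 1080-row sensor resolution is an assumed value, the 126.6-degree figure is the non-anamorphic full-frame VFOV from Table 1, and the 60-degree figure is the example sensor VFOV above.

```python
# Back-of-the-envelope estimate of the vertical angular pixel density gain
# from an anamorphic lens. The 1080-row resolution is an assumed value; the
# VFOVs are the 126.6-degree non-anamorphic full-frame case (Table 1) and
# the roughly 60-degree anamorphic case described above.

sensor_rows = 1080           # assumed vertical pixel count of each image sensor
vfov_non_anamorphic = 126.6  # degrees (Table 1, Full Frame row)
vfov_anamorphic = 60.0       # degrees (example above)

density_plain = sensor_rows / vfov_non_anamorphic   # ~8.5 pixels per degree
density_anamorphic = sensor_rows / vfov_anamorphic  # 18.0 pixels per degree
print(round(density_anamorphic / density_plain, 2))  # ~2.11x density gain
```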

[0026] FIG. 1D shows a top plan view of the first panoramic camera 100. The first lens 120a has a horizontal field of view (HFOV) θ132a, and the second lens 120b has an HFOV θ132b. The first image sensor corresponding to the first lens 120a has an HFOV θ150a, and the second image sensor corresponding to the second lens 120b has an HFOV θ150b. In this example, the HFOVs θ132a and θ132b of the lenses 120a and 120b are 220 degrees and the HFOVs θ150a and θ150b of their respective image sensors are 190 degrees. In some examples, the HFOVs θ132a and θ132b may be different. In some examples, the HFOVs θ150a and θ150b may be different. In some examples, the VFOV θ130a and the HFOV θ132a may be different. In some examples, the VFOV θ130b and the HFOV θ132b may be different.
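The HFOV values above determine how much angular overlap is available for stitching the two images into a 360-degree panorama. A minimal sketch of that arithmetic (the function name is ours; the 190-degree inputs are from the example above):

```python
def seam_overlap_degrees(hfov_a, hfov_b):
    """Angular overlap available at each of the two stitching seams when two
    cameras with the given HFOVs face in opposing directions."""
    total = hfov_a + hfov_b - 360.0  # combined excess beyond a full circle
    if total < 0:
        raise ValueError("the two fields of view do not cover 360 degrees")
    return total / 2.0  # the excess is split across the two seams

# Sensor HFOVs of 190 degrees each, as in FIG. 1D, leave 10 degrees of
# overlapping coverage at each seam for panoramic stitching.
print(seam_overlap_degrees(190.0, 190.0))  # 10.0
```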

[0027] FIGS. 1E-1F illustrate an example implementation of an integrated camera system 160 incorporating the first panoramic camera 100 shown in FIGS. 1A-1D. FIG. 1E shows a front top isometric view of the integrated camera system 160. The integrated camera system 160 includes a base 170, from which a riser 172 extends upward and supports the first panoramic camera 100. The integrated camera system 160 also includes a speaker 174, microphones 176, and a control panel 178. In the example shown in FIG. 1E, the speaker 174, microphones 176, and/or control panel 178 are included in the base 170. FIG. 1F shows a top plan view of the integrated camera system 160.

[0028] FIG. 2A shows a vertical (parallel to the X-Z plane) cross section view of an example implementation of a second panoramic camera 200 in which first and second imaging surfaces 240a and 240b of respective first and second image sensors 238a and 238b are arranged vertically. The first lens 120a includes a front lens group 210a, a reflector 220a, an aperture 232a, a rear lens group 230a, and has an optical axis 250a (which may also be referred to as a “chief ray” of the first lens 120a). Light received by the first lens 120a passes through the front lens group 210a and is reflected by the reflector 220a to pass through the aperture 232a and the rear lens group 230a, with the first lens 120a projecting and guiding a first image formation light flux 260a (which may be referred to as “image formation light rays”) to the first imaging surface 240a. The reflector 220a may comprise a mirror, prism, and/or reflecting film. A front side of the first imaging surface 240a, that receives light from the first lens 120a, lies in a first imaging plane 242a (parallel to the X-Y plane) at which the first image formation light flux 260a has a first height d244a (which may be referred to as an “image circle height”) in the direction of the X-axis. The second lens 120b includes a front lens group 210b, a reflector 220b, an aperture 232b, a rear lens group 230b, and has an optical axis 250b (“chief ray”). Light received by the second lens 120b passes through the front lens group 210b and is reflected by the reflector 220b to pass through the aperture 232b and the rear lens group 230b, with the second lens 120b projecting and guiding a second image formation light flux 260b to the second imaging surface 240b. The reflector 220b may comprise a mirror, prism, and/or reflecting film.
A front side of the second imaging surface 240b lies in a second imaging plane 242b (parallel to the X-Y plane) at which the second image formation light flux 260b has a second height d244b in the direction of the X-axis.

[0029] In some implementations, as shown in FIG. 2A, a top 142a (“upper extent”) of the first VFOV θ140a is at a greater angle from the Z-axis (and the optical axis 250a of the front lens group 210a) than the bottom 144a (“lower extent”) of the first VFOV θ140a, which is useful for tabletop video conferencing applications. In some examples, this is implemented using a “view camera” effect, in which a center of the first imaging surface 240a (and the first imaging sensor 238a) is shifted or otherwise positioned in the negative X-axis direction from the chief ray 250a, with the first imaging surface 240a being perpendicular to the chief ray 250a of the first lens 120a. Likewise, the view camera effect can be applied for the second imaging surface 240b (and the second imaging sensor 238b), placing a top 142b and a bottom 144b of the second VFOV θ140b at the same angles with respect to the optical axis 250b (and/or the X-axis) as the respective angles of the top 142a and the bottom 144a with respect to the optical axis 250a (and/or the X-axis).

[0030] With the front lens group 210a and the front lens group 210b arranged coaxially, a significant reduction in stitching error is obtained over a non-coaxial arrangement; for example, a stitching error can be reduced from 14 pixels to about 5 pixels in certain implementations. Additionally, with the front lens group 210a and the front lens group 210b arranged coaxially, stitching errors can be “fixed” adaptively by searching for a correct object depth using a significantly smaller window; for example, a 14×14 window can be reduced to a 5×5 window in certain implementations. Further, with the front lens group 210a and the front lens group 210b arranged coaxially, the epipolar constraint can be applied to perform a linear depth search, offering a significant speedup that facilitates real-time panoramic image stitching.
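The benefit of the epipolar constraint noted above can be illustrated with a toy one-dimensional matcher. This is not the stitching algorithm of this disclosure, only a sketch of why constraining candidate matches to a single line (rather than a two-dimensional window) makes the depth search linear; all names and the synthetic data are ours.

```python
# Toy epipolar search: find the offset along a single scanline (the epipolar
# line) that best matches a reference patch, using a sum of squared
# differences (SSD) cost. An unconstrained search over a 14x14 window would
# evaluate 196 candidates; the linear search evaluates one per offset.

def epipolar_match(row_a, row_b, x, half_window=2, search=7):
    """Return the offset d minimizing the SSD between the patch of row_a
    centered at x and the patch of row_b centered at x + d."""
    patch = row_a[x - half_window : x + half_window + 1]
    best_d, best_cost = None, float("inf")
    for d in range(-search, search + 1):
        cand = row_b[x + d - half_window : x + d + half_window + 1]
        cost = sum((p - q) ** 2 for p, q in zip(patch, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic scanlines: row_b is row_a shifted right by 3 pixels.
row_a = [i % 11 for i in range(64)]
row_b = row_a[-3:] + row_a[:-3]
print(epipolar_match(row_a, row_b, 32))  # 3
```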

[0031] FIG. 2B shows a vertical cross section view of an example implementation of the second panoramic camera 200, which includes the various features of the second panoramic camera 200 shown in FIG. 2A. In some examples, as shown in FIG. 2B, the first lens 120a has a first edge of field entrance pupil 270aa (or more simply, “first entrance pupil”) for an incoming ray at a first angle θ272 from the optical axis of the front lens group 210a. In this example, the first angle θ272 for the first edge of field entrance pupil 270aa is approximately 90 degrees (for example, 90 degrees in the X-Y plane), corresponding to an angular position at which panoramic image stitching is likely to occur between an image captured by the first image sensor 238a and an image captured by the second image sensor 238b; additionally, a second entrance pupil 270ab for a second angle of approximately 60 degrees, a third entrance pupil 270ac for a third angle of approximately 45 degrees, and a fourth center of field entrance pupil 270ad (or more simply, “fourth entrance pupil”) for an angle of 0 degrees are shown. The second lens 120b has a fifth edge of field entrance pupil 270ba (or more simply, “fifth entrance pupil”) for an incoming ray at the same first angle θ272 from the optical axis of the front lens group 210b, and a sixth center of field entrance pupil 270bd (or more simply, “sixth entrance pupil”) for an angle of 0 degrees.

[0032] The first lens 120a and the second lens 120b are arranged closely to reduce parallax errors between images captured by the image sensors 238a and 238b. In some implementations, the first lens 120a and the second lens 120b are arranged with a distance d274a between the first edge of field entrance pupil 270aa and the fifth edge of field entrance pupil 270ba. In some implementations, the distance d274a is less than or equal to 25 mm. In some implementations, the distance d274a is less than or equal to 10 mm. In some implementations, the first lens 120a and the second lens 120b are arranged with a distance d274d between the fourth center of field entrance pupil 270ad and the sixth center of field entrance pupil 270bd. In some implementations, the distance d274d is less than or equal to 25 mm. In some implementations, the distance d274d is less than or equal to 10 mm. In some implementations, the view camera effect described in connection with FIG. 2A is used in the example shown in FIG. 2B.

[0033] FIG. 3A shows a horizontal (parallel to the X-Y plane) cross section view of an example implementation of a third panoramic camera 300 in which the first and second imaging surfaces 240a and 240b are instead arranged horizontally. There is a similar configuration for the first lens 120a of a front lens group 310a, a reflector 320a, an aperture 332a, a rear lens group 330a, an optical axis 350a, and a first image formation light flux 260a as shown in FIG. 2A, so their descriptions are omitted. Likewise, there is a similar configuration for the second lens 120b of a front lens group 310b, a reflector 320b, an aperture 332b, a rear lens group 330b, an optical axis 350b, and a second image formation light flux 260b as shown in FIG. 2A, so their descriptions are omitted. A front side of the first imaging surface 240a lies in a first imaging plane 242a (parallel to the X-Z plane) at which the first image formation light flux 260a has a first width d246a (which may be referred to as an “image circle width”) in the direction of the X-axis. A front side of the second imaging surface 240b lies in a second imaging plane 242b (parallel to the X-Z plane) at which the second image formation light flux 260b has a second width d246b in the direction of the X-axis. In the third panoramic camera 300, the front lens group 310a and the front lens group 310b are arranged coaxially or have respective optical axes within 1 mm of each other, as described above for the second panoramic camera 200, and with similar benefits obtained.

[0034] FIG. 3B shows a horizontal cross section view of an example implementation of the third panoramic camera 300, which includes the various features of the third panoramic camera 300 shown in FIG. 3A. In some examples, as shown in FIG. 3B, the first lens 120a has a first edge of field entrance pupil 370aa for an incoming ray at a first angle θ372 from the optical axis of the front lens group 310a. In this example, the first angle θ372 for the first edge of field entrance pupil 370aa is approximately 90 degrees (for example, 90 degrees in the X-Y plane), corresponding to an angular position at which panoramic image stitching is likely to occur between an image captured by the first image sensor 238a and an image captured by the second image sensor 238b; additionally, a second entrance pupil 370ab for a second angle of approximately 60 degrees, a third entrance pupil 370ac for a third angle of approximately 45 degrees, and a fourth center of field entrance pupil 370ad (or more simply, “fourth entrance pupil”) for an angle of 0 degrees are shown. The second lens 120b has a fifth edge of field entrance pupil 370ba (or more simply, “fifth entrance pupil”) for an incoming ray at the same first angle θ372 from the optical axis of the front lens group 310b, and a sixth center of field entrance pupil 370bd (or more simply, “sixth entrance pupil”) for an angle of 0 degrees.

[0035] The first lens 120a and the second lens 120b are arranged closely to reduce parallax errors between images captured by the image sensors 238a and 238b. In some implementations, the first lens 120a and the second lens 120b are arranged with a distance d374a between the first edge of field entrance pupil 370aa and the fifth edge of field entrance pupil 370ba. In some implementations, the distance d374a is less than or equal to 25 mm. In some implementations, the distance d374a is less than or equal to 10 mm. In some implementations, the first lens 120a and the second lens 120b are arranged with a distance d374d between the fourth center of field entrance pupil 370ad and the sixth center of field entrance pupil 370bd. In some implementations, the distance d374d is less than or equal to 25 mm. In some implementations, the distance d374d is less than or equal to 10 mm. In some implementations, the view camera effect described in connection with FIG. 2A is used in the examples shown in FIGS. 3A and 3B.
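The entrance pupil distances above bound the parallax error at the stitching seam. The sketch below estimates that error in pixels under a small-angle approximation; the function name and the 3840-pixel sensor width are our assumptions, not values taken from this disclosure.

```python
import math

def seam_parallax_pixels(baseline_m, distance_m, px_per_degree):
    """Approximate seam disparity, in pixels, for a point at the given
    distance when the two entrance pupils are separated by baseline_m.
    Small-angle approximation: disparity (radians) ~= baseline / distance."""
    disparity_deg = math.degrees(baseline_m / distance_m)
    return disparity_deg * px_per_degree

# Assumed angular resolution: a 3840-pixel-wide sensor spanning a 190-degree HFOV.
px_per_degree = 3840 / 190.0

# A 10 mm entrance pupil distance at typical conference-room ranges:
for d in (0.5, 2.0, 10.0):
    print(d, round(seam_parallax_pixels(0.010, d, px_per_degree), 1))
# Nearby objects (0.5 m) produce tens of pixels of parallax, while distant
# objects (10 m) produce about one pixel, which is why small entrance pupil
# distances matter for stitching quality.
```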

[0036] FIGS. 4A-4D illustrate various configurations for the first image formation light flux 260a shown in FIG. 2A or FIG. 2B for the second panoramic camera 200 and in FIG. 3A or FIG. 3B for the third panoramic camera 300 in relation to the first imaging surface 240a of the first image sensor 238a, in which the first imaging surface 240a has an aspect ratio of 16:9. The aspect ratio is a ratio of a width 410 of the first imaging surface 240a in a first direction 402 (the Y-axis for the second panoramic camera 200, or the X-axis for the third panoramic camera 300) to a height 412 of the first imaging surface 240a in a second direction 404 (the X-axis for the second panoramic camera 200, or the Z-axis for the third panoramic camera 300) perpendicular to the first direction 402. In FIGS. 4A-4D, the first image formation light flux 260a is shown as a first image circle 260a (with a height d244a in the second direction 404 and a width d246a in the first direction 402) at a plane of the first imaging surface 240a (shown as the first imaging plane 242a in FIGS. 2A-3B). It is understood that the first image formation light flux 260a refers to the portion of the light flux projected by the lens 120a toward the first imaging surface 240a that, at the first imaging plane 242a, is suitable for, and may result in image data suitable for, panoramic image processing. For example, although the first lens 120a will likely project additional light flux toward the first imaging surface 240a in an area outside of the image circle 260a, outside of the image circle 260a optical errors, such as, but not limited to, vignetting, decreased saturation, decreased resolution, or distortion, are too great for producing suitable image data (in some examples, even despite image correction that is successfully applied for other areas within the image circle 260a). In some examples, the image circle 260a is not circular and the height d244a is different than the width d246a.

[0037] FIGS. 4A-4D each illustrate a front side of the first imaging surface 240a, which is configured to receive and measure light received by the first lens 120a and generate corresponding image data. For the second panoramic camera 200 shown in FIG. 2A or FIG. 2B, FIGS. 4A-4D illustrate the front side of the first image sensor 238a, including the first imaging surface 240a, as viewed downward along the Z-axis and with the front side of the first imaging surface 240a arranged parallel to the X-Y plane. For the third panoramic camera 300 shown in FIG. 3A or FIG. 3B, FIGS. 4A-4D illustrate the front side of the first image sensor 238a, including the first imaging surface 240a, as viewed rightward along the Y-axis and with the front side of the first imaging surface 240a arranged parallel to the X-Z plane. In FIG. 4A, a diameter 420 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to the height 412 of the first imaging surface 240a. In FIG. 4B, a diameter 422 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to the width 410 of the first imaging surface 240a. In FIG. 4C, a diameter 424 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to a diagonal length 414 of the first imaging surface 240a (a distance between opposite corners 416 and 418 of the first imaging surface 240a). In FIG. 4C, the image circle 260a has a diagonal width d448a in a third direction 406 parallel to the diagonal of the first imaging surface 240a between the corners 416 and 418 that is equal to the diagonal length 414 of the first imaging surface 240a. In FIG. 4D, a diameter 426 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is larger than the diagonal length 414 of the first imaging surface 240a. In FIG. 4D, the diagonal width d448a of the image circle 260a in the third direction 406 is greater than the diagonal length 414 of the first imaging surface 240a. In implementations in which the image circle 260a has a diameter and/or the diagonal width d448a greater than or equal to the diagonal length 414, as shown in FIGS. 4C and 4D, the entire first imaging surface 240a can capture useful image data, with increased angular resolution versus the example in FIG. 4A.
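The relationships among the image circle diameter and the sensor height, width, and diagonal described above can be summarized in a short sketch. This is illustrative Python written for this summary, not part of the patent; the function name and the category labels are our own, and the example dimensions are the 9.63 mm by 5.41 mm imaging surface used in the tables below.

```python
import math

# Illustrative sketch (not from the patent): classify how an image circle of
# a given diameter relates to a rectangular imaging surface, following the
# configurations of FIGS. 4A-4D. All names and labels here are our own.
def classify_image_circle(diameter_mm: float, width_mm: float, height_mm: float) -> str:
    # Diagonal length: the distance between opposite corners of the surface.
    diagonal_mm = math.hypot(width_mm, height_mm)
    if diameter_mm >= diagonal_mm:
        # FIGS. 4C-4D: the circle covers the corners, so the entire
        # imaging surface can capture useful image data.
        return "full frame (FIGS. 4C-4D)"
    if diameter_mm > height_mm:
        # FIG. 4B when the diameter equals the width: the circle is cropped
        # at the top and bottom, and the corners receive no usable flux.
        return "cropped circle (FIG. 4B)"
    # FIG. 4A: the whole circle fits on the surface; pixels outside it are unused.
    return "circular (FIG. 4A)"
```

For the 16:9 example surface, a 5.41 mm circle classifies as circular, a 9.63 mm circle as cropped, and an 11.63 mm circle (larger than the roughly 11.04 mm diagonal) as full frame.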

[0038] The tables below are for a 190 degree minimum field of view for an example of the first imaging surface 240a in a 1/2.3-inch format, having a width 410 of 9.63 mm, a height 412 of 5.41 mm, and a diagonal length 414 of approximately 11.04 mm. The rows labeled "Shiftable" are for a design providing a plus or minus 10 degree shift in the VFOV (as discussed in connection with FIGS. 6A-8B), and the indicated VFOV includes the VFOV of the first imaging surface 240a plus the additional 20 degrees available via shifting.

TABLE 1: stereographic lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 4A)         190                  2.48          5.41            190                --
  Cropped Circle (FIG. 4B)   190                  4.41          9.63            126.1              190
  Full Frame (FIG. 4C)       206                  4.39          11.04           126.6              190.5
  Shiftable (FIG. 4D)        212                  4.38          11.63           146.8 (126.8 + 20) 190.7

TABLE 2: equidistant lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 4A)         190                  3.27          5.41            190                --
  Cropped Circle (FIG. 4B)   190                  5.81          9.63            106.9              190
  Full Frame (FIG. 4C)       218                  5.80          11.04           106.9              190
  Shiftable (FIG. 4D)        229                  5.79          11.57           127.2 (107.2 + 20) 190.5

TABLE 3: equisolid lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 4A)         190                  3.67          5.41            190                --
  Cropped Circle (FIG. 4B)   190                  6.53          9.63            98                 190
  Full Frame (FIG. 4C)       231                  6.53          11.04           98                 190
  Shiftable (FIG. 4D)        250                  6.52          11.57           118 (98 + 20)      190.1
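The focal lengths, image-circle diameters, and VFOV values in Tables 1-3 are mutually consistent under the three named fisheye mapping functions. The sketch below is illustrative Python, not from the patent: the scaling convention for each mapping is inferred so that it reproduces the table values (other references scale the same stereographic, equidistant, and equisolid mappings differently), and the function names are our own.

```python
import math

# Illustrative sketch (not from the patent). The mapping constants below are
# inferred from Tables 1-3; other texts may normalize these mappings differently.
def image_circle_diameter(lens_fov_deg: float, focal_mm: float, mapping: str) -> float:
    """Diameter of the image circle at the imaging plane for a lens field of view."""
    half = math.radians(lens_fov_deg / 2)  # half field of view, in radians
    if mapping == "stereographic":
        return 2 * focal_mm * math.tan(half / 2)
    if mapping == "equidistant":
        return focal_mm * half
    if mapping == "equisolid":
        return 2 * focal_mm * math.sin(half / 2)
    raise ValueError(f"unknown mapping: {mapping}")

def fov_over_extent(extent_mm: float, focal_mm: float, mapping: str) -> float:
    """Field of view (degrees) subtended by a sensor extent (e.g. its height,
    for the VFOV), obtained by inverting the diameter formula above."""
    if mapping == "stereographic":
        return math.degrees(4 * math.atan(extent_mm / (2 * focal_mm)))
    if mapping == "equidistant":
        return math.degrees(2 * extent_mm / focal_mm)
    if mapping == "equisolid":
        return math.degrees(4 * math.asin(extent_mm / (2 * focal_mm)))
    raise ValueError(f"unknown mapping: {mapping}")
```

For example, the Table 1 "Cropped Circle" row follows as image_circle_diameter(190, 4.41, "stereographic"), which is approximately 9.63 mm, and its VFOV as fov_over_extent(5.41, 4.41, "stereographic"), approximately 126.1 degrees.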

[0039] FIGS. 5A-5D illustrate various configurations for the first image formation light flux 260a shown in FIG. 2A or FIG. 2B for the second panoramic camera 200 and in FIG. 3A or FIG. 3B for the third panoramic camera 300 in relation to the first imaging surface 240a of the first image sensor 238a, in which the first imaging surface 240a has an aspect ratio of 2:1. The aspect ratio is a ratio of a width 510 of the first imaging surface 240a in a first direction 502 (the Y-axis for the second panoramic camera 200, or the X-axis for the third panoramic camera 300) to a height 512 of the first imaging surface 240a in a second direction 504 (the X-axis for the second panoramic camera 200, or the Z-axis for the third panoramic camera 300) perpendicular to the first direction 502. In FIGS. 5A-5D, the first image formation light flux 260a is shown as a first image circle 260a (with a height d244a in the second direction 504 and a width d246a in the first direction 502) at a plane of the first imaging surface 240a (shown as the first imaging plane 242a in FIGS. 2A-3B).

[0040] FIGS. 5A-5D each illustrate a front side of the first image sensor 238a, including the first imaging surface 240a, according to the same arrangements described for FIGS. 4A-4D in connection with the second panoramic camera 200 and the third panoramic camera 300, as indicated by the illustrated axes. One possible benefit of the 2:1 aspect ratio, which is wider than the 16:9 aspect ratio in FIGS. 4A-4D, is an increased number of columns of pixels along the width 510 of the first imaging surface 240a without a corresponding reduction in pixel width, resulting in higher horizontal angular resolution without reducing image quality. It is understood that additional aspect ratios may be used. In FIG. 5A, a diameter 520 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to a height 512 of the first imaging surface 240a. In FIG. 5B, a diameter 522 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to a width 510 of the first imaging surface 240a. In FIG. 5C, a diameter 524 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is equal to a diagonal length 514 of the first imaging surface 240a (a distance between opposite corners 516 and 518 of the first imaging surface 240a). In FIG. 5C, the image circle 260a has a diagonal width d548a in a third direction 506 parallel to the diagonal of the first imaging surface 240a between the corners 516 and 518 that is equal to the diagonal length 514 of the first imaging surface 240a. In FIG. 5D, a diameter 526 of the image circle 260a (and also the width d246a and the height d244a of the image circle 260a) is larger than the diagonal length 514 of the first imaging surface 240a. In FIG. 5D, the diagonal width d548a of the image circle 260a in the third direction 506 is greater than the diagonal length 514 of the first imaging surface 240a. In implementations in which the image circle 260a has a diameter and/or the diagonal width d548a greater than or equal to the diagonal length 514, as shown in FIGS. 5C and 5D, the entire first imaging surface 240a can be used to capture image data, with increased angular resolution versus the example in FIG. 5A.
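The horizontal angular-resolution benefit of the wider surface can be illustrated with rough numbers. This sketch is ours, not the patent's: the 1.4 micrometer pixel pitch is an assumed value chosen only for illustration, while the widths (9.63 mm for the 16:9 example and 10.83 mm for the 2:1 example) and the roughly 190 degree HFOV come from the surrounding text.

```python
# Illustrative only: the pixel pitch is an assumption, not a value from the
# patent. Both example surfaces receive an HFOV of about 190 degrees.
PITCH_UM = 1.4   # assumed pixel pitch, micrometers
HFOV_DEG = 190

def pixels_per_degree(width_mm: float) -> float:
    # Number of pixel columns across the width, divided by the HFOV.
    columns = width_mm * 1000 / PITCH_UM
    return columns / HFOV_DEG

r_16x9 = pixels_per_degree(9.63)   # 16:9 example surface
r_2x1 = pixels_per_degree(10.83)   # 2:1 example surface
# At equal pitch, the 2:1 surface adds columns without shrinking the pixels,
# so horizontal angular resolution rises in proportion to the width
# (a factor of 10.83 / 9.63, roughly 12%).
```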

[0041] FIGS. 5E-5H show various example images corresponding to FIGS. 5A, 5B, and 5D. FIG. 5E shows an example corresponding to FIG. 5A. FIG. 5F shows an example corresponding to FIG. 5B. FIG. 5G shows an example corresponding to FIG. 5D. FIG. 5H shows an example corresponding to FIG. 5D, but in which the VFOV has been shifted upward such that less of the field of view is used to capture a desk and such that if a person stands, they are likely to remain in the field of view. Additionally, it is noted that a lens mapping function can affect the field of view required for the lenses 120a and 120b. Examples of such mappings include stereographic, equidistant, and equisolid.

[0042] The tables below are for a 190 degree minimum field of view for an example of the first imaging surface 240a having a width 510 of 10.83 mm and a height 512 of 5.41 mm (a diagonal length 514 of approximately 12.11 mm). The rows labeled "Shiftable" are for a design providing a plus or minus 10 degree shift in the VFOV (as discussed in connection with FIGS. 6A-8B), and the indicated VFOV includes the VFOV of the first imaging surface 240a plus the additional 20 degrees available via shifting.

TABLE 4: stereographic lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 5A)         190                  2.48          5.41            190                --
  Cropped Circle (FIG. 5B)   190                  4.96          10.83           104.5              190
  Full Frame (FIG. 5C)       203                  4.95          12.11           104.8              190.3
  Shiftable (FIG. 5D)        208                  4.95          12.68           134.6 (114.6 + 20) 190.2

TABLE 5: equidistant lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 5A)         190                  3.27          5.41            190                --
  Cropped Circle (FIG. 5B)   190                  6.53          10.83           95                 190
  Full Frame (FIG. 5C)       213                  6.51          12.11           95.3               190.5
  Shiftable (FIG. 5D)        222                  6.53          12.66           115 (95 + 20)      190

TABLE 6: equisolid lens

  Configuration              Lens field of view   Lens focal    Image circle    Imaging surface    Imaging surface
                             (degrees)            length (mm)   diameter (mm)   VFOV (degrees)     HFOV (degrees)
  Circular (FIG. 5A)         190                  3.67          5.41            190                --
  Cropped Circle (FIG. 5B)   190                  7.34          10.83           86.5               190
  Full Frame (FIG. 5C)       222                  7.35          12.11           86.5               190
  Shiftable (FIG. 5D)        239                  7.33          12.67           106.6 (86.6 + 20)  190.3

[0043] It is noted that in all of the examples shown in Tables 1-6 for FIGS. 4B-4D and 5B-5D, the VFOV received by the first imaging surface 240a is less than 130 degrees, which increases the angular pixel density in the vertical direction compared to the examples of FIGS. 4A and 5A. In some implementations, the VFOV received by the first imaging surface 240a is less than or equal to 90 degrees, further increasing the angular density. In some implementations, the VFOV received by the first imaging surface 240a is less than or equal to 70 degrees (for example, approximately 60 degrees, as in the examples shown in FIGS. 1C and 2A), further increasing the angular density while providing a VFOV that is useful for tabletop videoconferencing applications. As previously discussed, in some implementations the lenses 120a and 120b are anamorphic lenses having a higher focal length and/or a lower field of view in the vertical direction than in the horizontal direction to achieve a lower VFOV while maintaining an HFOV of greater than 180 degrees and using image sensor devices with commonly available aspect ratios, such as the 16:9 aspect ratio in FIGS. 4A-4D or the 2:1 aspect ratio in FIGS. 5A-5D.
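The angular-pixel-density point above can be checked with simple arithmetic. The sketch below is ours, not the patent's: the 2160 pixel-row count is an assumed value for illustration, while the 190 and 126.1 degree VFOV figures come from Table 1.

```python
# Illustrative sketch (not from the patent). Vertical angular pixel density
# is pixel rows divided by the VFOV those rows span. ROWS is assumed.
ROWS = 2160  # assumed pixel rows on the imaging surface

def rows_per_degree(vfov_deg: float) -> float:
    return ROWS / vfov_deg

density_circular = rows_per_degree(190)    # FIG. 4A: full 190-degree VFOV
density_cropped = rows_per_degree(126.1)   # FIG. 4B: VFOV held under 130 degrees
# Reducing the VFOV from 190 to 126.1 degrees raises the vertical angular
# pixel density by a factor of 190 / 126.1, roughly 1.5x; a 90- or 70-degree
# VFOV raises it further still.
```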

[0044] FIGS. 6A-6F show an example in which the second panoramic camera 200 shown in FIG. 2A or FIG. 2B and/or the third panoramic camera 300 shown in FIG. 3A or FIG. 3B is/are configured to mechanically shift or otherwise displace the first image sensor 238a to cause a corresponding shift in a VFOV obtained by the first imaging surface 240a. For convenience of discussion, in FIGS. 6B, 6D, and 6F, the second panoramic camera 200 and/or the third panoramic camera 300 is referred to as a fourth panoramic camera 600.

[0045] In FIGS. 6A and 6B, the first image sensor 238a is in a first sensor position 610a (which may be referred to as a "home" or "initial" position) corresponding to the position of the first image sensor 238a in FIGS. 1C, 2A-2B, 3A-3B, 4D, and 5D. In FIGS. 6C and 6D, the first image sensor 238a (including the first imaging surface 240a) has been shifted from the first sensor position 610a in a positive lateral direction (along the X-axis in the positive direction for the second panoramic camera 200, and along the Z-axis in the positive direction for the third panoramic camera 300) to a second sensor position 612a. As a result, the VFOV obtained by the first imaging surface 240a has made a corresponding shift downward from the first VFOV θ140a shown in FIG. 6B to a second VFOV θ620a. In this example, at the second sensor position 612a further movement of the first image sensor 238a in the positive lateral direction would cause a portion of the first imaging surface 240a to exit the image circle 260a. FIG. 6C shows a first position 650 of a first corner of the first imaging surface 240a while in the second sensor position 612a. In FIGS. 6E and 6F, the first image sensor 238a has been shifted in a negative lateral direction with respect to the first and second sensor positions 610a and 612a shown in FIGS. 6A and 6C, to a third sensor position 614a. As a result, the VFOV obtained by the first imaging surface 240a has made a corresponding shift upward from the first VFOV θ140a to a third VFOV θ630a. In this example, at the third sensor position 614a further movement of the first image sensor 238a in the negative lateral direction would cause a portion of the first imaging surface 240a to exit the image circle 260a. FIG. 6E shows a second position 652 of a second corner of the first imaging surface 240a, opposite to the first corner for the first position 650, while in the third sensor position 614a, and a distance d654 between the positions 650 and 652. In this example, the width d246a of the image circle 260a is equal to the distance d654.
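One way to model the relationship between the mechanical displacement and the resulting VFOV shift is sketched below. This is our own illustrative model, not the patent's: it assumes the stereographic mapping convention inferred from Table 4 (image radius r = f·tan(θ/2) for a ray at angle θ from the optical axis, with the "Shiftable" focal length of 4.95 mm), and the 0.43 mm displacement figure is our calculation under that assumption.

```python
import math

# Illustrative sketch (not from the patent). Under the assumed stereographic
# mapping r = f * tan(theta / 2), a lateral sensor displacement s moves the
# sensor center to image radius s, i.e. re-centers the view at angle
# theta = 2 * atan(s / f) from the optical axis.
F_MM = 4.95  # "Shiftable" focal length from Table 4 (stereographic lens)

def view_center_shift_deg(displacement_mm: float) -> float:
    """VFOV center shift (degrees) produced by a lateral sensor displacement."""
    return math.degrees(2 * math.atan(displacement_mm / F_MM))

def displacement_for_shift_mm(shift_deg: float) -> float:
    """Lateral displacement (mm) needed for a desired VFOV center shift."""
    return F_MM * math.tan(math.radians(shift_deg) / 2)

# Under these assumptions, the plus-or-minus 10 degree shift discussed above
# corresponds to moving the sensor by roughly 0.43 mm in each direction.
s_10deg = displacement_for_shift_mm(10)
```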

……
