Facebook Patent | Ultra-Wide Field-Of-View Scanning Devices For Depth Sensing

Patent: Ultra-Wide Field-Of-View Scanning Devices For Depth Sensing

Publication Number: 10613413

Publication Date: 2020-04-07

Applicants: Facebook

Abstract

A depth camera assembly for determining depth information for objects in a local area comprises a light generator, a camera and a controller. The light generator illuminates the local area with structured light in accordance with emission instructions from the controller. The light generator includes an illumination source, an acousto-optic deflector (AOD), and a liquid crystal device (LCD) with liquid crystal gratings (LCGs). The AOD functions as a dynamic diffraction grating that diffracts optical beams emitted from the illumination source to form diffracted scanning beams, based on emission instructions from the controller. Each LCG in the LCD is configured to further diffract light from the AOD to generate the structured light projected into the local area. The camera captures images of portions of the structured light reflected from objects in the local area. The controller determines depth information for the objects based on the captured images.

BACKGROUND

The present disclosure generally relates to depth sensing, and specifically relates to ultra-wide field-of-view scanning devices for three-dimensional (3D) depth sensing.

To achieve a compelling user experience for depth sensing when using head-mounted displays (HMDs) and near-eye displays (NEDs), it is important to create a dynamic, all-solid-state light scanning device with both an ultrafast scanning speed (e.g., MHz) and a large field-of-view. Usually, there are tradeoffs among speed, field-of-view, and real-time reconfigurable illumination characteristics. Typically, a microelectromechanical system (MEM) having a mechanical-based mirror device can be used for scanning. However, the mechanical-based mirror device has stability issues and a limited scanning speed. In addition, the mechanical-based mirror device is not reconfigurable in real-time applications.

Most depth sensing methods rely on active illumination and detection. The conventional methods for depth sensing involve mechanical scanning or fixed diffractive-optics pattern projection, using structured light or time-of-flight techniques. Depth sensing based on time-of-flight uses a MEM with a mechanical-based mirror device (scanner) to send short pulses into an object space. The depth sensing based on time-of-flight further uses a high speed detector to time-gate back scattered light from the object to create high resolution depth maps. However, the mechanical-based scanner performs inadequately in relation to scanning speed, real-time reconfiguration and mechanical stability. The scanning speed is often limited to a few kHz along a fast axis and a few hundred Hertz along a slow axis. In addition, the mechanical-based scanner has stability and reliability issues. Depth sensing based on a fixed structured light pattern uses a diffractive optical element to generate a fixed structured light pattern projected into an object space. The depth sensing based on the fixed structured light pattern further uses a pre-stored look-up table to compute and extract depth maps. However, the depth sensing based on the fixed structured light pattern and the diffractive optical element is not robust enough for dynamic depth sensing.
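For orientation, the two conventional approaches above rest on two standard relations, sketched briefly below. The formulas are generic textbook relations and the numeric values are illustrative, not parameters from this disclosure.

```python
# Illustrative depth relations for the two conventional approaches described above.
# Formulas are standard textbook relations; values are examples, not from the patent.

C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Pulsed time-of-flight: depth is half the round-trip distance of a short pulse."""
    return C * round_trip_time_s / 2.0

def structured_light_depth(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Triangulation for a fixed structured-light pattern: depth from the projector-camera
    baseline, the camera focal length (in pixels), and the observed pattern disparity."""
    return baseline_m * focal_px / disparity_px

print(tof_depth(10e-9))                            # ~1.5 m for a 10 ns round trip
print(structured_light_depth(0.05, 800.0, 20.0))   # 2.0 m for an assumed 5 cm baseline
```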

SUMMARY

A depth camera assembly (DCA) determines depth information associated with one or more objects in a local area. The DCA comprises a light generator, an imaging device and a controller. The light generator is configured to illuminate the local area with structured light in accordance with emission instructions. The light generator comprises an illumination source, an acousto-optic deflector (AOD), a liquid crystal device (LCD), and a projection assembly. The illumination source is configured to emit one or more optical beams. The AOD generates diffracted scanning beams (in one or two dimensions) from the one or more optical beams emitted from the illumination source. The AOD is configured to function as at least one dynamic diffraction grating that diffracts the one or more optical beams by at least one diffraction angle to form the diffracted scanning beams based in part on the emission instructions. The LCD includes a plurality of liquid crystal gratings (LCGs). Each LCG in the LCD has an active state in which the LCG is configured to diffract the diffracted scanning beams by another diffraction angle larger than the at least one diffraction angle based in part on the emission instructions to generate the structured light. The projection assembly is configured to project the structured light into the local area. The imaging device is configured to capture one or more images of portions of the structured light reflected from one or more objects in the local area. The controller may be coupled to both the light generator and the imaging device. The controller generates the emission instructions and provides the emission instructions to the light generator. The controller is also configured to determine depth information for the one or more objects based at least in part on the captured one or more images.
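As a reading aid, a minimal structural sketch of the data flow this summary describes follows: the controller's emission instructions drive the AOD (fine diffraction angle) and then an active LCG (coarser angle) before projection, capture, and depth computation. Every class and attribute name below is invented for illustration; this is not the patent's implementation.

```python
# Schematic sketch of the DCA data flow summarized above. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class EmissionInstructions:
    rf_frequency_hz: float   # drives the AOD's dynamic diffraction grating (fine angle)
    active_lcg_index: int    # selects which liquid crystal grating is active (coarse angle)
    pattern: str             # e.g., "stripes" or "dots"

class DepthCameraAssembly:
    def __init__(self, aod, lcd, projector, camera):
        # aod, lcd, projector, camera are stand-ins for the hardware components.
        self.aod, self.lcd, self.projector, self.camera = aod, lcd, projector, camera

    def scan_once(self, instructions: EmissionInstructions):
        beams = self.aod.diffract(instructions.rf_frequency_hz)               # small diffraction angle
        structured = self.lcd.diffract(beams, instructions.active_lcg_index)  # larger diffraction angle
        self.projector.project(structured)                                    # into the local area
        images = self.camera.capture()                                        # reflected portions
        return self.compute_depth(images)

    def compute_depth(self, images):
        # e.g., triangulation of the captured pattern against the projected one
        raise NotImplementedError
```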

An eyeglass-type platform representing a near-eye display (NED) can integrate the DCA. The NED further includes an electronic display and an optical assembly. The NED may be part of an artificial reality system. The electronic display of the NED is configured to emit image light. The optical assembly of the NED is configured to direct the image light to an eye-box of the NED corresponding to a location of a user’s eye. The image light may comprise the depth information of the one or more objects in the local area determined by the DCA.

A head-mounted display (HMD) can further integrate the DCA. The HMD further includes an electronic display and an optical assembly. The HMD may be part of an artificial reality system. The electronic display is configured to emit image light. The optical assembly is configured to direct the image light to an eye-box of the HMD corresponding to a location of a user’s eye. The image light may comprise the depth information of the one or more objects in the local area determined by the DCA.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a diagram of a near-eye-display (NED), in accordance with one or more embodiments.

FIG. 1B is a cross-section of an eyewear of the NED in FIG. 1A, in accordance with one or more embodiments.

FIG. 2A is a diagram of a head-mounted display (HMD), in accordance with one or more embodiments.

FIG. 2B is a cross section of a front rigid body of the HMD in FIG. 2A, in accordance with one or more embodiments.

FIG. 3A is an example depth camera assembly (DCA), in accordance with one or more embodiments.

FIG. 3B illustrates a scanning field covered by the DCA in FIG. 3A, in accordance with one or more embodiments.

FIG. 3C illustrates different diffraction settings in the DCA in FIG. 3A to cover the scanning field in FIG. 3B, in accordance with one or more embodiments.

FIG. 4 is a flow chart illustrating a process of determining depth information of objects in a local area based on ultra-wide field-of-view scanning, in accordance with one or more embodiments.

FIG. 5 is a block diagram of an artificial reality system in which a console operates, in accordance with one or more embodiments.

The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.

DETAILED DESCRIPTION

Embodiments of the present disclosure may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a near-eye display (NED), a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

A depth camera assembly (DCA) for determining depth information of objects in a local area surrounding some or all of the DCA is presented herein. The DCA includes a light source, one or more cameras and a controller. The light source includes a laser source and an acousto-optic deflector (AOD) that generates structured light using light emitted from the laser source. The AOD can be composed of one or more acousto-optic devices or plates. Each acousto-optic plate can be configured to diffract incident light by a specific diffraction angle controlled by, e.g., an electric field applied to the acousto-optic plate. The light source also includes a plurality of active liquid crystal gratings (LCGs). Adjustments to settings of the plurality of LCGs determine where the structured light is projected into the local area. The one or more cameras capture one or more images of portions of the structured light reflected from the objects in the local area. Note that the portions of the structured light can also be scattered from one or more objects in the local area, wherein scattering represents a form of diffuse reflection. The controller determines depth information based on the captured one or more images.

In some embodiments, the DCA is integrated into a NED that captures data describing depth information in a local area surrounding some or all of the NED. The NED further includes an electronic display and an optical assembly. The NED may be part of an artificial reality system, e.g., an AR system and/or VR system. The electronic display of the NED is configured to emit image light. The optical assembly of the NED is configured to direct the image light to an eye-box of the NED corresponding to a location of a user’s eye, the image light comprising the depth information of the objects in the local area determined by the DCA.

In some embodiments, the DCA is integrated into a HMD that captures data describing depth information in a local area surrounding some or all of the HMD. The HMD may be part of an artificial reality system. The HMD further includes an electronic display and an optical assembly. The electronic display is configured to emit image light. The optical assembly is configured to direct the image light to an eye-box of the HMD corresponding to a location of a user’s eye, the image light comprising the depth information of the objects in the local area determined by the DCA.

FIG. 1A is a diagram of a NED 100, in accordance with one or more embodiments. The NED 100 presents media to a user. Examples of media presented by the NED 100 include one or more images, video, audio, or some combination thereof. In some embodiments, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the NED 100, a console (not shown), or both, and presents audio data based on the audio information. The NED 100 may be part of an artificial reality system (not shown). The NED 100 is generally configured to operate as an artificial reality NED. In some embodiments, the NED 100 may augment views of a physical, real-world environment with computer-generated elements (e.g., images, video, sound, etc.).

The NED 100 shown in FIG. 1A includes a frame 105 and a display 110. The frame 105 includes one or more optical elements which together display media to users. The display 110 is configured for users to see the content presented by the NED 100. The display 110 generates image light to present media to an eye of the user. The NED 100 also includes a DCA (not shown in FIG. 1A) configured to determine depth information of a local area surrounding some or all of the NED 100. The NED 100 also includes an illumination aperture 113, and an illumination source of the DCA emits light (e.g., structured light) through the illumination aperture 113. An imaging device of the DCA captures light from the illumination source that is reflected from the local area, e.g., through the imaging aperture 115. Light emitted from the illumination source of the DCA through the illumination aperture 113 comprises structured light, as discussed in more detail in conjunction with FIG. 3A. Light reflected from the local area through the imaging aperture 115 and captured by the imaging device of the DCA comprises portions of the reflected structured light. The NED 100 may also include an orientation detection device 120 that generates one or more measurement signals in response to motion of the NED 100 and generates information about orientation of the NED 100. Examples of the orientation detection device 120 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, or some combination thereof.

FIG. 1B is a cross section 125 of an eyewear of the NED 100 illustrated in FIG. 1A, in accordance with one or more embodiments. The cross section 125 includes at least one display assembly 130 integrated into the display 110, an eye-box 140, and a DCA 150. The eye-box 140 is a location where an eye 145 is positioned when a user wears the NED 100. In some embodiments, the frame 105 may represent a frame of eye-wear glasses. For purposes of illustration, FIG. 1B shows the cross section 125 associated with a single eye 145 and a single display assembly 130, but in alternative embodiments not shown, another display assembly which is separate from the display assembly 130 shown in FIG. 1B, provides image light to another eye 145 of the user.

The display assembly 130 is configured to direct the image light to the eye 145 through the eye-box 140. In some embodiments, when the NED 100 is configured as an AR NED, the display assembly 130 also directs light from a local area surrounding the NED 100 to the eye 145 through the eye-box 140. The display assembly 130 may be configured to emit image light at a particular focal distance in accordance with varifocal instructions, e.g., provided from a varifocal module (not shown in FIG. 1B).

The display assembly 130 may be composed of one or more materials (e.g., plastic, glass, etc.) with one or more refractive indices that effectively minimize the weight and present to the user a field of view of the NED 100. In alternate configurations, the NED 100 includes an optical assembly with one or more optical elements between the display assembly 130 and the eye 145. The optical elements may act to, e.g., correct aberrations in image light emitted from the display assembly 130, magnify image light, perform some other optical adjustment of image light emitted from the display assembly 130, or some combination thereof. Examples of optical elements include an aperture, a Fresnel lens, a convex lens, a concave lens, a liquid crystal lens, a diffractive element, a waveguide, a filter, a polarizer, a diffuser, a fiber taper, one or more reflective surfaces, a polarizing reflective surface, a birefringent element, or any other suitable optical element that affects image light emitted from the display assembly 130.

The frame 105 further includes a DCA 150 configured to determine depth information of one or more objects in a local area surrounding some or all of the NED 100. The DCA 150 includes an illumination source 155, an imaging device 160, and a controller 165 that may be coupled to at least one of the illumination source 155 and the imaging device 160. In some embodiments (not shown in FIG. 1B), the illumination source 155 and the imaging device 160 each may include its own internal controller. In some embodiments (not shown in FIG. 1B), the illumination source 155 and the imaging device 160 can be widely separated, e.g., the illumination source 155 and the imaging device 160 can be located in different assemblies.

The illumination source 155 may be configured to illuminate the local area with structured light through the illumination aperture 113 in accordance with emission instructions generated by the controller 165. The illumination source 155 may include a plurality of emitters that each emits light having certain characteristics (e.g., wavelength, polarization, coherence, temporal behavior, etc.). The characteristics may be the same or different between emitters, and the emitters can be operated simultaneously or individually. In one embodiment, the plurality of emitters could be, e.g., laser diodes (e.g., edge emitters), inorganic or organic LEDs, vertical-cavity surface-emitting lasers (VCSELs), or some other source.

The imaging device 160 includes one or more cameras configured to capture, through the imaging aperture 115, one or more images of at least a portion of the structured light reflected from one or more objects in the local area. In one embodiment, the imaging device 160 is an infrared camera configured to capture images in the infrared spectrum. Additionally or alternatively, the imaging device 160 may be also configured to capture images of visible spectrum light. The imaging device 160 may include a charge-coupled device (CCD) detector, a complementary metal-oxide-semiconductor (CMOS) detector or some other types of detectors (not shown in FIG. 1B). The imaging device 160 may be configured to operate with a predetermined frame rate for fast detection of objects in the local area.

The controller 165 may generate the emission instructions and provide the emission instructions to the illumination source 155 for controlling operation of the illumination source 155. The controller 165 may control, based on the emission instructions, operation of the illumination source 155 to dynamically adjust a pattern of the structured light illuminating the local area, an intensity of the light pattern, a density of the light pattern, location of the light being projected at the local area, etc. The controller 165 may be also configured to determine depth information for the one or more objects in the local area based in part on the one or more images captured by the imaging device 160. In some embodiments, the controller 165 provides the determined depth information to a console (not shown in FIG. 1B) and/or an appropriate module of the NED 100 (e.g., a varifocal module, not shown in FIG. 1B). The console and/or the NED 100 may utilize the depth information to, e.g., generate content for presentation on the display 110. More details about the structure and operation of the DCA 150 are disclosed in conjunction with FIGS. 3A-3C and FIG. 4.
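One way to picture what "dynamically adjust" could mean in practice is sketched below: a hypothetical set of emission parameters is adapted based on how much of the previous depth map came back empty. The parameter names and the adaptation rule are assumptions introduced only for illustration; the disclosure states that these properties are adjustable but does not specify a policy.

```python
import numpy as np

# Hypothetical illustration only: the controller can adjust pattern, intensity, density,
# and projection location, but the disclosure does not give a policy. The field names and
# the simple adaptation rule below are invented.

def next_emission_instructions(prev_depth_map: np.ndarray, prev: dict) -> dict:
    instructions = dict(prev)
    hole_fraction = float((prev_depth_map == 0).mean())  # fraction of pixels with no depth
    if hole_fraction > 0.10:
        # densify and brighten the pattern where depth coverage was poor
        instructions["density"] = min(prev["density"] * 1.5, 1.0)
        instructions["intensity"] = min(prev["intensity"] * 1.2, 1.0)
    return instructions

# Example: adapt after a frame in which half of the depth map came back empty.
depth = np.zeros((4, 4)); depth[:2, :] = 1.0
print(next_emission_instructions(depth, {"density": 0.3, "intensity": 0.5}))
```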

FIG. 2A is a diagram of a HMD 200, in accordance with one or more embodiments. The HMD 200 may be part of an artificial reality system. In embodiments that describe an AR system and/or an MR system, portions of a front side 202 of the HMD 200 are at least partially transparent in the visible band (approximately 380 nm to 750 nm), and portions of the HMD 200 that are between the front side 202 of the HMD 200 and an eye of the user are at least partially transparent (e.g., a partially transparent electronic display). The HMD 200 includes a front rigid body 205, a band 210, and a reference point 215. The HMD 200 also includes a DCA configured to determine depth information of a local area surrounding some or all of the HMD 200. The HMD 200 also includes an imaging aperture 220 and an illumination aperture 225, and an illumination source of the DCA emits light (e.g., structured light) through the illumination aperture 225. An imaging device of the DCA captures light from the illumination source that is reflected from the local area through the imaging aperture 220. Light emitted from the illumination source of the DCA through the illumination aperture 225 comprises structured light, as discussed in more detail in conjunction with FIG. 3A and FIG. 4. Light reflected from the local area through the imaging aperture 220 and captured by the imaging device of the DCA comprises portions of the reflected structured light.

The front rigid body 205 includes one or more electronic display elements (not shown in FIG. 2A), one or more integrated eye tracking systems (not shown in FIG. 2A), an Inertial Measurement Unit (IMU) 230, one or more position sensors 235, and the reference point 215. In the embodiment shown by FIG. 2A, the position sensors 235 are located within the IMU 230, and neither the IMU 230 nor the position sensors 235 are visible to a user of the HMD 200. The IMU 230 is an electronic device that generates fast calibration data based on measurement signals received from one or more of the position sensors 235. A position sensor 235 generates one or more measurement signals in response to motion of the HMD 200. Examples of position sensors 235 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 230, or some combination thereof. The position sensors 235 may be located external to the IMU 230, internal to the IMU 230, or some combination thereof.

FIG. 2B is a cross section 240 of the front rigid body 205 of the HMD 200 shown in FIG. 2A. As shown in FIG. 2B, the front rigid body 205 includes an electronic display 245 and an optical assembly 250 that together provide image light to an eye-box 255. The eye-box 255 is the location of the front rigid body 205 where a user’s eye 260 is positioned. For purposes of illustration, FIG. 2B shows a cross section 240 associated with a single eye 260, but another optical assembly, separate from the optical assembly 250, provides altered image light to another eye of the user. The front rigid body 205 also has an optical axis corresponding to a path along which image light propagates through the front rigid body 205.

The electronic display 245 generates image light. In some embodiments, the electronic display 245 includes an optical element that adjusts the focus of the generated image light. The electronic display 245 displays images to the user in accordance with data received from a console (not shown in FIG. 2B). In various embodiments, the electronic display 245 may comprise a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of the electronic display 245 include: a liquid crystal display, an organic light emitting diode (OLED) display, an inorganic light emitting diode (ILED) display, an active-matrix organic light-emitting diode (AMOLED) display, a transparent organic light emitting diode (TOLED) display, some other display, a projector, or some combination thereof. The electronic display 245 may also include an aperture, a Fresnel lens, a convex lens, a concave lens, a diffractive element, a waveguide, a filter, a polarizer, a diffuser, a fiber taper, a reflective surface, a polarizing reflective surface, or any other suitable optical element that affects the image light emitted from the electronic display. In some embodiments, one or more of the display block optical elements may have one or more coatings, such as anti-reflective coatings.

The optical assembly 250 magnifies light received from the electronic display 245, corrects optical aberrations associated with the image light, and presents the corrected image light to a user of the HMD 200. At least one optical element of the optical assembly 250 may be an aperture, a Fresnel lens, a refractive lens, a reflective surface, a diffractive element, a waveguide, a filter, or any other suitable optical element that affects the image light emitted from the electronic display 245. Moreover, the optical assembly 250 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optical assembly 250 may have one or more coatings, such as anti-reflective coatings, dichroic coatings, etc. Magnification of the image light by the optical assembly 250 allows elements of the electronic display 245 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase a field-of-view of the displayed media. For example, the field-of-view of the displayed media is such that the displayed media is presented using almost all (e.g., 110 degrees diagonal), and in some cases all, of the user’s field-of-view. In some embodiments, the optical assembly 250 is designed so its effective focal length is larger than the spacing to the electronic display 245, which magnifies the image light projected by the electronic display 245. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
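The magnification claim can be sanity-checked with the standard thin-lens relation, which is textbook optics rather than a figure from the disclosure: when the display sits inside the effective focal length, the virtual image is enlarged by f/(f − d). A minimal sketch with illustrative numbers:

```python
# Thin-lens sanity check; the relation is standard optics, not a figure from the patent.
# With the display at spacing d inside the focal length f, the virtual image magnification
# is m = f / (f - d), which grows as d approaches f.

def lateral_magnification(focal_length_m: float, display_spacing_m: float) -> float:
    assert display_spacing_m < focal_length_m, "display must sit inside the focal length"
    return focal_length_m / (focal_length_m - display_spacing_m)

print(lateral_magnification(0.040, 0.035))  # ~8x for an illustrative f = 40 mm, d = 35 mm
```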

As shown in FIG. 2B, the front rigid body 205 further includes a DCA 265 for determining depth information of one or more objects in a local area 270 surrounding some or all of the HMD 200. The DCA 265 includes a light generator 275, an imaging device 280, and a controller 285 that may be coupled to both the light generator 275 and the imaging device 280. The light generator 275 emits light through the illumination aperture 225. In accordance with embodiments of the present disclosure, the light generator 275 is configured to illuminate the local area 270 with structured light 290 in accordance with emission instructions generated by the controller 285. The controller 285 is configured to control operation of certain components of the light generator 275, based on the emission instructions. The controller 285 provides the emission instructions to a plurality of diffractive optical elements of the light generator 275 to control a field-of-view of the local area 270 illuminated by the structured light 290. More details about controlling the plurality of diffractive optical elements of the light generator 275 by the controller 285 are disclosed in conjunction with FIGS. 3A-3C and FIG. 4.

The light generator 275 may include a plurality of emitters that each emits light having certain characteristics (e.g., wavelength, polarization, coherence, temporal behavior, etc.). The characteristics may be the same or different between emitters, and the emitters can be operated simultaneously or individually. In one embodiment, the plurality of emitters could be, e.g., laser diodes (e.g., edge emitters), inorganic or organic LEDs, vertical-cavity surface-emitting lasers (VCSELs), or some other source. In some embodiments, a single emitter or a plurality of emitters in the light generator 275 can emit light having a structured light pattern. More details about the DCA 265 that includes the light generator 275 are disclosed in conjunction with FIG. 3A.

The imaging device 280 includes one or more cameras configured to capture, through the imaging aperture 220, portions of the structured light 290 reflected from the local area 270. The imaging device 280 captures one or more images of one or more objects in the local area 270 illuminated with the structured light 290. The controller 285 is also configured to determine depth information for the one or more objects based on the captured portions of the reflected structured light. In some embodiments, the controller 285 provides the determined depth information to a console (not shown in FIG. 2B) and/or an appropriate module of the HMD 200 (e.g., a varifocal module, not shown in FIG. 2B). The console and/or the HMD 200 may utilize the depth information to, e.g., generate content for presentation on the electronic display 245.

In some embodiments, the front rigid body 205 further comprises an eye tracking system (not shown in FIG. 2B) that determines eye tracking information for the user’s eye 260. The determined eye tracking information may comprise information about an orientation of the user’s eye 260 in the eye-box 255, i.e., information about an angle of an eye-gaze. The eye-box 255 represents a three-dimensional volume at an output of a HMD in which the user’s eye is located to receive image light. In one embodiment, the user’s eye 260 is illuminated with structured light. Then, the eye tracking system can use locations of the reflected structured light in a captured image to determine eye position and eye-gaze. In another embodiment, the eye tracking system determines eye position and eye-gaze based on magnitudes of image light captured over a plurality of time instants.

In some embodiments, the front rigid body 205 further comprises a varifocal module (not shown in FIG. 2B). The varifocal module may adjust focus of one or more images displayed on the electronic display 245, based on the eye tracking information. In one embodiment, the varifocal module adjusts focus of the displayed images and mitigates vergence-accommodation conflict by adjusting a focal distance of the optical assembly 250 based on the determined eye tracking information. In another embodiment, the varifocal module adjusts focus of the displayed images by performing foveated rendering of the one or more images based on the determined eye tracking information. In yet another embodiment, the varifocal module utilizes the depth information from the controller 285 to generate content for presentation on the electronic display 245.

FIG. 3A is an example DCA 300 configured for depth sensing based on structured light with an ultra-wide field-of-view, in accordance with one or more embodiments. The DCA 300 includes a light generator 305, an imaging device 310, and a controller 315 coupled to both the light generator 305 and the imaging device 310. The DCA 300 may be configured to be a component of the NED 100 in FIG. 1A and/or a component of the HMD 200 in FIG. 2A. Thus, the DCA 300 may be an embodiment of the DCA 150 in FIG. 1B and/or an embodiment of the DCA 265 in FIG. 2B; the light generator 305 may be an embodiment of the illumination source 155 in FIG. 1B and/or an embodiment of the light generator 275 in FIG. 2B; and the imaging device 310 may be an embodiment of the imaging device 160 in FIG. 1B and/or an embodiment of the imaging device 280 in FIG. 2B.

The light generator 305 is configured to illuminate and scan a local area 320 with structured light in accordance with emission instructions from the controller 315. The light generator 305 includes an illumination source 325 (e.g., laser diode) configured to emit one or more optical beams 330. The illumination source 325 may directly generate the one or more optical beams 330 as polarized light. The one or more optical beams 330 can be circularly polarized (right-handed or, in other embodiments, left-handed). In alternate embodiments, the one or more optical beams 330 can be linearly polarized (vertical or horizontal), or elliptically polarized (right or left). Alternatively, the illumination source 325 may emit unpolarized light, and a polarizing element (not shown in FIG. 3A) separate from the illumination source 325 may generate the one or more optical beams 330 as polarized light, based in part on the emission instructions from the controller 315. The polarizing element may be integrated into the illumination source 325 or placed in front of the illumination source 325. In some embodiments, for depth sensing based on time-of-flight, the one or more optical beams 330 are temporally modulated for generating temporally modulated illumination of the local area 320.
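The disclosure only notes that the beams may be temporally modulated for time-of-flight; one common continuous-wave scheme recovers depth from the phase delay of that modulation. A brief sketch with illustrative values follows (the modulation frequency is an assumption, not a figure from the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Depth from the phase delay of continuously modulated illumination: d = c*phi/(4*pi*f)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

print(cw_tof_depth(math.pi / 2, 20e6))  # ~1.87 m at an assumed 20 MHz modulation frequency
```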

A beam conditioning assembly 335 collects light emitted from the illumination source 325 and directs the collected light toward a portion of an AOD 340. The beam conditioning assembly 335 may be composed of one or more optical elements, e.g., lenses having specific optical powers.

The AOD 340 diffracts light into one or more dimensions. The AOD 340 is composed of one or more acousto-optic devices or plates that generate diffracted scanning beams 345 in one or two dimensions by diffracting the one or more optical beams 330. In some embodiments, the diffracted scanning beams 345 represent structured light of a defined pattern, e.g., a pattern of light having parallel stripes, a dot pattern, etc. In some embodiments, the AOD 340 is configured to function as at least one dynamic diffraction grating that diffracts the one or more optical beams 330 to form the diffracted scanning beams 345 based in part on emission instructions from the controller 315. Each acousto-optic device in the AOD 340 may include a transducer or an array of transducers and one or more diffraction areas (not shown in FIG. 3A). Responsive to at least one radio frequency in the emission instructions, the transducer or the array of transducers of the acousto-optic device in the AOD 340 may be configured to generate at least one sound wave in the one or more diffraction areas of the acousto-optic device to form the at least one dynamic diffraction grating.
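To make the "dynamic diffraction grating" concrete: the acoustic wave launched at the RF drive frequency sets the grating period, and with it the first-order deflection angle. The small-angle relation and the material values below are standard approximations and assumptions, not figures from the disclosure.

```python
import math

# Standard small-angle approximation for an acousto-optic deflector (not from the patent):
# the RF drive at frequency f launches an acoustic wave of period Lambda = v / f that acts
# as a grating, deflecting the first-order beam by roughly theta ~ lambda * f / v.
# The wavelength and acoustic velocity below (TeO2 slow shear mode) are illustrative.

WAVELENGTH_M = 850e-9          # assumed near-infrared illumination
ACOUSTIC_VELOCITY_M_S = 620.0  # assumed acoustic velocity

def aod_deflection_deg(rf_freq_hz: float) -> float:
    return math.degrees(WAVELENGTH_M * rf_freq_hz / ACOUSTIC_VELOCITY_M_S)

for f_mhz in (40, 60, 80):
    print(f"{f_mhz} MHz -> {aod_deflection_deg(f_mhz * 1e6):.2f} deg")
```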

The AOD 340 can be configured to actively scan a plurality of diffraction angles at which the one or more optical beams 330 are diffracted and interfered to form the diffracted scanning beams 345. The AOD 340 is configured to scan the plurality of diffraction angles between, e.g., -5 degrees and +5 degrees. In this way, the diffracted scanning beams 345 formed by the AOD 340 cover a scanning zone with a field-of-view of, e.g., 10 degrees, along one or two dimensions. In some embodiments, the AOD 340 is configured to scan the plurality of diffraction angles with a scanning resolution of 0.1 degree, thus supporting fine-grained scanning. Due to a relatively narrow scanning zone, the AOD 340 can support fast scanning with scanning speeds, e.g., on the order of MHz.
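Combining the numbers above (a roughly ±5 degree AOD zone scanned in 0.1 degree steps) with coarser LCG offsets suggests how the narrow zone could be tiled into a wide field-of-view, in the spirit of the scanning field shown in FIG. 3B and the diffraction settings of FIG. 3C. The LCG offsets in this sketch are assumptions chosen only to show the tiling idea, not values from the disclosure.

```python
# Sketch of tiling the narrow AOD scan zone into a wide field-of-view with coarser
# liquid crystal grating (LCG) offsets. The AOD numbers come from the text above;
# the LCG offsets are illustrative assumptions.

AOD_HALF_RANGE_DEG = 5.0
AOD_STEP_DEG = 0.1
LCG_OFFSETS_DEG = [-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0]  # assumed coarse settings

def scan_angles():
    """Yield every projection angle reachable by one LCG setting plus the AOD fine scan."""
    steps = int(round(2 * AOD_HALF_RANGE_DEG / AOD_STEP_DEG)) + 1
    for coarse in LCG_OFFSETS_DEG:
        for i in range(steps):
            yield round(coarse - AOD_HALF_RANGE_DEG + i * AOD_STEP_DEG, 3)

angles = list(scan_angles())
print(len(angles), "addressable angles from", min(angles), "to", max(angles), "degrees")
# 707 addressable angles from -35.0 to 35.0 degrees, i.e. a ~70-degree field-of-view
```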
