Patent: Virtual models for communications between autonomous vehicles and external observers
Publication Number: 20250306673
Publication Date: 2025-10-02
Assignee: Qualcomm Incorporated
Abstract
Systems and methods for interactions between an autonomous vehicle and one or more external observers include virtual models of drivers of the autonomous vehicle. The virtual models may be generated by the autonomous vehicle and displayed to one or more external observers, in some cases using devices worn by the external observers. The virtual models may facilitate interactions between the external observers and the autonomous vehicle using gestures or other visual cues. The virtual models may be encrypted with characteristics of an external observer, such as the external observer's face image, iris, or other representative features. Multiple virtual models for multiple external observers may be simultaneously used for multiple communications while preventing interference due to possible overlap of the multiple virtual models.
Claims
What is claimed is:
1. A user device, comprising: an image sensor; a display; a memory; and a processor configured to: obtain a first one or more images from the image sensor, the first one or more images including at least a portion of a first external observer and at least a portion of a second external observer relative to an exterior of the user device; determine, based on one or more features in the first one or more images, that the first external observer and the second external observer are in a presence of an external environment of the user device; and output, to the display based on the determination that the first external observer and the second external observer are in the presence of the external environment of the user device, a first virtual model observable by the first external observer and a second virtual model observable by the second external observer, the first virtual model and the second virtual model depicting at least a portion of a user of the user device.
2. The user device of claim 1, wherein the first external observer is in a first location in the external environment and the second external observer is in a second location in the external environment, the first location being different from the second location.
3. The user device of claim 1, wherein the first virtual model is observable by the first external observer and simultaneously the second virtual model is observable by the second external observer.
4. The user device of claim 1, wherein the display comprises a glass surface, and wherein, to output the first virtual model observable by the first external observer and the second virtual model observable by the second external observer, the processor is configured to cause a first set of frames to pass through a first portion of the glass surface in a first field of view of the first external observer and cause a second set of frames to pass through a second portion of the glass surface in a second field of view of the second external observer.
5. The user device of claim 1, wherein the processor is configured to: after the first virtual model and the second virtual model are output to the display based on the determination that the first external observer and the second external observer are in the presence of the external environment of the user device, obtain a second one or more images from the image sensor, the second one or more images not including at least one of the first external observer or the second external observer; determine, based on the second one or more images, that at least one of the first external observer or the second external observer are not in the presence of the external environment of the user device; and disable the output of at least one of the first virtual model based on a determination that the first external observer is not in the presence of the external environment of the user device or the second virtual model based on a determination that the second external observer is not in the presence of the external environment of the user device.
6. The user device of claim 5, wherein, to determine that at least one of the first external observer or the second external observer are not in the presence of the external environment of the user device, the processor is configured to determine that no external observer is in the presence of the external environment of the user device, and wherein the processor is configured to disable the output of the first virtual model and the second virtual model based on the determination that no external observer is in the presence of the external environment of the user device.
7. The user device of claim 1, wherein the processor is configured to disable the output of at least one of the first virtual model or the second virtual model based on display of a lower quality projection on the display.
8. The user device of claim 1, wherein the first virtual model is customized based on one or more characteristics of the first external observer and wherein the second virtual model is customized based on one or more characteristics of the second external observer.
9. The user device of claim 1, wherein the processor is configured to update at least one of the first virtual model based on a change in location of the first external observer relative to the user device or the second virtual model based on a change in location of the second external observer relative to the user device.
10. The user device of claim 1, wherein the processor is further configured to: identify an input associated with the first external observer; and determine that the first external observer is in the presence of the external environment of the user device based on the input.
11. The user device of claim 10, wherein the input includes one or more gestures from the first external observer.
12. The user device of claim 10, wherein the input includes a voice input from the first external observer.
13. The user device of claim 1, wherein, to determine that the first external observer and the second external observer are in the presence of the external environment of the user device, the processor is configured to: track, based on the one or more features in the first one or more images, an eye gaze of the first external observer and an eye gaze of the second external observer.
14. The user device of claim 13, wherein the processor is configured to: determine a field of view of the first external observer and a field of view of the second external observer based on tracking the eye gaze; and determine that the field of view of the first external observer includes at least a first portion of the user device and that the field of view of the second external observer includes at least a second portion of the user device.
15. The user device of claim 1, wherein the image sensor includes at least one of a camera or a depth sensor.
16. The user device of claim 1, wherein the first virtual model is a visual depiction of at least the portion of the user of the user device.
17. The user device of claim 1, wherein, to output the first virtual model, the processor is further configured to output audio associated with the first virtual model.
18. The user device of claim 1, wherein the processor is configured to: communicate with the first external observer using the first virtual model in response to a communication from the first external observer; and communicate with the second external observer using the second virtual model in response to a communication from the second external observer.
19. The user device of claim 18, wherein the communication from at least one of the first external observer or the second external observer includes one or more gestures.
20. The user device of claim 1, wherein the user device includes a head mounted display, a virtual reality device, an augmented reality device, or a vehicle.
21. A method, comprising: obtaining, by a user device, a first one or more images from an image sensor, the first one or more images including at least a portion of a first external observer and at least a portion of a second external observer relative to an exterior of the user device; determining, based on one or more features in the first one or more images, that the first external observer and the second external observer are in a presence of an external environment of the user device; and displaying, based on the determination that the first external observer and the second external observer are in the presence of the external environment of the user device, a first virtual model observable by the first external observer and a second virtual model observable by the second external observer, the first virtual model and the second virtual model depicting at least a portion of a user of the user device.
22. The method of claim 21, wherein the first virtual model is observable by the first external observer and simultaneously the second virtual model is observable by the second external observer.
23. The method of claim 21, wherein the display comprises a glass surface, and wherein displaying the first virtual model observable by the first external observer and the second virtual model observable by the second external observer comprises causing a first set of frames to pass through a first portion of the glass surface in a first field of view of the first external observer and causing a second set of frames to pass through a second portion of the glass surface in a second field of view of the second external observer.
24. The method of claim 21, further comprising: after the first virtual model and the second virtual model are displayed based on the determination that the first external observer and the second external observer are in the presence of the external environment of the user device, obtaining a second one or more images from the image sensor, the second one or more images not including at least one of the first external observer or the second external observer; determining, based on the second one or more images, that at least one of the first external observer or the second external observer are not in the presence of the external environment of the user device; and disabling the display of at least one of the first virtual model based on a determination that the first external observer is not in the presence of the external environment of the user device or the second virtual model based on a determination that the second external observer is not in the presence of the external environment of the user device.
25. The method of claim 24, wherein determining that at least one of the first external observer or the second external observer are not in the presence of the external environment of the user device comprises determining that no external observer is in the presence of the external environment of the user device, the method further comprising disabling the display of the first virtual model and the second virtual model based on the determination that no external observer is in the presence of the external environment of the user device.
26. The method of claim 21, wherein the first virtual model is customized based on one or more characteristics of the first external observer and wherein the second virtual model is customized based on one or more characteristics of the second external observer.
27. The method of claim 21, further comprising updating at least one of the first virtual model based on a change in location of the first external observer relative to the user device or the second virtual model based on a change in location of the second external observer relative to the user device.
28. The method of claim 21, further comprising: identifying an input associated with the first external observer, wherein the input includes at least one of one or more gestures from the first external observer or a voice input from the first external observer; and determining that the first external observer is in the presence of the external environment of the user device based on the input.
29. The method of claim 21, further comprising: communicating with the first external observer using the first virtual model in response to a communication from the first external observer; and communicating with the second external observer using the second virtual model in response to a communication from the second external observer.
30. The method of claim 21, wherein the user device includes a head mounted display, a virtual reality device, an augmented reality device, or a vehicle.
Description
CROSS-REFERENCES TO RELATED APPLICATIONS
This application is a continuation of U.S. application Ser. No. 18/450,640, filed Aug. 16, 2023, which is a continuation of U.S. application Ser. No. 18/045,403, filed Oct. 10, 2022, which is a divisional of U.S. application Ser. No. 16/864,016, filed Apr. 30, 2020, which claims the benefit of U.S. Provisional Application No. 62/846,445, filed on May 10, 2019, all of which are hereby incorporated by reference, in their entirety and for all purposes.
FIELD
This application relates to communications between autonomous vehicles and external observers. For example, aspects of the application are directed to virtual models of drivers used for communications between an autonomous vehicle and one or more pedestrians.
BACKGROUND
Avoiding accidents and fostering a safe driving environment are important goals when operating autonomous vehicles in the presence of pedestrians and/or other external observers. In situations involving conventional vehicles with human drivers, real-time interactions between the human drivers and the external observers may help reduce unsafe traffic conditions. However, the lack of a human driver in autonomous vehicles may pose challenges to such interactions.
SUMMARY
In some examples, techniques and systems are described for generating virtual models that depict virtual drivers for autonomous vehicles. A virtual model generated using the techniques described herein allows interactions between an autonomous vehicle and one or more external observers including pedestrians and/or other passengers and/or drivers of other vehicles other than the autonomous vehicle. A virtual model can include an augmented reality and/or virtual reality three-dimensional (3D) model of a virtual driver (e.g., a hologram, an anthropomorphic, humanoid, or human-like rendition of a driver) of the autonomous vehicle.
In some examples, a virtual model can be generated by an autonomous vehicle. In some examples, a virtual model can be generated by a server or other remote device in communication with an autonomous vehicle, and the autonomous vehicle can receive the virtual model from the server or other remote device. In some examples, one or more virtual models may be displayed within or on a part (e.g., a windshield, a display, and/or other part of the vehicle) of the autonomous vehicle so that the one or more virtual models can be seen by one or more external observers. In some examples, the autonomous vehicle can cause a virtual model to be displayed by one or more devices (e.g., a head mounted display (HMD), a heads-up display (HUD), an augmented reality (AR) device such as AR glasses, and/or other suitable device) worn by, attached to, or collocated with one or more external observers.
The virtual models can facilitate interactions between the one or more external observers and the autonomous vehicle. For instance, the one or more external observers can interact with the autonomous vehicle using one or more user inputs, such as using gestures or other visual cues, audio inputs, and/or other user inputs. In some examples, other types of communication techniques (e.g., utilizing audio and/or visual messages) can be used along with the one or more inputs to communicate with the autonomous vehicle. In one illustrative example, a gesture input and another type of communication technique (e.g., one or more audio and/or visual messages) can be used to communicate with the autonomous vehicle.
In some aspects, a virtual model can be encrypted with a unique encryption for a particular external observer. In some examples, the encryption can be based on a face image, iris, and/or other representative feature(s) of the external observer. In such examples, the external observer's face image, iris, and/or other representative feature(s) can be used to decrypt the virtual model that pertains to the external observer, while other virtual models, which may not pertain to the external observer (but may pertain to other external observers, for example), may not be decrypted by the external observer. Thus, by using the external observer-specific decryption, the external observer is enabled to view and interact with the virtual model created for that external observer, while the virtual models for other external observers are hidden from the external observer.
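As an illustration of this aspect (not part of the claimed subject matter), the following sketch derives a symmetric key from a hypothetical quantized biometric feature vector and uses it to encrypt frames of a virtual model. A deployed system would use a noise-tolerant fuzzy extractor and a standard authenticated cipher; the SHA-256 counter-mode keystream below is a simplified stand-in.

```python
import hashlib
import itertools

def derive_key(feature_vector: bytes) -> bytes:
    """Derive a symmetric key from an observer's biometric features
    (e.g., a quantized face or iris embedding). Hypothetical scheme:
    a real system would use a fuzzy extractor to tolerate sensor noise."""
    return hashlib.sha256(feature_vector).digest()

def keystream(key: bytes):
    """SHA-256-in-counter-mode keystream (illustrative only)."""
    for counter in itertools.count():
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block

def encrypt_frame(frame: bytes, key: bytes) -> bytes:
    """XOR the frame with the keystream; the same call decrypts."""
    return bytes(b ^ k for b, k in zip(frame, keystream(key)))

# An observer whose features reproduce the same key can decrypt;
# any other observer derives a different key and recovers only noise.
key = derive_key(b"iris-embedding-observer-1")
frame = b"virtual model frame data"
ciphertext = encrypt_frame(frame, key)
assert encrypt_frame(ciphertext, key) == frame
assert encrypt_frame(ciphertext, derive_key(b"other-observer")) != frame
```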
According to at least one example, a method of communication between one or more vehicles and one or more external observers is provided. The method includes detecting a first external observer for communicating with a vehicle. The method further includes obtaining, for the vehicle, a first virtual model for communicating with the first external observer. The method includes encrypting, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model. The method further includes communicating with the first external observer using the encrypted first virtual model.
In another example, an apparatus for communication between one or more vehicles and one or more external observers is provided that includes a memory configured to store data, and a processor coupled to the memory. The processor can be implemented in circuitry. The processor is configured to and can detect a first external observer for communicating with a vehicle. The processor is further configured to and can obtain, for the vehicle, a first virtual model for communicating with the first external observer. The processor is configured to and can encrypt, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model. The processor is configured to and can communicate with the first external observer using the encrypted first virtual model.
In another example, a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: detect a first external observer for communicating with a vehicle; obtain, for the vehicle, a first virtual model for communicating with the first external observer; encrypt, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model; and communicate with the first external observer using the encrypted first virtual model.
In another example, an apparatus for communication between one or more vehicles and one or more external observers is provided. The apparatus includes means for detecting a first external observer for communicating with a vehicle. The apparatus further includes means for obtaining, for the vehicle, a first virtual model for communicating with the first external observer. The apparatus includes means for encrypting, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model, and means for communicating with the first external observer using the encrypted first virtual model.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise detecting that at least the first external observer of the one or more external observers is attempting to communicate with the vehicle.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: detecting that at least the first external observer is attempting to communicate with the vehicle using one or more gestures.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: extracting one or more image features from one or more images comprising at least a portion of the first external observer; and detecting, based on the one or more image features, that the first external observer is attempting to communicate with the vehicle.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: identifying an input associated with the first external observer; and detecting, based on the input, that the first external observer is attempting to communicate with the vehicle. In some examples, the input includes one or more gestures.
In some aspects, detecting that at least the first external observer is attempting to communicate with the vehicle comprises: identifying one or more traits of the first external observer; detecting that the first external observer is performing the one or more gestures; and interpreting the one or more gestures based on the one or more traits of the first external observer.
In some aspects, the one or more traits comprise at least one of a language spoken by the first external observer, a race of the first external observer, or an ethnicity of the first external observer.
In some aspects, detecting that the first external observer is performing the one or more gestures and interpreting the one or more gestures based on the one or more traits comprises accessing a database of gestures.
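For illustration only, the gesture-database lookup described in these aspects can be sketched as follows. The database contents, trait names, and intent labels are hypothetical assumptions; a deployed system would likely combine a learned gesture classifier with locale-specific mappings.

```python
from typing import Optional

# Hypothetical gesture-to-intent database, keyed by (locale, gesture).
# Entries and names are illustrative, not an actual API or dataset.
GESTURE_DB = {
    ("en-US", "raised_palm"): "request_stop",
    ("en-US", "wave_through"): "yield_to_vehicle",
    ("fr-FR", "raised_palm"): "request_stop",
}

def interpret_gesture(gesture: str, traits: dict) -> Optional[str]:
    """Map a detected gesture to an intent, using observer traits
    (e.g., spoken language) to select the matching cultural convention."""
    locale = traits.get("language", "en-US")
    intent = GESTURE_DB.get((locale, gesture))
    if intent is None:
        # Fall back to the default locale if no trait-specific entry exists.
        intent = GESTURE_DB.get(("en-US", gesture))
    return intent

print(interpret_gesture("raised_palm", {"language": "fr-FR"}))  # request_stop
```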
In some aspects, the first virtual model is generated for the first external observer based on one or more traits of the first external observer.
In some aspects, the one or more traits comprise at least one of a language spoken by the first external observer, a race of the first external observer, or an ethnicity of the first external observer.
In some aspects, detecting the first external observer comprises: tracking a gaze of the first external observer; determining a field of view of the first external observer based on tracking the gaze; and detecting that the field of view includes at least a portion of the vehicle.
In some aspects, the one or more characteristics of the first external observer comprise at least one of a face characteristic or an iris of the first external observer.
In some aspects, communicating with the first external observer using the encrypted first virtual model comprises: decrypting frames of the encrypted first virtual model based on the one or more characteristics of the first external observer; and projecting the decrypted frames of the first virtual model towards the first external observer.
In some aspects, projecting the decrypted frames of the first virtual model towards the first external observer comprises: detecting a field of view of the first external observer; and projecting a foveated rendering of the decrypted frames of the first virtual model to the first external observer based on the field of view.
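The foveated rendering described above renders at full resolution only near the observer's gaze center. A minimal sketch of the falloff schedule is shown below; the angular thresholds and resolution scales are illustrative assumptions, not values from this disclosure.

```python
import math

def foveation_scale(angle_deg: float) -> float:
    """Resolution scale as a function of angular distance from the
    observer's gaze center (thresholds are illustrative assumptions)."""
    if angle_deg <= 5.0:    # foveal region: full resolution
        return 1.0
    if angle_deg <= 20.0:   # parafoveal region: half resolution
        return 0.5
    return 0.25             # periphery: quarter resolution

def angular_distance(gaze_dir, pixel_dir) -> float:
    """Angle in degrees between the gaze direction and the direction to a
    rendered point (both given as 3D unit vectors)."""
    dot = sum(g * p for g, p in zip(gaze_dir, pixel_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
```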
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise enabling a first set of frames of the encrypted first virtual model to be visible to the first external observer; and preventing the first set of frames from being visible to one or more other external observers.
In some aspects, enabling the first set of frames to be visible comprises: displaying the first set of frames on a glass surface with a variable refractive index; and modifying the refractive index of the glass surface to selectively allow the first set of frames to pass through the glass surface in a field of view of the first external observer.
In some aspects, preventing the first set of frames from being visible comprises: displaying the first set of frames on a glass surface with a variable refractive index; and modifying the refractive index to selectively block the first set of frames from passing through the glass surface in a field of view of the one or more other external observers.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: detecting a second external observer for communicating with the vehicle; obtaining, for the vehicle, a second virtual model for communicating with the second external observer; encrypting, based on one or more characteristics of the second external observer, the second virtual model to generate an encrypted second virtual model; and communicating with the second external observer using the encrypted second virtual model simultaneously with communicating with the first external observer using the encrypted first virtual model.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: projecting a first set of frames of the encrypted first virtual model towards the first external observer; projecting a second set of frames of the encrypted second virtual model towards the second external observer; and preventing the first set of frames from overlapping the second set of frames.
In some aspects, preventing the first set of frames from overlapping the second set of frames comprises: displaying the first set of frames and the second set of frames on a glass surface with a variable refractive index; modifying a refractive index of a first portion of the glass surface to selectively allow the first set of frames to pass through the first portion of the glass surface in a field of view of the first external observer while blocking the second set of frames from passing through the first portion of the glass surface in the field of view of the first external observer; and modifying a refractive index of a second portion of the glass surface to selectively allow the second set of frames to pass through the second portion of the glass surface in a field of view of the second external observer while blocking the first set of frames from passing through the second portion of the glass surface in the field of view of the second external observer.
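The per-portion refractive-index control described above amounts to assigning each region of the glass surface to at most one observer: a region passes frames only toward its assigned observer and is switched opaque for everyone else. A minimal sketch of that assignment logic, with hypothetical data shapes, is shown below.

```python
def plan_glass_regions(regions, observers):
    """Assign each glass region to at most one observer. A region passes
    light only toward the observer whose field of view covers it and is
    blocked for all others; data shapes here are assumptions."""
    plan = {}
    for region in regions:
        plan[region] = None  # default: opaque (no frames pass)
        for obs in observers:
            if region in obs["fov_regions"]:
                plan[region] = obs["id"]  # pass this observer's frames only
                break  # first claimant wins; later observers stay blocked here
    return plan

regions = ["R1", "R2", "R3"]
observers = [
    {"id": "obs1", "fov_regions": {"R1", "R2"}},
    {"id": "obs2", "fov_regions": {"R2", "R3"}},
]
# R2 falls in both fields of view; assigning it to a single observer
# prevents the two projections from interfering.
print(plan_glass_regions(regions, observers))
# → {'R1': 'obs1', 'R2': 'obs1', 'R3': 'obs2'}
```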
In some aspects, detecting the first external observer to communicate with the vehicle comprises detecting a device of the first external observer. In some aspects, the device includes a head mounted display (HMD). In some aspects, the device includes augmented reality glasses.
In some aspects, communicating with the first external observer using the encrypted first virtual model comprises establishing a connection with the device and transmitting, using the connection, frames of the encrypted first virtual model to the device. In some aspects, the device can decrypt the encrypted first virtual model based on the one or more characteristics.
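The connection-based delivery described in this aspect can be sketched as a simple framed message exchange between the vehicle and the observer's device. The wire format, field names, and integrity check below are assumptions for illustration; the encrypted payload would be decrypted on the device using the observer's characteristics, as described above.

```python
import hashlib
import json

def build_frame_message(frame_id: int, encrypted_frame: bytes) -> bytes:
    """Vehicle side: package one encrypted virtual-model frame for
    transmission over an established connection (hypothetical format)."""
    header = json.dumps({
        "frame_id": frame_id,
        "length": len(encrypted_frame),
        "checksum": hashlib.sha256(encrypted_frame).hexdigest(),
    }).encode() + b"\n"
    return header + encrypted_frame

def parse_frame_message(message: bytes):
    """Device side: split the header from the payload and verify integrity
    before handing the payload to the biometric decryption step."""
    header_bytes, payload = message.split(b"\n", 1)
    header = json.loads(header_bytes)
    assert len(payload) == header["length"], "truncated frame"
    assert hashlib.sha256(payload).hexdigest() == header["checksum"], "corrupt frame"
    return header["frame_id"], payload
```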
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise generating the first virtual model. For example, in some examples, the apparatus is the vehicle or is a component (e.g., a computing device) of the vehicle. In such examples, the vehicle or component of the vehicle can generate the first virtual model. In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise receiving the first virtual model from a server.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise disabling or lowering a quality of the first virtual model upon termination of communication with at least the first external observer.
According to at least one other example, a method of communication between a vehicle and one or more external observers is provided. The method includes establishing, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle. The method further includes receiving, at the device, a virtual model of a virtual driver from the vehicle, and communicating with the vehicle using the virtual model.
In another example, an apparatus for communication between a vehicle and one or more external observers is provided that includes a memory configured to store data, and a processor coupled to the memory. The processor is configured to and can establish, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle. The processor is configured to and can receive, at the device, a virtual model of a virtual driver from the vehicle, and communicate with the vehicle using the virtual model.
In another example, a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: establish, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle; receive, at the device, a virtual model of a virtual driver from the vehicle; and communicate with the vehicle using the virtual model.
In another example, an apparatus for communication between a vehicle and one or more external observers is provided. The apparatus includes means for establishing, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle; means for receiving, at the device, a virtual model of a virtual driver from the vehicle; and means for communicating with the vehicle using the virtual model.
In some aspects, the device includes a head mounted display (HMD). In some aspects, the device includes augmented reality glasses.
In some aspects, the virtual model is encrypted based on one or more characteristics of the external observer.
Description
CROSS-REFERENCES TO RELATED APPLICATIONS
This application is a continuation of U.S. application Ser. No. 18/450,640, filed Aug. 16, 2023, which is a continuation of U.S. application Ser. No. 18/045,403, filed Oct. 10, 2022, which is a divisional of U.S. application Ser. No. 16/864,016, filed Apr. 30, 2020, which claims the benefit of U.S. Provisional Application No. 62/846,445, filed on May 10, 2019, all of which are hereby incorporated by reference, in their entirety and for all purposes.
FIELD
This application relates to communications between autonomous vehicles and external observers. For example, aspects of the application are directed to virtual models of drivers used for communications between an autonomous vehicle and one or more pedestrians.
BACKGROUND
Avoiding accidents and fostering a safe driving environment are important goals when operating autonomous vehicles while pedestrians and/or other external observers are present. In situations involving conventional vehicles with human drivers, real-time interactions between the human drivers and the external observers may help reduce unsafe traffic conditions. However, the lack of a human driver in autonomous vehicles may pose challenges to such interactions.
SUMMARY
In some examples, techniques and systems are described for generating virtual models that depict virtual drivers for autonomous vehicles. A virtual model generated using the techniques described herein allows interactions between an autonomous vehicle and one or more external observers, including pedestrians as well as passengers and/or drivers of vehicles other than the autonomous vehicle. A virtual model can include an augmented reality and/or virtual reality three-dimensional (3D) model of a virtual driver (e.g., a hologram, an anthropomorphic, humanoid, or human-like rendition of a driver) of the autonomous vehicle.
In some examples, a virtual model can be generated by an autonomous vehicle. In some examples, a virtual model can be generated by a server or other remote device in communication with an autonomous vehicle, and the autonomous vehicle can receive the virtual model from the server or other remote device. In some examples, one or more virtual models may be displayed within or on a part (e.g., a windshield, a display, and/or other part of the vehicle) of the autonomous vehicle so that the one or more virtual models can be seen by one or more external observers. In some examples, the autonomous vehicle can cause a virtual model to be displayed by one or more devices (e.g., a head mounted display (HMD), a heads-up display (HUD), an augmented reality (AR) device such as AR glasses, and/or other suitable device) worn by, attached to, or collocated with one or more external observers.
The virtual models can facilitate interactions between the one or more external observers and the autonomous vehicle. For instance, the one or more external observers can interact with the autonomous vehicle using one or more user inputs, such as using gestures or other visual cues, audio inputs, and/or other user inputs. In some examples, other types of communication techniques (e.g., utilizing audio and/or visual messages) can be used along with the one or more inputs to communicate with the autonomous vehicle. In one illustrative example, a gesture input and another type of communication technique (e.g., one or more audio and/or visual messages) can be used to communicate with the autonomous vehicle.
In some aspects, a virtual model can be encrypted with a unique encryption for a particular external observer. In some examples, the encryption can be based on a face image, iris, and/or other representative feature(s) of the external observer. In such examples, the external observer's face image, iris, and/or other representative feature(s) can be used to decrypt the virtual model that pertains to the external observer, while other virtual models, which may not pertain to the external observer (but may pertain to other external observers, for example), may not be decrypted by the external observer. Thus, by using the external observer-specific decryption, the external observer is enabled to view and interact with the virtual model created for that external observer, while the virtual models for other external observers are hidden from the external observer.
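The specification does not prescribe a particular cipher for the observer-specific encryption described above. The following is a minimal, non-limiting sketch of the idea, assuming an observer characteristic is available as a numeric feature vector (e.g., a face or iris embedding) that can be coarsely quantized so that repeated captures of the same observer derive the same key; a practical system would instead use a fuzzy extractor for key derivation and an authenticated cipher. All function names are hypothetical.

```python
import hashlib


def derive_key(feature_vector, salt=b"virtual-model-v1"):
    """Derive a symmetric key from a quantized biometric feature vector.

    Quantizing to one decimal place makes nearby captures of the same
    observer map to the same key (a crude stand-in for a fuzzy extractor).
    """
    quantized = b",".join(str(round(x, 1)).encode() for x in feature_vector)
    return hashlib.sha256(salt + quantized).digest()


def keystream(key, length):
    """Expand the key into a keystream of the requested length (CTR-style)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def encrypt_frame(frame_bytes, feature_vector):
    """XOR the frame with a keystream derived from the observer's features."""
    key = derive_key(feature_vector)
    return bytes(a ^ b for a, b in zip(frame_bytes, keystream(key, len(frame_bytes))))


# XOR encryption is symmetric: decrypting uses the same operation with a key
# derived from the same observer characteristics.
decrypt_frame = encrypt_frame
```

With this sketch, an observer whose measured features quantize to the same values as those used for encryption recovers the frame, while a different observer's features yield an unrelated keystream and the frame stays hidden.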
According to at least one example, a method of communication between one or more vehicles and one or more external observers is provided. The method includes detecting a first external observer for communicating with a vehicle. The method further includes obtaining, for the vehicle, a first virtual model for communicating with the first external observer. The method includes encrypting, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model. The method further includes communicating with the first external observer using the encrypted first virtual model.
In another example, an apparatus for communication between one or more vehicles and one or more external observers is provided that includes a memory configured to store data, and a processor coupled to the memory. The processor can be implemented in circuitry. The processor is configured to and can detect a first external observer for communicating with a vehicle. The processor is further configured to and can obtain, for the vehicle, a first virtual model for communicating with the first external observer. The processor is configured to and can encrypt, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model. The processor is configured to and can communicate with the first external observer using the encrypted first virtual model.
In another example, a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: detect a first external observer for communicating with a vehicle; obtain, for the vehicle, a first virtual model for communicating with the first external observer; encrypt, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model; and communicate with the first external observer using the encrypted first virtual model.
In another example, an apparatus for communication between one or more vehicles and one or more external observers is provided. The apparatus includes means for detecting a first external observer for communicating with a vehicle. The apparatus further includes means for obtaining, for the vehicle, a first virtual model for communicating with the first external observer. The apparatus includes means for encrypting, based on one or more characteristics of the first external observer, the first virtual model to generate an encrypted first virtual model, and means for communicating with the first external observer using the encrypted first virtual model.
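The four claim-mirroring paragraphs above recite the same sequence of steps: detect an external observer, obtain a virtual model, encrypt it with the observer's characteristics, and communicate using the encrypted model. As a non-limiting structural sketch only, the sequence could be orchestrated as follows; every name and data shape here is a hypothetical placeholder for the vehicle's actual perception, rendering, and communication subsystems.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualModelSession:
    """Illustrative orchestration of the recited steps, with a log of each
    step for inspection. Not a description of any real vehicle stack."""
    log: list = field(default_factory=list)

    def detect_observer(self, camera_frames):
        # Stand-in for perception: pick the first frame flagged as containing
        # a person attempting to communicate with the vehicle.
        observer = next((f["person"] for f in camera_frames if "person" in f), None)
        self.log.append(("detect", observer))
        return observer

    def obtain_model(self, observer):
        # Per the disclosure, the model may be generated on-vehicle or
        # received from a server; here it is a placeholder record.
        model = {"driver_avatar": "humanoid-3d", "for": observer["id"]}
        self.log.append(("obtain", model))
        return model

    def encrypt_model(self, model, characteristics):
        # Placeholder for observer-specific encryption: record the key
        # material the model is locked to.
        encrypted = {"payload": model, "locked_to": tuple(characteristics)}
        self.log.append(("encrypt", encrypted))
        return encrypted

    def communicate(self, encrypted_model):
        self.log.append(("communicate", encrypted_model["payload"]["for"]))
        return True
```

A session would then run the four steps in order, producing a log of `detect`, `obtain`, `encrypt`, and `communicate` entries.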
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise detecting that at least the first external observer of the one or more external observers is attempting to communicate with the vehicle.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: detecting that at least the first external observer is attempting to communicate with the vehicle using one or more gestures.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: extracting one or more image features from one or more images comprising at least a portion of the first external observer; and detecting, based on the one or more image features, that the first external observer is attempting to communicate with the vehicle.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: identifying an input associated with the first external observer; and detecting, based on the input, that the first external observer is attempting to communicate with the vehicle. In some examples, the input includes one or more gestures.
In some aspects, detecting that at least the first external observer is attempting to communicate with the vehicle comprises: identifying one or more traits of the first external observer; detecting that the first external observer is performing the one or more gestures; and interpreting the one or more gestures based on the one or more traits of the first external observer.
In some aspects, the one or more traits comprise at least one of a language spoken by the first external observer, a race of the first external observer, or an ethnicity of the first external observer.
In some aspects, detecting that the first external observer is performing the one or more gestures and interpreting the one or more gestures based on the one or more traits comprises accessing a database of gestures.
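The trait-based gesture interpretation described above can be illustrated with a small lookup keyed by both the detected gesture and an observer trait such as spoken language or locale, since the same physical gesture can carry different meanings across cultures. This is a minimal, non-limiting sketch; the gesture database, locale codes, and example meanings are illustrative stand-ins.

```python
# Hypothetical gesture database keyed by (gesture, locale). The entries are
# illustrative: e.g., a head nod conventionally signals disagreement in
# Bulgaria, unlike in most English-speaking locales.
GESTURE_DB = {
    ("head_nod", "en-US"): "yes",
    ("head_nod", "bg-BG"): "no",
    ("palm_out", "en-US"): "stop",
    ("palm_out", "el-GR"): "offense",
}


def interpret_gesture(gesture, traits, default_locale="en-US"):
    """Interpret a detected gesture using observer traits (e.g., language or
    locale) to select the right database entry, falling back to a default."""
    locale = traits.get("locale", default_locale)
    meaning = GESTURE_DB.get((gesture, locale))
    if meaning is None:
        meaning = GESTURE_DB.get((gesture, default_locale))
    return meaning
```

For example, the same detected head nod would be read as agreement for an observer with an `en-US` locale trait but as disagreement for one with a `bg-BG` trait.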
In some aspects, the first virtual model is generated for the first external observer based on one or more traits of the first external observer.
In some aspects, the one or more traits comprise at least one of a language spoken by the first external observer, a race of the first external observer, or an ethnicity of the first external observer.
In some aspects, detecting the first external observer comprises: tracking a gaze of the first external observer; determining a field of view of the first external observer based on tracking the gaze; and detecting that the field of view includes at least a portion of the vehicle.
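The gaze-based detection above reduces to a geometric test: model the observer's field of view as a cone around the tracked gaze direction and check whether the vehicle falls inside it. The following 2D sketch assumes positions and gaze are available in a common ground-plane coordinate frame; the 120-degree default aperture is an illustrative assumption, not a value from the disclosure.

```python
import math


def in_field_of_view(observer_pos, gaze_dir, target_pos, fov_degrees=120.0):
    """Return True if target_pos lies within a cone of fov_degrees centered
    on the observer's tracked gaze direction (2D ground-plane model)."""
    to_target = (target_pos[0] - observer_pos[0], target_pos[1] - observer_pos[1])
    norm_t = math.hypot(*to_target)
    norm_g = math.hypot(*gaze_dir)
    if norm_t == 0 or norm_g == 0:
        return True  # degenerate case: observer at target, or no gaze estimate
    cos_angle = (to_target[0] * gaze_dir[0] + to_target[1] * gaze_dir[1]) / (norm_t * norm_g)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_degrees / 2
```

Detecting the first external observer would then amount to evaluating this test with a point on the vehicle as the target while the gaze estimate updates.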
In some aspects, the one or more characteristics of the first external observer comprise at least one of a face characteristic or an iris of the first external observer.
In some aspects, communicating with the first external observer using the encrypted first virtual model comprises: decrypting frames of the encrypted first virtual model based on the one or more characteristics of the first external observer; and projecting the decrypted frames of the first virtual model towards the first external observer.
In some aspects, projecting the decrypted frames of the first virtual model towards the first external observer comprises: detecting a field of view of the first external observer; and projecting a foveated rendering of the decrypted frames of the first virtual model to the first external observer based on the field of view.
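Foveated rendering keeps full detail only near the observer's point of gaze and coarsens the periphery, reducing the data that must be rendered and projected. As a crude, non-limiting sketch, assume a frame is a 2D grid of 8-bit intensity values and the foveal region is a small neighborhood of the gaze point; real foveation would blend resolution levels smoothly.

```python
def foveate(frame, gaze_xy, radius=1):
    """Keep full fidelity within `radius` (Chebyshev distance) of the gaze
    point; quantize peripheral pixels to coarse 32-level steps."""
    out = []
    gx, gy = gaze_xy
    for y, row in enumerate(frame):
        new_row = []
        for x, px in enumerate(row):
            if max(abs(x - gx), abs(y - gy)) <= radius:
                new_row.append(px)               # foveal region: full fidelity
            else:
                new_row.append((px // 32) * 32)  # periphery: coarse levels
        out.append(new_row)
    return out
```

The gaze point driving `gaze_xy` would come from the same field-of-view tracking used to detect the observer.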
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise enabling a first set of frames of the encrypted first virtual model to be visible to the first external observer; and preventing the first set of frames from being visible to one or more other external observers.
In some aspects, enabling the first set of frames to be visible comprises: displaying the first set of frames on a glass surface with a variable refractive index; and modifying the refractive index of the glass surface to selectively allow the first set of frames to pass through the glass surface in a field of view of the first external observer.
In some aspects, preventing the first set of frames from being visible comprises: displaying the first set of frames on a glass surface with a variable refractive index; and modifying the refractive index to selectively block the first set of frames from passing through the glass surface in a field of view of the one or more other external observers.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: detecting a second external observer for communicating with the vehicle; obtaining, for the vehicle, a second virtual model for communicating with the second external observer; encrypting, based on one or more characteristics of the second external observer, the second virtual model to generate an encrypted second virtual model; and communicating with the second external observer using the encrypted second virtual model simultaneously with communicating with the first external observer using the encrypted first virtual model.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise: projecting a first set of frames of the encrypted first virtual model towards the first external observer; projecting a second set of frames of the encrypted second virtual model towards the second external observer; and preventing the first set of frames from overlapping the second set of frames.
In some aspects, preventing the first set of frames from overlapping the second set of frames comprises: displaying the first set of frames and the second set of frames on a glass surface with a variable refractive index; modifying a refractive index of a first portion of the glass surface to selectively allow the first set of frames to pass through the first portion of the glass surface in a field of view of the first external observer while blocking the second set of frames from passing through the first portion of the glass surface in the field of view of the first external observer; and modifying a refractive index of a second portion of the glass surface to selectively allow the second set of frames to pass through the second portion of the glass surface in a field of view of the second external observer while blocking the first set of frames from passing through the second portion of the glass surface in the field of view of the second external observer.
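The selective-visibility behavior above can be abstracted as a glass surface divided into portions, where each portion's electrically controlled refractive state passes only the frame stream intended for the observer whose field of view covers that portion. The following sketch models only that routing logic; the class, the portion labels, and the pass/block abstraction are hypothetical stand-ins for the optical hardware.

```python
class SwitchableGlass:
    """Toy model of a variable-refractive-index surface: each portion is
    configured to pass at most one frame stream and block all others."""

    def __init__(self, portions):
        # portion -> id of the frame stream allowed to pass (None = block all)
        self.pass_stream = {p: None for p in portions}

    def configure(self, portion, stream_id):
        """Set a portion's refractive state to pass one stream."""
        self.pass_stream[portion] = stream_id

    def transmit(self, portion, stream_id, frame):
        """Return the frame only if this portion is configured to pass it."""
        return frame if self.pass_stream.get(portion) == stream_id else None
```

With the portion covering the first observer's field of view configured for the first stream and the portion covering the second observer's field of view configured for the second stream, each observer sees only the frames of their own virtual model even when the projections overlap on the surface.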
In some aspects, detecting the first external observer to communicate with the vehicle comprises detecting a device of the first external observer. In some aspects, the device includes a head mounted display (HMD). In some aspects, the device includes augmented reality glasses.
In some aspects, communicating with the first external observer using the encrypted first virtual model comprises establishing a connection with the device and transmitting, using the connection, frames of the encrypted first virtual model to the device. In some aspects, the device can decrypt the encrypted first virtual model based on the one or more characteristics.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise generating the first virtual model. For instance, the apparatus can be the vehicle or a component (e.g., a computing device) of the vehicle. In such examples, the vehicle or component of the vehicle can generate the first virtual model. In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise receiving the first virtual model from a server.
In some aspects, the methods, apparatuses, and computer-readable medium described above further comprise disabling or lowering a quality of the first virtual model upon termination of communication with at least the first external observer.
According to at least one other example, a method of communication between a vehicle and one or more external observers is provided. The method includes establishing, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle. The method further includes receiving, at the device, a virtual model of a virtual driver from the vehicle, and communicating with the vehicle using the virtual model.
In another example, an apparatus for communication between a vehicle and one or more external observers is provided that includes a memory configured to store data, and a processor coupled to the memory. The processor is configured to and can establish, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle. The processor is configured to and can receive, at the device, a virtual model of a virtual driver from the vehicle, and communicate with the vehicle using the virtual model.
In another example, a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: establish, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle; receive, at the device, a virtual model of a virtual driver from the vehicle; and communicate with the vehicle using the virtual model.
In another example, an apparatus for communication between a vehicle and one or more external observers is provided. The apparatus includes means for establishing, by a device, a connection between the device of an external observer of the one or more external observers and the vehicle; means for receiving, at the device, a virtual model of a virtual driver from the vehicle; and means for communicating with the vehicle using the virtual model.
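The device-side flow recited above (establish a connection, receive the virtual model, communicate using it) can be sketched with a pair of message queues standing in for whatever radio link (e.g., V2X or Wi-Fi) actually connects an HMD to the vehicle. All message types and field names here are hypothetical.

```python
import queue


class ObserverDevice:
    """Minimal device-side sketch (e.g., an HMD worn by an external observer).

    `uplink` carries messages to the vehicle; `downlink` carries messages
    from it. Both are stand-ins for a real wireless connection.
    """

    def __init__(self, uplink, downlink):
        self.uplink, self.downlink = uplink, downlink
        self.model = None

    def establish_connection(self):
        # Establishing the connection may be based on sending a request
        # to communicate with the vehicle.
        self.uplink.put({"type": "connect_request"})

    def receive_model(self):
        # Receive the virtual model of the virtual driver for display.
        msg = self.downlink.get(timeout=1)
        assert msg["type"] == "virtual_model"
        self.model = msg["model"]

    def communicate(self, gesture):
        # Gestures made toward the displayed model are relayed to the vehicle.
        self.uplink.put({"type": "gesture",
                         "model_id": self.model["id"],
                         "gesture": gesture})
```

A session would post a connect request, accept the model pushed by the vehicle, and then relay observer gestures tagged with the model's identifier.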
In some aspects, the device includes a head mounted display (HMD). In some aspects, the device includes augmented reality glasses.
In some aspects, the virtual model is encrypted based on one or more characteristics of the external observer.
In some aspects, establishing the connection is based on receiving a request to communicate with the vehicle. In some aspects, establishing the connection is based on sending a request to communicate with the vehicle. In some aspects, the virtual model is displayed by the device.
In some aspects, communicating with the vehicle using the received virtual model is based on one or more gestures.
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, any or all drawings, and each claim.
The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Illustrative embodiments of the present application are described in detail below with reference to the following figures:
FIG. 1 illustrates an example system comprising an autonomous vehicle and one or more external observers, according to this disclosure.
FIG. 2 illustrates an example of a process for creating virtual models of drivers for interacting with external observers, according to this disclosure.
FIG. 3 illustrates an example of a process for creating and encrypting virtual models of drivers for interacting with external observers, according to this disclosure.
FIG. 4 illustrates an example of a process for projecting beams of virtual models of drivers for interacting with external observers, according to this disclosure.
FIG. 5 illustrates an example system comprising an autonomous vehicle and two or more external observers with overlapping fields of view, according to this disclosure.
FIG. 6 illustrates an example of a process for preventing interference between multiple virtual models in overlapping fields of view of multiple external observers, according to this disclosure.
FIG. 7 illustrates an example system for modifying a refractive index of a glass surface, according to this disclosure.
FIG. 8 illustrates an example system comprising an autonomous vehicle and one or more external observers with head mounted displays, according to this disclosure.
FIG. 9A-FIG. 9B illustrate example processes for interactions between an autonomous vehicle and one or more external observers with head mounted displays, according to this disclosure.
FIG. 10A and FIG. 10B illustrate examples of processes for providing communication between an autonomous vehicle and one or more external observers to implement techniques described in this disclosure.
FIG. 11 illustrates an example computing device architecture to implement techniques described in this disclosure.