Patent: Multi-mode guard for voice commands

Publication Number: 20210082435

Publication Date: 20210318

Applicant: Google

Abstract

Embodiments may be implemented by a computing device, such as a head-mountable display, in order to use a single guard phrase to enable different voice commands in different interface modes. An example device includes an audio sensor and a computing system configured to analyze audio data captured by the audio sensor to detect speech that includes a predefined guard phrase, and to operate in a plurality of different interface modes comprising at least a first and a second interface mode. During operation in the first interface mode, the computing system may initially disable one or more first-mode speech commands, and respond to detection of the guard phrase by enabling the one or more first-mode speech commands. During operation in the second interface mode, the computing system may initially disable a second-mode speech command, and respond to detection of the guard phrase by enabling the second-mode speech command.

Claims

  1. A method implemented by one or more processors of a computing device, the method comprising: transitioning the computing device into a given interface mode in response to receiving sensor data captured by at least one sensor of the computing device; in response to the computing device being in the given interface mode: activating one or more speech commands that are specific to the given interface mode, wherein the given interface mode corresponds to a current state of a user interface of the computing device, and wherein the given interface mode is one of multiple interface modes each corresponding to corresponding alternate states of the user interface; while the computing device is in the given interface mode and while the one or more speech commands are activated: receiving audio data captured by at least one audio sensor of the computing device; analyzing, based on the one or more speech commands being activated, the audio data to determine whether any of the one or more speech commands are included in the audio data; in response to determining a given speech command, of the one or more speech commands, is included in the audio data without a predefined guard phrase, and in response to determining that the audio data is received within a predetermined period of time of transitioning the computing device into the given interface mode: performing one or more actions, via the computing device, that correspond to the given speech command.

  2. The method of claim 1, further comprising: while the computing device is in the given interface mode and while the one or more speech commands are activated: determining the predetermined period of time has lapsed; and in response to determining the predetermined period of time has lapsed, deactivating one or more of the speech commands that are specific to the given interface mode.

  3. The method of claim 2, further comprising: while the computing device is in the given interface mode and while the one or more speech commands are deactivated: in response to determining the given speech command, of the one or more speech commands, is included in the audio data without the predefined guard phrase, and in response to determining that the audio data is not received within the predetermined period of time: refraining from performing one or more of the actions, via the computing device, that correspond to the given speech command.

  4. The method of claim 3, further comprising: subsequent to refraining from performing one or more of the actions, via the computing device, that correspond to the given speech command: receiving additional audio data captured by the at least one audio sensor of the computing device, the additional audio data including at least the predefined guard phrase; and re-activating one or more of the speech commands that are specific to the given interface mode.

  5. The method of claim 1, further comprising: while the computing device is in the given interface mode and while the one or more speech commands are activated: displaying a visual cue for one or more of the speech commands via a display of the computing device.

  6. The method of claim 5, further comprising: while the computing device is in the given interface mode and while the one or more speech commands are activated: determining the predetermined period of time has lapsed; and in response to determining the predetermined period of time has lapsed, causing the visual cue for the one or more speech commands to be removed from the display of the computing device.

  7. The method of claim 1, wherein activating the one or more speech commands that are specific to the given interface mode comprises loading, at the computing device, at least one hotword process for the one or more speech commands; and wherein analyzing, based on the one or more speech commands being activated, the audio data to determine whether any of the one or more speech commands are included in the audio data comprises: analyzing the audio data using the at least one hotword process.

  8. The method of claim 1, wherein the sensor data captured by the at least one sensor of the computing device comprises preceding audio data captured by the at least one audio sensor of the computing device, the preceding audio data including at least the predefined guard phrase and an indication of the given interface mode.

  9. The method of claim 1, wherein the predetermined period of time is specific to the given interface mode.

  10. The method of claim 1, wherein activating one or more of the speech commands is based on the first audio data.

  11. The method of claim 1, wherein the computing device is a head-mountable device.

  12. A computing device including memory and one or more processors configured to execute instructions stored in memory, the instructions comprising instructions to: transition the computing device into a given interface mode in response to receiving sensor data captured by at least one sensor of the computing device; in response to the computing device being in the given interface mode: activate one or more speech commands that are specific to the given interface mode, wherein the given interface mode corresponds to a current state of a user interface of the computing device, and wherein the given interface mode is one of multiple interface modes each corresponding to corresponding alternate states of the user interface; while the computing device is in the given interface mode and while the one or more speech commands are activated: receive audio data captured by at least one audio sensor of the computing device; analyze, based on the one or more speech commands being activated, the audio data to determine whether any of the one or more speech commands are included in the audio data; in response to determining a given speech command, of the one or more speech commands, is included in the audio data without a predefined guard phrase, and in response to determining that the audio data is received within a predetermined period of time of transitioning the computing device into the given interface mode: perform one or more actions, via the computing device, that correspond to the given speech command.

  13. The computing device of claim 12, wherein the instructions further comprise instructions to: while the computing device is in the given interface mode and while the one or more speech commands are activated: determine the predetermined period of time has lapsed; and in response to determining the predetermined period of time has lapsed, deactivate one or more of the speech commands that are specific to the given interface mode.

  14. The computing device of claim 13, wherein the instructions further comprise instructions to: while the computing device is in the given interface mode and while the one or more speech commands are deactivated: in response to determining the given speech command, of the one or more speech commands, is included in the audio data without the predefined guard phrase, and in response to determining that the audio data is not received within the predetermined period of time: refrain from performing one or more of the actions, via the computing device, that correspond to the given speech command.

  15. The computing device of claim 14, wherein the instructions further comprise instructions to: subsequent to refraining from performing one or more of the actions, via the computing device, that correspond to the given speech command: receive additional audio data captured by the at least one audio sensor of the computing device, the additional audio data including at least the predefined guard phrase; and re-activate one or more of the speech commands that are specific to the given interface mode.

  16. The computing device of claim 12, wherein the instructions further comprise instructions to: while the computing device is in the given interface mode and while the one or more speech commands are activated: display a visual cue for one or more of the speech commands via a display of the computing device.

  17. The computing device of claim 16, wherein the instructions further comprise instructions to: while the computing device is in the given interface mode and while the one or more speech commands are activated: determine the predetermined period of time has lapsed; and in response to determining the predetermined period of time has lapsed, cause the visual cue for the one or more speech commands to be removed from the display of the computing device.

  18. The computing device of claim 12, wherein the instructions to activate the one or more speech commands that are specific to the given interface mode comprise instructions to load, at the computing device, at least one hotword process for the one or more speech commands; and wherein the instructions to analyze, based on the one or more speech commands being activated, the audio data to determine whether any of the one or more speech commands are included in the audio data comprise instructions to: analyze the audio data using the at least one hotword process.

  19. The computing device of claim 12, wherein the sensor data captured by the at least one sensor of the computing device comprises preceding audio data captured by the at least one audio sensor of the computing device, the preceding audio data including at least the predefined guard phrase and an indication of the given interface mode.

  20. A non-transitory computer-readable storage medium storing instructions executable by at least one processor, the instructions including instructions to: transition the computing device into a given interface mode in response to receiving sensor data captured by at least one sensor of the computing device; in response to the computing device being in the given interface mode: activate one or more speech commands that are specific to the given interface mode, wherein the given interface mode corresponds to a current state of a user interface of the computing device, and wherein the given interface mode is one of multiple interface modes each corresponding to corresponding alternate states of the user interface; while the computing device is in the given interface mode and while the one or more speech commands are activated: receive audio data captured by at least one audio sensor of the computing device; analyze, based on the one or more speech commands being activated, the audio data to determine whether any of the one or more speech commands are included in the audio data; in response to determining a given speech command, of the one or more speech commands, is included in the audio data without a predefined guard phrase, and in response to determining that the audio data is received within a predetermined period of time of transitioning the computing device into the given interface mode: perform one or more actions, via the computing device, that correspond to the given speech command.

Description

BACKGROUND

[0001] Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

[0002] Computing devices such as personal computers, laptop computers, tablet computers, cellular phones, and countless types of Internet-capable devices are increasingly prevalent in numerous aspects of modern life. Over time, the manner in which these devices are providing information to users is becoming more intelligent, more efficient, more intuitive, and/or less obtrusive.

[0003] The trend toward miniaturization of computing hardware, peripherals, as well as of sensors, detectors, and image and audio processors, among other technologies, has helped open up a field sometimes referred to as “wearable computing.” In the area of image and visual processing and production, in particular, it has become possible to consider wearable displays that place a graphic display close enough to a wearer’s (or user’s) eye(s) such that the displayed image appears as a normal-sized image, such as might be displayed on a traditional image display device. The relevant technology may be referred to as “near-eye displays.”

[0004] Wearable computing devices with near-eye displays may also be referred to as “head-mountable displays” (HMDs), “head-mounted displays,” “head-mounted devices,” or “head-mountable devices.” A head-mountable display places a graphic display or displays close to one or both eyes of a wearer. To generate the images on a display, a computer processing system may be used. Such displays may occupy a wearer’s entire field of view, or only occupy part of a wearer’s field of view. Further, head-mounted displays may vary in size, taking a smaller form such as a glasses-style display or a larger form such as a helmet, for example.

[0005] Emerging and anticipated uses of wearable displays include applications in which users interact in real time with an augmented or virtual reality. Such applications can be mission-critical or safety-critical, such as in a public safety or aviation setting. The applications can also be recreational, such as interactive gaming. Many other applications are also possible.

SUMMARY

[0006] An example head-mountable device (HMD) may be operable to receive and interpret voice commands. In this and other contexts, it may be desirable to disable certain voice commands until a guard phrase is detected, in order to reduce the occurrence of false-positive detections of voice commands. It may also be desirable for the HMD to support speech commands in some places within a UI, but not in others. However, it can be challenging to make such a UI simple for users to understand, such that the user knows when certain speech commands are and are not available. Accordingly, an example HMD may be configured to respond to the same guard phrase in different ways, depending upon the state of the UI. In particular, an HMD may define a single, multi-modal, guard phrase, and may also define multiple interface modes that correspond to different states of the HMD’s UI. The same guard phrase may therefore be used to enable a different speech command or commands in different interface modes.

[0007] In one aspect, a device may include at least one audio sensor and a computing system configured to: (a) analyze audio data captured by the at least one audio sensor in order to detect speech that includes a predefined guard phrase and (b) operate in a plurality of different interface modes comprising at least a first and a second interface mode. During operation in the first interface mode, the computing system is configured to initially disable one or more first-mode speech commands, and to respond to detection of the guard phrase by enabling the one or more first-mode speech commands. During operation in the second interface mode, the computing system is configured to initially disable one or more second-mode speech commands, and to respond to detecting the guard phrase by enabling the one or more second-mode speech commands.

[0008] In another aspect, a computer-implemented method may involve: (a) a computing device operating in a first interface mode, wherein, during operation in the first interface mode, the computing device initially disables one or more first-mode speech commands, and responds to detection of a guard phrase by enabling the one or more first-mode speech commands; and (b) a computing device operating in a second interface mode, wherein, during operation in the second interface mode, the computing device initially disables one or more second-mode speech commands, and responds to detection of a guard phrase by enabling the one or more second-mode speech commands.

[0009] In a further aspect, a non-transitory computer readable medium may have stored therein instructions that are executable by a computing device to cause the computing device to perform functions comprising: (a) operating in a first interface mode, wherein the functions for operating in the first interface mode comprise initially disabling one or more first-mode speech commands, and responding to detection of a guard phrase by enabling the one or more first-mode speech commands; and (b) operating in a second interface mode, wherein the functions for operating in the second interface mode comprise initially disabling one or more second-mode speech commands, and responding to detection of a guard phrase by enabling the one or more second-mode speech commands.

[0010] In yet a further aspect, a system may include: (a) a means for causing a computing device to operate in a first interface mode, wherein, during operation in the first interface mode, the computing device initially disables one or more first-mode speech commands and responds to detection of a guard phrase by enabling the one or more first-mode speech commands; and (b) a means for causing a computing device to operate in a second interface mode, wherein, during operation in the second interface mode, the computing device initially disables one or more second-mode speech commands and responds to detection of a guard phrase by enabling the one or more second-mode speech commands.

[0011] These as well as other aspects, advantages, and alternatives will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference where appropriate to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 shows screen views of a user-interface during a transition between two interface modes, according to an example embodiment.

[0013] FIG. 2A illustrates a wearable computing system according to an example embodiment.

[0014] FIG. 2B illustrates an alternate view of the wearable computing device illustrated in FIG. 2A.

[0015] FIG. 2C illustrates another wearable computing system according to an example embodiment.

[0016] FIG. 2D illustrates another wearable computing system according to an example embodiment.

[0017] FIGS. 2E to 2G are simplified illustrations of the wearable computing system shown in FIG. 2D, being worn by a wearer.

[0018] FIG. 3A is a simplified block diagram of a computing device according to an example embodiment.

[0019] FIG. 3B shows a projection of an image by a head-mountable device, according to an example embodiment.

[0020] FIGS. 4A and 4B are flow charts illustrating methods, according to example embodiments.

[0021] FIGS. 5A to 5C illustrate applications of a multi-mode guard phrase, according to example embodiments.

DETAILED DESCRIPTION

[0022] Example methods and systems are described herein. It should be understood that the words “example” and “exemplary” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as being an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features. In the following detailed description, reference is made to the accompanying figures, which form a part thereof. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein.

[0023] The example embodiments described herein are not meant to be limiting. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.

I. Overview

[0024] A head-mountable device (HMD) may be configured to provide a voice interface, and as such, may be configured to listen for commands that are spoken by the wearer. Herein spoken commands may be referred to interchangeably as either “voice commands” or “speech commands.”

[0025] When an HMD enables speech commands, the HMD may continuously listen for speech, so that a user can readily use the speech commands to interact with the HMD. In such case, it may be desirable to implement a “guard phrase,” which the user must recite before the speech commands are enabled. By disabling voice commands until such a guard phrase is detected, an HMD may be able to reduce the occurrence of false-positives. In other words, the HMD may be able to reduce instances where the HMD incorrectly interprets speech as including a particular speech command, and thus takes an undesired action. However, implementing such a guard phrase in a streamlined manner may be difficult, as users can perceive the need to speak a guard phrase before a speech command as an extra step that complicates a UI.

[0026] Further, it may also be desirable for an HMD to support speech commands in some places within a user interface (UI), but not in others. However, it can be challenging to make such a UI simple for users to understand (e.g., so that the user knows when speech commands are and are not available). This can be further complicated by the fact that different speech commands may be needed in different places within the UI.

[0027] According to an example embodiment, an HMD may be configured to respond to the same guard phrase in different ways, depending upon the state of the UI. In particular, an HMD may define multiple interface modes that correspond to different states of the HMD’s UI. Each interface mode may have a different “hotword” model that, when loaded, listens for one or more speech commands that are specific to the particular interface mode.

[0028] In a further aspect, the HMD may define a single guard phrase to be used in multiple interface modes where speech commands are available. This guard phrase may be said to be a multi-modal guard phrase, since it is used in the same way across multiple interface modes. Notably, however, the actions taken by the HMD in response to detecting the guard phrase are non-modal, as the HMD does not change the interface mode when the guard phrase is detected. Rather, the guard phrase may be used to enable a different speech command or commands in different interface modes (e.g., by activating the different hotword processes specified by the different interface modes).

[0029] Configured as such, an example HMD may switch between different interface modes in order to operate in whichever interface mode corresponds to the current state of the UI (typically the interface mode that provides for speech command(s) that are useful in the current UI state). Each time the HMD switches to a different interface mode, the HMD may disable voice interaction (e.g., by unloading the previous mode’s hotword process and/or refraining from loading the new mode’s hotword process), and require that the user say the guard phrase in order to enable the new mode’s speech commands. Then, when the HMD detects the guard phrase, the HMD enables the speech command(s) that are specific to the particular interface mode (e.g., by activating the hotword process for the interface mode).
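The mode-switching behavior described in the preceding paragraphs can be sketched in code. This is a hypothetical illustration, not an implementation from the patent: the `ModeManager` class, the mode names, the command sets, and the guard phrase “okay device” are all assumptions chosen for the example.

```python
# Hypothetical sketch of a multi-mode guard phrase. All names and phrases
# here are illustrative assumptions, not taken from the patent.
GUARD_PHRASE = "okay device"

# Each interface mode specifies its own mode-specific speech commands
# (in a real device, each mode would load its own hotword process).
MODE_COMMANDS = {
    "home": {"take a picture", "record a video"},
    "recording": {"stop recording"},
}

class ModeManager:
    def __init__(self):
        self.mode = None
        self.commands_enabled = False

    def switch_mode(self, mode):
        # On every mode switch, voice interaction is disabled and the
        # device listens only for the guard phrase.
        self.mode = mode
        self.commands_enabled = False

    def on_speech(self, utterance):
        if not self.commands_enabled:
            # The same guard phrase enables a different command set,
            # depending on the current interface mode.
            if utterance == GUARD_PHRASE:
                self.commands_enabled = True
                return "enabled: " + ", ".join(sorted(MODE_COMMANDS[self.mode]))
            return "ignored"
        if utterance in MODE_COMMANDS[self.mode]:
            return "perform: " + utterance
        return "ignored"
```

In this sketch, a command such as “stop recording” is ignored after a switch to the recording mode until the guard phrase has been heard again, mirroring the behavior described above.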

[0030] Additionally, in interface modes where speech commands can be enabled, the HMD may display a visual cue of the guard phrase, which can help alert a user that voice interaction can be enabled. And, once the HMD detects the guard phrase, the HMD may display a visual cue or cues that indicate the particular speech command(s) that have been enabled. By combining such visual cues with a multi-modal guard phrase, an HMD may allow for useful voice input in a manner that is more intuitive to the user.
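The cue-selection behavior just described might be sketched as follows. This is a hypothetical illustration; the function name and the cue format are assumptions, not details from the patent.

```python
def visual_cues(mode_commands, guard_phrase, commands_enabled):
    # Before the guard phrase is detected, hint that voice interaction
    # can be enabled; afterwards, show the commands that are now enabled.
    if not commands_enabled:
        return ['say "%s"' % guard_phrase]
    return ['say "%s"' % cmd for cmd in sorted(mode_commands)]
```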

[0031] For example, FIG. 1 shows screen views of a UI during a transition between two interface modes in which a multi-mode guard phrase is implemented, according to an example embodiment.

[0032] More specifically, an HMD may operate in a first interface mode 101, where one or more first-mode speech commands can be enabled by speaking a predefined guard phrase. When the HMD switches to the first interface mode 101 from another interface mode, the HMD may initially disable the first-mode speech commands and display a visual cue for the guard phrase in its display, as shown in screen view 102. If the HMD detects the guard phrase while in the first interface mode, the HMD may enable the one or more first-mode speech commands, and display visual cues that indicate the enabled first-mode speech commands, as shown in screen view 104.

[0033] To provide a specific example, the first interface mode 101 may provide an interface for a home screen, which provides a launching point for a user to access a number of frequently-used features. Accordingly, when the user speaks a command to access a different feature, such as a camera or phone feature, the HMD may switch to the interface mode that provides an interface for the different feature.

[0034] More generally, when the HMD switches to a different aspect of its UI for which one or more second-mode speech commands are supported, the HMD may switch to a second interface mode 103. When the HMD switches to the second interface mode 103, the HMD may disable any speech commands that were enabled, and listen only for the guard phrase (e.g., by loading a guard-phrase hotword process). Further, the HMD may require the user to again speak the guard phrase before enabling the one or more second-mode speech commands.

[0035] To provide a hint to the user that the guard word will enable voice commands, the HMD may again display the visual cue for the guard phrase, as shown in screen 106. And, if the HMD detects the guard phrase while in the second interface mode 103, the HMD may responsively enable the one or more second-mode speech commands (e.g., by loading the hotword process for the second interface mode). When the second-mode speech commands are enabled, the HMD may display visual cues that indicate the enabled second-mode speech commands, as shown in screen view 108.

[0036] Many implementations of a multi-mode guard phrase are possible. One implementation involves an HMD with a home screen, which serves as a launch point for various different features (some or all of which may provide for voice commands), one of which may be a video camera. Thus, from the home screen, the user may say the guard phrase followed by another speech command in order to launch a camera application. Further, in some embodiments, the HMD may automatically start recording when the user launches the camera application via the home screen. During video recording, the guard phrase may be displayed to indicate that a speech command can be enabled by saying the guard phrase. In particular, the user may say the guard phrase followed by “stop recording” (e.g., the speech command that can be enabled in the video-recording mode), in order to stop recording video. Other implementations are also possible.

[0037] In a further aspect, a second protective feature against false positives, in addition to the multi-mode guard phrase, may be utilized in some or all interface modes. In particular, a time-out process may be implemented in order to disable the enabled speech commands if at least one of the enabled speech commands is not detected within a predetermined period of time after detection of the guard phrase. For example, in the implementation described above, a time-out process may be implemented when the guard phrase is detected while the HMD is operating in the video-recording mode. As such, when the HMD detects the guard phrase, the HMD may start a timer. Then, if the HMD does not detect the “stop recording” speech command within five seconds, for example, the HMD may disable the “stop recording” speech command, and require the guard phrase in order to re-enable the “stop recording” speech command.
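The time-out process described above can be sketched as a small class. This is a hypothetical illustration: the class name, the injectable clock, and the five-second default window are assumptions for the example, not details from the patent.

```python
import time

class TimeoutGuard:
    """Disables enabled speech commands if no command arrives in time."""

    def __init__(self, window_seconds=5.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock        # injectable clock, for deterministic testing
        self.enabled_at = None    # when the guard phrase was last detected

    def on_guard_phrase(self):
        # Detecting the guard phrase starts the timer.
        self.enabled_at = self.clock()

    def commands_enabled(self):
        # Commands stay enabled only until the window lapses; after that,
        # the guard phrase is required again to re-enable them.
        if self.enabled_at is None:
            return False
        if self.clock() - self.enabled_at > self.window:
            self.enabled_at = None
            return False
        return True
```

A fake clock makes the lapse easy to exercise: with a five-second window, a command heard three seconds after the guard phrase would be accepted, while one heard nine seconds after would not.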

II. Example Wearable Computing Devices

[0038] Systems and devices in which example embodiments may be implemented will now be described in greater detail. In general, an example system may be implemented in or may take the form of a wearable computer (also referred to as a wearable computing device). In an example embodiment, a wearable computer takes the form of or includes a head-mountable device (HMD).

[0039] An example system may also be implemented in or take the form of other devices that support speech commands, such as a mobile phone, tablet computer, laptop computer, or desktop computer, among other possibilities. Further, an example system may take the form of non-transitory computer readable medium, which has program instructions stored thereon that are executable by a processor to provide the functionality described herein. An example system may also take the form of a device such as a wearable computer or mobile phone, or a subsystem of such a device, which includes such a non-transitory computer readable medium having such program instructions stored thereon.

[0040] An HMD may generally be any display device that is capable of being worn on the head and places a display in front of one or both eyes of the wearer. An HMD may take various forms such as a helmet or eyeglasses. As such, references to “eyeglasses” or a “glasses-style” HMD should be understood to refer to an HMD that has a glasses-like frame so that it can be worn on the head. Further, example embodiments may be implemented by or in association with an HMD with a single display or with two displays, which may be referred to as a “monocular” HMD or a “binocular” HMD, respectively.

[0041] FIG. 2A illustrates a wearable computing system according to an example embodiment. In FIG. 2A, the wearable computing system takes the form of a head-mountable device (HMD) 202 (which may also be referred to as a head-mounted display). It should be understood, however, that example systems and devices may take the form of or be implemented within or in association with other types of devices, without departing from the scope of the invention. As illustrated in FIG. 2A, the HMD 202 includes frame elements including lens-frames 204, 206 and a center frame support 208, lens elements 210, 212, and extending side-arms 214, 216. The center frame support 208 and the extending side-arms 214, 216 are configured to secure the HMD 202 to a user’s face via a user’s nose and ears, respectively.

[0042] Each of the frame elements 204, 206, and 208 and the extending side-arms 214, 216 may be formed of a solid structure of plastic and/or metal, or may be formed of a hollow structure of similar material so as to allow wiring and component interconnects to be internally routed through the HMD 202. Other materials may be possible as well.

[0043] One or more of each of the lens elements 210, 212 may be formed of any material that can suitably display a projected image or graphic. Each of the lens elements 210, 212 may also be sufficiently transparent to allow a user to see through the lens element. Combining these two features of the lens elements may facilitate an augmented reality or heads-up display where the projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements.

[0044] The extending side-arms 214, 216 may each be projections that extend away from the lens-frames 204, 206, respectively, and may be positioned behind a user’s ears to secure the HMD 202 to the user. The extending side-arms 214, 216 may further secure the HMD 202 to the user by extending around a rear portion of the user’s head. Additionally or alternatively, for example, the HMD 202 may connect to or be affixed within a head-mounted helmet structure. Other configurations for an HMD are also possible.

[0045] The HMD 202 may also include an on-board computing system 218, an image capture device 220, a sensor 222, and a finger-operable touch pad 224. The on-board computing system 218 is shown to be positioned on the extending side-arm 214 of the HMD 202; however, the on-board computing system 218 may be provided on other parts of the HMD 202 or may be positioned remote from the HMD 202 (e.g., the on-board computing system 218 could be wire- or wirelessly-connected to the HMD 202). The on-board computing system 218 may include a processor and memory, for example. The on-board computing system 218 may be configured to receive and analyze data from the image capture device 220 and the finger-operable touch pad 224 (and possibly from other sensory devices, user interfaces, or both) and generate images for output by the lens elements 210 and 212.

[0046] The image capture device 220 may be, for example, a camera that is configured to capture still images and/or to capture video. In the illustrated configuration, image capture device 220 is positioned on the extending side-arm 214 of the HMD 202; however, the image capture device 220 may be provided on other parts of the HMD 202. The image capture device 220 may be configured to capture images at various resolutions or at different frame rates. Many image capture devices with a small form-factor, such as the cameras used in mobile phones or webcams, for example, may be incorporated into an example of the HMD 202.

[0047] Further, although FIG. 2A illustrates one image capture device 220, more image capture devices may be used, and each may be configured to capture the same view, or to capture different views. For example, the image capture device 220 may be forward facing to capture at least a portion of the real-world view perceived by the user. This forward-facing image captured by the image capture device 220 may then be used to generate an augmented reality where computer-generated images appear to interact with or overlay the real-world view perceived by the user.

[0048] The sensor 222 is shown on the extending side-arm 216 of the HMD 202; however, the sensor 222 may be positioned on other parts of the HMD 202. For illustrative purposes, only one sensor 222 is shown. However, in an example embodiment, the HMD 202 may include multiple sensors. For example, an HMD 202 may include sensors such as one or more gyroscopes, one or more accelerometers, one or more magnetometers, one or more light sensors, one or more infrared sensors, and/or one or more microphones. Other sensing devices may be included in addition or in the alternative to the sensors that are specifically identified herein.

[0049] The finger-operable touch pad 224 is shown on the extending side-arm 214 of the HMD 202. However, the finger-operable touch pad 224 may be positioned on other parts of the HMD 202. Also, more than one finger-operable touch pad may be present on the HMD 202. The finger-operable touch pad 224 may be used by a user to input commands. The finger-operable touch pad 224 may sense at least one of a pressure, a position, and a movement of one or more fingers via capacitive sensing, resistance sensing, or a surface acoustic wave process, among other possibilities. The finger-operable touch pad 224 may be capable of sensing movement of one or more fingers simultaneously, in addition to sensing movement in a direction parallel or planar to the pad surface, in a direction normal to the pad surface, or both, and may also be capable of sensing a level of pressure applied to the touch pad surface. In some embodiments, the finger-operable touch pad 224 may be formed of one or more translucent or transparent insulating layers and one or more translucent or transparent conducting layers. Edges of the finger-operable touch pad 224 may be formed to have a raised, indented, or roughened surface, so as to provide tactile feedback to a user when the user's finger reaches the edge, or other area, of the finger-operable touch pad 224. If more than one finger-operable touch pad is present, each finger-operable touch pad may be operated independently, and may provide a different function.

[0050] In a further aspect, HMD 202 may be configured to receive user input in various ways, in addition or in the alternative to user input received via finger-operable touch pad 224. For example, on-board computing system 218 may implement a speech-to-text process and utilize a syntax that maps certain spoken commands to certain actions. In addition, HMD 202 may include one or more microphones via which a wearer’s speech may be captured. Configured as such, HMD 202 may be operable to detect spoken commands and carry out various computing functions that correspond to the spoken commands.
