
Apple Patent | Systems And Methods For Determining Estimated Head Orientation And Position With Ear Pieces

Patent: Systems And Methods For Determining Estimated Head Orientation And Position With Ear Pieces

Publication Number: 20200252740

Publication Date: 20200806

Applicants: Apple

Abstract

Aspects of the present disclosure provide systems and methods for determining an estimated orientation and/or position of a user’s head using worn ear pieces and leveraging the estimated head orientation and/or position to provide information to the user. In one exemplary method, first and second spatial positions of respective first and second ear pieces worn by a user may each be determined. Based at least in part on the first and second spatial positions of the respective first and second ear pieces, an estimated orientation of the user’s head may be determined. The method may further include requesting information to be provided to the user based at least in part on the estimated orientation of the user’s head and providing contextual information to the user responsive to the request.

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims benefit under 35 U.S.C. § 119(e) of Provisional U.S. patent application No. 62/398,762, filed Sep. 23, 2016, the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

[0002] The present disclosure relates to determining an estimated head orientation and position using ear pieces.

[0003] Mobile devices, such as smart phones or tablet computers, are commonly used as an informational and/or navigational aid to a user. In a conventional configuration, the positioning of the mobile device itself may be tracked, such as by GPS (global positioning system). The positioning of the mobile device may be used, for example, in a navigation program on the mobile device to show the user his or her geographical location and even information about nearby restaurants, stores, etc.

SUMMARY

[0004] Systems and methods for determining estimated head orientation and position with ear pieces are disclosed. An exemplary method may comprise determining a first spatial position of a first ear piece worn by a user and a second spatial position of a second ear piece also worn by the user. Based at least in part on the first and second spatial positions, the method may estimate an orientation of the user’s head.

[0005] In an aspect, audio output of the ear pieces may be altered based on the estimated orientation of the user’s head. In an aspect, the contextual information provided to the user may be responsive to an input requesting information. In another aspect, a spatial position of the user’s head may be estimated based on the first and second positions of the first and second ear pieces, respectively. The contextual information provided to the user may be further based on the spatial position of the user’s head. The contextual information may be provided to the user via the first and second ear pieces or via a display associated with the first and second ear pieces. In another aspect, the estimated orientation of the user’s head may be determined further based on an orientation angle representing an orientation about an axis extending between the first ear piece and the second ear piece.

[0006] An exemplary method may comprise receiving a search input indicative of an object positioned in an environment. The method may determine a first spatial position of a first ear piece worn by the user and a second spatial position of a second ear piece also worn by the user. Based at least in part on the first and second spatial positions, the method may determine an estimated orientation of the user’s head. A facing direction of the estimated orientation of the user’s head may be compared with the position of the object. Based at least in part on this comparison, an instruction may be generated to assist the user in locating the object. In an aspect, the instruction may be generated further based on a spatial position of the user’s head. In another aspect, the method may determine whether the object corresponds with the facing direction of the estimated orientation of the user’s head. If the object corresponds with the facing direction of the estimated orientation of the user’s head, the instruction may indicate so. If not, the instruction may include a movement direction for the user.

[0007] An exemplary device may comprise a processor and a memory storing instructions that, when executed by the processor, effectuate operations. The operations may comprise determining a first spatial position of a first ear piece worn by the user and a second spatial position of a second ear piece also worn by the user. Based at least in part on the first and second spatial positions, the operations may determine an estimated orientation of the user’s head. The operations may determine that an object in the user’s environment corresponds with a facing direction of the estimated orientation of the user’s head and generate information relating to the object that is to be provided to the user.

[0008] An exemplary set of ear pieces to be worn by a user comprises a first ear piece and a second ear piece. The first and second ear pieces may each comprise a first and second positional transceiver, respectively. The set of ear pieces may further comprise a processor disposed within at least one of the first and second ear pieces and configured to effectuate operations. The operations may comprise determining a first spatial position of the first ear piece based at least in part on wireless signals received by the first positional transceiver and determining a second spatial position of the second ear piece based at least in part on wireless signals received by the second positional transceiver. Based at least in part on the first and second spatial positions, the operations may estimate an orientation of the user’s head. In an aspect, the set of ear pieces further may comprise a speaker via which information may be provided to the user. The information may be based at least in part on the estimated orientation of the user’s head. The set of ear pieces further may comprise a microphone and the operations may capture a parameter of a search request via the microphone. The set of ear pieces further may comprise a motion sensor and the orientation of the user’s head may be estimated further based on sensor data captured by the motion sensor.

[0009] An exemplary method may comprise determining a first spatial position of a first ear piece worn by a user and a second spatial position of a second ear piece also worn by the user. Based at least in part on the first and second spatial positions, the method may determine an estimated orientation of the user’s head. The method may generate first audio content that is modified, based at least in part on the estimated orientation of the user’s head, from second audio content. The first audio content may be provided to a user via the first and second ear pieces. In an aspect, the generating the first audio content may comprise setting an audio attribute of the first audio content, such as volume, frequency equalization, high frequency cut-off, low frequency cut-off, and relative timing between channels of the audio content. In another aspect, the first audio content may comprise first and second audio channels and the generating the first audio content may comprise setting an audio attribute of the first audio channel to a first value and setting an audio attribute of the second audio channel to a second value that is different from the first value.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] FIG. 1 illustrates an exemplary system according to an aspect of the present disclosure.

[0011] FIG. 2 illustrates an exemplary set of ear pieces and a device, each according to an aspect of the present disclosure.

[0012] FIG. 3 illustrates a user and axes of position and orientation.

[0013] FIG. 4 illustrates a head of a user wearing an exemplary set of ear pieces.

[0014] FIG. 5 illustrates a method according to an aspect of the present disclosure.

[0015] FIG. 6 illustrates a system according to an aspect of the present disclosure.

[0016] FIG. 7 illustrates a method according to an aspect of the present disclosure.

[0017] FIG. 8 illustrates a method according to an aspect of the present disclosure.

[0018] FIG. 9 illustrates a method according to an aspect of the present disclosure.

DETAILED DESCRIPTION

[0019] Many mobile devices attempt to provide information to a user of the device by tailoring information to a user’s frame of reference (e.g., “turn left”). Typically, such mobile devices contain positioning systems such as GPS and perhaps gravity sensors that determine the devices’ orientation in free space. Such devices, however, often provide coarse or inaccurate information because the devices’ orientation with respect to the operator cannot be determined accurately. Mobile devices may be placed in pockets, mounted in cars or otherwise provided in a location whose orientation relative to the user cannot be determined. As still another complication, the mobile devices’ orientation may change with respect to the user.

[0020] Aspects of the present disclosure provide systems and methods for determining an estimated orientation and position of a user’s head using worn ear pieces and leveraging the estimated head orientation and position to provide information to the user, such as to help the user locate an object within the user’s environment or to provide information relating to an object identified within the estimated head orientation. In one exemplary method, a first spatial position of a first ear piece worn by a user and a second spatial position of a second ear piece worn by the user may each be determined. Based at least in part on the first spatial position of the first ear piece and the second spatial position of the second ear piece, an estimated head orientation of the user may be determined. The method may further include requesting information to be provided to the user based at least in part on the estimated head orientation and providing information to the user responsive to the request. The method may further determine a position of the user’s head based on the first and second spatial positions of the ear pieces, which also may serve as a basis for the provided information.
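As a rough illustration of this idea (not taken from the patent itself), the sketch below computes a head-center position and a facing direction from two ear-piece coordinates. The coordinate convention (z up, the user’s left ear on the +y side when the user faces +x) and all function names are assumptions made for the example.

```python
import math

def estimate_head_pose(left_ear, right_ear):
    """Estimate a head-center position and facing direction from ear positions.

    left_ear, right_ear: (x, y, z) coordinates of the left and right ear pieces.
    Assumed convention: z is up and the user's left ear sits on the +y side
    when the user faces +x.  Both the convention and the names are invented
    for this sketch.
    """
    lx, ly, lz = left_ear
    rx, ry, rz = right_ear

    # Head center: midpoint of the segment joining the two ear pieces.
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0, (lz + rz) / 2.0)

    # Left-to-right inter-ear vector projected onto the horizontal plane.
    ex, ey = rx - lx, ry - ly

    # Rotating that vector 90 degrees counterclockwise about the vertical
    # axis yields a horizontal vector pointing out of the user's face.
    fx, fy = -ey, ex
    norm = math.hypot(fx, fy)
    if norm == 0:
        return center, None, None  # ear pieces coincide; orientation undefined
    facing = (fx / norm, fy / norm, 0.0)

    # Yaw of the facing direction in degrees, measured from the +x axis.
    yaw = math.degrees(math.atan2(facing[1], facing[0]))
    return center, facing, yaw

# Example: ears 16 cm apart, user facing roughly +x.
print(estimate_head_pose((0.0, 0.08, 1.6), (0.0, -0.08, 1.6)))
```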

[0021] FIG. 1 illustrates a system 100 according to an aspect of the present disclosure. The system 100 may include a pair of ear pieces (collectively 104) and a plurality of positional nodes 106.1-106.N. The ear pieces 104 may be worn by a user 102 in a selected manner, such as by wearing a left ear piece 104a in a left ear of the user 102 and a right ear piece 104b in a right ear of the user 102. The ear pieces 104 may determine their relative position with reference to signals generated by the positional nodes 106.1-106.N. From the relative position determination, the ear pieces 104 may estimate an orientation and position of the user’s head, including the head’s facing.

[0022] The positional nodes 106.1-106.N may be Wi-Fi access points that transmit Service Set Identifier (SSID) and Media Access Control (MAC) data, cellular network transmitters (e.g., base stations or small cells), any other suitable wireless access points, and/or a combination thereof. The positional nodes 106.1-106.N may be provided in a variety of ways. In one aspect, the positional nodes 106.1-106.N may be deployed about a physical space at known locations, and transmit (and/or receive) ranging signals.

[0023] The ear pieces 104 may include receiver circuitry to receive signals from these nodes 106.1-106.N and estimate location therefrom. In one aspect, the ear pieces 104 may estimate location wholly independently of other components, in which case, the ear pieces 104 may include both receiver circuitry to receive transmitted signals from the nodes 106.1-106.N and processing circuitry to estimate location based on the received signals. In another aspect, the ear pieces 104 may work cooperatively with other processing systems, such as a user device 110 and/or a server 140, to estimate location. In this latter aspect, the ear pieces 104 may receive transmitted signals from the nodes 106.1-106.N and develop intermediate signals representing, for example, strength and/or timing information derived from the received signals; other components (the device 110 and/or the server 140) ultimately may estimate location from the intermediate signals.
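For readers unfamiliar with how ranging measurements become a position estimate, the following sketch shows one conventional approach, linearized least-squares trilateration, applied to made-up node positions and distances; the patent does not prescribe this particular method. If each ear piece produces its own fix this way, the two fixes can feed the head-orientation estimate sketched above.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from ranges to nodes at known positions.

    anchors: (N, 2) array of node (x, y) positions, N >= 3.
    ranges:  length-N array of measured distances to each node.
    A real system would also weight by signal quality and filter over time;
    this shows only the core geometry.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x1, y1 = anchors[0]
    r1 = ranges[0]

    # Linearize by subtracting the first range equation from the others.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - (x1 ** 2 + y1 ** 2)
         - ranges[1:] ** 2 + r1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with made-up node positions and ranges (meters).
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
dists = [5.0, 6.4, 5.8, 7.1]
print(trilaterate(nodes, dists))
```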

[0024] Aspects of the present disclosure also provide techniques for consuming data representing the estimated head orientation that is obtained from use of the ear pieces 104. For example, the estimated head orientation and/or position may be used in informational and/or navigational searches as discussed above, in which search results may be presented to a user 102 using references that are tied to the user’s head orientation. For example, estimations of orientation may be applied in the following use cases:

[0025] In one aspect, a virtual guide service may employ estimates of a user’s head orientation and/or position to provide contextual information that is tailored to a user’s frame of reference. For example, a user’s field of view may be estimated from the user’s head orientation and/or position and compared to information of interest. When objects of interest are determined to be located within or near the user’s field of view, contextual information may be provided to the user. For example, when navigating a shopping area to reach a desired item, an estimation of head orientation and/or position may determine which direction the user 102 is facing at the onset of a search. Those search results may factor the head orientation and/or position into the presented information (e.g., “the eggs are behind you and to the right. Turn around and turn right at the end of the row.”). In another example, a user that browses through a museum tour may be presented information on objects that are estimated to be within the user’s field of view based on estimates of the user’s head orientation and/or position. In one aspect, user input and search results may be via spoken exchange, in which case, the ear pieces 104 may capture user commands via a microphone and provide search results through speakers (not shown). In another aspect, user input and/or search results may be provided via the associated device 110.
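A minimal sketch of the kind of frame-of-reference instruction described above might look like the following; the angular buckets, distance frame, and phrasing are invented for illustration and use the same assumed coordinate convention as the earlier sketch.

```python
import math

def relative_instruction(head_pos, facing, object_pos):
    """Describe where an object lies relative to the user's facing direction.

    head_pos, object_pos: (x, y) positions in the same horizontal frame.
    facing: unit (x, y) vector of the head's facing direction.
    The angular thresholds and wording are arbitrary choices for illustration.
    """
    dx = object_pos[0] - head_pos[0]
    dy = object_pos[1] - head_pos[1]

    # Signed angle from the facing direction to the object, in degrees
    # (positive toward the user's left under the assumed convention).
    angle = math.degrees(
        math.atan2(facing[0] * dy - facing[1] * dx,
                   facing[0] * dx + facing[1] * dy))

    if abs(angle) <= 30:
        return "The item is ahead of you."
    if abs(angle) >= 150:
        return "The item is behind you; turn around."
    side = "left" if angle > 0 else "right"
    return f"The item is to your {side}; turn {side} about {abs(round(angle))} degrees."

# Example: facing +x, object off to the user's right.
print(relative_instruction((0.0, 0.0), (1.0, 0.0), (3.0, -3.0)))
```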

[0026] In another aspect, a spatial audio service may employ estimates of a user’s head orientation and/or position to emulate a three-dimensional audio space. For example, audio information may be provided to the user through speakers in the ear pieces 104. If/when the user changes orientation of the head, the audio playback may be altered to provide effects that emulate a three-dimensional space. For example, in a space with live performances of music, audio may be provided from a specific location in free space (e.g., a stage). To emulate the effect as a user changes orientation of the head, audio playback through speakers of the ear pieces 104 may be altered to emulate changes that arise due to the changed orientation. If one ear is oriented toward the direction from which audio is to be sourced, volume of audio in the associated speaker may be increased. Similarly, if another ear is oriented away from the direction from which audio is to be sourced, volume of the audio in the associated speaker may be decreased. Similarly, an immersive audio experience may locate different sources of audio in different locations in a modeled three-dimensional space; as the user changes orientation within this space, the contributions of each source may be altered accordingly.
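The volume adjustment described above can be sketched as a simple per-ear gain that depends on how squarely each ear faces the source. This is only an illustrative assumption; a real spatial audio renderer would also use head-related transfer functions and interaural time differences, which are omitted here.

```python
import math

def per_ear_gains(facing, source_dir, min_gain=0.3):
    """Crude per-ear volume weighting toward an audio source.

    facing: unit (x, y) vector of the head's facing direction.
    source_dir: unit (x, y) vector from the head toward the audio source.
    Returns (left_gain, right_gain) in [min_gain, 1.0].  The mapping and the
    min_gain floor are made-up choices for this sketch.
    """
    # Outward directions of the left and right ears (perpendicular to facing,
    # under the convention that the left ear is 90 degrees counterclockwise).
    left_ear_dir = (-facing[1], facing[0])
    right_ear_dir = (facing[1], -facing[0])

    def gain(ear_dir):
        # Cosine similarity with the source direction, mapped from [-1, 1]
        # onto [min_gain, 1.0].
        cos = ear_dir[0] * source_dir[0] + ear_dir[1] * source_dir[1]
        return min_gain + (1.0 - min_gain) * (cos + 1.0) / 2.0

    return gain(left_ear_dir), gain(right_ear_dir)

# Example: facing +x, source directly to the user's left (+y):
# the left speaker is driven louder than the right.
print(per_ear_gains((1.0, 0.0), (0.0, 1.0)))
```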

[0027] In a further aspect, a virtual reality service may employ estimates of a user’s head orientation and/or position to govern presentation of visual information through a peripheral device (for example, a display). Here, a user’s field of view in a modeled three-dimensional space may be estimated from estimates of a user’s head orientation in free space. Visual elements may be presented on an associated display (e.g., goggles or other display device) by mapping the estimated field of view to content in the modeled three-dimensional space.

[0028] The ear pieces 104 and/or the device 110 may be communicatively connected, via a network 150, to a server 140 to effectuate various operations described herein relating to the estimated head orientation of the user 102. The network 150 may represent any number of networks capable of conveying the various data communications described herein, including for example wireline and/or wireless communication networks. Representative networks include telecommunications networks (e.g., cellular networks), local area networks, wide area networks, and/or the Internet.

[0029] The server 140 may represent one or more computing devices that may interact with the device 110 and/or the ear pieces 104. The server 140, for example, may provide a service to the user that incorporates estimated head orientation and/or position as an input that can alter provision of the service. The server 140 may include an environmental data repository 142 that may store and provide information relating to an environment in which the user 102 may be present (e.g., audio information or visual information in the examples illustrated above). For example, the environmental data repository 142 may include information describing one or more objects found in the environment. As used herein, an “object” may refer to any suitable physical or logical element found in a physical environment or a virtual (e.g., computer generated) environment. For example, an object may be a physical object, such as a product found in a store, an artifact in a museum, or a building or other physical landmark in an outside environment. As another example, an object may further refer to a logically-defined object, such as an area within an environment (e.g., the entrance or exit area of a building, the area surrounding an animal exhibit at a zoo, an area encompassing a section of store shelving, or an area collectively representing one or more sub-objects). As yet another example, an object may also refer to an area or element in a computer user interface, wherein the computer user interface is considered the environment.

[0030] The environmental data repository 142 may include further information relating to each object, such as in a cross-referenced table. The information relating to an object may include a name or other identifier of the object, a location of the object within the environment (e.g., an x, y, z coordinate set), and a description of the object. The specific information relating to an object may depend on the particular type of object. For example, information relating to a store product may include a price of the product, a category of the product, a product description, and a location of the product in the store. Information relating to a food product may further include a list of ingredients or allergen information. Information relating to an artifact in a museum may include the location of the artifact in the museum and educational information on the artifact, such as the time period of the artifact and the historical relevance of the artifact. Information relating to a building or physical landmark may include, for example, an address, a visual description (e.g., an indication that a building is made from red brick), and architectural or historical information relating to the building or physical landmark.
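A hypothetical repository record of the kind described above might look like the following; the field names and values are invented solely for illustration.

```python
# Hypothetical environmental-data-repository records (all fields invented).
store_record = {
    "id": "sku-10482",
    "name": "Eggs, dozen, large",
    "category": "dairy",
    "location": {"x": 42.5, "y": 7.0, "z": 1.2},   # store coordinates, meters
    "price": 3.49,
    "description": "Grade A large eggs",
    "allergens": ["egg"],
}

museum_record = {
    "id": "artifact-0072",
    "name": "Bronze ceremonial bowl",
    "location": {"x": 12.0, "y": 30.5, "z": 1.5},  # gallery coordinates, meters
    "period": "c. 1500 BCE",
    "description": "Hand-hammered bronze bowl recovered in 1924.",
}
```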

[0031] The information relating to an object provided by the environmental data repository 142 may be leveraged in various operations described in greater detail herein. For example, the position of an object may be compared with the facing of a user’s head to determine if the object corresponds to that facing and, if not, provide instructions to the user 102 to adjust his or her body position and/or head orientation so that the object corresponds with the head’s facing. As another example, a table of objects from the environmental data repository 142 may be cross-referenced with a facing direction of a user’s head to determine one or more objects that correspond with that facing direction. Upon determining that an object corresponds with the facing direction, additional information may be provided to the user 102, such as the price and ingredients for a food product or historical information for a museum artifact.
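One way to perform that cross-reference, offered here only as an illustrative assumption, is to keep the objects whose direction from the head falls within a view cone around the facing direction; the 30-degree half angle and 25 m range cutoff below are arbitrary examples. The matching records’ price, ingredient, or historical fields could then be read back to the user through the speakers.

```python
import math

def objects_in_view(head_pos, facing, objects,
                    half_angle_deg=30.0, max_range=25.0):
    """Return repository objects that fall within a simple view cone.

    head_pos: (x, y) head position; facing: unit (x, y) facing direction.
    objects: iterable of dicts with a "location" entry holding "x" and "y"
    (as in the hypothetical records above).
    """
    hits = []
    for obj in objects:
        dx = obj["location"]["x"] - head_pos[0]
        dy = obj["location"]["y"] - head_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue
        # Angle between the facing direction and the direction to the object.
        cos = (facing[0] * dx + facing[1] * dy) / dist
        if math.degrees(math.acos(max(-1.0, min(1.0, cos)))) <= half_angle_deg:
            hits.append(obj)
    return hits
```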

[0032] In an aspect, the environmental data repository 142, or portion thereof, may be stored locally on the device 110, in addition to or instead of being stored on the server 140.

[0033] FIG. 2 is a simplified block diagram of the ear pieces 104 and the device 110 according to an aspect of the present disclosure. As discussed above, the ear pieces 104 may include a left ear piece 104a and a right ear piece 104b. The left ear piece 104a is intended to be worn on or in the user’s 102 left ear and the right ear piece 104b is intended to be worn on or in the user’s 102 right ear. The ear pieces 104 may possess ergonomic configurations that are suitable for human users, which may resemble the in-ear ear pieces or over-the-ear headphones commonly used with mobile devices, such as a smart phone or portable music player, to listen to music and/or effectuate voice communication.

[0034] The ear pieces 104 may include a pair of transceivers 212a, 212b, a pair of positional transceivers 216a, 216b, a motion sensor 218, and one or more processors 222a, 222b. The transceivers 212a, 212b may provide communication connectivity between the two ear pieces 104 and, optionally, with other components such as the device 110 and/or the server 140. The positional transceivers 216a, 216b may receive signals from the nodes (FIG. 1) at each of the ear pieces 104 and estimate characteristics of the received signals. The motion sensor 218 may estimate orientation of an ear piece 104 with respect to gravity. The processors 222a, 222b may manage overall operation of the ear pieces 104.

[0035] The ear pieces 104 also may include components that are common to headsets. For example, FIG. 2 illustrates the ear pieces 104a, 104b as having respective speakers 214a, 214b and at least one microphone 220 between them.

[0036] As mentioned, one or more of the ear pieces 104 may each include a processor 222a, 222b. The processors 222a, 222b may facilitate various operations disclosed herein, including managing operations of the ear pieces 104 and facilitating communication with other components. For example, one or more of the processors 222a, 222b may perform computations to calculate the estimated head orientation of the user. One or more of the processors 222a, 222b may interface with other components, such as the device 110 and/or the server 140, to perform a search based on an estimated orientation of the user’s head. As another example, one or more of the processors 222a, 222b may compose spoken language prompts, instructions, or other audio outputs to the user 102. As yet another example, one or more of the processors 222a, 222b may perform speech conversions of speech input provided by the user 102. As another example, the one or more processors may generate one or more computer-generated elements to be presented on the display 238 or other display device associated with the device 110. The computer-generated elements may be superimposed over a real-time display or view of the user’s 102 environment. Such presentation is sometimes referred to as augmented reality.

[0037] As indicated, the transceivers 212a, 212b may provide communication connectivity between the ear pieces 104 and, optionally, with other components such as the device 110 and/or the server 140. The transceivers 212a, 212b may each be embodied as a wireless communication interface (e.g., Bluetooth, Wi-Fi, or cellular) and/or a wired communication interface (e.g., a Lightning® interface from Apple, Inc. of Cupertino, Calif.). The transceivers 212a, 212b may transmit and/or receive digital audio and/or voice data. For example, the ear pieces 104 may receive audio instructions from the device 110 via the transceivers 212a, 212b that direct the user 102 to move in a particular direction or direct the user 102 to turn the user’s 102 head in a particular manner. The transceivers 212a, 212b additionally may transmit and/or receive data relating to the position and orientation of the left and right ear pieces 104a, 104b, as well as the orientation of the user’s 102 head. For example, having determined that the left and right ear pieces 104a, 104b are located at a particular position in space and oriented at a certain angle, such data may be transmitted via the transceivers 212a, 212b to the device 110 so that the device 110 may generate instructions for the user 102 to modify the orientation of his or her head.

[0038] The positional transceivers 216a, 216b may receive signals from the nodes 106.1-106.N (FIG. 1) at each of the ear pieces 104 and estimate characteristics of the received signals, which may be used to determine the position of each of the left and right ear pieces 104a, 104b. In an aspect in which the positional nodes 106.1-106.N are configured to provide and/or receive wireless ranging signals, the positional transceivers 216a, 216b may be similarly configured to transmit and/or receive those wireless ranging signals. In an aspect in which the positional nodes 106.1-106.N are Wi-Fi access points, the positional transceivers 216a, 216b may be configured with Wi-Fi interfaces to communicate with the Wi-Fi access points. In an aspect in which the positional nodes 106.1-106.N are GPS transmitters, the positional transceivers 216a, 216b may be configured as GPS receivers. In an aspect in which the positional nodes 106.1-106.N are cellular network transmitters, the positional transceivers 216a, 216b may include a cellular interface for communicating with the cellular network transmitters. The positional transceivers 216a, 216b may be disposed at locations within the respective ear pieces 104a, 104b that maximize the distance between the two transceivers and, thus, allow an optimally accurate determination of the relative spatial positions of the ear pieces 104a, 104b.
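The patent does not specify how received-signal characteristics map to distance, but a common approach for signal-strength readings is a log-distance path-loss model, sketched below with made-up calibration values. Ranges obtained this way could feed a trilateration step like the one sketched earlier.

```python
def rssi_to_range(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Approximate distance (meters) from a received-signal-strength reading.

    tx_power_dbm: expected RSSI at 1 m from the node (a calibration value).
    path_loss_exp: environment-dependent exponent (~2 in free space, higher indoors).
    Both defaults are invented; real deployments calibrate them per site.
    """
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# Example: a reading of -62 dBm maps to roughly 7.6 m with these defaults.
print(round(rssi_to_range(-62.0), 1))
```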

[0039] In one aspect, the positional transceivers 216a, 216b and the transceivers 212a, 212b may be respectively embodied as the same components. That is, the left ear piece 104a may include a transceiver 212a configured to fulfill the aforementioned functionality of both the positional transceiver 216a and the transceiver 212a, and the right ear piece 104b may include another transceiver 212b configured to fulfill the functionality of both the positional transceiver 216b and the transceiver 212b.

[0040] The motion sensor 218 may estimate orientation of an ear piece 104a, 104b with respect to gravity. The motion sensor 218 may be an accelerometer or a gyroscope, as some examples. The motion sensor 218 may measure the orientation of the user’s 102 head about an axis extending through the left and right ear pieces 104a, 104b (i.e., the forward-backward orientation of the user’s 102 head, such as when the user 102 nods). The motion sensor 218 may also measure the side-to-side orientation or tilt of the user’s 102 head.
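As an illustrative assumption about how such a reading might be used, the nod (pitch) angle about the inter-ear axis can be recovered from a static accelerometer sample dominated by gravity; the axis convention below is invented for the sketch.

```python
import math

def pitch_from_accel(ax, ay, az):
    """Estimate head pitch (nod) from a static accelerometer sample.

    Assumes the sensor's x axis points out of the user's face and its z axis
    points up through the head when the head is level, and that the ear piece
    is roughly still so the reading is dominated by gravity (units of g or
    m/s^2; only the ratio matters).  Positive pitch means looking up.
    """
    return math.degrees(math.atan2(ax, az))

# Example: level head reads about (0, 0, 1) g -> 0 degrees.
print(pitch_from_accel(0.0, 0.0, 1.0))
# Example: looking up about 30 degrees.
print(pitch_from_accel(0.5, 0.0, 0.866))
```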

[0041] The left and right ear pieces 104a, 104b additionally may each include a respective speaker 214a, 214b for providing audio to the user 102. For example, audio instructions to the user 102 based on the estimation of the user’s 102 head orientation may be delivered via the speakers 214a, 214b. At least one of the left and right ear pieces 104a, 104b may be configured with a microphone 220. The microphone 220 may allow, for example, the user 102 to provide speech input to the device 110 and/or the ear pieces 104 themselves.

[0042] As indicated, in some aspects, the ear pieces 104 may work cooperatively with other devices, such as the device 110. The device 110 may be embodied as a mobile device (e.g., a smart phone, a cellular phone, a portable music player, a portable gaming device, or a tablet computer), a wearable computing device (e.g., a smart watch or an optical head-mounted display), or other type of computing device. The device 110 may be configured with a memory 234 and a processor 232. The memory 234 may store an operating system 242 and one or more applications 244.1-244.N that may perform various operations relating to an estimated orientation of the user’s head. The operating system 242 and the applications 244.1-244.N may be executed by the processor 232. For example, the processor 232 and/or applications 244.1-244.N may perform one or more of the functions described in relation to the processors 222a, 222b, such as determining the spatial positioning and orientation aspects of the ear pieces 104 and/or providing information, instructions, or other output to the user 102.

[0043] The device 110 may further include a display 238 to visually display the operating system 242 and the applications 244.1-244.N and facilitate interaction of the user 102 with the device 110. The device 110 may further be configured with an input (not shown), such as a pointing device, touchscreen, one or more buttons, a keyboard, or the like by which the user may interact with the operating system 242 and the applications 244.1-244.N.

[0044] The device 110 may additionally be configured with a transceiver 240 and/or a positional transceiver 236. The transceiver 240 may include a wireless or a wired communication interface and may be used to effectuate communication between the device 110 and the ear pieces 104 and/or between the device 110 and the server 140. For example, the transceiver 240 may include a Bluetooth interface to communicate with correspondingly-equipped wireless ear pieces 104. As another example, the transceiver 240 may include a cellular communication interface to enable communication over a cellular network with, for instance, the server 140. It will be appreciated that the device 110 may include more than one transceiver 240, such as one for communicating with the ear pieces 104 and another for communicating with the server 140. The positional transceiver 236 of the device 110 may be used to determine the location of the device 110 and, therefore, also the location of the user 102. For example, the positional transceiver 236 may include a GPS receiver.

[0045] FIG. 3 illustrates an exemplary use of ear pieces 104 according to an aspect of the present disclosure. As illustrated, the ear pieces 104 are worn by a user 102 on the user’s head 103. The spatial position of the head 103 may be represented as X, Y, and Z coordinates on the respective axes. The estimate of the user’s 102 head orientation may be determined by establishing the discrete rotation angles of the head 103 (shown by α, β, and γ angles). In this example, the angle α may refer to the angle of rotation about the X axis, i.e., the lateral (left/right) or sideways tilt of the head 103. The angle α may also be referred to as “roll.” Similarly, angle β may refer to the angle of rotation about the Y axis, i.e., the forward-backward rotational position of the head 103, such as during a nod. The angle β may also be referred to as “pitch.” Finally, the angle γ may refer to the angle of rotation about the Z axis, i.e., the rotational position of the head 103 about the axis of the neck, such as while swiveling the head 103 from left to right. The angle γ may also be referred to as “yaw.” Different implementations, of course, may develop different spatial and/or angular representations to derive estimation of head orientation and spatial positioning.

[0046] In addition, the rotation angles α, β, and γ may define a planar orientation of a user’s 102 head. The direction orthogonal to the planar orientation of the user’s face may be considered the direction to which the head is facing. In some instances, it may be inferred that the head’s directional facing may generally correspond with the user’s 102 viewing direction or field of view.
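Because roll spins the head about the facing axis, the facing direction itself depends only on β and γ. Under an assumed convention (face along +X at zero angles, Z up, positive pitch meaning looking up), the facing vector can be sketched as follows; the convention is not specified in the patent.

```python
import math

def facing_vector(beta_deg, gamma_deg):
    """Facing direction from pitch (beta) and yaw (gamma), both in degrees.

    Assumes the face points along +X when all angles are zero, Z points up,
    and positive beta means looking up.  Roll (alpha) does not change the
    facing direction, so it does not appear here.
    """
    b = math.radians(beta_deg)
    g = math.radians(gamma_deg)
    return (math.cos(b) * math.cos(g),
            math.cos(b) * math.sin(g),
            math.sin(b))

# Example: head turned 90 degrees toward +Y (yaw) while level.
print(facing_vector(0.0, 90.0))   # approximately (0, 1, 0)
# Example: looking up 45 degrees while facing +X.
print(facing_vector(45.0, 0.0))   # approximately (0.707, 0, 0.707)
```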

……
……
……
