
Patent: Systems and methods for smart devices


Publication Number: 20210289336

Publication Date: 2021-09-16

Applicant: Facebook

Abstract

The disclosure may include systems and methods for clock synchronization under random transmission delay conditions, as well as systems and methods for horizon leveling of wrist-captured images. In addition, the disclosure may include methods, systems, and devices for batch message transfer. The disclosed methods may also include a mobile computing device receiving an indication to initiate an emergency voice call from a user of the mobile computing device and initiating an Internet Protocol Multimedia Subsystem (IMS) emergency call. In addition, systems, methods, and devices for automatic content display may be disclosed. Various other related methods and systems are also disclosed.

Claims

  1. A method comprising: a processor; a memory device comprising instructions that, when executed by the processor, perform at least one of: a process for rotating images comprising: capturing, using an optical sensor, image data; determining a reference orientation associated with the captured image data; rotating the captured image data based on the reference orientation; and storing the rotated image data; a process for batching messages comprising: configuring a wireless communications unit of a first electronic device to a low power mode; receiving, by a second electronic device, at least one message; queuing, in a buffer memory of the second electronic device, the at least one message; determining a threshold associated with the queued at least one message; configuring the wireless communications unit of the first electronic device to a normal power mode; and sending the at least one message from the second electronic device to the first electronic device when the threshold is exceeded; a process for emergency calls comprising: receiving, by a mobile computing device, an indication to initiate a voice call by a user of the mobile computing device; determining, by the mobile computing device, that the voice call is an emergency voice call; initiating, on the mobile computing device, an Internet Protocol Multimedia Subsystem (IMS) emergency call based on determining that the voice call is an emergency voice call; and for a duration of the IMS emergency call: receiving, by the mobile computing device, location data for the user while providing a voice service for the IMS emergency call; and blocking, by the mobile computing device, at least one data service to at least one application executing on the mobile computing device; or a process for updating smartwatch displays comprising: receiving motion data from an inertial measurement unit of a smartwatch; training a first model with the motion data to detect a motion of the smartwatch towards a face of a user of the smartwatch such that a display screen of the smartwatch is within a field of view of the user; receiving image data from an image sensor of the smartwatch; training a second model with the image data to detect a face of a user of the smartwatch; and based on the trained first model and the trained second model, displaying content on the display screen when the display screen is within the field of view of the user and the face of the user is detected.

  2. A system comprising at least one of: an optical sensor device that comprises: an optical sensor; at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: capture, using the optical sensor, image data; determine a reference orientation associated with the captured image data; rotate the captured image data based on the reference orientation; and store the rotated image data; or a smartwatch that comprises: a display screen; an inertial measurement unit configured to detect a motion of the smartwatch; an image sensor configured to capture at least one image; and a processor configured to: determine that the motion of the smartwatch comprises a motion towards a face of a user of the smartwatch; determine that the face of the user is within a field of view of the image sensor based on the at least one captured image; and display content on the display screen when the display screen is determined to be within the field of view of the user and the face of the user is determined to be within the field of view of the image sensor.

  3. A system comprising: a first device with a second clock; a host system with a first clock, wherein the host system: sends a first transmission to the first device at a first time measured by the first clock and identified by a first timestamp; receives a second transmission from the first device at a fourth time measured by the first clock and identified by a fourth timestamp, wherein the second transmission comprises: a second timestamp, measured by the second clock, indicating a second time at which the first transmission was received by the first device from the host system; and a third timestamp, measured by the second clock, indicating a third time at which the second transmission was sent by the first device to the host system; and determines, based at least in part on the first, second, third, and fourth timestamps, an estimated offset of the second clock relative to the first clock and an estimated period of the second clock relative to the first clock.

Description

[0001] This application claims the benefit of U.S. Provisional Application No. 63/104,537, filed Oct. 23, 2020, U.S. Provisional Application No. 63/143,504, filed Jan. 29, 2021, U.S. Provisional Application No. 63/148,812, filed Feb. 12, 2021, U.S. Provisional Application No. 63/164,180, filed Mar. 22, 2021, and U.S. Provisional Application No. 63/179,960, filed Apr. 26, 2021, the disclosures of each of which are incorporated, in their entirety, by this reference.

BRIEF DESCRIPTION OF DRAWINGS AND APPENDICES

[0002] The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.

[0003] FIG. 1 is an illustration of an exemplary time synchronization protocol.

[0004] FIGS. 2A and 2B are illustrations of an exemplary human-machine interface configured to be worn around a user’s lower arm or wrist.

[0005] FIGS. 3A and 3B are illustrations of an exemplary schematic diagram with internal components of a wearable system.

[0006] FIGS. 4A and 4B are diagrams of an example wearable electronic wrist device.

[0007] FIG. 5 is a flow chart of a method for horizon leveling of captured images.

[0008] FIGS. 6A-C depict an example captured image and horizon leveling examples thereof.

[0009] FIGS. 7A-C depict another example captured image and horizon leveling examples thereof.

[0010] FIG. 8 is a perspective view of an example wristband system, according to at least one embodiment of the present disclosure.

[0011] FIG. 9 is a perspective view of a user wearing an example wristband system, according to at least one embodiment of the present disclosure.

[0012] FIG. 10 illustrates a smartphone transferring messages to a wearable device, according to at least one embodiment of the present disclosure.

[0013] FIG. 11 is a chart illustrating normalized power consumption of a wireless communications unit as a function of aggregate message size, according to at least one embodiment of the present disclosure.

[0014] FIG. 12 is a flowchart of a method of reducing power consumption in a wireless communications unit by batch messaging, according to at least one embodiment of the present disclosure.

[0015] FIG. 13 is an illustration of a user interacting with a mobile computing device capable of placing E911 service calls.

[0016] FIG. 14 is a flow diagram of an exemplary computer-implemented method for implementing an E911 emergency service on a mobile computing device.

[0017] FIG. 15 is a block diagram of an example system that includes modules for use in implementing E911 call support for a mobile computing device.

[0018] FIG. 16 illustrates an exemplary network environment in which aspects of the present disclosure may be implemented.

[0019] FIG. 17 is a flow diagram of an exemplary computer-implemented method for providing emergency voice call services and support on mobile computing devices.

[0020] FIG. 18 is a plan view of an example wristband system, according to at least one embodiment of the present disclosure.

[0021] FIG. 19 illustrates a user wearing an example wristband system, according to at least one embodiment of the present disclosure.

[0022] FIG. 20 illustrates a user viewing content on an example wristband system, according to at least one embodiment of the present disclosure.

[0023] FIG. 21 is an example block diagram of a wristband system, according to at least one embodiment of the present disclosure.

[0024] FIG. 22 is a flow diagram illustrating an example method of automatically displaying content on an example wristband system, according to at least one embodiment of the present disclosure.

[0025] Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within this disclosure.

DETAILED DESCRIPTION

Example Systems and Methods for Clock Synchronization Under Random Transmission Delay Conditions

[0026] A host system that relies on real-time data from multiple devices may rely on accurate timestamps associated with the data from the devices in order to accurately integrate the data from diverse sources. However, separate clocks, even if initially synchronized, may tend to drift from each other over time. Systems and methods are described herein that obtain an accurate representation of multiple clocks for separate devices (i.e., their rates and relative offsets). By obtaining an accurate representation of the devices' clocks, these systems and methods facilitate synchronizing inputs from the devices (e.g., from two or more electromyography (EMG) devices, allowing a host system to obtain an accurate neuromuscular and/or musculoskeletal representation of a user wearing the EMG devices). Bidirectional communication of simple time synchronization packets may be used to estimate devices' clock rates and offsets accurately. The clocks of multiple devices may thereby be kept in sync with a host system's clock, ensuring that the host system has an accurate view of the timing of inputs from the devices.

[0027] In some examples, the systems and methods described herein may use a recursive least squares algorithm to estimate transmission delays of timestamped data (e.g., data sent from devices with separate clocks to a host system). The recursive least squares algorithm may use adaptive bounds based on observed transmission delays from a device; for example, the lowest observed transmission delays may be taken to lie within a constant offset of a substantially constant minimum transmission delay from the device.

[0028] In one example, a host system clock may estimate the offset and the period of a device clock using a bidirectional time synchronization protocol. For example, the host system may send a message (including, e.g., the time of the host system clock) to the device. The device may send a return message to the host system that includes the time recorded by the device clock when the host system message was received by the device, the time recorded by the device clock when the device return message was sent to the host system, and, in some examples, the time recorded by the host system clock when the host system message was sent.

[0029] In some examples, the systems and methods described herein may repeat the bidirectional communication process multiple times and may, thereby, estimate the period of the device clock using linear regression and may bound the offset of the device clock with the constraint that all transmission delays must be positive. The systems described herein may determine the upper and lower bounds of the offset of the device clock by the minimum possible transmission delays in both directions.

[0030] In some embodiments, systems and methods described herein may use a recursive least squares algorithm with adaptive bounds. For example, the 1st percentile of observed transmission delays may remain within a constant offset of the constant minimum delay. Therefore, by tracking the 1st percentile and adjusting the model offset based on recent observations, the systems and methods described herein may more accurately estimate transmission delays.

[0031] In order to synchronize inputs arriving from independent devices (e.g., 2 EMG devices), the systems described herein may obtain an accurate representation of each device's clock in terms of the host PC clock. While estimating the sample time of incoming data streams using only unidirectional communication (i.e., device to host PC) is possible, it could be less accurate than using bidirectional time synchronization protocols (as is done in NTP and almost all other time synchronization algorithms). In particular, the offset between the device clock and the host PC clock cannot be determined due to possible, unobservable, deterministic transmission delays.

[0032] In view of the above, systems described herein may use bidirectional communication of simple time synchronization packets to enable estimation of the device clock’s rate and offset accurately. By incorporating such communication protocols into independent devices (e.g., 2 EMG devices), these systems will be able to keep all connected devices in sync with the host PC’s clock and have high confidence that the pipeline is not affected by unknown timing issues.

[0033] The approaches to sync the device clock to the host PC clock described herein may be robust to various sources of noise (including, e.g., clock drift and non-stationary delay distribution). These approaches may be applied to a setup to estimate a device's clock rate and an upper bound on its offset (even without bidirectional communication). In addition, the systems described herein may estimate how well these approaches perform in synchronizing two devices under realistic conditions. The benefit of implementing bidirectional communication for a time sync protocol, even without knowing the minimal transmission delay time and how it varies, may be estimated by running dual-band experiments under various conditions (many bands, environments, etc.).

[0034] The solution detailed herein may also include a mechanism to adjust various parameters (such as time decay constants) based on streaming data statistics.

[0035] The term "clock" may refer to any device that measures time by counting the number of ticks that pass in a given time interval. Note that having a clock does not necessarily provide the capability of telling what time it is. Nor is it necessarily known how one clock relates to other clocks. Only if a clock is stable, and its rate and offset are known relative to, for example, a standard atomic clock, can the time at each tick be calculated for the clock.

[0036] However, there is no such thing as a perfectly stable clock. All clocks have fluctuations in the rate of ticking (though for some these fluctuations are very small) that cause them to drift away from each other. Thus, if there are two clocks to synchronize (i.e., to model the relation between the ticks of one clock and the ticks of the other), a method may include continuously monitoring this drift and updating the model used to convert time from one clock to time in another.

[0037] Consider two clocks: a master clock (e.g., the host PC clock) and a slave clock (e.g., the device sample clock). For convenience, assume that the master clock ticks faster than the slave clock. Mark the clock ticks of the master clock with t and the ticks of the slave clock with u. Consider a span of time for which u and t are stable with respect to each other. This implies a linear relationship between u and t: t(u) = t_0 + τu.

[0038] The goal of a time synchronization protocol (in the present case) includes enabling the host PC (having access to the master clock) to estimate the offset, t_0, and the period, τ, of the slave clock.

[0039] However, u and t cannot be observed simultaneously, but only through a communication channel that carries some unknown (and, e.g., varying) transmission delay. The bidirectional time sync protocol described herein may therefore exchange clock time information between master and slave in a way that allows averaging out the random delays and obtaining a good estimate of t_0 and τ.

[0040] An example process is described with reference to FIG. 1. The master clock (or "first clock") sends (in a first transmission) its own time to the slave clock (or "second clock") at some time t_s, which is received by the slave clock, after some unknown delay, at some time u_r. After a short while, the slave clock then sends (in a second transmission) its own time u_s > u_r, along with t_s and u_r, back to the master clock. After an additional unknown delay, t_s, u_r, and u_s (recorded as first, second, and third timestamps, respectively) are received by the master clock at time t_r (recorded as a fourth timestamp).

[0041] The master clock now has two noisy measurements of t as a function of u: (u_r, t_s) and (u_s, t_r). By repeating this process many times, one can estimate τ using linear regression and bound t_0 using causality constraints (i.e., all delays must be positive). The upper and lower bounds of t_0 are determined by the minimal possible transmission delays in both directions. If the ratio between the minimal possible delays is known, a precise estimate of t_0 can be derived as well.
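To make the exchange concrete, the following Python sketch simulates repeated rounds of the FIG. 1 protocol and collects the two measurement sets. It is illustrative only: the delay distribution, tick rate, and timing constants are assumptions, not values from this disclosure.

```python
import math
import random

def simulate_exchanges(t0=0.25, tau=1.0 / 2000.0, d_min=0.002, n=500):
    """Simulate n rounds of the FIG. 1 bidirectional sync protocol.

    The slave clock is modeled as t(u) = t0 + tau*u (equation (1)).
    Each delay is d_min plus positive random jitter, so causality holds.
    Returns the (u_r, t_s) and (u_s, t_r) measurement sets of [0041].
    """
    down, up = [], []
    t = 0.0
    for _ in range(n):
        t_s = t                                              # master send time
        jitter = random.expovariate(1.0 / 0.004)             # mean 4 ms jitter
        u_r = math.floor((t_s + d_min + jitter - t0) / tau)  # equation (2)
        u_s = u_r + random.randint(1, 5)                     # slave replies a few ticks later
        jitter = random.expovariate(1.0 / 0.004)
        t_r = t0 + tau * u_s + d_min + jitter                # equation (4), un-floored
        down.append((u_r, t_s))
        up.append((u_s, t_r))
        t = t_r + 0.05                                       # next round ~50 ms later
    return down, up
```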

[0042] The upper and lower bounds of t_0 may be derived as follows. Assume a linear relationship between t and u:

t(u) = t_0 + τu    (1)

[0043] During the time sync protocol, the master clock sends its own time, t_s, to the slave clock. This information is received at the slave clock's device at time

u_r = ⌊(t_s + δ_{m→s} − t_0)/τ⌋    (2)

[0044] where δ_{m→s} is a random transmission delay that is distributed according to some probability distribution

δ_{m→s} ~ P_{m→s}(δ; δ_{m→s}^{min}, …)    (3)

[0045] where δ_{m→s}^{min} > 0 and P_{m→s} is such that P_{m→s}(δ; δ_{m→s}^{min}, …) = 0 for all δ < δ_{m→s}^{min}. Assume that this distribution is stationary and does not change in time. At some time u_s > u_r, the slave clock sends its own time back to the master clock, along with t_s and u_r. This information is received back at the master clock at time

t_r = ⌊t_0 + τu_s + δ_{s→m}⌋    (4)

[0046] where δ_{s→m} is a random transmission delay that is distributed according to some, possibly different, probability distribution

δ_{s→m} ~ P_{s→m}(δ; δ_{s→m}^{min}, …)    (5)

[0047] where δ_{s→m}^{min} > 0 and, similarly to P_{m→s}, P_{s→m} is such that P_{s→m}(δ; δ_{s→m}^{min}, …) = 0 for all δ < δ_{s→m}^{min}. Only the times t_s, u_r, u_s, and t_r are observable, but not the offset, period, or random delays.

[0048] Given this model for transmission delays, consider a set of measurements {(t_s, u_r)_i}_{i=1}^{N} and {(t_r, u_s)_i}_{i=1}^{M}. Estimate τ by linear regression and denote the estimated period as τ̂. If the distributions of delays in both directions are identical, and M = N, then the offset parameter given by the linear regression converges to the true offset, t_0, as the number of measurements approaches infinity. However, if the distributions are different, the linear regression offset will be biased.

[0049] In one example, the systems described herein may use causality considerations to derive hard bounds on t_0. For any measurement of (t_s, u_r)_i, re-write u_r as:

u_r = ⌊(t_s + δ_{m→s} − t_0)/τ⌋ ≡ (t_s + δ_{m→s} − t_0)/τ − ε    (6)

[0050] where 0 ≤ ε < 1 denotes the remainder of the floor operation. Now use the fact that δ_{m→s} ≥ δ_{m→s}^{min} to assert that for every measurement:

t_0 + τu_r − t_s + τε ≥ δ_{m→s}^{min}    (7)

t_0 + τu_r − t_s ≥ δ_{m→s}^{min} − τε    (8)

[0051] Since δ_{m→s}^{min} and ε are not known, the right-hand side of the above equation can be bounded by δ_{m→s}^{min} − τε ≥ −τ. Thus:

t_0 ≥ t_s − τu_r − τ    (9)

[0052] In a similar manner, assert that for every measurement of (t_r, u_s)_i:

t_r = ⌊t_0 + τu_s + δ_{s→m}⌋ ≡ t_0 + τu_s + δ_{s→m} − ε    (10)

t_r − t_0 − τu_s = δ_{s→m} − ε ≥ −1    (11)

t_0 ≤ t_r − τu_s + 1    (12)

[0053] Since equations (9) and (12) must hold for all measurements, it follows that:

t_0 ≥ max_i(t_s − τu_r) − τ    (13)

and

t_0 ≤ min_i(t_r − τu_s) + 1    (14)

[0054] The systems described herein may require that these bounds hold for the estimates of the offset and period, i.e., for a given estimate of the period τ̂:

t_0^UB ≡ min_i(t_r − τ̂u_s) + 1    (15)

t_0^LB ≡ max_i(t_s − τ̂u_r) − τ̂    (16)

t_0^LB ≤ t_0 ≤ t_0^UB    (17)

[0055] Further, given that the systems described herein collect enough samples (and the delay distribution is stationary), it is safe to assume that the estimate for the minimal delay is very close to the actual unknown minimal possible delay. That is, the offset estimate, t_0, obeys the following two equations:

t_0 − t_0^LB ≈ δ_{m→s}^{min}    (18)

t_0^UB − t_0 ≈ δ_{s→m}^{min}    (19)

[0056] If the minimal transmission delay is assumed to be symmetric (i.e., δ_{m→s}^{min} = δ_{s→m}^{min} ≡ δ^{min}), then solve for t_0:

t_0 = ½(t_0^UB + t_0^LB)    (20)

δ^{min} = ½(t_0^UB − t_0^LB)    (21)
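A minimal Python sketch of this estimator, assuming measurement sets shaped like those from the simulation above; the regression and bound arithmetic follow equations (13)-(21), while everything else is illustrative:

```python
import statistics

def estimate_clock(down, up):
    """Estimate period and offset from (u_r, t_s) and (u_s, t_r) pairs.

    tau_hat comes from ordinary least squares over all pairs; the offset
    bounds follow equations (15)-(16), and the symmetric-minimal-delay
    solution follows equations (20)-(21).
    """
    pts = down + up
    u_mean = statistics.fmean(u for u, _ in pts)
    t_mean = statistics.fmean(t for _, t in pts)
    tau_hat = (sum((u - u_mean) * (t - t_mean) for u, t in pts)
               / sum((u - u_mean) ** 2 for u, _ in pts))
    # The +1 slack is one master-clock tick in the patent's integer-tick units.
    t0_ub = min(t_r - tau_hat * u_s for u_s, t_r in up) + 1          # equation (15)
    t0_lb = max(t_s - tau_hat * u_r for u_r, t_s in down) - tau_hat  # equation (16)
    t0_hat = 0.5 * (t0_ub + t0_lb)                                   # equation (20)
    d_min_hat = 0.5 * (t0_ub - t0_lb)                                # equation (21)
    return tau_hat, t0_hat, d_min_hat
```

For instance, `estimate_clock(*simulate_exchanges())` should recover τ closely, with the offset bracketed by the causality bounds.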

[0057] Systems described herein may evaluate the offset and the period in an online (e.g., real-time) scenario. To this end, these systems may apply a recursive least squares algorithm to the data (e.g., t_s, u_s, t_r, u_r), as well as continuously update the upper- and lower-bound estimates, to evaluate the offset and period based only on past observations.

[0058] Systems described herein may operate under certain assumptions, including, e.g., that the master clock and slave clock are substantially stable relative to each other (i.e., little or no clock drift) and that the distribution of transmission delays is substantially stationary.

[0059] To model clock drift, the linear model t(u) = t_0 + τu is replaced with a stochastic process:

t(u) = t_0 + τu    (22)

τ(u) ~ P(τ(u) | τ(u−1), τ(u−2), …)    (23)

[0060] A simple linear model may not fit the data well. Accordingly, systems described herein may limit the data used to fit the parameters to only the most recent data within a given window.

[0061] In one example, one or more separate devices (with separate clocks) may transmit timestamped data to a host system over a period of time. In some examples, systems described herein may provide high-accuracy clock estimates (e.g., mapping clock and/or timestamp information relating to transmitting devices to clock and/or timestamp information relating to a host system) and/or may provide a guarantee that the time deltas between consecutive timestamps are accurate within a given tolerance level around a given nominal rate.

[0062] In some examples, transmission delays between a device and a host system may be non-stationary (e.g., may exhibit a bimodal distribution). Systems described herein may therefore employ a recursive least squares algorithm with adaptive bounds constraints to improve accuracy of clock metrics and/or timestamp estimates.

[0063] In one approach, systems described herein may estimate transmission delays by fitting a single linear model (t(u) = t_0 + τu), where τ is determined by linear regression and t_0 by the causality constraint that all delays must be non-negative. In some examples, the minimal delay may vary slowly over time due to clock drift. When many delays are sampled over time, the shorter end of the distribution (e.g., the 1st percentile of delays according to a window of the most recent delays) may vary slowly and smoothly and may be largely unaffected by large changes in mean delay and delay variability.

[0064] Accordingly, systems described herein may track a statistical metric of delays over time (e.g., the 1st percentile of delays according to a window of the most recent delays or, more generally, a selected low percentile (0.5, 1, 1.5, 2, 5, etc.)) and adjust the model offset based on recent observations such that the 1st-percentile delay is always zero.
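One way to realize this tracking is sketched below in Python; the window length and percentile are illustrative assumptions.

```python
from collections import deque

class DelayBaselineTracker:
    """Track a low percentile of recent apparent delays ([0064]).

    Apparent delay = observed arrival time minus model-predicted time.
    Shifting the model offset so this percentile sits at zero keeps the
    causality constraint tight even as mean delay and jitter change.
    """
    def __init__(self, window=1000, percentile=1.0):
        self.window = deque(maxlen=window)
        self.percentile = percentile

    def update(self, apparent_delay):
        """Record one delay and return the current percentile baseline."""
        self.window.append(apparent_delay)
        ordered = sorted(self.window)
        k = int(len(ordered) * self.percentile / 100.0)
        # Subtract this baseline from the model offset so the tracked
        # percentile of delays is re-centered at zero.
        return ordered[min(k, len(ordered) - 1)]
```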

[0065] In order to account for clock drift, the systems described herein may apply exponentially decaying weights to a recursive least squares algorithm.
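A standard recursive-least-squares update with a forgetting factor is sketched here in Python with NumPy. The forgetting factor and initialization are assumptions; this is the textbook algorithm, not necessarily the exact variant used in this disclosure.

```python
import numpy as np

def rls_step(theta, P, u, t, lam=0.999):
    """One recursive-least-squares update with forgetting factor lam.

    Model: t ≈ theta[0]*u + theta[1], i.e., the period and offset of the
    slave clock. Exponentially decaying weights (lam < 1) discount old
    samples so the estimate can follow slow clock drift ([0065]).
    """
    x = np.array([u, 1.0])                # regressor for t = tau*u + t0
    k = P @ x / (lam + x @ P @ x)         # gain vector
    err = t - theta @ x                   # innovation
    theta = theta + k * err               # updated (tau, t0) estimate
    P = (P - np.outer(k, x @ P)) / lam    # updated inverse precision
    return theta, P
```

A typical initialization would be `theta = np.array([nominal_tau, 0.0])` and `P = 1e6 * np.eye(2)`; a smaller `lam` forgets faster and tracks drift more aggressively at the cost of noisier estimates.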

[0066] In some examples, using the methods described above, systems described herein may achieve very high accuracy of clock estimations. The variance in the error may fall within a single time step (e.g., +/-0.5 milliseconds).

[0067] In some examples, the systems described herein may implement a restriction so that time steps will always be within pre-defined tolerance levels. This may address otherwise unconstrained variability in time step size and/or may prevent time steps from being reckoned as negative.
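Such a restriction might look like the following sketch, where the tolerance value is a placeholder assumption:

```python
def constrain_step(prev_t, raw_t, nominal_dt, tol=0.25):
    """Clamp each emitted time step into [nominal*(1-tol), nominal*(1+tol)].

    Prevents negative or wildly varying steps when the clock model is
    re-fit ([0067]); the returned timestamp always moves forward.
    """
    lo, hi = nominal_dt * (1 - tol), nominal_dt * (1 + tol)
    return prev_t + min(max(raw_t - prev_t, lo), hi)
```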

[0068] In addition, in some examples the systems described herein may apply a warm-up period (e.g., lasting approximately 4 seconds) in which the timestamps are given using the nominal rate and raw timing data is collected to estimate the initial offset parameter. Furthermore, in some examples, the initial precision matrix of the recursive least squares algorithm may be configured to reflect the actual observation rate of the device clock (e.g., one observation per batch of 8 samples instead of one per sample):

P_0 = [ Σ_{u=−∞}^{0} α^{−u} (n_b u, 1)ᵀ (n_b u, 1) ]^{−1}

[0069] where, following the earlier example, n_b = 8 is the number of samples in a batch.
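For illustration, this sum can be evaluated numerically by truncating once the weights are negligible; α and the truncation length below are assumptions.

```python
import numpy as np

def initial_precision(n_b=8, alpha=0.999, terms=50000):
    """Truncated evaluation of the P_0 expression in [0068].

    One observation arrives per batch of n_b samples, so the regressor at
    batch index u <= 0 is x_u = (n_b*u, 1); `terms` truncates the infinite
    sum once alpha**k has decayed to numerical insignificance.
    """
    k = np.arange(terms)             # k = -u = 0, 1, 2, ...
    w = alpha ** k                   # exponentially decaying weights
    x0 = -n_b * k.astype(float)      # first regressor component, n_b*u
    s = np.array([[np.sum(w * x0 * x0), np.sum(w * x0)],
                  [np.sum(w * x0),      np.sum(w)]])
    return np.linalg.inv(s)
```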

[0070] FIGS. 2A-3B illustrate example devices that may benefit from the clock synchronization approaches detailed herein. Specifically, FIG. 2A illustrates an exemplary human-machine interface (also referred to herein as an EMG control interface) configured to be worn around a user's lower arm or wrist as a wearable system 200. In this example, wearable system 200 may include sixteen neuromuscular sensors 210 (e.g., EMG sensors) arranged circumferentially around an elastic band 220 with an interior surface 230 configured to contact a user's skin. However, any suitable number of neuromuscular sensors may be used. The number and arrangement of neuromuscular sensors may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband can be used to generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or performing any other suitable control task. As shown, the sensors may be coupled together using flexible electronics incorporated into the wearable device. FIG. 2B illustrates a cross-sectional view through one of the sensors of the wearable device shown in FIG. 2A. In some embodiments, the output of one or more of the sensing components can be optionally processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the output of the sensing components can be performed in software. Thus, signal processing of signals sampled by the sensors can be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect. A non-limiting example of a signal processing chain used to process recorded data from sensors 210 is discussed in more detail below with reference to FIGS. 3A and 3B.

[0071] FIGS. 3A and 3B illustrate an exemplary schematic diagram with internal components of a wearable system with EMG sensors. As shown, the wearable system may include a wearable portion 310 (FIG. 3A) and a dongle portion 320 (FIG. 3B) in communication with the wearable portion 310 (e.g., via BLUETOOTH or another suitable wireless communication technology). As shown in FIG. 3A, the wearable portion 310 may include skin contact electrodes 311, examples of which are described in connection with FIGS. 2A and 2B. The output of the skin contact electrodes 311 may be provided to analog front end 330, which may be configured to perform analog processing (e.g., amplification, noise reduction, filtering, etc.) on the recorded signals. The processed analog signals may then be provided to analog-to-digital converter 332, which may convert the analog signals to digital signals that can be processed by one or more computer processors. An example of a computer processor that may be used in accordance with some embodiments is microcontroller (MCU) 334, illustrated in FIG. 3A. As shown, MCU 334 may also include inputs from other sensors (e.g., IMU sensor 340), and power and battery module 342. The output of the processing performed by MCU 334 may be provided to antenna 350 for transmission to dongle portion 320 shown in FIG. 3B.

[0072] Dongle portion 320 may include antenna 352, which may be configured to communicate with antenna 350 included as part of wearable portion 310. Communication between antennas 350 and 352 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radiofrequency signaling and BLUETOOTH. As shown, the signals received by antenna 352 of dongle portion 320 may be provided to a host computer for further processing, display, and/or for effecting control of a particular physical or virtual object or objects.

[0073] Although the examples provided with reference to FIGS. 2A-2B and FIGS. 3A-3B are discussed in the context of interfaces with EMG sensors, the techniques described herein for reducing electromagnetic interference can also be implemented in wearable interfaces with other types of sensors including, but not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, and electrical impedance tomography (EIT) sensors. The techniques described herein for reducing electromagnetic interference can also be implemented in wearable interfaces that communicate with computer hosts through wires and cables (e.g., USB cables, optical fiber cables, etc.).

EXAMPLE EMBODIMENTS

[0074] Example 1: A system may include a first device with a second clock and a host system with a first clock, where the host system sends a first transmission to the first device at a first time measured by the first clock and identified by a first timestamp, receives a second transmission from the first device at a fourth time measured by the first clock and identified by a fourth timestamp, where the second transmission may include a second timestamp, measured by the second clock, indicating a second time at which the first transmission was received by the first device from the host system, and a third timestamp, measured by the second clock, indicating a third time at which the second transmission was sent by the first device to the host system, and determines, based at least in part on the first, second, third, and fourth timestamps, an estimated offset of the second clock relative to the first clock and an estimated period of the second clock relative to the first clock.

Example Systems and Methods for Horizon Leveling for Wrist-Captured Images

[0075] Wearable electronic devices may provide various functionalities, such as the ability to capture images or take photographs using a camera or other image sensor. Although wearable devices may provide certain usability benefits, for instance by allowing hands-free usage or not requiring devices to be put away for carrying purposes, wearable devices may present certain usability drawbacks. Depending on where the device is worn, users may have difficulty in optimally performing certain functions, such as taking photographs. For example, when using a wrist-worn device such as a smartwatch, users may have difficulty positioning their arms to take level self-portrait or “selfie” photographs. In addition, even if a display of the smartwatch presents a viewfinder or image preview, it may be difficult for users to correctly position their arms to view the display and align their arms for taking photographs.

[0076] The present disclosure is generally directed to horizon leveling for wrist captured images. As will be explained in greater detail below, embodiments of the present disclosure may use sensors and/or data available on wearable devices to realign photographs, thereby countering a tilt that may be produced when taking photographs using wearable devices. By determining a reference orientation associated with captured image data and rotating the captured image data based on the reference orientation, the embodiments described herein may correct unwanted tilting in the image data.

[0077] Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.

[0078] The following will provide, with reference to FIGS. 4A-7C, detailed descriptions of horizon leveling in captured images. Descriptions of example wearable devices are provided with reference to FIGS. 4A-B. Descriptions of a process of horizon leveling are provided with reference to FIG. 5. Descriptions of horizon leveled image examples are provided with reference to FIGS. 6A-7C.

[0079] FIGS. 4A-B are illustrations of example wearable device 400 and wearable device 402. Wearable devices 400 and/or 402 may correspond to, be incorporated with, be a portion of, or otherwise operate in conjunction with any of the devices and/or systems described herein. As seen in FIGS. 4A-B, wearable devices 400 and 402 may include an optical sensor 410 (e.g., a camera), a display 412 (e.g., a touchscreen or other type of display), and a band 414. Wearable device 400 may have a rectangular watch form and wearable device 402 (which may correspond to wearable device 400) may have a circular watch form, although in different examples wearable devices 400 and/or 402 may have different shapes. In other examples, wearable devices 400 and/or 402 may have different wearable form factors. Moreover, the positions or locations of components, such as optical sensor 410, may vary in other examples.

[0080] Wearable devices 400 and/or 402 may be computing devices such as smartwatches or other mobile devices. Although not shown in FIGS. 4A-B, wearable devices 400 and/or 402 may include additional components, such as an inertial measurement unit (IMU), one or more processors, and/or physical memory.

[0081] FIG. 5 is a flow diagram of an exemplary computer-implemented method 500 for horizon leveling of captured images. The steps shown in FIG. 5 may be performed by any suitable computer-executable code and/or computing system, including the devices illustrated in FIGS. 4A and/or 4B. In one example, each of the steps shown in FIG. 5 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.

[0082] As illustrated in FIG. 5, at step 510 one or more of the systems described herein may capture, using an optical sensor, image data. For example, wearable device 400 may capture, using optical sensor 410, image data.

[0083] In some embodiments, the term “image data” may refer to a single frame or multiple frames. Examples of image data include, without limitation, photographs, videos, stereoscopic photographs, stereoscopic videos, etc.

[0084] The systems described herein may perform step 510 in a variety of ways. In one example, a user may push a button or otherwise provide an input to wearable device 400 to capture the image data. In other examples, the user may set a timer on wearable device 400 to capture the image data.

[0085] FIGS. 6A and 7A illustrate example captured images 600 and 700, respectively. As seen in FIG. 6A, captured image 600 may be a selfie of a user captured using a smartwatch, such as wearable device 400. Although the user's face may have a straight pose, captured image 600 may present the user's face at a tilt due to a position of the user's arm wearing wearable device 400 when captured image 600 was captured. Background objects, such as walls, may also similarly be presented at a tilt. As seen in FIG. 7A, captured image 700 may be another selfie of the user captured using a smartwatch, such as wearable device 400. The user's face may have a tilted pose in captured image 700 (as seen in reference to background objects). In addition, the background objects, such as walls, may be presented at a tilt due to a position of the user's arm wearing wearable device 400 when captured image 700 was taken.

[0086] Returning to FIG. 5, at step 520 one or more of the systems described herein may determine a reference orientation associated with the captured image data. For example, wearable device 400 may determine a reference orientation associated with, for example, captured image 600 and/or 700.

[0087] In some embodiments, the term “reference orientation” may refer to a desired orientation for aligning or orienting image data. In some examples, a reference orientation may correspond to a global orientation in which a horizon (e.g., surface of the Earth) is level. When an image aligns with the global orientation, a positive y-axis (e.g., an upward direction in the image) may align with a direction perpendicular or normal to the horizon, a negative y-axis (e.g., a downward direction in the image) may align with a direction of gravity (which may also be perpendicular or normal to the horizon), and an x-axis (e.g., left to right in the image) may be parallel with the Earth’s horizon. In other examples, a reference orientation may correspond to another orientation, for instance with respect to a reference object that may define a reference axis. In such examples, when an image aligns with the reference orientation, an x-axis and/or y-axis of the image may be parallel or perpendicular to the reference axis indicated by the reference orientation.

[0088] The systems described herein may perform step 520 in a variety of ways. In one example, determining the reference orientation may further include saving orientation data from an IMU of the device when capturing the image data, deriving a global reference from the orientation data, and determining the reference orientation from the global reference. For example, wearable device 400 may save orientation data from the IMU when capturing captured images 600 and/or 700. The orientation data may indicate an orientation of wearable device 400 when captured images 600 and/or 700 were captured. Because the IMU may determine the orientation data with respect to a global reference, such as gravity, wearable device 400 may use the orientation data to derive the global reference and determine the reference orientation from the global reference. The orientation data from the IMU may indicate an offset of wearable device 400 from being level (e.g., aligned with the global reference).
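As a sketch of that idea, the device's roll can be read off the accelerometer's gravity components; the axis convention below is an assumption about how the IMU is mounted, not a specification from this disclosure.

```python
import math

def roll_from_gravity(ax, ay):
    """Device roll relative to gravity from accelerometer x/y components.

    With the device roughly still, the accelerometer reads the gravity
    vector; the angle of that vector in the image plane is the tilt to
    undo. Returns degrees, 0 when the device is level (assumed axes).
    """
    return math.degrees(math.atan2(ax, -ay))
```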

[0089] In some examples, determining the reference orientation may further include recognizing an object in the captured image data, deriving orientation data from the recognized object, and determining the reference orientation from the orientation data. For example, wearable device 400 may recognize an object in captured images 600 and/or 700. The recognized object may be a face. Wearable device 400 may determine pose data of the recognized face to determine the orientation data. The pose data may define a reference axis corresponding to the recognized face. The reference orientation may correspond to the reference axis associated with the recognized face. The reference axis may further correspond to a desired axis for captured image data, such as a positive y-axis direction.

[0090] In other examples, the recognized object may be, for instance, a background object, such as walls, corners, doors, windows, etc., which may define a discernable reference axis (e.g., an axis tracing a substantially straight edge of the object, an axis defined between two or more recognized reference points of the object, etc.). Wearable device 400 may derive the orientation data from the reference axis and further determine the reference orientation from the reference axis.
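The same correction angle can come from recognized reference points instead of the IMU. The sketch below assumes a detector that yields left- and right-eye pixel coordinates (a hypothetical helper, not an API from this disclosure); the same arithmetic applies to any two reference points along an object's edge.

```python
import math

def roll_from_landmarks(p_left, p_right):
    """Tilt angle (degrees) of the axis through two recognized points.

    For eye landmarks, rotating the image by the negative of this angle
    makes the eye axis horizontal, presenting the face upright.
    """
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))
```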

[0091] In some examples, a user input may define, at least in part, the reference orientation. For instance, the user may be presented with options for choosing between using IMU data, face recognition (which may include selecting from one or more recognized faces in the captured image data), selecting an object, etc. In some examples, the user may be presented with a manual assist feature, for instance allowing the user to manually define a desired reference axis from possible reference axis options (e.g., from IMU data, recognized objects, etc.) and/or manually input the desired reference axis (e.g., by drawing the desired reference axis).

[0092] At step 530, one or more of the systems described herein may rotate the captured image data based on the reference orientation. For example, wearable device 400 may rotate captured images 600 and/or 700 based on the reference orientation.

[0093] The systems described herein may perform step 530 in a variety of ways. In one example, rotating the captured image data may further include rotating the captured image data to align the reference orientation with (e.g., make it parallel to) an x-axis or y-axis of the image data. For example, if the reference orientation corresponds to a direction of gravity, wearable device 400 may rotate the captured image data to align a negative y-axis of the captured image data with the direction of gravity. If the reference orientation corresponds to a horizon (e.g., an axis perpendicular to the direction of gravity), wearable device 400 may rotate the captured image data to align an x-axis of the captured image data with the horizon. In some examples, orientation data from the IMU may indicate an offset of wearable device 400 from a global reference (e.g., direction of gravity, horizon, etc.) such that a similar offset may be applied for rotating the captured image data.
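A minimal sketch of the rotation step, using Pillow purely for illustration; an on-device implementation would use its own imaging pipeline.

```python
from PIL import Image

def level_image(path, tilt_degrees):
    """Rotate a captured image to counter the measured tilt ([0093]).

    tilt_degrees is the offset from the reference orientation, e.g. the
    roll angle derived from IMU or landmark data in the sketches above.
    """
    img = Image.open(path)
    # Rotating by -tilt counters the device tilt; expand=True keeps all
    # pixels, at the cost of the empty corners discussed below.
    return img.rotate(-tilt_degrees, resample=Image.BICUBIC, expand=True)
```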

[0094] In some examples, if the reference orientation corresponds to a reference axis defined by an object, wearable device 400 may rotate the captured image data to align with the reference axis. For instance, if the reference axis corresponds to a positive y-axis, wearable device 400 may rotate the captured image data such that a positive y-axis of the image data aligns with the reference axis.

[0095] FIG. 6B illustrates a rotated image 610 that shows captured image 600 rotated based on a reference orientation derived from IMU data (e.g., horizon leveling). FIG. 6C illustrates a rotated image 620 that shows captured image 600 rotated based on a reference orientation derived from face recognition (e.g., portrait leveling). As seen in FIG. 6B, by rotating captured image 600 based on IMU data, background objects such as walls and corners are seen aligned with a direction of gravity (e.g., a downward direction in rotated image 610 is parallel with an expected direction of gravity when viewing rotated image 610). As seen in FIG. 6C, by rotating captured image 600 based on face recognition, the user’s face is seen aligned with a y-axis of rotated image 620 such that the user’s face is presented upright. Because the user’s face generally aligns with the direction of gravity in captured image 600, the rotations produced from horizon leveling (e.g., rotated image 610) and portrait leveling (e.g., rotated image 620) may produce similar results.

[0096] FIG. 7B illustrates a rotated image 710 that shows captured image 700 rotated based on a reference orientation derived from IMU data (e.g., horizon leveling). FIG. 7C illustrates a rotated image 720 that shows captured image 700 rotated based on a reference orientation derived from face recognition (e.g., portrait leveling). As seen in FIG. 7B, by rotating captured image 700 based on IMU data, background objects such as walls and corners are seen aligned with a direction of gravity (e.g., a downward direction in rotated image 710 is parallel with an expected direction of gravity when viewing rotated image 710). As seen in FIG. 7C, by rotating captured image 700 based on face recognition, the user’s face is seen aligned with a y-axis of rotated image 720 such that the user’s face is presented upright. However, because the user’s face is not aligned with the direction of gravity in captured image 700, the background objects (e.g., door, walls, corners, etc.) may not be aligned with the direction of gravity. Accordingly, the rotations produced from horizon leveling (e.g., rotated image 710) and portrait leveling (e.g., rotated image 720) may produce different results. The user may prefer and select one option over the other.

[0097] Moreover, as seen in rotated images 610, 620, 710, and 720, rotating captured images 600 and 700 may produce portions without image data. Because image data may be stored and/or displayed in a two-dimensional rectangular format, rotating such image data within the rectangular format may produce portions missing image data. To remove these portions of missing image data, wearable device 400 may enlarge (e.g., zoom in) and/or crop the rotated image data.

[0098] In some examples, wearable device 400 may automatically perform enlarging and/or cropping. For example, wearable device 400 may determine an optimal resolution, such as a maximum resolution that minimizes the amount of cropped data or one that minimizes cropping of recognized objects. Wearable device 400 may automatically perform enlarging and/or cropping based on user preferences that may define a desired optimization. In other examples, wearable device 400 may present the user with an interface to manually enlarge and/or crop the rotated image data.
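One common way to automate the crop is the standard largest-inscribed-rectangle computation for a rotated image, sketched below as one possible realization; center-cropping to this size and re-enlarging removes the empty corners while discarding as few pixels as possible.

```python
import math

def max_crop_after_rotation(w, h, angle_degrees):
    """Largest axis-aligned rectangle fully covered by rotated image data.

    Standard geometry for a w x h image rotated about its center by
    angle_degrees; cropping the rotated image to (cw, ch) removes the
    portions missing image data described in [0097].
    """
    angle = math.radians(angle_degrees)
    sin_a, cos_a = abs(math.sin(angle)), abs(math.cos(angle))
    long_side, short_side = max(w, h), min(w, h)
    if short_side <= 2 * sin_a * cos_a * long_side or abs(sin_a - cos_a) < 1e-10:
        # Half-constrained case: two crop corners touch the longer sides.
        x = 0.5 * short_side
        cw, ch = (x / sin_a, x / cos_a) if w >= h else (x / cos_a, x / sin_a)
    else:
        # Fully constrained case: all four corners touch the image edges.
        cos_2a = cos_a * cos_a - sin_a * sin_a
        cw = (w * cos_a - h * sin_a) / cos_2a
        ch = (h * cos_a - w * sin_a) / cos_2a
    return int(cw), int(ch)
```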

[0099] As illustrated in FIG. 5, at step 540 one or more of the systems described herein may store the rotated image data. For example, wearable device 400 may save the rotated image data.

[0100] The systems described herein may perform step 540 in a variety of ways. In one example, wearable device 400 may save the rotated image data in a local storage and/or a local buffer. In some examples, wearable device 400 may transfer (e.g., via a wired or wireless network) the rotated image data to another computing device.

[0101] Although the above example refers to photographs, in other examples, the concepts described herein may be applied to other types of image data, such as video. For instance, the systems and methods described herein may be applied to each frame of a video.

[0102] Wearable devices, such as smartwatches, may allow the user to take photographs, such as self-portrait (e.g., “selfie”) photographs. However, due to the placement of the wearable device on a user’s body, and the user’s natural body movements, the user may have difficulty taking level photographs with the wearable device. For example, the user may find it awkward or otherwise difficult to hold their arm level to the horizon when taking a selfie. By using inputs that may already be available with the wearable device, such as IMU measurements, facial recognition, etc., the user may be provided options for adjusting or correcting any unwanted tilt that may be due to the user’s tilted arm when taking the selfie. For example, by using the wearable device’s IMU data, the tilt in the device may be countered by accordingly rotating the associated photo. Alternatively, by recognizing a facial pose in the photo, the photo may be rotated to present the face in an upright orientation. Thus, the systems and methods described herein may efficiently provide corrections to tilted photos without requiring additional hardware.

EXAMPLE EMBODIMENTS

[0103] Example 2: A device comprising: an optical sensor; at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: capture, using the optical sensor, image data; determine a reference orientation associated with the captured image data; rotate the captured image data based on the reference orientation; and store the rotated image data.

[0104] Example 3: The device of Example 2, further comprising an inertial measurement unit (IMU), wherein the instructions for determining the reference orientation further comprise instructions for: saving orientation data from the IMU when capturing the image data; deriving a global reference from the orientation data; and determining the reference orientation from the global reference.

[0105] Example 4: The device of Examples 2 or 3, wherein the instructions for determining the reference orientation further comprise instructions for: recognizing an object in the captured image data; deriving orientation data from the recognized object; and determining the reference orientation from the orientation data.

[0106] Example 5: The device of any of Examples 2-4, wherein the recognized object corresponds to a face and the orientation data corresponds to pose data of the face.

[0107] Example 6: The device of any of Examples 2-5, wherein the instructions further comprise instructions for enlarging the rotated image data.

[0108] Example 7: The device of any of Examples 2-6, wherein the instructions further comprise instructions for cropping the rotated image data.

[0109] Example 8: The device of any of Examples 2-7, wherein the instructions for rotating the captured image data further comprise instructions for rotating the captured image data to align the reference orientation with an x-axis or y-axis of the image data.

[0110] Example 9: A computer-implemented method comprising: capturing, using an optical sensor, image data; determining a reference orientation associated with the captured image data; rotating the captured image data based on the reference orientation; and storing the rotated image data.

[0111] Example 10: The method of Example 9, wherein determining the reference orientation further comprises: saving orientation data from an inertial measurement unit (IMU) when capturing the image data; deriving a global reference from the orientation data; and determining the reference orientation from the global reference.

[0112] Example 11: The method of Examples 9 or 10, wherein determining the reference orientation further comprises: recognizing an object in the captured image data; deriving orientation data from the recognized object; and determining the reference orientation from the orientation data.

[0113] Example 12: The method of any of Examples 9-11, wherein the recognized object corresponds to a face and the orientation data corresponds to pose data of the face.

[0114] Example 13: The method of any of Examples 9-12, further comprising enlarging the rotated image data.

[0115] Example 14: The method of any of Examples 9-13, further comprising cropping the rotated image data.

[0116] Example 15: The method of any of Examples 9-14, wherein rotating the captured image data further comprises rotating the captured image data to align the reference orientation with an x-axis or y-axis of the image data.

[0117] Example 16: A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: capture, using an optical sensor of the computing device, image data; determine a reference orientation associated with the captured image data; rotate the captured image data based on the reference orientation; and store the rotated image data.

[0118] Example 17: The computer-readable medium of Example 16, wherein the instructions for determining the reference orientation further comprise instructions for: saving orientation data from an inertial measurement unit (IMU) of the computing device when capturing the image data; deriving a global reference from the orientation data; and determining the reference orientation from the global reference.

[0119] Example 18: The computer-readable medium of Examples 16 or 17, wherein the instructions for determining the reference orientation further comprise instructions for: recognizing an object in the captured image data; deriving orientation data from the recognized object; and determining the reference orientation from the orientation data.

[0120] Example 19: The computer-readable medium of any of Examples 16-18, wherein the recognized object corresponds to a face and the orientation data corresponds to pose data of the face.

[0121] Example 20: The computer-readable medium of any of Examples 16-19, wherein the instructions further comprise instructions for enlarging the rotated image data.

[0122] Example 21: The computer-readable medium of any of Examples 16-20, wherein the instructions further comprise instructions for cropping the rotated image data.

[0123] Example 22: The computer-readable medium of any of Examples 16-21, wherein the instructions for rotating the captured image data further comprise instructions for rotating the captured image data to align the reference orientation with an x-axis or y-axis of the image data.

Example Methods, Systems, and Devices for Batch Message Transfer

[0124] Wearable devices may be configured to be worn on a user’s body part, such as a user’s wrist, arm, leg, torso, neck, head, finger, etc. Such wearable devices may be configured to perform various functions. For example, a wristband system may be an electronic device worn on a user’s wrist that performs functions such as delivering content to the user, executing social media applications, executing artificial-reality applications, messaging, web browsing, sensing ambient conditions, interfacing with head-mounted displays, monitoring a health status of the user, etc. However, the compact size of wearable devices may restrict the physical dimensions and/or energy capacity of batteries that supply power to the processors, sensors, and actuators of wearable devices.

[0125] The present disclosure details systems, devices, and methods related to conserving power consumption in wearable devices (e.g., a smartwatch, smart eyeglasses, a head-mounted display, etc.) in order to extend the amount of time before battery charging is required. Many of the functions of the wearable device may require wireless communications to exchange data with other devices, smartphones, access points, servers, etc. In some examples, a wireless communication unit in a wearable device may be placed in a low-power mode to conserve battery power. The wireless communication unit may be configured to not receive messages from other devices while in the low-power mode (e.g., a sleep mode). Messages intended for receipt by the wearable device may be temporarily stored in memory by another device (e.g., a smartphone of the user of the wearable device, a gateway device, etc.). The stored messages may be sent to the wearable device as a batch of messages when the wireless communication unit is configured to a normal operating mode (e.g., woken from the low-power mode). Advantages of embodiments of the present disclosure may include reducing power consumption in the wearable device and extending the amount of time the wearable device may be used before requiring a battery recharge.

[0126] The following will provide, with reference to FIGS. 8-12, detailed descriptions of methods, systems, and devices for batch message transfer to reduce battery power consumption in an electronic device with limited battery capacity (e.g., a wearable device). First, a description of a wristband system with limited battery capacity is presented with reference to FIG. 8. A description of a user donning a wearable device with limited battery capacity is presented with reference to FIG. 9. A description of a device (e.g., a smartphone) transferring batch messages to wearable devices is presented with reference to FIG. 10. A chart illustrating normalized power consumption of a wireless communications unit as a function of aggregate message size is presented with reference to FIG. 11. A flowchart of a method of reducing power consumption in a wireless communications unit by batch messaging is presented with reference to FIG. 12.

[0127] FIG. 8 illustrates a perspective view of an example wearable device in the form of a smartwatch 800. Smartwatch 800 may have a substantially rectangular or circular shape and may be configured to allow a user to wear smartwatch 800 on a body part (e.g., a wrist). Smartwatch 800 may include a retaining mechanism 808 (e.g., a buckle, a hook and loop fastener, etc.) for securing watch band 806 to the user’s wrist.

[0128] Smartwatch 800 may be configured to execute functions, such as, without limitation, sending/receiving messages (e.g., text, speech, images, video, etc.), displaying messages, configuring user preferences, displaying visual content to the user (e.g., visual content displayed on display screen 816), sensing user input, messaging, capturing images, determining location, performing financial transactions, providing haptic feedback, performing wireless communications (e.g., Long Term Evolution (LTE), cellular, near field, wireless fidelity (WiFi), Bluetooth™ (BT), personal area network, 4G, 5G, 6G), etc. Smartwatch 800 may include a wireless communications unit 812 configured to perform wireless communications including transmitting and/or receiving messages. Wireless communications unit 812 and the other circuits within smartwatch 800 may be powered by battery 811. Battery 811 may have limited capacity due to the limited physical dimensions of smartwatch 800.

[0129] In order to reduce power consumption in battery 811, wireless communication unit 812 may be placed in a low-power mode. Wireless communication unit 812 may be configured to not receive messages from other devices while in the low-power mode (e.g., a sleep mode). Messages intended for receipt by smartwatch 800 may be temporarily stored (e.g., accumulated) in memory by another device (e.g., a companion device such as a smartphone of the user of smartwatch 800). The stored messages may be sent to smartwatch 800 as a batch of messages when wireless communication unit 812 is configured to a normal operating mode. Smartwatch 800 may include a user interface that allows the user to configure parameters related to receiving messages. For example, the user may be able to select a low-power mode in which messages are accumulated or a normal mode in which messages are individually received by smartwatch 800. The user may also be able to select an option in which certain messages that are indicated to have high priority (e.g., based on the message sender, the time of day, the subject matter, etc.) may be delivered to smartwatch 800 without the delay associated with accumulating the messages in another device.
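
As an illustrative sketch only, companion-side logic for these user-configurable options might look like the following; the function and parameter names are hypothetical, and is_high_priority stands in for whatever sender/time/subject rule the user configures.

def handle_incoming(message, queue, low_power_mode, is_high_priority):
    # In normal mode, or for messages the user has marked high priority,
    # deliver immediately and accept the cost of waking the watch radio.
    if not low_power_mode or is_high_priority(message):
        return "deliver_now"
    # Otherwise accumulate the message for a later batch transfer.
    queue.append(message)
    return "queued"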

[0130] Smartwatch 800 may be configured to be worn by a user such that an inner surface of watch band 806 and/or an inner surface of watch body 804 may be adjacent to (e.g., in contact with) the user’s skin. Watch band 806 may include multiple sensors 813, 815 that may be distributed on an inside and/or an outside surface of watch band 806. Sensors 813, 815 may include one or more biosensors configured to sense a user’s heart rate, saturated oxygen level, temperature, sweat level, muscle intentions, or a combination thereof.

[0131] Additionally or alternatively, watch body 804 may include the same or different sensors than watch band 806. For example, multiple sensors may be distributed on an inside and/or an outside surface of watch body 804. Watch body 804 may include, without limitation, a proximity sensor, a front facing image sensor, a rear-facing image sensor, a biometric sensor, an inertial measurement unit, a heart rate sensor, a saturated oxygen sensor, a neuromuscular sensor, an altimeter sensor, a temperature sensor, a bioimpedance sensor, a pedometer sensor, an optical sensor, a touch sensor, a sweat sensor, or any combination or subset thereof. Watch body 804 may also include a sensor that provides data about a user’s environment, such as a user’s motion (e.g., with an inertial measurement unit), altitude, location, orientation, gait, or a combination thereof.

[0132] Watch band 806 and/or watch body 804 may include a haptic device (e.g., a vibratory haptic actuator) that is configured to provide haptic feedback (e.g., a cutaneous and/or kinesthetic sensation, etc.) to the user’s skin. The sensors and/or haptic devices may be configured to operate in conjunction with multiple applications including, without limitation, health monitoring, social media, game playing, and artificial reality.

[0133] FIG. 9 is a perspective view of a user wearing a wristband system 900, according to at least one embodiment of the present disclosure. A user may wear wristband system 900 on any body part. For example, a user may wear wristband system 900 on a forearm 903. The wristband system may include a watch body 904 and a wristband 906 for securing the watch body 904 to the user’s forearm 903. Watch body 904 may include a wireless communication unit 911 for communicating with another device.

[0134] FIG. 10 illustrates a device 1005 (e.g., a smartphone) configured to transfer messages to wearable devices including smartwatch 1020 and/or smart eyeglasses 1030, according to at least one embodiment of the present disclosure. Smartwatch 1020 and/or smart eyeglasses 1030 may be configured to be worn by user 1010 in order to provide content and/or messages to user 1010. Smartwatch 1020 and/or smart eyeglasses 1030 may be configured as compact devices with limited battery capacity. For example, battery capacity in smartwatch 1020 and/or smart eyeglasses 1030 may be limited to under 500 mAh, under 400 mAh, under 300 mAh, under 200 mAh, or less. Smartwatch 1020 and/or smart eyeglasses 1030 may be configured to conserve battery power by configuring a wireless communications unit in smartwatch 1020 and/or smart eyeglasses 1030 into a low-power mode.

[0135] In some examples, device 1005 may be configured to accumulate messages intended for smartwatch 1020 and/or smart eyeglasses 1030 in a memory (e.g., a buffer memory) until a threshold associated with the accumulated messages is reached. Once the threshold is reached or exceeded, the wireless communications unit in smartwatch 1020 and/or smart eyeglasses 1030 may switch to a normal-power mode and receive the accumulated messages sent by device 1005. Smartwatch 1020 and/or smart eyeglasses 1030 may communicate wirelessly with device 1005 using communications protocols including, without limitation, WiFi, Bluetooth™, near field, cellular, 4G, 5G, 6G, infrared, or a combination thereof.
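
One way device 1005 could implement this accumulate-then-flush behavior is sketched below. The wake_wearable and send_batch callbacks are hypothetical stand-ins for the actual radio interface; flushing on an aggregate-size threshold means the wearable's radio wakes once per batch rather than once per message.

class BatchingRelay:
    def __init__(self, threshold_bytes, wake_wearable, send_batch):
        self.threshold_bytes = threshold_bytes
        self.wake_wearable = wake_wearable
        self.send_batch = send_batch
        self.pending = []
        self.pending_bytes = 0

    def enqueue(self, payload: bytes) -> None:
        # Buffer the message; flush once the aggregate size reaches the
        # threshold, so the wearable incurs a single receive transaction.
        self.pending.append(payload)
        self.pending_bytes += len(payload)
        if self.pending_bytes >= self.threshold_bytes:
            self.wake_wearable()
            self.send_batch(self.pending)
            self.pending, self.pending_bytes = [], 0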

[0136] Although FIG. 10 shows the wearable device configured as smartwatch 1020 and smart eyeglasses 1030, the present disclosure is not so limited, and the wearable device may include any type of wearable device capable of receiving messages from another device or a communications network. Further, although FIG. 10 shows device 1005 configured as a smartphone, the present disclosure is not so limited and device 1005 may include any device capable of accumulating messages until a threshold is reached or exceeded and sending the accumulated messages to smartwatch 1020 and/or smart eyeglasses 1030. For example, device 1005 may include a laptop, a tablet, an access point, a server, a base station, a router, etc.

[0137] FIG. 11 is a chart 1100 illustrating example normalized power consumption of a wireless communications unit as a function of message size, according to at least one embodiment of the present disclosure. A wireless communication unit in a limited battery capacity device (e.g., a wearable device) may be configured to consume battery power when receiving data (e.g., a message). In some examples, the wireless communication unit may be configured to consume the same amount of battery power when receiving small amounts of data as when receiving larger amounts of data due to the overhead required by the wireless communication unit circuits and/or the communications protocol used to receive the data. Messages containing small amounts of data may require a minimum amount of battery power consumption. For example, referring to chart 1100, power consumption band 1 may include a range of data sizes in which substantially the same amount of power is required. Band 1 may include data sizes from about 1 byte to about 2K bytes, from about 2K bytes to about 4K bytes, from about 4K bytes to about 8K bytes, from about 8K bytes to about 16K bytes, or more. Power consumption band 2 may include a range of data sizes in which substantially the same amount of power is required. Power consumption band 2 may require more battery power than band 1. As shown in FIG. 11, band 2 may require about 20% more battery power than band 1. Since the same amount of battery power may be required for different sized messages up to a threshold, battery power may be conserved by accumulating multiple messages up to an aggregate threshold and sending the accumulated messages to the wearable device in a batch. By doing so, the receipt of the accumulated messages by the wearable device may require less power than separately receiving each message.
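
To make the savings concrete, consider a toy model of chart 1100 in which any receive transaction up to 4K bytes costs one energy unit (band 1) and larger transactions cost 1.2 units (band 2, about 20% more). The specific numbers below are illustrative, not measured.

def receive_cost(num_bytes: int) -> float:
    # Hypothetical two-band model loosely patterned on chart 1100.
    return 1.0 if num_bytes <= 4 * 1024 else 1.2

messages = [300, 900, 1200, 600, 800]  # five small messages, in bytes

individual = sum(receive_cost(n) for n in messages)  # 5 wake-ups -> 5.0 units
batched = receive_cost(sum(messages))                # one 3.8K-byte batch -> 1.0 unit
print(individual, batched)  # batching cuts receive energy roughly 5x here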

[0138] In some embodiments, the wearable device and/or the device from which the wearable device receives messages may include a user interface configured for allowing the user to select between different message receipt modes. For example, the user may be able to select a low-power mode in which messages are accumulated as described above or a normal or high-power mode in which messages are individually received by the wearable device. The user may also be able to select an option in which certain messages that are indicated to have high importance (e.g., from a certain individual, at a certain time of day, dealing with important subject matter, including certain words, etc.) may be delivered to the wearable device without waiting to reach the aggregate threshold.

[0139] FIG. 12 is a flowchart of a method 1200 of reducing power consumption in a wireless communications unit by batch messaging, according to at least one embodiment of the present disclosure. At operation 1210, method 1200 may include configuring a wireless communications unit of a first electronic device to a low-power mode. Operation 1210 may be performed in a variety of ways, as will be understood by one skilled in the art in view of the present disclosure. For example, configuring a wireless communications unit of an electronic device (e.g., smartwatch 1020 and/or smart eyeglasses 1030 of FIG. 10) to a low-power mode may include sending a low-power command to the wireless communications unit, reducing the supply voltage of the wireless communications unit, reducing a clock speed, gating a clock, or a combination thereof. In some examples, a wireless communications unit may enter a low-power mode by only receiving certain control messages (e.g., a control channel, a WiFi beacon frame, etc.).
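
A sketch of operation 1210 follows; every method on the radio object is a hypothetical driver hook standing in for the vendor-specific mechanisms listed above, not an actual API.

def enter_low_power(radio) -> None:
    radio.send_command("LOW_POWER")     # explicit low-power command
    radio.set_supply_voltage(0.8)       # reduce supply voltage (volts)
    radio.set_clock_divider(8)          # reduce clock speed
    radio.gate_clock(enabled=True)      # gate idle clock domains
    radio.listen_only("beacon")         # receive only control frames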

……
