
Microsoft Patent | Prior informed pose and scale estimation

Patent: Prior informed pose and scale estimation


Publication Number: 20210125372

Publication Date: 2021-04-29

Applicant: Microsoft

Assignee: Microsoft Technology Licensing

Abstract

A scale and pose estimation method for a camera system is disclosed. Camera data for a scene acquired by the camera system is received. A rotation prior parameter characterizing a gravity direction is received. A scale prior parameter characterizing scale of the camera system is received. A cost of a cost function is calculated for a similarity transformation that is configured to encode a scale and pose of the camera system. The cost of the cost function is influenced by the rotation prior parameter and the scale prior parameter. A solved similarity transformation is determined upon calculating a cost for the cost function that is less than a threshold cost. An estimated scale and pose of the camera system is output based on the solved similarity transformation.

Claims

  1. A scale and pose estimation method for a camera system, the method comprising: receiving camera data for a scene acquired by one or more cameras of the camera system; receiving a rotation prior parameter characterizing a gravity direction; receiving a scale prior parameter characterizing scale of the camera system; calculating a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter; determining a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost; and outputting an estimated scale and pose of the camera system based on the solved similarity transformation.

  2. The method of claim 1, wherein the threshold cost is a minimized cost for the cost function.

  3. The method of claim 1, wherein the rotation prior parameter and the scale prior parameter selectively influence the cost of the cost function by selectively imposing a penalty on the cost function.

  4. The method of claim 1, wherein the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors.

  5. The method of claim 4, wherein the cost function includes a rotation weight corresponding to the rotation prior parameter and a scale weight corresponding to the scale prior parameter.

  6. The method of claim 5, wherein the rotation weight and the scale weight are adjusted based on sensor noise of the one or more inertial sensors.

  7. The method of claim 6, wherein the rotation weight and the scale weight decrease as sensor noise increases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have less influence on the cost function as sensor noise increases, and wherein the rotation weight and the scale weight increase as sensor noise decreases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have more influence on the cost function as sensor noise decreases.

  8. The method of claim 6, wherein the rotation weight and the scale weight are set to zero based on sensor noise being greater than a threshold noise level such that the corresponding rotation prior parameter and the corresponding scale prior parameter have no influence on the cost function.

  9. The method of claim 1, wherein the camera data includes a collection of three-dimensional (3D) points of a 3D model of the scene.

  10. The method of claim 1, wherein the camera system includes a plurality of different cameras having different positions fixed relative to each other.

  11. The method of claim 1, wherein the camera system includes a single camera movable throughout the scene.

  12. A computing system comprising: one or more logic machines; one or more storage machines holding instructions executable by the one or more logic machines to: receive camera data for a scene acquired by one or more cameras of a camera system; receive a rotation prior parameter characterizing a gravity direction; receive a scale prior parameter characterizing scale of the camera system; calculate a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter; determine a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost; and output an estimated scale and pose of the camera system based on the solved similarity transformation.

  13. The computing system of claim 12, wherein the threshold cost is a minimized cost for the cost function.

  14. The computing system of claim 12, wherein the rotation prior parameter and the scale prior parameter selectively influence the cost of the cost function by selectively imposing a penalty on the cost of the cost function.

  15. The computing system of claim 12, wherein the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors.

  16. The computing system of claim 15, wherein the cost function includes a rotation weight corresponding to the rotation prior parameter and a scale weight corresponding to the scale prior parameter.

  17. The computing system of claim 16, wherein the rotation weight and the scale weight are adjusted based on sensor noise of the one or more inertial sensors.

  18. The computing system of claim 17, wherein the rotation weight and the scale weight decrease as sensor noise increases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have less influence on the cost function as sensor noise increases, and wherein the rotation weight and the scale weight increase as sensor noise decreases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have more influence on the cost function as sensor noise decreases.

  19. The computing system of claim 17, wherein the rotation weight and the scale weight are set to zero based on sensor noise being greater than a threshold noise level such that the corresponding rotation prior parameter and the corresponding scale prior parameter have no influence on the cost function.

  20. A scale and pose estimation method for a camera system, the method comprising: receiving camera data for a scene acquired by one or more cameras of the camera system; receiving a rotation prior parameter characterizing a gravity direction; receiving a scale prior parameter characterizing scale of the camera system, wherein the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors; calculating a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter, a rotation weight corresponding to the rotation prior parameter, the scale prior parameter and a scale weight corresponding to the scale prior parameter; adjusting the rotation weight and the scale weight based on the sensor noise of the one or more inertial sensors; determining a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost; and outputting an estimated scale and pose of the camera system based on the solved similarity transformation.

Description

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 62/925,605, filed Oct. 24, 2019, the entirety of which is hereby incorporated herein by reference.

BACKGROUND

[0002] Camera pose estimation, i.e., estimating the position and orientation of a camera system relative to a scene, is a central step in computer vision. For example, pose estimation may be utilized in computer vision applications, such as Simultaneous Localization and Mapping (SLAM), visual localization, augmented reality (AR), 3D mapping, and robotics. Such camera pose estimation may enable spatial registration between a camera coordinate system and a world coordinate system of the scene.

SUMMARY

[0003] A scale and pose estimation method for a camera system is disclosed. Camera data for a scene acquired by one or more cameras of the camera system is received. A rotation prior parameter characterizing a gravity direction is received. A scale prior parameter characterizing scale of the camera system is received. A cost of a cost function is calculated for a similarity transformation that is configured to encode a scale and pose of the camera system. The cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter. A solved similarity transformation is determined upon calculating a cost for the cost function that is less than a threshold cost. An estimated scale and pose of the camera system is output based on the solved similarity transformation.

[0004] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 shows different example electronic devices including camera systems used to spatially register the electronic devices with real-world environments.

[0006] FIG. 2 shows an example pose and scale estimator implemented by a computing system.

[0007] FIG. 3 diagrammatically shows an example transformation of a world coordinate system to a camera coordinate system.

[0008] FIG. 4 is a flowchart of an example method for estimating pose and scale for a camera system.

[0009] FIG. 5 schematically shows an example computing system.

DETAILED DESCRIPTION

[0010] Many real-world applications in computer vision, such as augmented reality (AR), three-dimensional (3D) mapping, and robotics, require both fast and accurate estimation of camera pose and scale. Achieving high speed and maintaining high accuracy when estimating pose and scale of a camera system are often conflicting goals. Some conventional approaches for estimating scale and pose of a camera system produce estimations at high speed, but such estimations may have lower accuracy. Other conventional approaches produce estimations with high accuracy, but at lower speeds that may not be feasible for many applications.

[0011] The present description is directed to an approach for estimating scale and absolute pose of a camera system in a manner that achieves both high speed and high accuracy. In particular, this approach exploits a priori knowledge of the solution by using rotation and scale prior parameters to bias the solution to more quickly obtain accurate estimations. Use of such rotation and scale prior parameters may accelerate the estimation process and improve scale and pose estimation accuracy relative to other approaches that do not use scale and rotation prior parameters.

[0012] Furthermore, in some implementations, the estimation approaches described herein may be configured to operate with selective and/or varying influence of the scale and rotation prior parameters on scale/pose estimation. For example, the contribution of each prior parameter may be flexibly weighted based on noisy input data, for example as might occur due to sensor noise. As such, the estimation approach can be robust to noisy sensor inputs, which can arise in AR, robotics, and other applications.

[0013] FIG. 1 shows aspects of different examples of electronic devices (100A-C) that each have a camera system (102A-C) that may employ scale and pose estimation. Device 100A is a smartphone that includes a camera system 102A. For example, the camera system 102A may be used to present an augmented reality experience on a display of the smartphone. Device 100B is an automated flying vehicle that includes a camera system 102B. For example, the camera system 102B may be used to provide automated flight control of the automated flying vehicle. Device 100C is a virtual-reality or augmented-reality headset that includes a camera system 102C. For example, the camera system 102C may be used to present a virtual-reality or augmented-reality experience on a display of the headset. The example scale and pose estimation approaches disclosed herein may be applicable to these and other camera systems.

[0014] FIG. 2 schematically shows an example computing system 200 configured to implement a generalized pose-and-scale estimator (or solver) 202. The estimator 202 may be implemented by the computing system as software, hardware, firmware, or any combination thereof. The estimator 202 may be configured to output pose and scale estimations for a camera system 204 including one or more cameras 206 based on camera data 208 received from the camera system 204. For example, the camera data 208 may include camera measurements of a collection of 3D points of a 3D model of a scene that are captured by one or more cameras of the camera system 204. In some examples, the estimator 202 may receive camera data in the form of an image or images from the camera system 204, and process the images to form a collection of 3D points of a 3D model of a scene. The estimator 202 may be configured to perform any suitable image processing operations to obtain camera data suitable for estimating scale and pose of the camera system 204.
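As a concrete (and purely illustrative) picture of these inputs, the following minimal Python sketch models the per-point camera data the estimator consumes; the container name and field layout are assumptions, not part of the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraData:
    """Hypothetical container for the estimator's per-point inputs:
    3D model points p_i (world frame), unscaled camera positions c_i
    (rig frame), and unit ray directions r_i observed by the cameras."""
    points_world: np.ndarray     # (n, 3) 3D points p_i of the scene model
    camera_centers: np.ndarray   # (n, 3) camera positions c_i
    rays: np.ndarray             # (n, 3) unit bearing vectors r_i
```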

[0015] In some implementations, the camera system 204 may include a plurality of different cameras having different positions fixed relative to one another, and the camera data 208 may include data from each of the different cameras. Alternatively, in some implementations, the camera data 208 may include data from a single moving camera of the camera system 204 over time. The camera data 208 may include any suitable form of data derived from images captured by the camera(s) 206 of the camera system 204.

[0016] The estimator 202 further may be configured to receive a scale prior parameter 210 and a rotation prior parameter 212 as input. The scale prior parameter 210 characterizes scale of the camera system. The rotation prior parameter 212 characterizes a gravity direction. The scale prior parameter 210 and the rotation prior parameter 212 may inform the estimator 202 of behavior of the camera system 204 that may influence current estimations of scale and pose output by the estimator 202. These prior parameters may be derived from sensor data of the camera system 204. For example, the rotation prior parameter 212 can be derived from a gravity direction using measurements from one or more inertial sensors of an inertial measurement unit (IMU) 214. The IMU 214 may include one or more accelerometers, gyroscopes, and/or other motion sensors that are configured to provide an indication of gravity direction. Additionally, as an example, the scale prior parameter 210 can be obtained from a simultaneous localization and mapping (SLAM) system 216 of the camera system 204. The SLAM system 216 may be configured to keep a unit scale utilizing data from the IMU 214. The scale prior parameter 210 and the rotation prior parameter 212 may be provided from any suitable sensor of the camera system 204. In some examples, the scale prior parameter 210 and the rotation prior parameter 212 may be received by the estimator 202 prior to corresponding camera data 208, such that the scale prior parameter 210 and the rotation prior parameter 212 provide hints or suggestions of pose and orientation information for the camera system 204. In other examples, the scale prior parameter 210 and the rotation prior parameter 212 may be received by the estimator 202 substantially at the same time as the corresponding camera data 208. In yet other examples, the scale prior parameter 210 and the rotation prior parameter 212 may be received by the estimator 202 after the corresponding camera data 208 has been received by the estimator 202.
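As one illustration of how a gravity-direction prior might be derived from inertial measurements, the sketch below averages accelerometer samples taken while the device is roughly static, so the dominant sensed acceleration is gravity. The helper name and the static-device assumption are illustrative only:

```python
import numpy as np

def rotation_prior_from_accelerometer(accel_samples):
    """Hypothetical helper: estimate a unit gravity-direction prior g_Q
    in the camera/IMU frame by averaging near-static accelerometer
    readings, where gravity dominates the measured acceleration."""
    mean_accel = np.mean(np.asarray(accel_samples, dtype=float), axis=0)
    return mean_accel / np.linalg.norm(mean_accel)

# Samples near (0, -9.81, 0) m/s^2 yield a prior close to (0, -1, 0).
samples = [[0.10, -9.79, 0.05], [-0.02, -9.82, 0.01], [0.03, -9.80, -0.04]]
print(rotation_prior_from_accelerometer(samples))
```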

[0017] In some implementations, sensor noise data 218 optionally may be factored into the scale prior parameter 210 and/or the rotation prior parameter 212. The sensor noise data 218 may provide a quantifiable indication of sensor noise in operation of the IMU 214. In some examples, sensor noise data may correspond to operation of the IMU when the scale prior parameter 210 and the rotation prior parameter 212 are determined. In other words, the sensor noise data 218 may provide an indication of the reliability or accuracy of the scale prior parameter 210 and the rotation prior parameter 212 to the estimator 202.

[0018] The estimator 202 is configured to use the input data (e.g., camera data, scale and rotation prior parameters) to solve an optimization problem to estimate a probable pose/scale of the camera system 204. In one example, the estimator 202 is configured to calculate a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system 204.

[0019] FIG. 3 diagrammatically shows an example similarity transformation 300 for which the estimator 202 shown in FIG. 2 may be configured to solve. The similarity transformation may align a world coordinate system (W) and a camera coordinate system (Q) by estimating a rotation (R), translation (t), and scale (s) of the coordinate systems relative to one another. The inputs to the estimator include 1) camera data, for example a set of three-dimensional points ($p_i$) in the world coordinate system, 2) a set of camera positions (positioned at the end of the ray $s_0 c_i$) in the multi-camera coordinate system, and 3) the gravity vectors ($g$) in the world coordinate system and the multi-camera coordinate system. In this example, the scale prior ($s_0$) influences the length of the ray to properly locate the camera positions (ray length for proper positioning and adjusted scale $= s_0 c_i$). The non-adjusted camera position(s) are denoted by the ray ($c_i$). The ray ($r_i$) extends from the camera position in the direction of the point ($p_i$).
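The geometry of FIG. 3 can be checked numerically. The following sketch (synthetic data and assumed variable names) constructs a ground-truth similarity transformation, derives depths α_i and unit rays r_i consistent with it, and confirms that α_i r_i reproduces the transformed point R p_i + t − s c_i, which is exactly the residual accumulated by Equation (1) in the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ground-truth similarity transformation (world -> generalized camera).
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.5])
s_true = 1.7

# World points p_i and unscaled camera positions c_i in the rig frame.
p = rng.normal(size=(6, 3))
c = rng.normal(size=(6, 3))

# Each ray r_i points from the scaled camera center s*c_i toward the
# transformed point; alpha_i is the depth along that ray.
x = (R_true @ p.T).T + t_true - s_true * c
alpha = np.linalg.norm(x, axis=1)
r = x / alpha[:, None]

# At the true similarity, alpha_i * r_i matches R p_i + t - s c_i exactly.
print(np.allclose(alpha[:, None] * r, x))  # True
```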

[0020] Returning to FIG. 2, the estimator 202 may be configured to solve the similarity transformation by calculating a cost of a cost function for the similarity transformation. For example, the cost function may be a modification of the generalized least-squares cost function shown below as Equation (1). Equation (1) may be used to estimate the pose and scale of a non-central camera, solving for the rotation (R), the translation (t), and the scale (s) that minimize the error/cost. Equation (1) does not include the rotation and scale priors:

$$J(R, t, s, \alpha) = \sum_{i=1}^{n} \left\| \alpha_i r_i - (R p_i + t - s c_i) \right\|^2, \qquad (1)$$

where $r_i$ is a unit vector indicating the direction from the position of the camera $c_i$ to a 3D point $p_i$; $\alpha_i$ is the depth of the point $p_i$ with respect to the camera position $c_i$; $\alpha$ is a vector holding the depths; $R \in SO(3)$ is a rotation matrix; $t \in \mathbb{R}^3$ is a translation vector; and $s \in \mathbb{R}$ is the scale. The pose-and-scale formulation shown in Equation (1) accumulates the errors between the transformed i-th 3D point $(R p_i + t - s c_i)$ and the same point described with respect to the camera, $\alpha_i r_i$. The rotation $R$, the translation $t$, and the scaled camera position $s c_i$ transform a 3D point from the world coordinate system to the coordinate system of the generalized camera.
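A direct evaluation of the Equation (1) cost is straightforward; the sketch below (an assumption-level implementation, not the patent's solver) makes the per-point error accumulation explicit and shows that the cost vanishes when the model matches the observations:

```python
import numpy as np

def gdls_cost(R, t, s, alpha, r, p, c):
    """Pose-and-scale least-squares cost of Equation (1):
    J = sum_i || alpha_i r_i - (R p_i + t - s c_i) ||^2."""
    residuals = alpha[:, None] * r - ((R @ p.T).T + t - s * c)
    return float(np.sum(residuals ** 2))

# Toy check: zero cost at a model that matches the observations exactly.
R, t, s = np.eye(3), np.zeros(3), 1.0
p = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0]])
c = np.zeros((2, 3))
x = (R @ p.T).T + t - s * c
alpha = np.linalg.norm(x, axis=1)
r = x / alpha[:, None]
print(gdls_cost(R, t, s, alpha, r, p, c))  # 0.0
```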

[0021] In order to find the minimizer $(R^*, t^*, s^*, \alpha^*)$, Equation (1) is rewritten as a function that only depends on the rotation matrix. In this step, the translation $t$, scale $s$, and depth $\alpha_i$ can be written as linear functions of the rotation matrix $R$. Thus, it is possible to re-write the pose-and-scale least-squares cost formulation of Equation (1) as Equation (2) shown below:

$$J(R) = \sum_{i=1}^{n} \left\| \alpha_i(R) r_i - \left( R p_i + t(R) - s(R) c_i \right) \right\|^2 = \mathrm{vec}(R)^T M \, \mathrm{vec}(R), \qquad (2)$$

where $\mathrm{vec}(R)$ is a vectorized form of the rotation matrix, and $M$ is a square matrix capturing the constraints from the input 2D-3D correspondences; the dimensions of $M$ depend on the vectorization and representation of $\mathrm{vec}(R)$. Given the cost function $J(R)$, the optimal rotation $R^*$ is found by solving a polynomial system representing the constraint that the gradient with respect to the rotation-quaternion parameters $q$ is null, $\nabla_q J(R^*) = 0$, together with rotation-parameter constraints (e.g., ensuring a unit-norm quaternion).

[0022] Note that although the cost function is discussed herein in terms of solving for the minimizer, the cost function in some examples may be solved such that the cost is below a designated threshold cost.

[0023] In order to impose scale and rotation priors that influence the cost, Equation (1) is modified to include regularizer terms. Adding these regularizer terms leads to the least-squares cost function of Equation (3) shown below:

$$J' = J(R, t, s, \alpha) + \lambda_s (s_0 - s)^2 + \lambda_g \left\| g_Q \times R g_W \right\|^2, \qquad (3)$$

where $s_0$ is the scale prior; $g_Q$ and $g_W$ are the gravity directions of the multi-camera setting and the world, respectively; the symbol $\times$ represents the cross-product operator; and $\lambda_s$ and $\lambda_g$ are weights controlling the contribution of the scale and rotation priors, respectively. These weights (i.e., $\lambda_s$ and $\lambda_g$) may be greater than or equal to zero.

[0024] These priors bias the cost function to penalize scale and rotation estimates that deviate from the prior estimates. In particular, the scale prior regularizer term is an additional parameter that penalizes deviation from a prior assumption about scale. The scale is the reduction or enlargement factor needed for points described in the camera coordinate system to match the metric units of those points described in the world coordinate system. The scale prior regularizer term $\lambda_s (s_0 - s)^2$ is configured to influence the cost by imposing a penalty when the queried scale $s$ deviates from the scale prior $s_0$. The rotation prior, on the other hand, is an additional parameter that captures a notion of difference between a candidate solution and a prior assumption about gravity direction. The rotation prior regularizer term $\lambda_g \| g_Q \times R g_W \|^2$ is configured to influence the cost by imposing a misalignment penalty when the transformed world gravity direction $R g_W$ and the query gravity direction $g_Q$ are misaligned.
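Both regularizer terms of Equation (3) are cheap to evaluate. The sketch below (illustrative values throughout) shows that a candidate rotation whose transformed gravity aligns with the query gravity incurs no rotation penalty, while a scale estimate away from the prior s_0 is penalized quadratically:

```python
import numpy as np

def regularized_cost(J_base, s, s0, R, g_Q, g_W, lam_s, lam_g):
    """Equation (3): add the scale prior and rotation (gravity) prior
    penalties to the base pose-and-scale cost J(R, t, s, alpha)."""
    scale_penalty = lam_s * (s0 - s) ** 2
    gravity_penalty = lam_g * np.sum(np.cross(g_Q, R @ g_W) ** 2)
    return J_base + scale_penalty + gravity_penalty

# Aligned gravity -> zero rotation penalty; scale off by 0.2 -> 10*(0.2)^2.
g_W = np.array([0.0, 0.0, -1.0])
g_Q = np.array([0.0, 0.0, -1.0])
print(regularized_cost(0.0, s=1.0, s0=1.2, R=np.eye(3),
                       g_Q=g_Q, g_W=g_W, lam_s=10.0, lam_g=10.0))  # 0.4
```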

[0025] In some implementations, the estimator 202 may be configured to control the influence of the scale prior parameter and the rotation prior parameter on the cost function that determines the pose and scale estimations. In one example, each prior may have a corresponding scalar or trust weight (i.e., $\lambda_s$ and $\lambda_g$) that may be set based on sensor noise data 218 of the IMU 214 from which the prior was obtained. In some implementations, the weights may be predetermined based on empirical sensor data. In other implementations, the weights may be dynamically adjusted based on the sensor noise data 218. For example, as sensor noise dynamically increases, the weight dynamically decreases such that the corresponding prior parameter has less (or no) influence on the cost of the cost function and, in turn, on the estimations 220. Likewise, as sensor noise dynamically decreases, the weight dynamically increases such that the corresponding prior parameter has greater influence on the cost of the cost function and, in turn, on the estimations 220.
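The patent leaves the exact noise-to-weight mapping open. One plausible scheme, shown below purely as an assumption, shrinks a trust weight as reported sensor noise grows and cuts it to zero above a threshold, fully disabling the corresponding prior:

```python
def trust_weight(noise_std, noise_threshold, base_weight=1.0):
    """Hypothetical mapping from sensor noise to a prior's trust weight
    (lambda_s or lambda_g): weight decreases as noise increases and is
    zeroed above a threshold so the prior has no influence."""
    if noise_std > noise_threshold:
        return 0.0
    return base_weight / (1.0 + noise_std ** 2)

for sigma in (0.0, 0.5, 2.0):
    print(sigma, trust_weight(sigma, noise_threshold=1.0))
# 0.0 -> 1.0, 0.5 -> 0.8, 2.0 -> 0.0 (prior disabled)
```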

[0026] In order to solve for rotation/pose and scale, the cost $J'$ may be re-written as a function that only depends on the rotation matrix. To do so, it is mathematically convenient to define the vector $x$ shown in Equation (4) below.

$$x = \begin{bmatrix} \alpha_1 & \ldots & \alpha_n & s & t^T \end{bmatrix}^T. \qquad (4)$$

The gradient evaluated at the optimal $x^*$ must satisfy the constraint $\nabla_x J' \big|_{x = x^*} = 0$. From this constraint, the relationship shown in Equation (5) below is obtained:

$$x = (A^T A + P)^{-1} A^T W b + (A^T A + P)^{-1} P x_0 = \begin{bmatrix} U \\ S \\ V \end{bmatrix} W b + \lambda_s s_0 l, \qquad (5)$$

where

$$A = \begin{bmatrix} r_1 & c_1 & -I \\ \vdots & \vdots & \vdots \\ r_n & c_n & -I \end{bmatrix}, \qquad b = \begin{bmatrix} p_1 \\ \vdots \\ p_n \end{bmatrix}, \qquad (6)$$

$$P = \begin{bmatrix} 0_{n \times n} & & \\ & \lambda_s & \\ & & 0_{3 \times 3} \end{bmatrix}, \qquad W = \begin{bmatrix} R & & \\ & \ddots & \\ & & R \end{bmatrix}, \qquad x_0 = \begin{bmatrix} 0_n^T & s_0 & 0_3^T \end{bmatrix}^T.$$

$(A^T A + P)^{-1} A^T$ is partitioned into three matrices $U$, $S$, and $V$ such that the depth, scale, and translation parameters are functions of $U$, $S$, and $V$, respectively. These matrices and the vector $l$ can be computed in closed form by exploiting the sparse structure of the matrices $A$ and $P$.
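A dense, non-optimized sketch of Equations (5)-(6) follows. It builds A, b, P, and W explicitly (ignoring the sparse closed-form computation the patent alludes to) and recovers the depths, scale, and translation for a candidate rotation; the function name and demo data are assumptions:

```python
import numpy as np

def solve_x_given_rotation(r, c, p, R, lam_s, s0):
    """Solve Equation (5) for x = [alpha_1..alpha_n, s, t^T]^T given a
    candidate rotation R, with A, b, P, W built per Equation (6)."""
    n = r.shape[0]
    A = np.zeros((3 * n, n + 4))
    b = p.reshape(-1)                        # stacked p_i
    for i in range(n):
        rows = slice(3 * i, 3 * i + 3)
        A[rows, i] = r[i]                    # depth column: r_i
        A[rows, n] = c[i]                    # scale column: c_i
        A[rows, n + 1:] = -np.eye(3)         # translation columns: -I
    P = np.zeros((n + 4, n + 4))
    P[n, n] = lam_s                          # prior constrains only the scale
    W = np.kron(np.eye(n), R)                # block-diagonal R, so W @ b stacks R p_i
    N = A.T @ A + P
    USV = np.linalg.solve(N, A.T)            # rows: U (depths), S (scale), V (translation)
    l = np.linalg.solve(N, np.eye(n + 4)[:, n])  # the vector l of Equation (5)
    x = USV @ (W @ b) + lam_s * s0 * l
    return x, USV, l

# Consistent synthetic data: the recovered scale and translation match truth.
rng = np.random.default_rng(3)
t_true, s_true = np.array([0.1, 0.2, -0.3]), 2.0
p = rng.normal(size=(5, 3))
c = rng.normal(size=(5, 3))
x_cam = p + t_true - s_true * c              # R = I in this toy example
alpha = np.linalg.norm(x_cam, axis=1)
r = x_cam / alpha[:, None]
x, _, _ = solve_x_given_rotation(r, c, p, np.eye(3), lam_s=0.0, s0=1.0)
print(np.round(x[5], 3), np.round(x[6:], 3))  # ~2.0 and ~[0.1, 0.2, -0.3]
```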

[0027] Equation (5) provides a linear relationship between the depth, scale, and translation and the rotation matrix. Consequently, these parameters are computed as a function of the rotation matrix in Equation (7) shown below

$$\alpha_i(R) = u_i^T W b + \lambda_s s_0 l_i$$

$$s(R) = S W b + \lambda_s s_0 l_{n+1}$$

$$t(R) = V W b + \lambda_s s_0 l_t, \qquad (7)$$

where $u_i^T$ is the i-th row of matrix $U$, $l_j$ is the j-th entry of the vector $l$, and $l_t$ corresponds to the last three entries of the vector $l$.

[0028] In order to re-write the regularized least-squares cost function (i.e., Equation (3)) as clearly as possible, Equation (8) is formulated as shown below.

$$e_i = \alpha_i(R) r_i - \left( R p_i + t(R) - s(R) c_i \right) = \eta_i + k_i$$

$$\eta_i = u_i^T W b \, r_i - R p_i - V W b + S W b \, c_i$$

$$k_i = \lambda_s s_0 \left( l_i r_i - l_t + l_{n+1} c_i \right). \qquad (8)$$

The residual $e_i$ is divided into two terms: $\eta_i$, the residual part considering the unconstrained terms; and $k_i$, the residual part considering the scale-prior-related terms. Note that when $\lambda_s = 0$, $k_i$ becomes null and $e_i$ becomes the residual corresponding to the original cost function (i.e., Equation (1)).

[0029] Using the definitions from Equation (8) and the scale, depth, and translation relationships shown in Equation (7), the regularized least-squares cost function shown in Equation (3) can be re-written as Equation (9) shown below

$$J' = J'_{\mathrm{gDLS}} + J'_s + J'_g = \mathrm{vec}(R)^T M \, \mathrm{vec}(R) + 2 d^T \mathrm{vec}(R) + k, \qquad (9)$$

where

$$J'_{\mathrm{gDLS}} = \sum_{i=1}^{n} e_i^T e_i = \sum_{i=1}^{n} \left( \eta_i^T \eta_i + 2 k_i^T \eta_i + k_i^T k_i \right) = \mathrm{vec}(R)^T M_{\mathrm{gDLS}} \, \mathrm{vec}(R) + 2 d_{\mathrm{gDLS}}^T \mathrm{vec}(R) + k_{\mathrm{gDLS}} \qquad (10)$$

$$J'_s = \lambda_s \left( s_0 - s(R) \right)^2 = \mathrm{vec}(R)^T M_s \, \mathrm{vec}(R) + 2 d_s^T \mathrm{vec}(R) + k_s$$

$$J'_g = \lambda_g \left\| g_Q \times R g_W \right\|^2 = \mathrm{vec}(R)^T M_g \, \mathrm{vec}(R)$$

$$M = M_{\mathrm{gDLS}} + M_s + M_g, \qquad d = d_{\mathrm{gDLS}} + d_s, \qquad k = k_{\mathrm{gDLS}} + k_s.$$

The parameters of Equation (9) (i.e., $M_{\mathrm{gDLS}}$, $M_s$, $M_g$, $d_{\mathrm{gDLS}}$, $d_s$, $k_{\mathrm{gDLS}}$, and $k_s$) can be computed in closed form and in $O(n)$ time. Equation (9) generalizes the unconstrained quadratic function shown in Equation (2): when both scale and rotation priors are disabled, i.e., $\lambda_g = \lambda_s = 0$, then $J'(R) = J(R)$. Also, note that the weights $\lambda_g$ and $\lambda_s$ control the contribution of each of the priors independently. Such independent control allows the estimator 202 to flexibly adapt the cost function to many scenarios. For instance, these weights can be adjusted so that the cost function reflects the confidence of certain priors based on sensor noise, reduces the effect of noise present in the priors, or fully disables one or both of the priors from influencing the cost of the cost function.

[0030] Equations (4)-(8) may be used as a mechanism to rewrite Equation (3) so that it depends only on the rotation matrix, as encoded in Equations (9) and (10). In particular, Equations (5)-(6) are used to compute depth, scale, and translation as a linear transformation of rotation.

[0031] Given that the prior-based pose-and-scale cost function (i.e., Equation (3)) depends only on the rotation matrix, $R$ can be solved for by minimizing Equation (9). To achieve this step, the rotation matrix $R$ can be represented using a quaternion $q = [q_1 \; q_2 \; q_3 \; q_4]$. To compute all the minimizers of Equation (9), a polynomial system may be built so that it encodes the first-order optimality conditions and the unit-norm-quaternion constraint shown below in Equation (11):

$$\begin{cases} \dfrac{\partial J'}{\partial q_j} = 0, & \forall\, j = 1, \ldots, 4 \\[6pt] q_j \left( q^T q - 1 \right) = 0, & \forall\, j = 1, \ldots, 4. \end{cases} \qquad (11)$$

The polynomial system on $q$ shown in Equation (11) encodes the unit-norm-quaternion constraint via Equation (12) shown below.

$$\frac{\partial \left( q^T q - 1 \right)^2}{\partial q_j} = q_j \left( q^T q - 1 \right) = 0, \qquad \forall\, j. \qquad (12)$$

Equation (12) yields efficient elimination templates and small action matrices, which deliver efficient polynomial solvers. The polynomial solver adopted for the cost function leverages the rotation representation shown below in Equation (13).

$$\mathrm{vec}(R) = \begin{bmatrix} q_1^2 & q_2^2 & q_3^2 & q_4^2 & q_1 q_2 & q_1 q_3 & q_1 q_4 & q_2 q_3 & q_2 q_4 & q_3 q_4 \end{bmatrix}^T. \qquad (13)$$

Given this representation, the dimensions of the parameters of the regularized least-squares cost function shown in Equation (9) become $M \in \mathbb{R}^{10 \times 10}$, $d \in \mathbb{R}^{10}$, and $k \in \mathbb{R}$.
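With that representation, evaluating the rotation-only cost is a small matrix expression. The sketch below implements the vec(R) monomials of Equation (13) and evaluates Equation (9); M, d, and k are placeholder values here, since their closed-form construction is outside this sketch:

```python
import numpy as np

def vec_R(q):
    """The ten quadratic monomials of a unit quaternion q = [q1,q2,q3,q4],
    i.e., the rotation representation of Equation (13)."""
    q1, q2, q3, q4 = q
    return np.array([q1*q1, q2*q2, q3*q3, q4*q4,
                     q1*q2, q1*q3, q1*q4,
                     q2*q3, q2*q4, q3*q4])

def cost_from_rotation(q, M, d, k):
    """Equation (9): J' = vec(R)^T M vec(R) + 2 d^T vec(R) + k."""
    v = vec_R(q / np.linalg.norm(q))   # enforce the unit-norm constraint
    return float(v @ M @ v + 2.0 * d @ v + k)

# Placeholder parameters of the correct dimensions (10x10, 10, scalar).
M, d, k = np.eye(10), np.zeros(10), 0.0
q_identity = np.array([1.0, 0.0, 0.0, 0.0])
print(cost_from_rotation(q_identity, M, d, k))  # 1.0 for these placeholders
```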

[0032] The cost function efficiently yields eight rotations after solving the polynomial system stated in Equation (11). After computing these rotations, the estimator 202 discards complex-valued quaternions whose imaginary components exceed a certain threshold, and then recovers the depth, scale, and translation using Equation (7). Finally, the estimator 202 uses the computed similarity transformations to discard solutions that map the input 3D points behind any camera. By solving for the rotation matrix using Equations (9) and (10), the pose and scale may be estimated more quickly and accurately than by using Equation (2), which does not include rotation and scale priors.
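The filtering step described above might look like the following sketch; the tolerance value, function names, and the positive-depth (cheirality) test are assumptions consistent with the paragraph:

```python
import numpy as np

def filter_rotations(quaternions, imag_tol=1e-6):
    """Keep only near-real quaternion roots of the polynomial system,
    normalized to unit length; discard clearly complex solutions."""
    kept = []
    for q in quaternions:
        q = np.asarray(q)
        if np.max(np.abs(q.imag)) < imag_tol:
            q_real = q.real
            kept.append(q_real / np.linalg.norm(q_real))
    return kept

def passes_cheirality(alpha):
    """Reject similarity transformations that map any input 3D point
    behind a camera, i.e., any non-positive depth alpha_i."""
    return bool(np.all(np.asarray(alpha) > 0.0))

candidates = [np.array([1.0 + 0.0j, 0.0, 0.0, 0.0]),
              np.array([0.5 + 0.3j, 0.1, 0.0, 0.0])]
print(len(filter_rotations(candidates)))     # 1 (complex root discarded)
print(passes_cheirality([2.0, 1.5, -0.2]))   # False (point behind camera)
```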

[0033] As discussed above, the scale prior parameter 210 and the rotation prior parameter 212 may be configured to influence the cost of the cost function. The estimator 202 may be configured to perform the calculation to optimize at least a rotation term and a translation term of the similarity transformation. The estimator 202 may be further configured to perform the calculation to optimize the scale and depth terms as well, in some examples.

[0034] In some examples, the estimator 202 may determine a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost. The threshold cost may be set to any value that provides a suitably accurate estimation of scale and pose within a suitably quick timeframe. In some examples, the threshold cost may correspond to a minimized cost calculated for the cost function, as discussed above.

[0035] Further, the estimator 202 may be configured to output the estimated scale and pose 220 of the camera system 204 based on the solved similarity transformation. The similarity transformation may include one or more rigid transformations (e.g., rotation, translation) followed by a dilation (e.g., reduce/enlarge by a scale factor). Such a similarity transformation (and/or corresponding output of the estimator 202) may be used to align two or more reconstructions of a 3D model of a scene (e.g., 3D point cloud) computed with different sensors, or to localize a rig of cameras within an existing 3D reconstruction, among other scenarios. The estimator 202 may be configured to output the pose and scale estimations 220 to any suitable processing logic of the camera system 204, the computing system 200, and/or any other suitable device.

[0036] In some implementations, the computing system 200 may include both the estimator 202 and the camera system 204 incorporated into a same device. For example, the estimator and the camera system may be incorporated into an augmented reality device worn by a user. In another example, the estimator and the camera system may be incorporated into a robotic device configured for autonomous movement. In other implementations, the camera system 204 may be incorporated into a different device than the estimator 202. For example, the computing system 200 may be remote from the camera system 204 and connected via a computer network. In some examples, the estimator 202 may be implemented by a service computing system that is configured to receive the scale and rotation priors and camera data from the camera system, and return the pose and scale estimations to the camera system or an associated computing device. In some examples, the service may be a hologram sharing service for augmented reality devices. The hologram sharing service may maintain a reference map, and a client camera system may send input data to the hologram sharing service such that the hologram sharing service can use the input data to localize queried images within the reference map. The hologram sharing service may send the calculated pose and scale estimations back to the client camera system so that holograms can be positioned correctly on an augmented reality display associated with the client camera system in a quick and accurate manner.
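As a sketch of that client/service split (hypothetical types only; the patent specifies no wire format), a localization request would carry the camera data plus the priors, and the response the solved similarity transformation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LocalizationRequest:
    """Hypothetical payload from a client camera system to the service."""
    points_3d: List[List[float]]        # 3D points / correspondences
    rays: List[List[float]]             # unit bearing vectors r_i
    camera_centers: List[List[float]]   # rig camera positions c_i
    scale_prior: float                  # s_0, e.g., from a SLAM system
    gravity_prior: List[float]          # g_Q, e.g., from the IMU

@dataclass
class LocalizationResponse:
    """Hypothetical response: the solved similarity transformation."""
    rotation: List[List[float]]         # R (3x3)
    translation: List[float]            # t
    scale: float                        # s
```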

[0037] FIG. 4 shows an example method 400 for estimating pose and scale of a camera system including one or more cameras. For example, the method 400 may be performed by the estimator 202 shown in FIG. 2. At 402, rotation and scale prior parameters are received. The rotation prior parameter characterizes a gravity direction. The scale prior parameter may characterize scale, depth, and/or translation of the camera system. In some examples, the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors of the camera system or an associated computing device.

[0038] In some implementations where the estimator is configured to dynamically adjust the contribution of the rotation and scale prior parameters based on sensor noise of one or more inertial sensors, at 404, sensor noise data optionally may be factored into the scale prior parameter and/or the rotation prior parameter. At 406, camera data for a scene is received from the one or more cameras of the camera system. In some examples, the camera data may include a collection of three-dimensional (3D) points of a 3D model of the scene. In some examples, the camera data may include whole or partial image frames of the scene that are optionally processed to form a 3D point cloud of the scene.

[0039] In some examples, the camera system may include a plurality of different cameras having different positions fixed relative to each other and the camera data may correspond to each of the different cameras. In other examples, the camera system may include a single camera movable throughout the scene and the camera data may correspond to the single camera at different positions within the scene.

[0040] At 408, a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system is calculated. In one example, the cost may be calculated using Equations (9) and (10) described above. Such calculations may be performed to optimize a rotation term and a translation term of the similarity transformation. Further, the cost function includes the rotation prior parameter and the scale prior parameter such that the cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter. In some examples, the rotation prior parameter and the scale prior parameter selectively influence the cost of the cost function by selectively imposing a penalty on the cost of the cost function based on queried input data differing from prior data.

[0041] In some implementations, the cost function may include a rotation weight corresponding to the rotation prior parameter and a scale weight corresponding to the scale prior parameter. In such implementations, at 410, optionally the rotation weight and the scale weight may be dynamically adjusted based on sensor noise data factored into the scale prior parameter and/or the rotation prior parameter. As an example, the rotation weight and the scale weight may be dynamically decreased as sensor noise increases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have less influence on the cost function as sensor noise increases. As another example, the rotation weight and the scale weight may be dynamically increased as sensor noise decreases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have more influence on the cost function as sensor noise decreases. In some examples, either or both of the rotation weight and the scale weight may be dynamically set to zero based on sensor noise being greater than a threshold noise level such that the corresponding rotation prior parameter and the corresponding scale prior parameter have no influence on the cost function.

[0042] At 412, a solved similarity transformation is determined upon calculating a cost for the cost function that is less than a threshold cost. The threshold cost may be set to any suitable cost. In some examples, the threshold cost is a minimized cost for the cost function. At 414, an estimated scale and pose of the camera system is output based on the solved similarity transformation. The method 400 may be performed for each camera (i) of a plurality of cameras of a multi-camera system (or a single moving camera).

[0043] The herein described pose and scale estimation method can be broadly applicable to a variety of multi-camera or moving-camera applications, such as augmented reality and robotics applications. For instance, when trying to align a 3D reconstruction A (e.g., computed via a structure-from-motion algorithm) and a reconstruction B (e.g., computed with a SLAM system), the pose-and-scale approach can be used to estimate a transformation that aligns both reconstructions by exploiting sensor information. This alignment reveals pose and scale information that enables accurate AR renderings and/or precise localization of the position and orientation of a mobile device within an environment. The rotation and scale prior parameters provided by external sensors can be selectively used as input to quickly and accurately estimate pose and scale of a camera system. In some implementations, the contributions of these rotation and scale prior parameters can be selectively reduced or minimized based on the sensor noise of the sensor from which the prior parameters were obtained. In this way, the estimation method may be robust to noisy scale and gravity input data.

[0044] The methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as an executable computer-application program, a network-accessible computing service, an application-programming interface (API), a library, or a combination of the above and/or other compute resources.

[0045] FIG. 5 schematically shows a simplified representation of a computing system 500 configured to provide any or all of the compute functionality described herein. Computing system 500 may take the form of one or more personal computers, network-accessible server computers, tablet computers, home-entertainment computers, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phones), virtual/augmented/mixed reality computing devices, wearable computing devices, Internet of Things (IoT) devices, embedded computing devices, camera systems, robotic devices, autonomous vehicles, and/or other computing devices. For example, computing system 500 may take the form of computing system 200 and/or camera system 204 shown in FIG. 2.

[0046] Computing system 500 includes a logic subsystem 502 and a storage subsystem 504. Computing system 500 may optionally include a display subsystem 506, input subsystem 508, communication subsystem 510, and/or other subsystems not shown in FIG. 5.

[0047] Logic subsystem 502 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, or other logical constructs. The logic subsystem may include one or more hardware processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware devices configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely-accessible, networked computing devices configured in a cloud-computing configuration.

[0048] Storage subsystem 504 includes one or more physical devices configured to temporarily and/or permanently hold computer information such as data and instructions executable by the logic subsystem. When the storage subsystem includes two or more devices, the devices may be collocated and/or remotely located. Storage subsystem 504 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. Storage subsystem 504 may include removable and/or built-in devices. When the logic subsystem executes instructions, the state of storage subsystem 504 may be transformed, e.g., to hold different data.

[0049] Aspects of logic subsystem 502 and storage subsystem 504 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.

[0050] The logic subsystem and the storage subsystem may cooperate to instantiate one or more logic machines. As used herein, the term “machine” is used to collectively refer to the combination of hardware, firmware, software, instructions, and/or any other components cooperating to provide computer functionality. In other words, “machines” are never abstract ideas and always have a tangible form. A machine may be instantiated by a single computing device, or a machine may include two or more sub-components instantiated by two or more different computing devices. In some implementations a machine includes a local component (e.g., software application executed by a computer processor) cooperating with a remote component (e.g., cloud computing service provided by a network of server computers). The software and/or other instructions that give a particular machine its functionality may optionally be saved as one or more unexecuted modules on one or more suitable storage devices.

[0051] Non-limiting examples of training procedures for adjusting trainable parameters include supervised training (e.g., using gradient descent or any other suitable optimization method), zero-shot, few-shot, unsupervised learning methods (e.g., classification based on classes derived from unsupervised clustering methods), reinforcement learning (e.g., deep Q learning based on feedback) and/or generative adversarial neural network training methods, belief propagation, RANSAC (random sample consensus), contextual bandit methods, maximum likelihood methods, and/or expectation maximization. In some examples, a plurality of methods, processes, and/or components of systems described herein may be trained simultaneously with regard to an objective function measuring performance of collective functioning of the plurality of components (e.g., with regard to reinforcement feedback and/or with regard to labelled training data). Simultaneously training the plurality of methods, processes, and/or components may improve such collective functioning. In some examples, one or more methods, processes, and/or components may be trained independently of other components (e.g., offline training on historical data).

[0052] When included, display subsystem 506 may be used to present a visual representation of data held by storage subsystem 504. This visual representation may take the form of a graphical user interface (GUI). Display subsystem 506 may include one or more display devices utilizing virtually any type of technology. In some implementations, display subsystem may include one or more virtual-, augmented-, or mixed reality displays.

[0053] When included, input subsystem 508 may comprise or interface with one or more input devices. An input device may include a sensor device or a user input device. Examples of user input devices include a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition.

[0054] When included, communication subsystem 510 may be configured to communicatively couple computing system 500 with one or more other computing devices. Communication subsystem 510 may include wired and/or wireless communication devices compatible with one or more different communication protocols. The communication subsystem may be configured for communication via personal-, local- and/or wide-area networks.

[0055] In an example, a scale and pose estimation method for a camera system comprises receiving camera data for a scene acquired by one or more cameras of the camera system, receiving a rotation prior parameter characterizing a gravity direction, receiving a scale prior parameter characterizing scale of the camera system, calculating a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter, determining a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost, and outputting an estimated scale and pose of the camera system based on the solved similarity transformation. In this example and/or other examples, the threshold cost may be a minimized cost for the cost function. In this example and/or other examples, the rotation prior parameter and the scale prior parameter may selectively influence the cost of the cost function by selectively imposing a penalty on the cost function. In this example and/or other examples, the rotation prior parameter and the scale prior parameter may be derived from measurements of one or more inertial sensors. In this example and/or other examples, the cost function may include a rotation weight corresponding to the rotation prior parameter and a scale weight corresponding to the scale prior parameter. In this example and/or other examples, the rotation weight and the scale weight may be adjusted based on sensor noise of the one or more inertial sensors. In this example and/or other examples, the rotation weight and the scale weight may decrease as sensor noise increases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have less influence on the cost function as sensor noise increases, and the rotation weight and the scale weight may increase as sensor noise decreases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have more influence on the cost function as sensor noise decreases. In this example and/or other examples, the rotation weight and the scale weight may be set to zero based on sensor noise being greater than a threshold noise level such that the corresponding rotation prior parameter and the corresponding scale prior parameter have no influence on the cost function. In this example and/or other examples, the camera data may include a collection of three-dimensional (3D) points of a 3D model of the scene. In this example and/or other examples, the camera system may include a plurality of different cameras having different positions fixed relative to each other. In this example and/or other examples, the camera system may include a single camera movable throughout the scene.

[0056] In an example, a computing system comprises one or more logic machines, one or more storage machines holding instructions executable by the one or more logic machines to receive camera data for a scene acquired by one or more cameras of a camera system, receive a rotation prior parameter characterizing a gravity direction, receive a scale prior parameter characterizing scale of the camera system, calculate a cost of a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter and the scale prior parameter, determine a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost, and output an estimated scale and pose of the camera system based on the solved similarity transformation. In this example and/or other examples, the threshold cost may be a minimized cost for the cost function. In this example and/or other examples, the rotation prior parameter and the scale prior parameter selectively influence the cost of the cost function by selectively imposing a penalty on the cost of the cost function. In this example and/or other examples, the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors. In this example and/or other examples, the cost function may include a rotation weight corresponding to the rotation prior parameter and a scale weight corresponding to the scale prior parameter. In this example and/or other examples, the rotation weight and the scale weight may be adjusted based on sensor noise of the one or more inertial sensors. In this example and/or other examples, the rotation weight and the scale weight may decrease as sensor noise increases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have less influence on the cost function as sensor noise increases, and the rotation weight and the scale weight may increase as sensor noise decreases such that the corresponding rotation prior parameter and the corresponding scale prior parameter have more influence on the cost function as sensor noise decreases. In this example and/or other examples, the rotation weight and the scale weight may be set to zero based on sensor noise being greater than a threshold noise level such that the corresponding rotation prior parameter and the corresponding scale prior parameter have no influence on the cost function.

[0057] In an example, a scale and pose estimation method for a camera system comprises receiving camera data for a scene acquired by one or more cameras of the camera system, receiving a rotation prior parameter characterizing a gravity direction, receiving a scale prior parameter characterizing scale of the camera system, wherein the rotation prior parameter and the scale prior parameter are derived from measurements of one or more inertial sensors, calculating a cost function for a similarity transformation that is configured to encode a scale and pose of the camera system, such calculation being performed to optimize a rotation term and a translation term of the similarity transformation, where the cost of the cost function is selectively influenced by the rotation prior parameter, a rotation weight corresponding to the rotation prior parameter, the scale prior parameter and a scale weight corresponding to the scale prior parameter, adjusting the rotation weight and the scale weight based on the sensor noise of the one or more inertial sensors, determining a solved similarity transformation upon calculating a cost for the cost function that is less than a threshold cost, and outputting an estimated scale and pose of the camera system based on the solved similarity transformation.

[0058] This disclosure is presented by way of example and with reference to the associated drawing figures. Components, process steps, and other elements that may be substantially the same in one or more of the figures are identified coordinately and are described with minimal repetition. It will be noted, however, that elements identified coordinately may also differ to some degree. It will be further noted that some figures may be schematic and not drawn to scale. The various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make certain features or relationships easier to see.

[0059] It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
