This tutorial is intended for readers who have never used Lie groups but want to apply this useful tool. Since the topic of manifolds is too large to cover in a single tutorial, we concentrate on the facts needed to compute the Jacobian matrices used in later optimization. Example code is given for each part of the tutorial. If you have any questions about this tutorial, please email me.
A group is a set together with operations that can be applied to the elements of that set. We are familiar with 3D Euclidean space (a vector space). For example, $a=(1,2,3)$ and $b=(3,4,5)$ are elements of the vector space, and the addition operator gives $c=a+b$, which is again an element of the vector space. Clearly $SO(3)$ is not a vector space, as we cannot simply add two rotation matrices. However, $SO(3)$ and $SE(3)$ are Lie groups, which have five properties. We write $C$ for elements of $SO(3)$ and $T$ for elements of $SE(3)$. Suppose $C_1,C_2,C_3\in SO(3)$ and $T_1,T_2,T_3\in SE(3)$:
1. Smooth manifold: $SO(3)$ and $SE(3)$ are also differentiable manifolds.
2. Closure: $C_1C_2\in SO(3)$ and $T_1T_2\in SE(3)$.
3. Associativity: $(C_1C_2)C_3 = C_1(C_2C_3)$ and $(T_1T_2)T_3=T_1(T_2T_3)$.
4. Identity: $C_1I=IC_1=C_1$ and $T_1I=IT_1=T_1$.
5. Invertibility: $C_1^{-1}\in SO(3)$ and $T_1^{-1}\in SE(3)$.

The necessary and sufficient conditions for $SO(3)$ are $C^TC=I$ and $\det(C)=1$; for $SE(3)$,
\begin{equation*}
T=\begin{bmatrix} C & t \\ 0^T & 1 \end{bmatrix}, \quad C\in SO(3),\; t\in\mathbb{R}^3.
\end{equation*}
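As a quick numerical illustration, the closure property and the $C^TC=I$, $\det(C)=1$ conditions can be checked in a few lines of NumPy (the tutorial's own examples are MATLAB; the helper names `rot_z` and `make_T` here are ours):

```python
import numpy as np

def rot_z(theta):
    """A simple element of SO(3): rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(C, t):
    """Assemble T = [C, t; 0, 1] in SE(3)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = C, t
    return T

C1, C2 = rot_z(0.3), rot_z(1.1)
C = C1 @ C2                                      # closure: still a rotation
assert np.allclose(C.T @ C, np.eye(3))           # C^T C = I
assert np.isclose(np.linalg.det(C), 1.0)         # det(C) = 1

T1 = make_T(C1, np.array([1.0, 2.0, 3.0]))
T2 = make_T(C2, np.array([0.5, -1.0, 0.0]))
T = T1 @ T2                                      # closure in SE(3)
assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0])   # bottom row [0, 0, 0, 1] preserved
print("SO(3)/SE(3) group conditions hold numerically")
```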
Every Lie group is associated with a Lie algebra, which lives in a vector space $V$. More specifically, the Lie algebras we care about consist of skew-symmetric matrices (a subspace of the square real matrices), equipped with an operator called the Lie bracket.
1. Closure: $[X,Y]\in V$.
2. Bilinearity: $[aX+bY,Z] = a[X,Z]+b[Y,Z]$ and $[Z,aX+bY] = a[Z,X]+b[Z,Y]$.
3. Alternating: $[X,X]=0$.
4. Jacobi identity: $[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0$.

The above is the general form of the Lie bracket for a Lie algebra. Next we look at two particular Lie algebras, $\mathfrak{so}(3)$ and $\mathfrak{se}(3)$. The Lie algebra associated with $SO(3)$ is given by
1. Vector space: $\mathfrak{so}(3)=\{\Phi=\phi^\wedge\in\mathbb{R}^{3\times3}\mid\phi\in\mathbb{R}^3\}$
2. Field: $\mathbb{R}$
3. Lie bracket: $[\Phi_1,\Phi_2]=\Phi_1\Phi_2-\Phi_2\Phi_1$

With the Lie bracket defined above, one can easily verify that the four general properties hold.
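These properties can be verified numerically for $\mathfrak{so}(3)$. The NumPy sketch below (the helper names `hat` and `bracket` are ours) checks closure, the alternating property, and the Jacobi identity on random skew-symmetric matrices, plus the handy identity $[\phi_1^\wedge,\phi_2^\wedge]=(\phi_1\times\phi_2)^\wedge$:

```python
import numpy as np

def hat(phi):
    """phi^wedge: map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

rng = np.random.default_rng(42)
p1, p2, p3 = rng.standard_normal((3, 3))
X, Y, Z = hat(p1), hat(p2), hat(p3)

XY = bracket(X, Y)
assert np.allclose(XY, -XY.T)                       # closure: [X,Y] is again skew
assert np.allclose(bracket(X, X), np.zeros((3, 3)))  # alternating: [X,X] = 0
jacobi = (bracket(X, bracket(Y, Z))
          + bracket(Y, bracket(Z, X))
          + bracket(Z, bracket(X, Y)))
assert np.allclose(jacobi, np.zeros((3, 3)))         # Jacobi identity
assert np.allclose(XY, hat(np.cross(p1, p2)))        # [p1^, p2^] = (p1 x p2)^
print("so(3) Lie bracket properties verified")
```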
The Lie algebra associated with $SE(3)$ is given by
1. Vector space: $\mathfrak{se}(3)=\{\Xi=\xi^\wedge\in\mathbb{R}^{4\times4}\mid\xi\in\mathbb{R}^6\}$
2. Field: $\mathbb{R}$
3. Lie bracket: $[\Xi_1,\Xi_2]=\Xi_1\Xi_2-\Xi_2\Xi_1$

where
\begin{equation*}
\xi^\wedge=\begin{bmatrix}\rho\\\phi\end{bmatrix}^\wedge=\begin{bmatrix}\phi^\wedge & \rho \\ 0^T & 0\end{bmatrix}\in\mathbb{R}^{4\times4}.
\end{equation*}
We have defined Lie groups and Lie algebras above; here we study the exponential map, which connects the two concepts. The matrix exponential and matrix logarithm are defined as:
\begin{equation}\label{ep:exponentialA}\begin{split}\exp(A) &= I + A + \frac{1}{2!}A^2 +\frac{1}{3!}A^3+\cdots \\ \ln(A) &= \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}(A-I)^n\end{split}\end{equation}
Let $C\in SO(3)$ and $\phi^\wedge\in\mathfrak{so}(3)$. According to Eq.~\eqref{ep:exponentialA},
\begin{equation}\label{eq:em:RSo3}\begin{split}C &= \exp(\phi^\wedge) = \sum_{n=0}^{\infty} \frac{1}{n!}(\phi^\wedge)^n\\ \phi &= \ln(C)^\vee\end{split}\end{equation}
We give a deeper perspective on the non-uniqueness of $\phi$ in the appendix. A closed-form solution for both directions is given by
\begin{equation*}\begin{split}
C &= \cos(\phi)I+(1-\cos(\phi))aa^T+\sin(\phi)a^\wedge\\
\phi &= \arccos\left(\frac{\mathrm{tr}(C)-1}{2}\right)+2\pi m
\end{split}\end{equation*}
where $\phi=\phi a$ (the scalar $\phi$ is the rotation angle), and the unit vector $a$ is the eigenvector of $C$ whose eigenvalue is 1. Similarly, for $T\in SE(3)$ and $\xi^\wedge\in\mathfrak{se}(3)$, according to Eq.~\eqref{ep:exponentialA},
\begin{equation*}\begin{split}
T &= \exp(\xi^\wedge) = \sum_{n=0}^{\infty}\frac{1}{n!}(\xi^\wedge)^n\\
\xi &= \ln(T)^\vee
\end{split}\end{equation*}
We also give a closed-form solution for the above equations:
\begin{equation*}
T=\begin{bmatrix} C & r \\ 0^T & 1\end{bmatrix}, \qquad \rho=J^{-1}r
\end{equation*}
where $J$ is the left Jacobian matrix,
\begin{equation}\label{eq:EM:Pse:jac}\begin{split}J &= \sum_{n=0}^{\infty} \frac{1}{(n+1)!}(\phi^\wedge)^n\\ r &= J\rho\end{split}\end{equation}
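A small NumPy check of these formulas: the closed form for $\exp(\phi^\wedge)$ agrees with the truncated power series, and $\arccos((\mathrm{tr}(C)-1)/2)$ recovers the angle (for $|\phi|<\pi$, i.e. $m=0$). The helper names are ours, not from the tutorial's MATLAB code:

```python
import numpy as np

def hat(phi):
    """phi^wedge: 3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def exp_so3(phi):
    """Closed form: C = cos(a) I + (1 - cos(a)) a a^T + sin(a) a^wedge."""
    angle = np.linalg.norm(phi)
    a = phi / angle
    return (np.cos(angle) * np.eye(3)
            + (1.0 - np.cos(angle)) * np.outer(a, a)
            + np.sin(angle) * hat(a))

def expm_series(A, terms=30):
    """Truncated matrix exponential: sum_{n=0} A^n / n!."""
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

phi = np.array([0.2, -0.4, 0.7])
C = exp_so3(phi)
assert np.allclose(C, expm_series(hat(phi)))     # closed form matches the series
angle = np.arccos((np.trace(C) - 1.0) / 2.0)     # logarithm: recover the angle
assert np.isclose(angle, np.linalg.norm(phi))
print("SO(3) exp/log closed forms verified")
```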
The concept of "Jacobian" here is different from the one used in optimization. This Jacobian matrix is defined in Eq.~\eqref{eq:EM:Pse:jac} and is used in two scenarios: transforming $\rho$ (the translation part of $\xi$) to $r$ in $SE(3)$, and in derivatives. As with $SO(3)$, we can give a closed-form solution:
\begin{equation*}
J=\frac{\sin(\phi)}{\phi}I + \left(1-\frac{\sin(\phi)}{\phi}\right)aa^T + \frac{1-\cos(\phi)}{\phi}a^\wedge
\end{equation*}
In the same way, we can also derive expressions for $J^{-1}$ and $J^TJ$.
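The closed form of the left Jacobian can likewise be checked against its series definition, and the relation $r=J\rho$, $\rho=J^{-1}r$ is easy to confirm. A NumPy sketch (helper names ours):

```python
import numpy as np

def hat(phi):
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def left_jacobian(phi):
    """Closed form: J = sin(a)/a I + (1 - sin(a)/a) a a^T + (1 - cos(a))/a a^wedge."""
    angle = np.linalg.norm(phi)
    a = phi / angle
    return (np.sin(angle) / angle * np.eye(3)
            + (1.0 - np.sin(angle) / angle) * np.outer(a, a)
            + (1.0 - np.cos(angle)) / angle * hat(a))

def left_jacobian_series(phi, terms=30):
    """J = sum_{n=0} (phi^wedge)^n / (n+1)!."""
    A = hat(phi)
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / (n + 1)   # term_n = A^n / (n+1)!
        out = out + term
    return out

phi = np.array([0.3, 0.1, -0.5])
J = left_jacobian(phi)
assert np.allclose(J, left_jacobian_series(phi))  # closed form matches the series
rho = np.array([1.0, -2.0, 0.5])
r = J @ rho                                       # r = J rho
assert np.allclose(np.linalg.solve(J, r), rho)    # rho = J^{-1} r
print("left Jacobian closed form verified")
```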
This section can be covered later.
For scalar values, we have
\begin{equation*}\exp(a)\exp(b)=\exp(a+b)\end{equation*}
This identity does not hold for matrices in general; the Baker–Campbell–Hausdorff (BCH) formula does this job for Lie groups (of course its form is different). After some derivation we have
\begin{equation*}\exp(A+B)=\lim_{\alpha\to\infty}\left(\exp(A/\alpha)\exp(B/\alpha)\right)^{\alpha}\end{equation*}
For the special case of $SO(3)$, we have
\begin{equation*}
\ln(C_1C_2)^\vee\approx\begin{cases} J(\phi_2)^{-1}\phi_1+\phi_2, & \text{if } \phi_1 \text{ small}\\ \phi_1+J(-\phi_1)^{-1}\phi_2, & \text{if } \phi_2 \text{ small}\end{cases}
\end{equation*}
Similarly we can get
\begin{equation*}
\ln(T_1T_2)^\vee\approx\begin{cases} J(\xi_2)^{-1}\xi_1+\xi_2, & \text{if } \xi_1 \text{ small}\\ \xi_1+J(-\xi_1)^{-1}\xi_2, & \text{if } \xi_2 \text{ small}\end{cases}
\end{equation*}
We define the distance between two elements of $SO(3)$ as
\begin{equation*}\phi_{12}=\ln(C_1^TC_2)^\vee\end{equation*}
Perturbing $\phi$ by a small $\delta\phi$ results in a new rotation matrix $C'$:
\begin{equation*}
\ln(C'C^T)^\vee\approx\ln\left(\exp\left((J_l\delta\phi)^\wedge\right)CC^T\right)^\vee=J_l\delta\phi
\end{equation*}
The distance between two elements of $SE(3)$ is
\begin{equation*}\xi_{12}=\ln(T_1^{-1}T_2)^\vee\end{equation*}
Perturbing $\xi$ similarly gives
\begin{equation*}\ln(T'T^{-1})^\vee\approx J\delta\xi\end{equation*}
We can derive that
\begin{equation*}\frac{\partial (Cv)}{\partial \phi}=-(Cv)^\wedge J\end{equation*}
If we have a composite function $u(x)$ with $x=Cv$, then by the chain rule
\begin{equation*}\frac{\partial u}{\partial \phi}=\frac{\partial u}{\partial x}\frac{\partial x}{\partial \phi}=-\frac{\partial u}{\partial x}(Cv)^\wedge J\end{equation*}
After substituting the perturbation variable $\psi$, we have the simpler case
\begin{equation*}\frac{\partial (Cv)}{\partial \psi}=-(Cv)^\wedge\end{equation*}
which has no $J$. Let $T\in SE(3)$ and let $p$ be a point in homogeneous coordinates (HC); we can find that:
\begin{equation*}
Tp=\exp(\xi^\wedge)T_{op}p\approx(I+\xi^\wedge)T_{op}p=T_{op}p+\xi^\wedge T_{op}p=T_{op}p+(T_{op}p)^\odot\xi
\end{equation*}
So we can get the derivative easily:
\begin{equation*}\frac{\partial (Tp)}{\partial \xi}=(T_{op}p)^\odot\end{equation*}
Some background knowledge follows. The first equation below is a property of skew-symmetric matrices; using it, we can reduce high powers of a skew matrix to scalar multiples of lower powers.
\begin{equation}\label{eq:Apwhy}\begin{split}w \in \mathbb{R}^3,\; (w^{\wedge})^3 &= - (w^Tw)w^{\wedge} \\ \sin(\theta) &= \frac{1}{1!}\theta - \frac{1}{3!}\theta^3+\cdots\\ \cos(\theta) & = 1-\frac{1}{2!}\theta^2 + \frac{1}{4!}\theta^4-\cdots\end{split}\end{equation}
Let $\phi=\phi a$ with $\phi\in\mathbb{R}$, $a\in\mathbb{R}^3$, $|a|=1$. Referring to Eq.~\eqref{ep:exponentialA} and Eq.~\eqref{eq:Apwhy}, we can expand the exponential map as
\begin{equation*}\exp(\phi^\wedge)=\exp(\phi a^\wedge)=\cos(\phi)I+(1-\cos(\phi))aa^T+\sin(\phi)a^\wedge\end{equation*}
Since $\phi$ appears only inside the $\cos$ and $\sin$ functions, the same exponential is obtained for every $\phi+2\pi m$, $m\in\mathbb{Z}$. Note that the rotation matrix is unique, but the 3-parameter representation is not, so 3-parameter representations of rotation suffer from a singularity problem. The exponential map also provides another way to see the determinant of a rotation matrix. For any square matrix $A$ with complex elements, we have
\begin{equation*}\det(\exp(A))=\exp(\mathrm{tr}(A))\end{equation*}
In our case,
\begin{equation*}\det(C)=\det(\exp(\phi^\wedge))=\exp(\mathrm{tr}(\phi^\wedge))=\exp(0)=1\end{equation*}
We give a simplified bundle adjustment (BA) example here, where the 3D points are known ground truth and the pose is to be estimated. Let $K$ be the intrinsic matrix, $P_G,P_L$ be the 3D points in the global and local frames, and $u,v$ be the measured pixel coordinates.
\begin{equation*}\begin{split}
P_L &= R^{-1}(P_G-t)\\
obj &= K P_L/P_L(3) - \begin{bmatrix}u\\v\end{bmatrix}
\end{split}\end{equation*}
Jacobian matrix: we differentiate the objective function with respect to $R$ and $t$. Let $v=(P_G-t)$ and let $\delta$ be the perturbation on $R$. We can derive the derivatives of $P_L$ with respect to the variables:
\begin{equation*}\begin{split}
R^{-1}v &= (\exp(\delta^\wedge)R_{op})^{-1}v = R_{op}^{-1}\exp(\delta^\wedge)^{-1}v = R_{op}^{-1}\exp(-\delta^\wedge)v\\
&\approx R_{op}^{-1}(I-\delta^\wedge)v = R_{op}^{-1}v - R_{op}^{-1}\delta^\wedge v = R_{op}^{-1}v + R_{op}^{-1}v^\wedge\delta\\
\frac{\partial P_L}{\partial \delta} &= R_{op}^{-1}v^\wedge\\
\frac{\partial P_L}{\partial t_x} &= R^{-1}(-[1;0;0])
\end{split}\end{equation*}
We then derive the derivative of the objective function $f$ with respect to the variables:
\begin{equation*}
f' = \frac{K\left(P_L'\,P_L(3) - P_L\,P_L(3)'\right)}{(P_L(3))^2}
\end{equation*}
Please refer to the code "run_so3_spBA.m". In the above code, we use 3D points to do the BA, which is not convenient. In this part, we give a homogeneous coordinates (HC) example; you can refer to the lecture given by Prof. Cyrill Stachniss. HC can be regarded as another coordinate system with its own axes and origin, so we can use $SE(3)$ to represent the transformations:
\begin{equation*}\begin{split}
T &= \begin{bmatrix}R & t\\0^T & 1\end{bmatrix}\\
P_G &= T P_L\\
P_L &= T^{-1}P_G\\
P_I &= K\,\mathrm{eye}(3,4)\,P_L\\
obj &= P_I - P_I(3)\begin{bmatrix}u\\v\\1\end{bmatrix}
\end{split}\end{equation*}
Jacobian matrix: we can use a perturbation to derive the derivatives with respect to $T$, which is very straightforward.
\begin{equation*}\begin{split}
T^{-1}p &= (\exp(\xi^\wedge)T_{op})^{-1}p = T_{op}^{-1}\exp(\xi^\wedge)^{-1}p = T_{op}^{-1}\exp(-\xi^\wedge)p\\
&\approx T_{op}^{-1}(I-\xi^\wedge)p = T_{op}^{-1}p - T_{op}^{-1}\xi^\wedge p = T_{op}^{-1}p - T_{op}^{-1}p^\odot\xi\\
\frac{\partial P_L}{\partial \xi} &= -T_{op}^{-1}p^\odot\\
\frac{\partial P_I}{\partial \xi} &= K\,\mathrm{eye}(3,4)\frac{\partial P_L}{\partial \xi}\\
\frac{\partial obj}{\partial \xi} &= \frac{\partial P_I}{\partial \xi} - \begin{bmatrix}u\\v\\1\end{bmatrix}\left(\frac{\partial P_I}{\partial \xi}\right)_{(3,:)}
\end{split}\end{equation*}
Amazing! Comparison: running run_so3_spBA takes 20 iterations to converge to the optimal values, while run_se3_hc takes just 5 iterations. So we can conclude that optimization using the Lie group $SE(3)$ converges more efficiently.
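The tutorial's MATLAB scripts (run_so3_spBA.m, run_se3_hc) are not reproduced here, but the key step of the derivation above, $\partial(Tp)/\partial\xi=(T_{op}p)^\odot$, can be sanity-checked by finite differences. This NumPy sketch assumes the $\xi=[\rho;\phi]$ ordering defined earlier; the helper names (`hat3`, `hat6`, `odot`, `expm`) are ours:

```python
import numpy as np

def hat3(phi):
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def hat6(xi):
    """xi^wedge for xi = [rho; phi]: [[phi^, rho], [0, 0]]."""
    X = np.zeros((4, 4))
    X[:3, :3], X[:3, 3] = hat3(xi[3:]), xi[:3]
    return X

def odot(p):
    """p^odot (4x6) for homogeneous p = [eps; eta], so that xi^ p = p^odot xi."""
    out = np.zeros((4, 6))
    out[:3, :3] = p[3] * np.eye(3)
    out[:3, 3:] = -hat3(p[:3])
    return out

def expm(A, terms=30):
    """Truncated matrix exponential series."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(7)
T_op = expm(hat6(0.5 * rng.standard_normal(6)))   # an operating point in SE(3)
p = np.array([1.0, -2.0, 0.5, 1.0])               # homogeneous point

J_analytic = odot(T_op @ p)                       # d(exp(xi^) T_op p)/d xi at xi = 0
J_numeric = np.zeros((4, 6))
eps = 1e-6
for i in range(6):                                # finite-difference columns
    xi = np.zeros(6)
    xi[i] = eps
    J_numeric[:, i] = (expm(hat6(xi)) @ T_op @ p - T_op @ p) / eps
assert np.allclose(J_analytic, J_numeric, atol=1e-5)
print("perturbation Jacobian (T_op p)^odot verified by finite differences")
```

The same machinery, with $T^{-1}p$ in place of $Tp$, checks the $-T_{op}^{-1}p^\odot$ term used in the homogeneous-coordinate BA above.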