
01. Tensor

A tensor is a generalisation of scalar and vector fields: a field with more indices.

In Euclidean space

Rotations

Orthogonal transformations a, b and c from the group of orthogonal transformations in 3 dimensions, O(3), must satisfy the following group axioms (a numerical check follows the list):

a,b∈O(3)\Rightarrow a⋅b∈O(3) closure.
a⋅(b⋅c)=(a⋅b)⋅c associativity.
∃e:a⋅e=e⋅a=a identity.
∃a^{-1}:a^{-1}⋅a=e inverse.
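
These axioms can be checked numerically. A minimal sketch in NumPy, using rotations around the z axis as example group elements (the rot_z helper is my own illustration, not from the original):

    import numpy as np

    def rot_z(alpha):
        # Rotation matrix around the z axis by the angle alpha.
        c, s = np.cos(alpha), np.sin(alpha)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    a, b = rot_z(0.3), rot_z(1.1)
    e = np.eye(3)

    # Closure: the product of two orthogonal matrices is again orthogonal.
    ab = a @ b
    assert np.allclose(ab.T @ ab, e)

    # Identity and inverse:
    assert np.allclose(a @ e, a) and np.allclose(e @ a, a)
    assert np.allclose(np.linalg.inv(a) @ a, e)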

So to actually rotate a vector x⃗ by the transformation a (repeated indices are summed over, Einstein convention), do:

\overline x_i=a_{i,j}⋅x_j

Or back:

x_j=(a^{-1})_{j,i}⋅\overline x_i
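
As a short sketch (continuing with NumPy and the rot_z helper from above), rotating a vector forward and back; the einsum strings mirror the index notation:

    a = rot_z(np.pi / 4)
    x = np.array([1.0, 2.0, 3.0])

    x_bar = np.einsum('ij,j->i', a, x)                      # x̄_i = a_{i,j}·x_j
    x_back = np.einsum('ji,i->j', np.linalg.inv(a), x_bar)  # x_j = (a⁻¹)_{j,i}·x̄_i

    assert np.allclose(x_back, x)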

Rotations leave the length unchanged (we hope), so:

x_i⋅x_i=\overline{x}_j⋅\overline{x}_j
x_i⋅x_i=a_{j,l}⋅x_l⋅a_{j,m}⋅x_m
x_i⋅x_i=(a^T)_{l,j}⋅a_{j,m}⋅x_l⋅x_m

Because of the δ-tensor rule E_{l,m}⋅x_l⋅x_m=x_m⋅x_m (E is the Kronecker delta), it follows that:

E_{l,m}⋅x_l⋅x_m=(a^T)_{l,j}⋅a_{j,m}⋅x_l⋅x_m
E_{l,m}=(a^T)_{l,j}⋅a_{j,m}

Because by definition this holds:

E_{l,m}=(a^{-1})_{l,j}⋅a_{j,m}

... it follows that:

a^{-1}=a^T
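
Numerically (a sketch reusing rot_z from above): the inverse really is the transpose, and lengths are preserved:

    a = rot_z(0.7)
    assert np.allclose(np.linalg.inv(a), a.T)    # a⁻¹ = aᵀ

    x = np.array([1.0, -2.0, 0.5])
    assert np.isclose(x @ x, (a @ x) @ (a @ x))  # x_i·x_i = x̄_j·x̄_j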

So what kinds of rotations are allowed?

\det E=(\det a^T)⋅(\det a)
\det E=(\det a)²
1=(\det a)²
(\det a)=\pm 1

... however, if (\det a)=-1, the transformation also mirrors (a reflection), which is probably not what you want. The proper rotations with (\det a)=+1 form the subgroup SO(3).
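
For illustration (a sketch; the mirror matrix is an example I chose, a reflection through the x-y plane):

    a = rot_z(0.3)
    mirror = np.diag([1.0, 1.0, -1.0])     # orthogonal, but det = -1

    print(np.linalg.det(a))                # ≈ +1: a proper rotation
    print(np.linalg.det(mirror))           # -1: mirrors as well
    assert np.allclose(mirror.T @ mirror, np.eye(3))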

Infinitesimal rotation

Sometimes, infinitesimally small rotations are useful:

For example, the rotation matrix a^z around the z axis by a small angle α can be written to first order using the generator J: an antisymmetric matrix with a single ±1 pair and zeroes everywhere else (for the z axis, with a common sign convention: J_{2,1}=+1, J_{1,2}=-1, all other entries 0).

(a^z)_{i,j}(α)=E_{i,j}+α⋅J_{i,j}

A product of these is:

(a^z)_{i,j}⋅(a^z)_{j,l}=E_{i,l}+2⋅α⋅J_{i,l}+O(α²)

TODO graph.

a(φ⃗)=[a(φ⃗/N)]^N
\lim_{N\rightarrow ∞} (E+φ⃗∙J⃗/N)^N=e^{φ⃗∙J⃗}
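
A numerical sketch of this limit for the z axis, assuming the sign convention J_{2,1}=+1, J_{1,2}=-1 from above:

    alpha = 0.9
    J = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])    # generator of rotations around z

    N = 1_000_000
    approx = np.linalg.matrix_power(np.eye(3) + alpha * J / N, N)

    assert np.allclose(approx, rot_z(alpha), atol=1e-5)    # → e^{αJ}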

Passive rotation

If you want to describe the vector x⃗ in terms of a new coordinate system whose basis vectors e⃗_i have been rotated by a, you do:

x⃗=x_i⋅e⃗_i=\overline{x}_i⋅\overline{e⃗_i}
\overline{x}_i=a_{i,j}⋅x_j
a_{i,j}⋅\overline{e⃗_i}=e⃗_j
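
A sketch of this (reusing rot_z): the components transform with a, the basis vectors change so that the physical vector stays the same:

    a = rot_z(0.5)
    x = np.array([1.0, 2.0, 3.0])

    x_bar = a @ x    # new components x̄_i
    e_bar = a        # row i holds ē⃗_i in old coordinates
                     # (from a_{i,j}·ē⃗_i=e⃗_j it follows that this is a itself)

    # x⃗ = x̄_i·ē⃗_i reproduces the original vector:
    assert np.allclose(np.einsum('i,ik->k', x_bar, e_bar), x)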

Tensors

Rank 0 (scalar)

\overline{φ}(\overline{x⃗})=φ(x⃗)

(rotations shouldn't change physical laws: the scalar field has the same value at the same physical point).

So now let's see what the gradient does. By the chain rule, and since x_k=(a^{-1})_{k,i}⋅\overline{x}_i=(a^T)_{k,i}⋅\overline{x}_i=a_{i,k}⋅\overline{x}_i:

\overline{∂}_i(\overline{U}(\overline{x}_m))=a_{i,k}⋅∂_k(U(x_n))
\overline{x}_m=a_{m,n}⋅x_n

So the gradient of a scalar field transforms like a vector.
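
This can be sanity-checked by finite differences. A sketch, where the scalar field U and the step size h are arbitrary choices of mine:

    def U(x):
        return np.sin(x[0]) + x[1] * x[2]    # an arbitrary scalar field

    def grad_U(x):
        return np.array([np.cos(x[0]), x[2], x[1]])

    a = rot_z(0.4)
    x = np.array([0.3, -1.2, 2.0])
    x_bar = a @ x

    def U_bar(xb):
        return U(a.T @ xb)    # Ū(x̄) = U(x), with x = a⁻¹·x̄ = aᵀ·x̄

    h = 1e-6
    grad_bar = np.array([(U_bar(x_bar + h * np.eye(3)[i])
                          - U_bar(x_bar - h * np.eye(3)[i])) / (2 * h)
                         for i in range(3)])

    # ∂̄_i Ū = a_{i,k}·∂_k U: the gradient transforms like a vector.
    assert np.allclose(grad_bar, a @ grad_U(x), atol=1e-6)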

Rank 1 (vector)

\overline{T}_i(\overline{x}_m)=a_{i,k}⋅T_k(x_n)
\overline{x}_m=a_{m,n}⋅x_n

Rank 2

\overline{T}_{i,j}(\overline{x}_m)=a_{i,k}⋅a_{j,l}⋅T_{k,l}(x_n)
\overline{x}_m=a_{m,n}⋅x_n
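
As a sketch with np.einsum (T is an arbitrary example tensor I made up):

    a = rot_z(0.6)
    T = np.arange(9.0).reshape(3, 3)

    # T̄_{i,j} = a_{i,k}·a_{j,l}·T_{k,l}
    T_bar = np.einsum('ik,jl,kl->ij', a, a, T)

    # Equivalent matrix form: T̄ = a·T·aᵀ
    assert np.allclose(T_bar, a @ T @ a.T)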

Rank c (general)

\overline{T}_{i_1,i_2,...,i_c}(\overline{x}_m)=a_{i_1,j_1}⋅a_{i_2,j_2}⋅...⋅a_{i_c,j_c}⋅T_{j_1,j_2,...,j_c}(x_n)
\overline{x}_m=a_{m,n}⋅x_n
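
A general sketch for rank c: contract one factor of a with every index of T (the transform helper is a hypothetical illustration, not from the original):

    def transform(T, a):
        # T̄_{i1,...,ic} = a_{i1,j1}·...·a_{ic,jc}·T_{j1,...,jc}
        for axis in range(T.ndim):
            T = np.moveaxis(np.tensordot(a, T, axes=(1, axis)), 0, axis)
        return T

    # Rank-3 check against an explicit einsum:
    a = rot_z(0.2)
    T3 = np.random.default_rng(0).normal(size=(3, 3, 3))
    assert np.allclose(transform(T3, a),
                       np.einsum('ia,jb,kc,abc->ijk', a, a, a, T3))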

Author: Danny

Last modification on: Sat, 04 May 2024.