On this page, we will work with expressions of the form
$$
\frac{\partial \underline{\boldsymbol{u}}}{\partial \underline{\boldsymbol{v}}}
$$

that is, the differentiation of a tensor-valued expression with respect to a tensor. In this case, $\underline{\boldsymbol{u}} = u_i\,\underline{\boldsymbol{e}}_i$ and $\underline{\boldsymbol{v}} = v_i\,\underline{\boldsymbol{e}}_i$. Working with constant orthonormal coordinate systems, we use that
$$
\frac{\partial \underline{\boldsymbol{u}}}{\partial \underline{\boldsymbol{v}}} = \frac{\partial u_i}{\partial v_j}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j
$$

Here, constant implies that the base vectors are constant in space. Hence, it suffices to differentiate the coefficients, as the derivative of each base vector is zero. In general, for an $N$th order tensor $\boldsymbol{y} = y_{i_1 i_2 \cdots i_N}\, \underline{\boldsymbol{e}}_{i_1} \otimes \underline{\boldsymbol{e}}_{i_2} \otimes \cdots \otimes \underline{\boldsymbol{e}}_{i_N}$ and an $M$th order tensor $\boldsymbol{x} = x_{j_1 j_2 \cdots j_M}\, \underline{\boldsymbol{e}}_{j_1} \otimes \underline{\boldsymbol{e}}_{j_2} \otimes \cdots \otimes \underline{\boldsymbol{e}}_{j_M}$, we have that
$$
\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}} = \left[\frac{\partial y_{i_1 i_2 \cdots i_N}}{\partial x_{j_1 j_2 \cdots j_M}}\right] \underline{\boldsymbol{e}}_{i_1} \otimes \underline{\boldsymbol{e}}_{i_2} \otimes \cdots \otimes \underline{\boldsymbol{e}}_{i_N} \otimes \underline{\boldsymbol{e}}_{j_1} \otimes \underline{\boldsymbol{e}}_{j_2} \otimes \cdots \otimes \underline{\boldsymbol{e}}_{j_M}
$$

Furthermore, since we can consider each free coefficient, e.g. $u_1$, as a scalar value, we can apply basic calculus rules, such as the chain and product rules. These rules hold even when dummy (summation) indices are involved, as the following example shows. Consider the following in 2d: the tensor $\boldsymbol{b}$ is a function of the tensor $\boldsymbol{a}$, such that $\boldsymbol{b}(\boldsymbol{a}) = \boldsymbol{a}\boldsymbol{a}$ (i.e. $b_{ij} = a_{in}a_{nj}$). We would like to differentiate $\boldsymbol{a}\boldsymbol{b}$ with respect to $\boldsymbol{a}$:
$$
\begin{aligned}
\frac{\partial a_{im}b_{mj}(\boldsymbol{a})}{\partial a_{kl}}
&= \frac{\partial \left[a_{i1}b_{1j}(\boldsymbol{a}) + a_{i2}b_{2j}(\boldsymbol{a})\right]}{\partial a_{kl}}, \quad \left(\begin{matrix}\text{Expand dummy index summation} \Rightarrow 2^4 \text{ individual}\\ \text{scalar expressions, one for each }i,j,k,l\end{matrix}\right)\\
&= \frac{\partial a_{i1}b_{1j}(\boldsymbol{a})}{\partial a_{kl}} + \frac{\partial a_{i2}b_{2j}(\boldsymbol{a})}{\partial a_{kl}}, \quad (\text{Product rule})\\
&= a_{i1}\frac{\partial b_{1j}(\boldsymbol{a})}{\partial a_{kl}} + \frac{\partial a_{i1}}{\partial a_{kl}}b_{1j}(\boldsymbol{a}) + a_{i2}\frac{\partial b_{2j}(\boldsymbol{a})}{\partial a_{kl}} + \frac{\partial a_{i2}}{\partial a_{kl}}b_{2j}(\boldsymbol{a}), \quad (\text{Chain rule}) \\
&= a_{i1}\frac{\partial a_{1n}a_{nj}}{\partial a_{kl}} + \delta_{ik}\delta_{1l} b_{1j}(\boldsymbol{a}) + a_{i2}\frac{\partial a_{2n}a_{nj}}{\partial a_{kl}} + \delta_{ik}\delta_{2l} b_{2j}(\boldsymbol{a}) \\
&= a_{i1}\left[a_{1n}\frac{\partial a_{nj}}{\partial a_{kl}}+\frac{\partial a_{1n}}{\partial a_{kl}}a_{nj}\right] + \delta_{ik}\delta_{1l} b_{1j}(\boldsymbol{a}) + a_{i2}\left[a_{2n}\frac{\partial a_{nj}}{\partial a_{kl}}+\frac{\partial a_{2n}}{\partial a_{kl}}a_{nj}\right] + \delta_{ik}\delta_{2l} b_{2j}(\boldsymbol{a}) \\
&= a_{i1}\left[a_{1n}\delta_{nk}\delta_{jl}+\delta_{1k}\delta_{nl}a_{nj}\right] + \delta_{ik}\delta_{1l} b_{1j}(\boldsymbol{a}) + a_{i2}\left[a_{2n}\delta_{nk}\delta_{jl}+\delta_{2k}\delta_{nl}a_{nj}\right] + \delta_{ik}\delta_{2l} b_{2j}(\boldsymbol{a})\\
&= a_{i1}\left[a_{1k}\delta_{jl}+\delta_{1k}a_{lj}\right] + \delta_{ik}\delta_{1l} b_{1j}(\boldsymbol{a}) + a_{i2}\left[a_{2k}\delta_{jl}+\delta_{2k}a_{lj}\right] + \delta_{ik}\delta_{2l} b_{2j}(\boldsymbol{a}) \\
&= a_{im}\left[a_{mk}\delta_{jl}+\delta_{mk}a_{lj}\right] + \delta_{ik}\delta_{ml} b_{mj}(\boldsymbol{a}), \quad \left( \begin{matrix} \text{Identify as summation,} \\ \text{reinstate dummy indices} \end{matrix} \right)\\
&= a_{im}\left[a_{mk}\delta_{jl}+\delta_{mk}a_{lj}\right] + \delta_{ik} b_{lj}(\boldsymbol{a})
\end{aligned}
$$

The same result is achieved without expanding the $\textcolor{blue}{\text{dummy indices}}$:
$$
\begin{aligned}
\frac{\partial a_{i\textcolor{blue}{m}}b_{\textcolor{blue}{m}j}(\boldsymbol{a})}{\partial a_{kl}}
&= a_{i\textcolor{blue}{m}}\frac{\partial b_{\textcolor{blue}{m}j}(\boldsymbol{a})}{\partial a_{kl}} + \frac{\partial a_{i\textcolor{blue}{m}}}{\partial a_{kl}}b_{\textcolor{blue}{m}j}(\boldsymbol{a})\\
&= a_{i\textcolor{blue}{m}}\frac{\partial a_{\textcolor{blue}{mn}}a_{\textcolor{blue}{n}j}}{\partial a_{kl}} + \delta_{ik}\delta_{\textcolor{blue}{m}l} b_{\textcolor{blue}{m}j}(\boldsymbol{a})\\
&= a_{i\textcolor{blue}{m}}\left[a_{\textcolor{blue}{mn}}\frac{\partial a_{\textcolor{blue}{n}j}}{\partial a_{kl}}+\frac{\partial a_{\textcolor{blue}{mn}}}{\partial a_{kl}}a_{\textcolor{blue}{n}j}\right] + \delta_{ik} b_{lj}(\boldsymbol{a})\\
&= a_{i\textcolor{blue}{m}}\left[a_{\textcolor{blue}{mn}}\delta_{\textcolor{blue}{n}k}\delta_{jl}+\delta_{\textcolor{blue}{m}k}\delta_{\textcolor{blue}{n}l}a_{\textcolor{blue}{n}j}\right] + \delta_{ik} b_{lj}(\boldsymbol{a})\\
&= a_{i\textcolor{blue}{m}}\left[a_{\textcolor{blue}{m}k}\delta_{jl}+\delta_{\textcolor{blue}{m}k}a_{lj}\right] + \delta_{ik} b_{lj}(\boldsymbol{a})
\end{aligned}
$$

And for completeness, this is $\boldsymbol{a}^2 \overline{\otimes} \boldsymbol{I} + \boldsymbol{a}\overline{\otimes}\boldsymbol{a}^{\mathrm{T}} + \boldsymbol{I}\overline{\otimes}\boldsymbol{b}^{\mathrm{T}}$.
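As a sanity check, this result can be verified numerically. Below is a minimal NumPy sketch, assuming the open product convention $[\boldsymbol{A}\overline{\otimes}\boldsymbol{B}]_{ijkl} = A_{ik}B_{jl}$ (consistent with the index expression above) and using central finite differences as the reference:

```python
import numpy as np

n = 2  # the example above is in 2d
rng = np.random.default_rng(0)
a = rng.random((n, n))
b = a @ a                      # b(a) = a a, i.e. b_ij = a_in a_nj
I = np.eye(n)

# Analytic result from above, using [A obar B]_ijkl = A_ik B_jl
analytic = (np.einsum('ik,jl->ijkl', a @ a, I)
            + np.einsum('ik,jl->ijkl', a, a.T)
            + np.einsum('ik,jl->ijkl', I, b.T))

# Reference: central differences of f(a) = a b(a) = a a a w.r.t. each a_kl
h = 1e-6
numeric = np.zeros((n, n, n, n))
for k in range(n):
    for l in range(n):
        ap, am = a.copy(), a.copy()
        ap[k, l] += h
        am[k, l] -= h
        numeric[:, :, k, l] = (ap @ ap @ ap - am @ am @ am) / (2 * h)

print(np.allclose(analytic, numeric))  # True
```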
If we consider $\boldsymbol{a} = f(x) = x\boldsymbol{b}$, where $x$ is a scalar, then
$$
\frac{\partial \boldsymbol{a}}{\partial x} = \frac{\partial (x b_{ij})}{\partial x}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j = b_{ij}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j = \boldsymbol{b}
$$

because $b_{ij}$ doesn't depend on $x$.
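A quick numerical illustration (a sketch; the variable names are ours, not from any library):

```python
import numpy as np

rng = np.random.default_rng(1)
b = rng.random((3, 3))
a = lambda x: x * b                      # a(x) = x b, with scalar x

x, h = 0.7, 1e-6
da_dx = (a(x + h) - a(x - h)) / (2 * h)  # central difference in the scalar x
print(np.allclose(da_dx, b))             # True: da/dx = b
```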
Let's first consider differentiating a tensor with respect to itself. For a first-order tensor, we have
$$
\begin{aligned}
\frac{\partial \underline{\boldsymbol{u}}}{\partial \underline{\boldsymbol{u}}} &= \frac{\partial u_i}{\partial u_j}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \\
\frac{\partial u_i}{\partial u_j} &= \delta_{ij} \\
\frac{\partial \underline{\boldsymbol{u}}}{\partial \underline{\boldsymbol{u}}} &= \boldsymbol{I}
\end{aligned}
$$

This follows as $\partial u_i/\partial u_j$ is 1 if $i=j$ and 0 if $i\neq j$.
If we now consider a 2nd order tensor, we have
$$
\begin{aligned}
\frac{\partial \boldsymbol{a}}{\partial \boldsymbol{a}} &= \frac{\partial a_{ij}}{\partial a_{kl}}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \otimes \underline{\boldsymbol{e}}_k \otimes \underline{\boldsymbol{e}}_l \\
\frac{\partial a_{ij}}{\partial a_{kl}} &= \delta_{ik}\delta_{jl} \\
\frac{\partial \boldsymbol{a}}{\partial \boldsymbol{a}} &= \textbf{\textsf{I}}
\end{aligned}
$$

$\partial a_{ij}/\partial a_{kl}$ is 1 only if $i=k$ and $j=l$; otherwise it is zero. In other words, $\partial a_{ij}/\partial a_{kl} = \delta_{ik}\delta_{jl}$.
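Both identity results are easy to check in code. A sketch, where the 4th order identity tensor is assembled with `einsum` and compared against finite differences:

```python
import numpy as np

n = 3
I = np.eye(n)                  # du_i/du_j = delta_ij, the 2nd order identity

# dA_ij/dA_kl = delta_ik delta_jl, the 4th order identity tensor
II = np.einsum('ik,jl->ijkl', I, I)

# Double-contracting the 4th order identity with any 2nd order tensor returns it
a = np.random.default_rng(2).random((n, n))
print(np.allclose(np.einsum('ijkl,kl->ij', II, a), a))  # True

# Finite-difference check of dA_ij/dA_kl
h = 1e-6
num = np.zeros((n, n, n, n))
for k in range(n):
    for l in range(n):
        ap, am = a.copy(), a.copy()
        ap[k, l] += h
        am[k, l] -= h
        num[:, :, k, l] = (ap - am) / (2 * h)
print(np.allclose(num, II))  # True
```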
To consider a more complicated example, we look at
$$
\begin{aligned}
\frac{\partial \left[\underline{\boldsymbol{v}}\boldsymbol{a}\right]}{\partial \underline{\boldsymbol{v}}} &= \frac{\partial v_k a_{ki}}{\partial v_j}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \\
\frac{\partial v_k a_{ki}}{\partial v_j} &= \frac{\partial v_k}{\partial v_j} a_{ki} = \delta_{kj} a_{ki} = a_{ji} \\
\frac{\partial \left[\underline{\boldsymbol{v}}\boldsymbol{a}\right]}{\partial \underline{\boldsymbol{v}}} &= \boldsymbol{a}^{\mathrm{T}}
\end{aligned}
$$

If we consider $y = f(\boldsymbol{a}) = \boldsymbol{a}:\boldsymbol{a}$, then
$$
\begin{aligned}
\frac{\partial y}{\partial \boldsymbol{a}} &= \frac{\partial a_{kl}a_{kl}}{\partial a_{ij}}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \\
&= \left[\frac{\partial a_{kl}}{\partial a_{ij}} a_{kl} + a_{kl} \frac{\partial a_{kl}}{\partial a_{ij}}\right] \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \\
&= \left[\delta_{ki}\delta_{lj} a_{kl} + a_{kl} \delta_{ki}\delta_{lj}\right] \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j \\
&= \left[a_{ij} + a_{ij}\right] \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j = 2a_{ij}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j = 2\boldsymbol{a}
\end{aligned}
$$
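Both of these results lend themselves to a finite-difference check. A minimal NumPy sketch, assuming random test tensors:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
a = rng.random((n, n))
v = rng.random(n)
h = 1e-6

# d[v a]/dv: perturb each v_j and compare with a^T
num = np.zeros((n, n))
for j in range(n):
    vp, vm = v.copy(), v.copy()
    vp[j] += h
    vm[j] -= h
    num[:, j] = (vp @ a - vm @ a) / (2 * h)
print(np.allclose(num, a.T))        # True: d[v a]/dv = a^T

# d(a : a)/da, where a : a = a_kl a_kl
num2 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ap, am = a.copy(), a.copy()
        ap[i, j] += h
        am[i, j] -= h
        num2[i, j] = (np.sum(ap * ap) - np.sum(am * am)) / (2 * h)
print(np.allclose(num2, 2 * a))     # True: d(a : a)/da = 2 a
```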
Some operations with respect to the coordinates are so common that they have their own name and notation. The concept of a gradient, $\nabla f$, of a scalar function, $f(\underline{\boldsymbol{x}})$, is well known. In our notation, we would then have

$$
\mathrm{grad}(f) = \frac{\partial f}{\partial \underline{\boldsymbol{x}}} = \nabla_i f(\underline{\boldsymbol{x}})\, \underline{\boldsymbol{e}}_i
$$

And we will define the vector operator $\underline{\boldsymbol{\nabla}}$ as
$$
\underline{\boldsymbol{\nabla}} = \nabla_i\, \underline{\boldsymbol{e}}_i = \frac{\partial}{\partial x_i}\, \underline{\boldsymbol{e}}_i
$$

The gradient of higher-order tensors can then be expressed as, e.g., $\underline{\boldsymbol{v}}\otimes\underline{\boldsymbol{\nabla}}$ and $\boldsymbol{a}\otimes\underline{\boldsymbol{\nabla}}$.
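As a sketch of this notation in code, a central-difference `grad` helper (our own name, not a library function) applied to the test field $f(\underline{\boldsymbol{x}}) = \underline{\boldsymbol{x}}\cdot\underline{\boldsymbol{x}}$, whose gradient is $2\underline{\boldsymbol{x}}$:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient: (df/dx_i) e_i."""
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2 * h)
    return g

f = lambda x: x @ x                    # f(x) = x_i x_i
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(grad(f, x), 2 * x))  # True: grad(x . x) = 2 x
```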
As $\underline{\boldsymbol{\nabla}}$ is an operator, we must be explicit about which operand it is acting on. In a larger expression, this can require enclosing the operand in brackets, as the following examples show:
- $\boldsymbol{a}\boldsymbol{b}\otimes\underline{\boldsymbol{\nabla}}$: not clear whether the gradient is acting on $\boldsymbol{b}$ or on the expression $\boldsymbol{a}\boldsymbol{b}$
- $\boldsymbol{a}\left[\boldsymbol{b}\otimes\underline{\boldsymbol{\nabla}}\right]$: gradient acting on $\boldsymbol{b}$
- $\left[\boldsymbol{a}\boldsymbol{b}\right]\otimes\underline{\boldsymbol{\nabla}}$: gradient acting on the expression $\boldsymbol{a}\boldsymbol{b}$
- $\boldsymbol{c}\left[\boldsymbol{a}\boldsymbol{b}\right]\otimes\underline{\boldsymbol{\nabla}}$: not clear whether the gradient is acting on $\boldsymbol{a}\boldsymbol{b}$ or on $\boldsymbol{c}\left[\boldsymbol{a}\boldsymbol{b}\right]$
- $\boldsymbol{c}\left[\left[\boldsymbol{a}\boldsymbol{b}\right]\otimes\underline{\boldsymbol{\nabla}}\right]$: gradient acting on $\boldsymbol{a}\boldsymbol{b}$
In some cases, brackets are also required for regular expressions, e.g. $\textbf{\textsf{C}} = \textbf{\textsf{A}}:\left[\boldsymbol{a}\overline{\otimes}\boldsymbol{b}\right] \neq \textbf{\textsf{D}} = \left[\textbf{\textsf{A}}:\boldsymbol{a}\right]\overline{\otimes}\boldsymbol{b}$ ($\textsf{C}_{ijkl} = \textsf{A}_{ijmn}a_{mk}b_{nl} \neq \textsf{D}_{ijkl} = \textsf{A}_{ikmn}a_{mn}b_{jl}$). However, brackets are more often required when working with the $\underline{\boldsymbol{\nabla}}$ operator: it is always better to add an extra bracket to be extra clear and avoid mistakes.
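The difference between $\textbf{\textsf{C}}$ and $\textbf{\textsf{D}}$ is easy to demonstrate numerically. A sketch with `numpy.einsum`, where the index expressions are taken directly from above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.random((n, n, n, n))
a = rng.random((n, n))
b = rng.random((n, n))

# C_ijkl = A_ijmn a_mk b_nl, i.e. A : [a obar b]
C = np.einsum('ijmn,mk,nl->ijkl', A, a, b)
# D_ijkl = A_ikmn a_mn b_jl, i.e. [A : a] obar b
D = np.einsum('ikmn,mn,jl->ijkl', A, a, b)

print(np.allclose(C, D))  # False: the bracket placement matters
```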
The divergence, $\mathrm{div}(\boldsymbol{v})$, can also be defined more generally by using the $\underline{\boldsymbol{\nabla}}$ operator, e.g.
- Divergence of a 1st order tensor: $\underline{\boldsymbol{v}}\cdot\underline{\boldsymbol{\nabla}}$
- Divergence of a 2nd order tensor: $\boldsymbol{a}\cdot\underline{\boldsymbol{\nabla}}$
Divergence of higher-order tensors is not common. As for the gradient, brackets are crucial to ensure that we know which operand (tensor) $\underline{\boldsymbol{\nabla}}$ is operating on.
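A numerical sketch of both divergences, assuming test fields of our own choosing (linear and quadratic in $\underline{\boldsymbol{x}}$, so the exact results are known):

```python
import numpy as np

def tensor_grad(f, x, h=1e-6):
    """f tensor-times nabla by central differences: the last axis holds d/dx_j."""
    cols = [(f(np.add(x, h * e)) - f(np.subtract(x, h * e))) / (2 * h)
            for e in np.eye(x.size)]
    return np.stack(cols, axis=-1)

A0 = np.array([[1.0, 2.0, 0.0],
               [0.0, 3.0, 1.0],
               [2.0, 0.0, 4.0]])
x = np.array([0.3, -1.2, 0.5])

# 1st order: v(x) = A0 x, so v . nabla = dv_i/dx_i = tr(A0)
v = lambda x: A0 @ x
print(np.isclose(np.trace(tensor_grad(v, x)), np.trace(A0)))  # True

# 2nd order: a(x) = x outer (A0 x), so (a . nabla)_i = da_ij/dx_j = (A0 x)_i + x_i tr(A0)
a = lambda x: np.outer(x, A0 @ x)
div_a = np.einsum('ijj->i', tensor_grad(a, x))
print(np.allclose(div_a, A0 @ x + x * np.trace(A0)))  # True
```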
The curl of a vector field, $\underline{\boldsymbol{v}}(\underline{\boldsymbol{x}})$, is defined as
$$
\mathrm{curl}(\underline{\boldsymbol{v}}) = -\underline{\boldsymbol{v}}\times\underline{\boldsymbol{\nabla}} = -\frac{\partial v_i}{\partial x_j}\varepsilon_{ijk}\, \underline{\boldsymbol{e}}_k
$$

This operation is common in fluid mechanics to find the rotation of a velocity field, $\underline{\boldsymbol{v}}$.
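This definition can be checked against the classical result that a rigid-rotation field $\underline{\boldsymbol{v}} = \underline{\boldsymbol{\omega}}\times\underline{\boldsymbol{x}}$ has curl $2\underline{\boldsymbol{\omega}}$. A NumPy sketch with an explicit Levi-Civita array:

```python
import numpy as np

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def vector_grad(v, x, h=1e-6):
    """v tensor-times nabla: entry [i, j] = dv_i/dx_j by central differences."""
    g = np.zeros((3, 3))
    for j in range(3):
        xp, xm = x.copy(), x.copy()
        xp[j] += h
        xm[j] -= h
        g[:, j] = (v(xp) - v(xm)) / (2 * h)
    return g

omega = np.array([0.5, -1.0, 2.0])
v = lambda x: np.cross(omega, x)    # rigid rotation, curl(v) = 2 omega
x = np.array([0.2, 0.4, -0.1])

curl_v = -np.einsum('ij,ijk->k', vector_grad(v, x), eps)  # -(dv_i/dx_j) eps_ijk
print(np.allclose(curl_v, 2 * omega))  # True
```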
It is also possible to define the curl for higher-order tensors. Here we use the definition from Rubin (2000):
$$
\mathrm{curl}(\boldsymbol{a}) = -\boldsymbol{a}\times\underline{\boldsymbol{\nabla}} = -\frac{\partial a_{ij}}{\partial x_k}\varepsilon_{jkl}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_l
$$

which is the same form as for vectors. An important property of the curl of the gradient of a vector is
$$
-\left[\underline{\boldsymbol{u}}\otimes\underline{\boldsymbol{\nabla}}\right]\times\underline{\boldsymbol{\nabla}} = \boldsymbol{0}
$$

Actually, there are many different definitions of the curl for higher-order tensors in the literature. The curl of a second order tensor, $\mathrm{curl}(\boldsymbol{a})$, can for example be written as

$$
\mathrm{curl}(\boldsymbol{a}) = \varepsilon_{opj}\frac{\partial a_{ip}}{\partial x_o}\, \underline{\boldsymbol{e}}_i \otimes \underline{\boldsymbol{e}}_j
$$

In the different variations, it could have the opposite sign, $\boldsymbol{a}$ could be transposed, or the result could be transposed. In many use cases, the sign, and whether or not the result is transposed, is not critical. However, definitions that have $\boldsymbol{a}$ transposed do not fulfill the important identity

$$
-\left[\underline{\boldsymbol{u}}\otimes\underline{\boldsymbol{\nabla}}\right]\times\underline{\boldsymbol{\nabla}} = \boldsymbol{0}
$$

and should be avoided!
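Finally, this identity can be illustrated numerically by applying the second order curl definition above to a finite-difference gradient of a smooth vector field. A sketch, with a test field of our own choosing:

```python
import numpy as np

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def u(x):
    # an arbitrary smooth test field (our own choice)
    return np.array([x[0] * x[1], np.sin(x[2]), x[0]**2 - x[1] * x[2]])

def grad_u(x, h=1e-5):
    """u tensor-times nabla: entry [i, j] = du_i/dx_j."""
    g = np.zeros((3, 3))
    for j in range(3):
        xp, xm = x.copy(), x.copy()
        xp[j] += h
        xm[j] -= h
        g[:, j] = (u(xp) - u(xm)) / (2 * h)
    return g

def curl2(a, x, h=1e-4):
    """Curl of a 2nd order field per the definition above: -(da_ij/dx_k) eps_jkl."""
    da = np.zeros((3, 3, 3))            # da[i, j, k] = da_ij/dx_k
    for k in range(3):
        xp, xm = x.copy(), x.copy()
        xp[k] += h
        xm[k] -= h
        da[:, :, k] = (a(xp) - a(xm)) / (2 * h)
    return -np.einsum('ijk,jkl->il', da, eps)

x = np.array([0.3, -0.7, 1.1])
print(np.allclose(curl2(grad_u, x), 0.0, atol=1e-6))  # True: curl of a gradient vanishes
```

The result vanishes because the mixed second derivatives $\partial^2 u_i/\partial x_j \partial x_k$ are symmetric in $j,k$, while $\varepsilon_{jkl}$ is antisymmetric.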