
Vector Calculus

Vectors and Parametric Curves

Basic

Definition (Scalar, Point, Bi-point, Vector)

Scalar
A scalar $\alpha \in \R$ is simply a real number.
 
Point, Bi-point
A point $r \in \R^2$ is an ordered pair of real numbers, $r = (x, y)$, with $x \in \R$ and $y \in \R$. Here the first coordinate $x$ stipulates the location on the horizontal axis and the second coordinate $y$ stipulates the location on the vertical axis. Given two points $r$ and $r'$ in $\R^2$, the directed line segment with departure point $r$ and arrival point $r'$ is called the bi-point $r, r'$ and is denoted by $[r, r']$. We say that $r$ is the tail of the bi-point $[r, r']$ and that $r'$ is its head. The Euclidean length or norm of the bi-point $[a, b]$ is simply the distance between $a$ and $b$, denoted by $||[a,b]|| = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2}$.
 
Vector
A vector $\vec a \in \R^2$ is a codification of the movement of a bi-point: given the bi-point $[r, r']$, we associate to it the vector $\overrightarrow{rr'} = \begin{bmatrix}x' - x \\ y' - y\end{bmatrix}$, stipulating a movement of $x' - x$ units from $(x, y)$ along the horizontal axis and of $y' - y$ units from the current position along the vertical axis. The zero vector $\vec 0 = \begin{bmatrix}0 \\ 0\end{bmatrix}$ indicates no movement in either direction.

Let $\vec u \ne \vec 0$. Put $\R\vec u = \{\lambda\vec u : \lambda \in \R\}$ and let $a \in \R^2$. The affine line with direction vector $\vec u = \begin{bmatrix}u_1 \\ u_2\end{bmatrix}$ and passing through $a$ is the set of points on the plane

$$a + \R\vec u = \left\{\begin{pmatrix} x\\y \end{pmatrix} \in \R^2: x = a_1 + tu_1,\ y = a_2 + tu_2,\ t \in \R\right\}$$

that is, when $u_1 \ne 0$, the affine line is the Cartesian line with slope $\frac{u_2}{u_1}$. Conversely, if $y = mx + k$ is the equation of a Cartesian line, then

$$\begin{pmatrix} x\\y \end{pmatrix} = \begin{bmatrix}1\\m\end{bmatrix}t + \begin{pmatrix} 0\\k \end{pmatrix}$$

that is, every Cartesian line is also an affine line, and one may take the vector $\begin{bmatrix}1\\m\end{bmatrix}$ as its direction vector.

Let $\vec x \in \R^2$ and $\vec y \in \R^2$. Their scalar product (dot product, inner product) is defined and denoted by $\vec x \cdot \vec y = x_1y_1 + x_2y_2$.

Consider now two arbitrary vectors $\vec x, \vec y$ in $\R^2$. Under which conditions can we write an arbitrary vector $\vec v$ on the plane as a linear combination of $\vec x$ and $\vec y$, that is, when can we find scalars $a, b$ such that $\vec v = a \vec x + b \vec y$?

$$\vec v = a \vec x + b \vec y \iff v_1 = ax_1 + by_1,\ v_2 = ax_2 + by_2 \iff a = \frac{v_1y_2 - v_2y_1}{x_1y_2 - x_2y_1},\ b = \frac{x_1v_2 - x_2v_1}{x_1y_2 - x_2y_1}$$

The above expressions for $a$ and $b$ make sense only if $x_1y_2 \ne x_2y_1$.

So given two vectors $\vec x, \vec y$ in $\R^2$, an arbitrary vector $\vec v$ can be written as the linear combination $\vec v = a \vec x + b \vec y$, with $a \in \R$, $b \in \R$, if and only if $\vec x$ is not parallel to $\vec y$.
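As a quick numerical check of the formulas above, here is a minimal numpy sketch (the particular vectors are chosen only for illustration) that computes $a$ and $b$ and verifies that they reconstruct $\vec v$.

```python
import numpy as np

# Example vectors (chosen for illustration); x and y are not parallel,
# so x1*y2 - x2*y1 is nonzero and the formulas apply.
x = np.array([2.0, 1.0])
y = np.array([1.0, 3.0])
v = np.array([4.0, 7.0])

det = x[0] * y[1] - x[1] * y[0]           # x1*y2 - x2*y1
a = (v[0] * y[1] - v[1] * y[0]) / det     # a = (v1*y2 - v2*y1) / (x1*y2 - x2*y1)
b = (x[0] * v[1] - x[1] * v[0]) / det     # b = (x1*v2 - x2*v1) / (x1*y2 - x2*y1)

print(a, b)
print(np.allclose(a * x + b * y, v))      # True: v = a*x + b*y
```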

Geometric Transformations in two dimensions

We are now interested in the following fundamental functions of sets on the plane: translations, scalings (stretching or shrinking), reflexions about the axes, and rotations about the origin.

Translate
A function $T_{\vec v}: \R^2 \to \R^2$ is said to be a translation if it is of the form $T_{\vec v}(x) = x + \vec v$, where $\vec v$ is a fixed vector on the plane. A translation simply shifts an object on the plane rigidly (that is, it does not distort its shape or re-orient it) to a copy of itself a given number of units from where it was.
 
Scale
A function $S_{a,b}: \R^2 \to \R^2$ is said to be a scaling if it is of the form $S_{a,b}(r) = \begin{pmatrix}ax\\by\end{pmatrix}$, with $a, b \in \R^+$.
 
Reflexion
A function $R_H: \R^2 \to \R^2$ is said to be a reflexion about the y-axis or horizontal reflexion if it is of the form $R_H(r) = \begin{pmatrix}-x\\y\end{pmatrix}$.
A function $R_V: \R^2 \to \R^2$ is said to be a reflexion about the x-axis or vertical reflexion if it is of the form $R_V(r) = \begin{pmatrix}x\\-y\end{pmatrix}$.
A function $R_O: \R^2 \to \R^2$ is said to be a reflexion about the origin if it is of the form $R_O(r) = \begin{pmatrix}-x\\-y\end{pmatrix}$.
 
Rotation
A function $R_\theta: \R^2 \to \R^2$ is said to be a levogyrate (counterclockwise) rotation about the origin by the angle $\theta$, measured from the positive x-axis, if $R_\theta(r) = \begin{pmatrix}x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}$.
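The following sketch (numpy, with an example point chosen only for illustration) applies each of the maps above so their effect can be checked numerically.

```python
import numpy as np

def translate(r, v):       # T_v(r) = r + v
    return r + v

def scale(r, a, b):        # S_{a,b}(r) = (a*x, b*y)
    return np.array([a * r[0], b * r[1]])

def reflect_origin(r):     # R_O(r) = (-x, -y)
    return -r

def rotate(r, theta):      # R_theta(r), counterclockwise about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * r[0] - s * r[1], s * r[0] + c * r[1]])

r = np.array([1.0, 0.0])                    # example point
print(translate(r, np.array([2.0, 3.0])))   # [3. 3.]
print(scale(r, 2.0, 5.0))                   # [2. 0.]
print(reflect_origin(r))                    # [-1. -0.]
print(rotate(r, np.pi / 2))                 # approximately [0. 1.]
```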
 
Linear transformation
A function $L: \R^2 \to \R^2$ is said to be a linear transformation from $\R^2$ to $\R^2$ if for all points $a, b$ on the plane and every scalar $\lambda$, it is verified that $L(a+b) = L(a) + L(b)$ and $L(\lambda a) = \lambda L(a)$.
 
Affine transformation
A function $A: \R^2 \to \R^2$ is said to be an affine transformation from $\R^2$ to $\R^2$ if there exists a linear transformation $L: \R^2 \to \R^2$ and a fixed vector $\vec v \in \R^2$ such that for all points $x \in \R^2$ it is verified that $A(x) = L(x) + \vec v$.

Let $L: \R^2 \to \R^2$ be a linear transformation. The matrix $A_L$ associated to $L$ is the $2 \times 2$ (2 rows, 2 columns) array whose columns are, in this order, $L\left(\begin{pmatrix}1\\0\end{pmatrix}\right)$ and $L\left(\begin{pmatrix}0\\1\end{pmatrix}\right)$.
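For instance, the rotation $R_\theta$ above is linear, and its matrix can be assembled column by column from the images of the standard basis vectors; this minimal sketch (numpy, example angle chosen for illustration) does exactly that and checks that the matrix reproduces the map.

```python
import numpy as np

def rotate(r, theta):      # R_theta(r), counterclockwise about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * r[0] - s * r[1], s * r[0] + c * r[1]])

theta = np.pi / 6
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The columns of A_L are L(e1) and L(e2), in this order.
A = np.column_stack([rotate(e1, theta), rotate(e2, theta)])
print(A)                                       # [[cos -sin], [sin cos]]

r = np.array([2.0, 1.0])                       # example point
print(np.allclose(A @ r, rotate(r, theta)))    # True: A_L r = L(r)
```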

Determinants in two dimensions

The determinant of the $2 \times 2$ matrix $\begin{bmatrix}a & c\\b & d\end{bmatrix}$ is $\det\begin{bmatrix}a & c\\b & d\end{bmatrix} = ad - bc$.

Consider now a parallelogram with vertices $r_1 = (x_1, y_1)$, $r_2 = (x_2, y_2)$, $r_3 = (x_3, y_3)$, $r_4 = (x_4, y_4)$, listed in counterclockwise order. It is spanned by the vectors

$$\overrightarrow{r_1r_2} = \begin{bmatrix}x_2 - x_1\\y_2 - y_1\end{bmatrix}, \qquad \overrightarrow{r_1r_4} = \begin{bmatrix}x_4 - x_1\\y_4 - y_1\end{bmatrix}$$

and hence, its area is given by

$$A = \det\begin{bmatrix}x_2 - x_1 & x_4 - x_1 \\ y_2 - y_1 & y_4 - y_1\end{bmatrix} = D(\vec r_2 - \vec r_1, \vec r_4 - \vec r_1)$$

Decomposing a general simple quadrilateral into triangles and applying the same determinant argument to each piece, we conclude that the area of a quadrilateral with vertices $(x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4)$, listed in counterclockwise order, is

$$\frac{1}{2}\left(\det\begin{pmatrix}x_1 & x_2\\y_1 & y_2\end{pmatrix} + \det\begin{pmatrix}x_2 & x_3\\y_2 & y_3\end{pmatrix} + \det\begin{pmatrix}x_3 & x_4\\y_3 & y_4\end{pmatrix} + \det\begin{pmatrix}x_4 & x_1\\y_4 & y_1\end{pmatrix}\right)$$

In general, we have the following theorem.

Let $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ be the vertices of a simple (non-crossing) polygon, listed in counterclockwise order. Then its area is given by

$$\frac{1}{2}\left(\det\begin{pmatrix}x_1 & x_2\\y_1 & y_2\end{pmatrix} + \det\begin{pmatrix}x_2 & x_3\\y_2 & y_3\end{pmatrix} + \cdots + \det\begin{pmatrix}x_{n-1} & x_{n}\\y_{n-1} & y_n\end{pmatrix} + \det\begin{pmatrix}x_n & x_1\\y_n & y_1\end{pmatrix}\right)$$
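This is the shoelace formula. A minimal numpy sketch of it (with a unit square as the example polygon) might look as follows.

```python
import numpy as np

def polygon_area(vertices):
    """Signed area of a simple polygon via the shoelace formula.

    vertices: sequence of (x, y) pairs listed in counterclockwise order.
    """
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)   # wrap around to the first vertex
    return 0.5 * np.sum(x * y_next - y * x_next)      # half the sum of 2x2 determinants

# Example: unit square, counterclockwise.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```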

Parametric Curves on the Plane

Let $[a; b] \subseteq \R$. A parametric curve representation $r$ of a curve $\Gamma$ is a function $r: [a; b] \to \R^2$, with

$$r(t) = \begin{pmatrix}x(t)\\y(t)\end{pmatrix}$$

and such that $r([a; b]) = \Gamma$. $r(a)$ is the initial point of the curve and $r(b)$ its terminal point.

As with determinants in two dimensions, the signed area of the triangle with vertices at the origin, $(x, y)$ and $(x + \Delta x, y + \Delta y)$ is
$$S_\Delta = \frac{1}{2}\det\begin{bmatrix}x & x + \Delta x \\ y & y + \Delta y\end{bmatrix} = \frac{1}{2}(x\Delta y - y\Delta x)$$
Viewing a closed parametric curve as the limit of a polygon with $N$ vertices ($N \to \infty$), we obtain the enclosed area

$$S_{pc} = \frac{1}{2}\oint_\Gamma(x\,dy - y\,dx)$$
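As a quick numerical check of this line-integral area formula, the following sketch discretizes the parameter for a circle of radius 2 (an example chosen for illustration) and recovers $\pi r^2$.

```python
import numpy as np

# Parametrize a circle of radius 2: x = 2 cos t, y = 2 sin t, t in [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = 2.0 * np.cos(t), 2.0 * np.sin(t)

# Approximate (1/2) * closed integral of (x dy - y dx) with finite differences.
area = 0.5 * np.sum(x[:-1] * np.diff(y) - y[:-1] * np.diff(x))
print(area, np.pi * 2.0**2)   # both approximately 12.566
```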

Vectors in Space

The 3-dimensional Cartesian space is defined and denoted by $\R^3 = \{r = (x,y,z): x \in \R, y \in \R, z \in \R\}$.
The dot product of two vectors $\vec a$ and $\vec b$ in $\R^3$ is $\vec a \cdot \vec b = a_1b_1 + a_2b_2 + a_3b_3$.

Let $\vec u$ and $\vec v$ be linearly independent vectors. The parametric equation of the plane containing the point $a$ and parallel to the vectors $\vec u$ and $\vec v$ is given by

$$\vec r - \vec a = p\vec u + q\vec v$$

Alternatively, the equation of a plane in space can be written in the Cartesian form $ax + by + cz = d$: if $\vec p = (a, b, c)$ is a fixed normal vector and $(x_0, y_0, z_0)$ is a point of the plane, then $(x, y, z)$ lies on the plane exactly when $\vec p \cdot (x - x_0, y - y_0, z - z_0) = 0$, that is, $ax + by + cz = ax_0 + by_0 + cz_0 = d$.

Cross Product

We now define the standard cross product in $\R^3$ as a product satisfying the following properties.

Let $\vec x, \vec y, \vec z$ be vectors in $\R^3$, and let $a \in \R$ be a scalar. The cross product $\times$ is a closed binary operation satisfying

Anti-commutativity: $\vec x \times \vec y = -(\vec y \times \vec x)$
Bilinearity: $(\vec x + \vec z) \times \vec y = \vec x \times \vec y + \vec z \times \vec y$
Scalar homogeneity: $(a\vec x) \times \vec y = \vec x \times (a\vec y) = a(\vec x \times \vec y)$
Zero Rule: $\vec x \times \vec x = \vec 0$
Right-hand Rule: $\vec i \times \vec j = \vec k$, $\vec j \times \vec k = \vec i$, $\vec k \times \vec i = \vec j$

Let $\vec x = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$ and $\vec y = \begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$ be vectors in $\R^3$. Then

$$\vec x \times \vec y = (x_2y_3 - x_3y_2)\vec i - (x_1y_3 - x_3y_1)\vec j + (x_1y_2 - x_2y_1)\vec k$$

From this definition, one readily obtains the following:

1: $\vec x \perp (\vec x \times \vec y)$ and $\vec y \perp (\vec x \times \vec y)$
2: $\vec a \times (\vec b \times \vec c) = (\vec a \cdot \vec c)\vec b - (\vec a \cdot \vec b)\vec c$
3: if $\theta$ denotes the convex angle between two vectors $\vec x, \vec y$, then $||\vec x \times \vec y|| = ||\vec x||\,||\vec y||\sin\theta$

Corollary: two non-zero vectors $\vec x, \vec y$ satisfy $\vec x \times \vec y = \vec 0$ if and only if they are parallel.
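These properties are easy to check numerically; the sketch below (numpy, with example vectors chosen for illustration) verifies the perpendicularity of the cross product and the norm identity $||\vec x \times \vec y|| = ||\vec x||\,||\vec y||\sin\theta$.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])       # example vectors
y = np.array([4.0, 0.0, -1.0])

c = np.cross(x, y)
print(np.dot(x, c), np.dot(y, c))   # both 0.0: x and y are perpendicular to x cross y

# ||x cross y|| equals ||x|| ||y|| sin(theta)
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
sin_theta = np.sqrt(1.0 - cos_theta**2)
print(np.isclose(np.linalg.norm(c),
                 np.linalg.norm(x) * np.linalg.norm(y) * sin_theta))   # True
```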

Let $\vec a, \vec b, \vec c$ be linearly independent vectors in $\R^3$. The signed volume of the parallelepiped spanned by them is $(\vec a \times \vec b) \cdot \vec c$.

Matrices in three dimensions

We will briefly introduce 3 × 3 matrices. Most of the material parallels that for 2 × 2 matrices.

A linear transformation $T: \R^3 \to \R^3$ is a function such that $T(a + b) = T(a) + T(b)$ and $T(\lambda a) = \lambda T(a)$ for all points $a, b$ in $\R^3$ and all scalars $\lambda$. Such a linear transformation has a $3 \times 3$ matrix representation whose columns are the vectors $T(\vec i), T(\vec j), T(\vec k)$.

Determinants in three dimensions

Since, by the theorem above, the signed volume of the parallelepiped spanned by $\vec a, \vec b, \vec c$ is $(\vec a \times \vec b) \cdot \vec c$, we define the determinant of the matrix $A$ with columns $\vec a, \vec b, \vec c$, $\det A$, to be

$$D(\vec a, \vec b, \vec c) = \det\begin{bmatrix}a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix} = \vec a \cdot (\vec b \times \vec c)$$
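A quick numerical check of this identity (numpy, with example vectors chosen for illustration): the determinant of the matrix whose columns are $\vec a, \vec b, \vec c$ agrees with the scalar triple product.

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])      # example vectors
b = np.array([0.0, 3.0, 1.0])
c = np.array([2.0, 1.0, 0.0])

A = np.column_stack([a, b, c])     # matrix whose columns are a, b, c
print(np.linalg.det(A))            # determinant of A (here -13.0)
print(np.dot(a, np.cross(b, c)))   # scalar triple product: the same value
```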

Spherical Trigonometry

A point with spherical coordinates $(\rho, \theta, \varphi)$ has Cartesian coordinates $(x, y, z)$ given by

$$x = \rho\cos\theta\sin\varphi, \quad y = \rho\sin\theta\sin\varphi, \quad z = \rho\cos\varphi$$

Here φ is the polar angle, measured from the positive z-axis, and θ is the azimuthal angle, measured from the positive x-axis. By convention, 0 ≤ θ ≤ 2π and 0 ≤ φ ≤ π.
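A minimal sketch of the conversion in both directions (numpy; the helper names and the example point are chosen here for illustration):

```python
import numpy as np

def spherical_to_cartesian(rho, theta, phi):
    """theta: azimuthal angle from the positive x-axis; phi: polar angle from the positive z-axis."""
    return np.array([rho * np.cos(theta) * np.sin(phi),
                     rho * np.sin(theta) * np.sin(phi),
                     rho * np.cos(phi)])

def cartesian_to_spherical(x, y, z):
    rho = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x) % (2.0 * np.pi)   # keep theta in [0, 2*pi)
    phi = np.arccos(z / rho)                   # phi in [0, pi]
    return np.array([rho, theta, phi])

p = spherical_to_cartesian(2.0, np.pi / 3, np.pi / 4)   # example point
print(p)
print(cartesian_to_spherical(*p))   # recovers (2, pi/3, pi/4)
```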

Spherical coordinates are extremely useful when considering regions which are symmetric about a point.

Canonical Surfaces

In this section we consider various surfaces that we shall periodically encounter in subsequent sections. Just as in one-variable Calculus it is important to identify the equation and the shape of a line, a parabola, a circle, etc., it will become important for us to be able to identify certain families of often-occurring surfaces. We shall explore both their Cartesian and their parametric form. We remark that in order to parametrise curves (“one-dimensional entities”) we needed one parameter, and that in order to parametrise surfaces we shall need two parameters.

Parametric Curves in Space

Let $[a; b] \subseteq \R$. A parametric curve representation $r$ of a curve $\Gamma$ is a function $r: [a; b] \to \R^3$ with

$$r(t) = \begin{pmatrix}x(t) \\ y(t) \\ z(t) \end{pmatrix}$$

and such that $r([a; b]) = \Gamma$. $r(a)$ is the initial point of the curve and $r(b)$ its terminal point. A curve is closed if its initial point and its terminal point coincide. The trace of the curve $r$ is the set of all images of $r$, that is, $\Gamma$. The length of the curve is
$$\int_\Gamma ||d\vec r||$$
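In practice this integral can be approximated by summing the lengths of small chords along the curve; the sketch below (numpy, using one turn of a helix as the example curve) recovers the known length $2\pi\sqrt{2}$.

```python
import numpy as np

# Example: one turn of the helix r(t) = (cos t, sin t, t), t in [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 200001)
r = np.stack([np.cos(t), np.sin(t), t], axis=1)

# Approximate the integral of ||dr|| by summing the lengths of small chords.
length = np.sum(np.linalg.norm(np.diff(r, axis=0), axis=1))
print(length, 2.0 * np.pi * np.sqrt(2.0))   # both approximately 8.8858
```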

Multidimensional Vectors

We briefly describe space in $n$ dimensions. The ideas expounded earlier about the plane and space carry over almost without change.

$\R^n$ is the $n$-dimensional space, the collection $$\R^n = \left\{ \begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}: x_k \in \R \right\}$$

Given vectors $\vec a, \vec b$ of $\R^n$, their dot product is $\displaystyle \vec a \cdot \vec b = \sum_{k=1}^n a_kb_k$

Cauchy-Bunyakovsky-Schwarz Inequality: let $\vec x$ and $\vec y$ be any two vectors in $\R^n$; then $|\vec x \cdot \vec y| \le ||\vec x||\,||\vec y||$.
The form of the Cauchy-Bunyakovsky-Schwarz Inequality most useful to us will be $$\left|\sum_{k=1}^n x_ky_k\right| \le \left(\sum_{k=1}^n x_k^2\right)^{\frac{1}{2}}\left(\sum_{k=1}^n y_k^2\right)^{\frac{1}{2}}$$
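A tiny numerical sanity check of the inequality (numpy, random example vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)            # random example vectors in R^10
y = rng.normal(size=10)

lhs = abs(np.dot(x, y))
rhs = np.linalg.norm(x) * np.linalg.norm(y)
print(lhs <= rhs)                  # True: |x . y| <= ||x|| ||y||
```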

Differentiation

Multivariable Functions

Let $A \subseteq \R^n$. For most of this course, our concern will be functions of the form $f: A \to \R^m$.
If $m = 1$, we say that $f$ is a scalar field. If $m \ge 2$, we say that $f$ is a vector field.

Definition of the Derivative

Let $A \subseteq \R^n$. A function $f: A \to \R^m$ is said to be differentiable at $a \in A$ if there is a linear transformation, called the derivative of $f$ at $a$, $D_a(f): \R^n \to \R^m$, such that

$$\lim_{x\to a} \frac{||f(x) - f(a) - D_a(f)(x-a)||}{||x - a||} = 0$$

The Jacobi Matrix

We now establish a way which simplifies the process of finding the derivative of a function at a given point.
Let $A \subseteq \R^n$, $f: A \to \R^m$, and put

$$f(x) = \begin{bmatrix}f_1(x_1, x_2, \cdots ,x_n) \\ f_2(x_1, x_2, \cdots ,x_n) \\ \vdots \\ f_m(x_1, x_2, \cdots ,x_n) \end{bmatrix}$$

Here $f_i: \R^n \to \R$. The partial derivative $\frac{\partial f_i}{\partial x_j}(x)$ is defined as

$$\frac{\partial f_i}{\partial x_j}(x) = \lim_{h\to 0}\frac{f_i(x_1, x_2, \cdots, x_j + h, \cdots, x_n) - f_i(x_1, x_2, \cdots, x_j, \cdots, x_n)}{h}$$

whenever this limit exists.

To find partial derivatives with respect to the j-th variable, we simply keep the other variables fixed and differentiate with respect to the j-th variable.

If $f$ is differentiable at $x$, then each partial derivative $\frac{\partial f_i}{\partial x_j}(x)$ exists, and the matrix representation of $D_x(f)$ with respect to the standard bases of $\R^n$ and $\R^m$ is the Jacobi matrix

$$f'(x) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(x) & \frac{\partial f_1}{\partial x_2}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\ \frac{\partial f_2}{\partial x_1}(x) & \frac{\partial f_2}{\partial x_2}(x) & \cdots & \frac{\partial f_2}{\partial x_n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(x) & \frac{\partial f_m}{\partial x_2}(x) & \cdots & \frac{\partial f_m}{\partial x_n}(x)\end{bmatrix}$$

For example, for the function $f(x,y,z) = (xy + yz, \log_e xy)$, the Jacobi matrix is $f'(x,y,z) = \begin{bmatrix}y & x+z & y \\ \frac{1}{x} & \frac{1}{y} & 0 \end{bmatrix}$
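This example can be checked symbolically; a minimal sympy sketch (the variable names mirror the example above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f = sp.Matrix([x*y + y*z, sp.log(x*y)])     # the example map above

J = f.jacobian([x, y, z]).applyfunc(sp.simplify)
print(J)   # Matrix([[y, x + z, y], [1/x, 1/y, 0]])
```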

Gradients and Directional Derivatives

A function $f: \bold x \in \R^n \to f(\bold x) \in \R^m$ is called a vector field.
If $m = 1$, it is called a scalar field.

Definition

Let $f:\begin{matrix} \R^n & \to & \R \\ \bold x & \to & f(\bold x) \end{matrix}$ be a scalar field.
The gradient of $f$ is the vector defined and denoted by $$\nabla f(\bold x) = \begin{bmatrix}\frac{\partial f}{\partial x_1}(\bold x) \\ \frac{\partial f}{\partial x_2}(\bold x) \\ \vdots \\ \frac{\partial f}{\partial x_n}(\bold x) \end{bmatrix}$$

The gradient operator is the operator $$\nabla = \begin{bmatrix}\frac{\partial}{\partial x_1} \\ \frac{\partial}{\partial x_2} \\ \vdots \\ \frac{\partial}{\partial x_n}\end{bmatrix}$$

Definition

Let $f:\begin{matrix} \R^n & \to & \R^n \\ \bold x & \to & f(\bold x) \end{matrix}$ be a vector field with $f(\bold x) = \begin{bmatrix}f_1(\bold x) \\ f_2(\bold x) \\ \vdots \\ f_n(\bold x)\end{bmatrix}$.
The divergence of $f$ is defined and denoted by $$\operatorname{div} f(\bold x) = \nabla \cdot f(\bold x) = \frac{\partial f_1}{\partial x_1}(\bold x) + \frac{\partial f_2}{\partial x_2}(\bold x) + \cdots + \frac{\partial f_n}{\partial x_n}(\bold x)$$
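Both operators are easy to approximate with central finite differences; the sketch below (numpy, with example fields $f(x,y) = x^2 + xy$ and $F(x,y) = (xy, y^2)$ chosen for illustration) recovers the analytic values.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of a scalar field f at x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        grad[j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

def numerical_divergence(F, x, h=1e-6):
    """Central-difference approximation of div F at x (sum of dF_j/dx_j)."""
    x = np.asarray(x, dtype=float)
    div = 0.0
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        div += (F(x + e)[j] - F(x - e)[j]) / (2.0 * h)
    return div

f = lambda p: p[0]**2 + p[0] * p[1]              # gradient is (2x + y, x)
F = lambda p: np.array([p[0] * p[1], p[1]**2])   # divergence is y + 2y = 3y

print(numerical_gradient(f, [1.0, 2.0]))         # approximately [4. 1.]
print(numerical_divergence(F, [1.0, 2.0]))       # approximately 6.0
```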

Extrema (and Hessian matrix)

We now turn to the problem of finding maxima and minima for vector functions. As in the one-variable case, the derivative will provide us with information about the extrema, and the “second derivative” will provide us with information about the nature of these extreme points.

To define an analogue for the second derivative, let us consider the following. Let $A \subset \R^n$ and $f: A \to \R^m$ be differentiable on $A$. We know that for fixed $x_0 \in A$, $D_{x_0}(f)$ is a linear transformation from $\R^n$ to $\R^m$. This means that we have a function

$$T:\begin{matrix}A & \to & L(\R^n, \R^m) \\ x & \to & D_x(f)\end{matrix}$$

where $L(\R^n, \R^m)$ denotes the space of linear transformations from $\R^n$ to $\R^m$. Hence, if we differentiate $T$ at $x_0$ again, we obtain a linear transformation $D_{x_0}(T) = D_{x_0}(D_{x_0}(f)) = D_{x_0}^2(f)$ from $\R^n$ to $L(\R^n, \R^m)$. Hence, given $x_1 \in \R^n$, we have $D_{x_0}^2(f)(x_1) \in L(\R^n, \R^m)$. Again, this means that given $x_2 \in \R^n$, $(D_{x_0}^2(f)(x_1))(x_2) \in \R^m$. Thus the function

$$B_{x_0}:\begin{matrix}\R^n \times \R^n & \to & \R^m \\ (x_1, x_2) & \to & (D_{x_0}^2(f)(x_1))(x_2)\end{matrix}$$

is well defined, and linear in each variable x1x_1 and x2x_2, that is, it is a bilinear function.

Theorem:

Let $A \subseteq \R^n$ be an open set, and let $f: A \to \R$ be twice differentiable on $A$. Then the matrix of $D_{x}^2(f): \R^n \times \R^n \to \R$ with respect to the standard basis is given by the Hessian matrix:

$$H_xf = \begin{bmatrix} \frac{\partial ^2f}{\partial x_1 \partial x_1}(x) & \frac{\partial ^2f}{\partial x_1 \partial x_2}(x) & \cdots & \frac{\partial ^2f}{\partial x_1 \partial x_n}(x) \\ \frac{\partial ^2f}{\partial x_2 \partial x_1}(x) & \frac{\partial ^2f}{\partial x_2 \partial x_2}(x) & \cdots & \frac{\partial ^2f}{\partial x_2 \partial x_n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial ^2f}{\partial x_n \partial x_1}(x) & \frac{\partial ^2f}{\partial x_n \partial x_2}(x) & \cdots & \frac{\partial ^2f}{\partial x_n \partial x_n}(x) \end{bmatrix}$$
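The Hessian can also be approximated by finite differences; the sketch below (numpy, with the example $f(x, y) = x^3 + xy^2$, whose Hessian is $\begin{bmatrix}6x & 2y\\2y & 2x\end{bmatrix}$) illustrates this.

```python
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Finite-difference approximation of the Hessian of a scalar field f at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # Mixed central difference for d^2 f / (dx_i dx_j).
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h**2)
    return H

f = lambda p: p[0]**3 + p[0] * p[1]**2    # example scalar field
print(numerical_hessian(f, [1.0, 2.0]))   # approximately [[6, 4], [4, 2]]
```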

Lagrange Multipliers

Integration

[Todo]

Reference

Multivariable Geometry and Vector Calculus, by David A. Santos
Vector Calculus, by Susan Jane Colley