Gradient

[[File:Gradient2.svg|thumb|The scalar field is shown in black and white, black representing higher values, and its corresponding gradient is represented by blue arrows.]]

In vector calculus, the gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is the greatest rate of change.

A generalization of the gradient to vector-valued functions of several variables is the Jacobian matrix; a further generalization, to functions between Banach spaces, is the Fréchet derivative.

Interpretations of the gradient

Consider a room in which the temperature is given by a scalar field <math>T</math>, so at each point <math>(x,y,z)</math> the temperature is <math>T(x,y,z)</math> (we will assume that the temperature does not change in time). Then, at each point in the room, the gradient of <math>T</math> at that point will show the direction in which the temperature rises most quickly. The magnitude of the gradient will determine how fast the temperature rises in that direction.
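A minimal numerical sketch of this interpretation is given below; the grid, the Gaussian temperature profile, and all variable names are illustrative assumptions (not taken from the text), and NumPy is assumed to be available.

<syntaxhighlight lang="python">
# Sample an illustrative temperature field on a grid and estimate its
# gradient by finite differences with numpy.gradient.
import numpy as np

# Grid over a 2 m x 2 m x 2 m "room"
xs = ys = zs = np.linspace(0.0, 2.0, 41)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing='ij')

# Illustrative temperature field: warmest near the point (1, 1, 1)
T = 20.0 + 5.0 * np.exp(-((X - 1)**2 + (Y - 1)**2 + (Z - 1)**2))

# np.gradient returns one array of partial derivatives per axis
dTdx, dTdy, dTdz = np.gradient(T, xs, ys, zs)

# At any grid point, (dTdx, dTdy, dTdz) approximates the gradient of T there:
# it points toward the fastest temperature increase, and its norm is the
# rate of increase in that direction.
i = (10, 10, 10)
print(dTdx[i], dTdy[i], dTdz[i])
</syntaxhighlight>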

Consider a hill whose height above sea level at a point <math>(x, y)</math> is <math>H(x, y)</math>. The gradient of <math>H</math> at a point is a vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Consider again the example of the hill, and suppose that the steepest slope on the hill is 40%. If a road goes directly up the hill, then the steepest slope on the road will also be 40%. If, instead, the road goes around the hill at an angle to the uphill direction (the gradient vector), then it will have a shallower slope. For example, if the angle between the road and the uphill direction, projected onto the horizontal plane, is 60°, then the steepest slope along the road will be 20%, which is 40% times the cosine of 60°.

This observation can be stated mathematically as follows. If the hill height function <math>H</math> is differentiable, then the gradient of <math>H</math> dotted with a unit vector gives the slope of the hill in the direction of the vector. More precisely, when <math>H</math> is differentiable, the dot product of the gradient of <math>H</math> with a given unit vector equals the directional derivative of <math>H</math> in the direction of that unit vector.
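As a worked version of the road example (a sketch using the 40% slope and 60° angle from the text), let <math>u</math> be the horizontal unit vector along the road, making a 60° angle with the uphill direction. Then the slope along the road is

<math>\nabla H \cdot u = \|\nabla H\| \cos 60^\circ = 0.40 \times 0.5 = 0.20 = 20\%.</math>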

Formal definition

The gradient (or gradient vector field) of a scalar function <math>f(x)</math> with respect to a vector variable <math>x = (x_1,\dots,x_n)</math> is denoted by <math>\nabla f</math> or <math>\vec{\nabla} f</math> where <math>\nabla</math> (the nabla symbol) denotes the vector differential operator, del. The notation <math>\operatorname{grad}(f)</math> is also used for the gradient.

By definition, the gradient is a vector field whose components are the partial derivatives of <math>f</math>. That is:

<math> \nabla f = \left(\frac{\partial f}{\partial x_1 }, \dots, \frac{\partial f}{\partial x_n } \right). </math>

(Here the gradient is written as a row vector, but it is often taken to be a column vector; note also that when a function has a time component, the gradient often refers simply to the vector of its spatial derivatives only.)

The dot product <math>(\nabla f)_x\cdot v</math> of the gradient at a point <math>x</math> with a vector <math>v</math> gives the directional derivative of <math>f</math> at <math>x</math> in the direction <math>v</math>. Since <math>f</math> is constant along each of its level sets, the directional derivative in any direction tangent to a level set is zero; it follows that the gradient of <math>f</math> is orthogonal to the level sets of <math>f</math>. This also shows that, although the gradient was defined in terms of coordinates, it is actually invariant under orthogonal transformations, as it should be in view of the geometric interpretation given above.
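For instance (a standard illustration, not taken from the text), let <math>f(x,y)=x^2+y^2</math>. Then <math>\nabla f = (2x, 2y)</math>, and at the point <math>(1,0)</math> the gradient <math>(2,0)</math> points radially outward, orthogonal to the level set <math>\{x^2+y^2=1\}</math> (the unit circle) passing through that point.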

Because the gradient is orthogonal to level sets, it can be used to construct a vector normal to a surface. Consider any manifold that is one dimension less than the space it is in (i.e., a surface in 3D, a curve in 2D, etc.). Let this manifold be defined by an equation such as <math>F(x, y, z) = 0</math> (i.e., with everything moved to one side of the equation). We have now expressed the manifold as a level set. To find a normal vector, we simply need the gradient of the function <math>F</math> at the desired point.
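As a sketch of this construction (the unit sphere is an illustrative choice), take <math>F(x, y, z) = x^2 + y^2 + z^2 - 1</math>, whose zero level set is the unit sphere. Then <math>\nabla F = (2x, 2y, 2z)</math>, so at any point of the sphere the gradient is an outward normal vector; dividing by its magnitude <math>2</math> gives the unit normal <math>(x, y, z)</math>.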

The gradient field of a function is an irrotational vector field, and line integrals of a gradient field are path-independent; they can be evaluated with the gradient theorem. Conversely, an irrotational vector field on a simply connected region is always the gradient of a function.
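Explicitly, the gradient theorem states that for any piecewise smooth curve <math>C</math> from a point <math>p</math> to a point <math>q</math>,

<math>\int_C \nabla f \cdot \mathrm{d}\mathbf{r} = f(q) - f(p),</math>

so the value of the line integral depends only on the endpoints of <math>C</math>, not on the path taken.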

Expressions for the gradient in 3 dimensions

The form of the gradient depends on the coordinate system used.

In Cartesian coordinates, the above expression expands to

<math>\nabla f(x, y, z) = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right).</math>

In cylindrical coordinates:

<math>\nabla f(\rho, \theta, z) = \left(\frac{\partial f}{\partial \rho}, \frac{1}{\rho}\frac{\partial f}{\partial \theta}, \frac{\partial f}{\partial z}\right)</math>

(where <math>\theta</math> is the azimuthal angle, <math>z</math> is the axial coordinate, and the components are taken with respect to the local orthonormal basis vectors <math>\hat{e}_\rho, \hat{e}_\theta, \hat{e}_z</math>).
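The cylindrical expression can be checked against the Cartesian one by symbolic differentiation. The following sketch assumes SymPy is available; the sample function <math>f = x^2 + y</math> and all variable names are illustrative choices, not taken from the text.

<syntaxhighlight lang="python">
# Verify that the cylindrical-coordinate formula reproduces the Cartesian
# gradient, expressed on the local orthonormal basis, for a sample function.
import sympy as sp

rho, theta, z = sp.symbols('rho theta z', positive=True)
x, y = rho*sp.cos(theta), rho*sp.sin(theta)

def f_cart(X, Y, Z):
    return X**2 + Y          # sample scalar field (illustrative choice)

f_cyl = f_cart(x, y, z)      # the same field written in cylindrical coordinates

# Components of the gradient per the cylindrical formula above
grad_cyl = (sp.diff(f_cyl, rho),
            sp.diff(f_cyl, theta) / rho,
            sp.diff(f_cyl, z))

# Cartesian gradient, then substitute x = rho cos(theta), y = rho sin(theta)
X, Y, Z = sp.symbols('X Y Z')
grad_cart = [sp.diff(f_cart(X, Y, Z), v) for v in (X, Y, Z)]
grad_cart = [g.subs({X: x, Y: y, Z: z}) for g in grad_cart]

# Local orthonormal basis vectors of cylindrical coordinates
e_rho   = (sp.cos(theta), sp.sin(theta), 0)
e_theta = (-sp.sin(theta), sp.cos(theta), 0)
e_z     = (0, 0, 1)

# Project the Cartesian gradient onto each basis vector and compare
check = [sp.simplify(sum(gc*ec for gc, ec in zip(grad_cart, e)) - g)
         for e, g in zip((e_rho, e_theta, e_z), grad_cyl)]
print(check)  # [0, 0, 0] -- the two computations agree
</syntaxhighlight>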

In spherical coordinates:

<math>\nabla f(r, \theta, \phi) = \left(\frac{\partial f}{\partial r}, \frac{1}{r}\frac{\partial f}{\partial \theta}, \frac{1}{r \sin\theta}\frac{\partial f}{\partial \phi}\right)</math>

(where <math>\phi</math> is the azimuthal angle, <math>\theta</math> is the zenith angle, and again the components are taken with respect to the local orthonormal basis).

Example

For example, the gradient of the function in Cartesian coordinates

<math>f(x,y,z)= \ 2x+3y^2-\sin(z)</math>

is:

<math>\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right) = \left(2,\ 6y,\ -\cos(z)\right).</math>
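This result can be reproduced by symbolic differentiation; the sketch below assumes SymPy is available.

<syntaxhighlight lang="python">
# Verify the example gradient by differentiating f = 2x + 3y^2 - sin(z).
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*x + 3*y**2 - sp.sin(z)

grad_f = [sp.diff(f, var) for var in (x, y, z)]
print(grad_f)  # [2, 6*y, -cos(z)]
</syntaxhighlight>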

The gradient and the derivative or differential

Linear approximation to a function

The gradient of a function <math>f</math> from the Euclidean space <math>\mathbb{R}^n</math> to <math>\mathbb{R}</math> at any particular point <math>x_0</math> in <math>\mathbb{R}^n</math> characterizes the best linear approximation to <math>f</math> at <math>x_0</math>. The approximation is as follows:

<math> f(x) \approx f(x_0) + (\nabla f)_{x_0}\cdot(x-x_0) </math>

for <math>x</math> close to <math>x_0</math>, where <math>(\nabla f)_{x_0}</math> is the gradient of <math>f</math> computed at <math>x_0</math>, and the dot denotes the dot product on <math>\mathbb{R}^n</math>. This equation is equivalent to the first two terms of the multivariable Taylor series expansion of <math>f</math> at <math>x_0</math>.
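A small numerical sketch of the linear approximation is shown below, reusing the example function <math>f(x,y,z)= 2x+3y^2-\sin(z)</math> from above; the base point and offset are illustrative choices, and NumPy is assumed to be available.

<syntaxhighlight lang="python">
# Compare f(p) with its first-order approximation around a base point p0.
import numpy as np

def f(p):
    x, y, z = p
    return 2*x + 3*y**2 - np.sin(z)

def grad_f(p):
    x, y, z = p
    return np.array([2.0, 6.0*y, -np.cos(z)])

p0 = np.array([1.0, 2.0, 0.5])
p = p0 + np.array([0.01, -0.02, 0.03])          # a point close to p0

linear = f(p0) + grad_f(p0).dot(p - p0)         # f(p0) + grad f(p0) . (p - p0)
print(f(p), linear)   # the two values agree up to terms of order |p - p0|^2
</syntaxhighlight>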

The differential or (exterior) derivative

The best linear approximation to a function <math>f : \mathbb{R}^n \to \mathbb{R}</math> at a point <math>x</math> in <math>\mathbb{R}^n</math> is a linear map from <math>\mathbb{R}^n</math> to <math>\mathbb{R}</math> which is often denoted by <math>\mathrm{d}f_x</math> or <math>Df(x)</math> and called the differential or (total) derivative of <math>f</math> at <math>x</math>. The gradient is therefore related to the differential by the formula

<math> (\nabla f)_x\cdot v = \mathrm d f_x(v)</math>

for any <math>v \in \mathbb{R}^n</math>. The function <math>\mathrm{d}f</math>, which maps <math>x</math> to <math>\mathrm{d}f_x</math>, is called the differential or exterior derivative of <math>f</math> and is an example of a differential 1-form.

If <math>\mathbb{R}^n</math> is viewed as the space of (length <math>n</math>) column vectors (of real numbers), then one can regard <math>\mathrm{d}f</math> as the row vector

<math> \mathrm{d}f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right) </math>

so that <math>\mathrm{d}f_x(v)</math> is given by matrix multiplication. The gradient is then the corresponding column vector, i.e., <math>\nabla f = \mathrm{d} f^T</math>.
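For a concrete two-variable sketch (the function <math>f(x,y)=x^2 y</math> is an illustrative choice, not from the text), the differential is a row vector and the gradient the corresponding column vector:

<math>\mathrm{d}f = \begin{pmatrix} 2xy & x^2 \end{pmatrix}, \qquad \nabla f = \begin{pmatrix} 2xy \\ x^2 \end{pmatrix}, \qquad \mathrm{d}f_{(x,y)}(v) = 2xy\,v_1 + x^2\,v_2 = (\nabla f)\cdot v.</math>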

The covariance of the gradient

The differential is more natural than the gradient because it is invariant under all coordinate transformations (or diffeomorphisms), whereas the gradient is only invariant under orthogonal transformations (because of the implicit use of the dot product in its definition). Because of this, it is common to blur the distinction between the two concepts using the notion of covariant and contravariant vectors. From this point of view, the components of the gradient transform covariantly under changes of coordinates, so it is called a covariant vector field, whereas the components of a vector field in the usual sense transform contravariantly. In this language the gradient is the differential, as a covariant vector field is the same thing as a differential 1-form.[1]

Note: this terminology is confused further by differing conventions. Although the components of a differential 1-form transform covariantly under coordinate transformations, differential 1-forms themselves transform contravariantly (by pullback) under diffeomorphisms. For this reason differential 1-forms are sometimes said to be contravariant rather than covariant, in which case vector fields are covariant rather than contravariant.

The gradient on Riemannian manifolds

For any smooth function <math>f</math> on a Riemannian manifold <math>(M,g)</math>, the gradient of <math>f</math> is the vector field <math>\nabla f</math> such that for any vector field <math>X</math>,

<math>g(\nabla f, X ) = \partial_X f, \qquad \text{i.e.,}\quad g_x((\nabla f)_x, X_x ) = (\partial_X f) (x)</math>

where <math>g_x( \cdot, \cdot )</math> denotes the inner product of tangent vectors at <math>x</math> defined by the metric <math>g</math>, and <math>\partial_X f</math> (sometimes denoted <math>X(f)</math>) is the function that takes any point <math>x \in M</math> to the directional derivative of <math>f</math> in the direction <math>X</math>, evaluated at <math>x</math>. In other words, in a coordinate chart <math>\varphi</math> from an open subset of <math>M</math> to an open subset of <math>\mathbb{R}^n</math>, <math>(\partial_X f)(x)</math> is given by:

<math>\sum_{j=1}^n X^{j} (\varphi(x)) \frac{\partial}{\partial x_{j}}(f \circ \varphi^{-1}) \Big|_{\varphi(x)},</math>

where <math>X^j</math> denotes the <math>j</math>th component of <math>X</math> in this coordinate chart.

In local coordinates, the gradient therefore takes the form (using the Einstein summation convention over repeated indices):

<math> \nabla f= g^{ik}\frac{\partial f}{\partial x^{k}}\frac{\partial}{\partial x^{i}}.</math>

Generalizing the case <math>M=\mathbb{R}^n</math>, the gradient of a function is related to its exterior derivative, since <math>(\partial_X f) (x) = \mathrm{d}f_x(X_x)</math>. More precisely, the gradient <math>\nabla f</math> is the vector field associated to the differential 1-form <math>\mathrm{d}f</math> using the musical isomorphism <math>\sharp=\sharp^g\colon T^*M\to TM</math> (called "sharp") defined by the metric <math>g</math>. The relation between the exterior derivative and the gradient of a function on <math>\mathbb{R}^n</math> is a special case of this, in which the metric is the flat metric given by the dot product.
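As a sketch of the local formula, take the Euclidean plane in polar coordinates <math>(r, \theta)</math>, where the metric is <math>g = \mathrm{d}r^2 + r^2\,\mathrm{d}\theta^2</math>, so that <math>g^{ik} = \operatorname{diag}(1, 1/r^2)</math>. Then

<math>\nabla f = \frac{\partial f}{\partial r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\frac{\partial}{\partial \theta} = \frac{\partial f}{\partial r}\,\hat{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{e}_\theta,</math>

where <math>\hat{e}_\theta = \tfrac{1}{r}\tfrac{\partial}{\partial \theta}</math> is the unit vector in the <math>\theta</math> direction; this recovers the cylindrical-coordinate expression given earlier.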

References

  1. Korn, Granino Arthur; Korn, Theresa M. Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review. New York: Dover Publications. pp. 157–160. ISBN 0-486-41147-8.


