
6.3: Linear Differential Operators - Mathematics


Your calculus class became much easier when you stopped using the limit definition of the derivative, learned the power rule, and started using linearity of the derivative operator.

Example 64

Let \(V\) be the vector space of polynomials of degree 2 or less with standard addition and scalar multiplication.

\[V = \{a_{0}\cdot 1 + a_{1}x + a_{2} x^{2} \;|\; a_{0},a_{1},a_{2} \in \Re \} \nonumber\]

Let \(\frac{d}{dx} \colon V \rightarrow V\) be the derivative operator. The following three equations, along with linearity of the derivative operator, allow one to take the derivative of any 2nd-degree polynomial:

\[
\frac{d}{dx} 1 = 0, \quad \frac{d}{dx} x = 1, \quad \frac{d}{dx} x^{2} = 2x. \nonumber
\]

In particular

\[
\frac{d}{dx} (a_{0}\cdot 1 + a_{1}x + a_{2} x^{2}) =
a_{0}\frac{d}{dx} 1 + a_{1} \frac{d}{dx} x + a_{2} \frac{d}{dx} x^{2}
= 0 + a_{1} + 2a_{2}x. \nonumber
\]

Thus, the derivative acting on any of the infinitely many polynomials of degree 2 or less is determined by its action on just three inputs.
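The three rules above can be packaged as a matrix acting on coefficient vectors \((a_0, a_1, a_2)\); the following sketch (the sample polynomial is an illustrative choice, not part of the text) checks the computation numerically.

```python
import numpy as np

# d/dx on V = {a0 + a1*x + a2*x^2}, written in the basis (1, x, x^2).
# Columns are the images of the basis vectors:
#   d/dx 1 = 0,  d/dx x = 1,  d/dx x^2 = 2x.
D = np.array([
    [0, 1, 0],   # constant term of the derivative
    [0, 0, 2],   # x-coefficient of the derivative
    [0, 0, 0],   # x^2-coefficient: always 0, the degree drops
])

# Differentiate p(x) = 3 + 5x + 7x^2  ->  p'(x) = 5 + 14x.
coeffs = np.array([3, 5, 7])
print(D @ coeffs)   # [ 5 14  0]
```

Linearity means this single \(3\times 3\) matrix determines the derivative of every element of \(V\), exactly as the text asserts.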


Linear differential operator

A linear differential operator $A$ is given locally by an expression of the form

$$ \tag{1} Au = \sum_{i_1 + \dots + i_n \leq m} a_{i_1 \dots i_n} \frac{\partial^{i_1 + \dots + i_n} u}{\partial x_1^{i_1} \cdots \partial x_n^{i_n}}. $$

Here $a_{i_1 \dots i_n}$ are functions with values in the same field, called the coefficients of $A$. If the coefficients take values in the set of $(t \times s)$-dimensional matrices over $k$, then the linear differential operator $A$ is defined on vector-valued functions $u = (u_1, \dots, u_s)$ and transforms them into vector-valued functions $v = (v_1, \dots, v_t)$. In the case $n = 1$ it is called a linear ordinary differential operator, and in the case $n > 1$ it is called a linear partial differential operator.
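The local expression (1) is straightforward to implement symbolically; in the sketch below the coefficient table and test function are illustrative choices, with $n = 2$ and scalar coefficients.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Coefficients a_{i1 i2}, keyed by the multi-index (i1, i2); this
# particular second-order operator in two variables is illustrative.
coeffs = {
    (2, 0): 1,        # d^2/dx1^2
    (0, 2): 1,        # d^2/dx2^2
    (1, 0): x2,       # x2 * d/dx1
    (0, 0): -1,       # zero-order term
}

def apply_operator(coeffs, u):
    """Form (1): sum of a_{i1 i2} * d^{i1+i2} u / dx1^{i1} dx2^{i2}."""
    total = 0
    for (i1, i2), a in coeffs.items():
        du = u
        for _ in range(i1):
            du = sp.diff(du, x1)
        for _ in range(i2):
            du = sp.diff(du, x2)
        total += a * du
    return total

v = apply_operator(coeffs, sp.sin(x1) * x2)
print(sp.simplify(v))   # equals x2**2*cos(x1) - 2*x2*sin(x1)
```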

Let $X$ be a differentiable manifold and let $E$ and $F$ be finite-dimensional vector bundles on $X$ (all of class $C^\infty$, cf. Vector bundle). Let $\widetilde{E}$ and $\widetilde{F}$ be the sheaves (cf. Sheaf) of germs of sections of these bundles of the corresponding smoothness class. A linear differential operator in the wide sense $A : E \rightarrow F$ is a sheaf mapping $\widetilde{E} \rightarrow \widetilde{F}$ satisfying the following condition: every point $x \in X$ has a coordinate neighbourhood $U$ within which the bundles are trivial, while the mapping

$$ A : \Gamma(U, E) \rightarrow \Gamma(U, F), $$

where $\Gamma(U, E)$ is the space of sections of $E$ over $U$, acts according to (1), in which the local coordinates $x_1, \dots, x_n$ and the trivializations

$$ E|_U \cong U \times k^s, \qquad F|_U \cong U \times k^t $$

are used. The smallest number $m$ such that (1) is suitable at all points $x \in X$ is called the order of the linear differential operator $A$. For example, every non-zero connection on $E$ is a linear differential operator $d : E \rightarrow E \otimes \Omega^1(X)$ of the first order. Another, equivalent, definition of a linear differential operator $A : E \rightarrow F$ is the following: it is a linear operator $A : \Gamma(X, E) \rightarrow \Gamma(X, F)$ satisfying the condition $\operatorname{supp} Au \subset \operatorname{supp} u$, where $\operatorname{supp} u$ is the support of $u$.

A linear differential operator can be defined on wider function spaces. For example, if a positive metric is defined on $X$ and a scalar product on the bundles $E$ and $F$, then the spaces of square-integrable sections of these bundles are defined. A linear differential operator defined by the local expressions (1) determines a linear unbounded operator $A : L_2(E) \rightarrow L_2(F)$. Under certain weak assumptions the latter can be closed as an operator on Hilbert spaces. This closure is also called a linear differential operator. In a similar way one can construct an operator that acts on Sobolev spaces or on spaces of more general scales.

A linear differential operator of class $C^\infty$ can be extended to an operator on spaces of generalized sections. Such an extension can be constructed by means of a formally adjoint operator. Let $E'$ be the bundle dual to $E$ (that is, $E' = \operatorname{Hom}(E, I)$, where $I$ is the trivial one-dimensional bundle) and let $\Omega$ be the bundle of differential forms on $X$ of maximal degree. There is defined a bilinear mapping

$$ (\cdot, \cdot)_E : \Gamma(X, E) \times \Gamma_0(X, E' \otimes \Omega) \rightarrow k, $$

which involves integration over $X$. Here $\Gamma_0(\cdot)$ is the space of sections with compact support. The formula

$$ (Au, v)_F = (u, {}^t A v)_E, \qquad u \in \Gamma(X, E), \ v \in \Gamma_0(X, F' \otimes \Omega), $$

uniquely defines a linear operator

$$ {}^t A : \Gamma_0(X, F' \otimes \Omega) \rightarrow \Gamma_0(X, E' \otimes \Omega). $$

It is induced by the linear differential operator ${}^t A : F' \otimes \Omega \rightarrow E' \otimes \Omega$ which inside the coordinate neighbourhood $U$ has the expression

$$ {}^t A u = \sum (-1)^{i_1 + \dots + i_n} \frac{\partial^{i_1 + \dots + i_n} ({}^t a_{i_1 \dots i_n} u)}{\partial x_1^{i_1} \cdots \partial x_n^{i_n}}, $$

if the bundle $\Omega$ is trivialized by the choice of the section $dx_1 \wedge \dots \wedge dx_n$. The linear differential operator ${}^t A$ is said to be formally adjoint with respect to $A$.
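The defining property of the formal adjoint can be checked by integration by parts in the simplest case: a scalar first-order operator $Au = a\,u'$ on an interval, with ${}^t A v = -(av)'$ and a section $v$ vanishing at the ends (all particular functions below are illustrative choices).

```python
import sympy as sp

x = sp.symbols('x')
a = 1 + x**2                 # coefficient of A:  Au = a*u'
u = sp.sin(x)                # an arbitrary smooth section
v = (1 - x**2)**2            # stands in for compact support: v(±1) = 0

Au  = a * sp.diff(u, x)
tAv = -sp.diff(a * v, x)     # formal adjoint: (tA)v = -(a*v)'

lhs = sp.integrate(Au * v, (x, -1, 1))
rhs = sp.integrate(u * tAv, (x, -1, 1))
print(sp.simplify(lhs - rhs))   # 0: boundary terms vanish, (Au, v) = (u, tAv)
```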

In the space $\Gamma_0(X, E' \otimes \Omega)$ convergence is defined according to the following rule: $f_k \rightarrow f$ if the union of the supports of the sections $f_k$ belongs to a compact set and if, in any coordinate neighbourhood $U \subset X$ over which there is a trivialization of $E$, the vector-valued functions $f_k$ converge uniformly to $f$ together with all partial derivatives with respect to the local coordinates. The space of all continuous linear functionals on $\Gamma_0(X, E' \otimes \Omega)$ is called the space of generalized sections of $E$ and is denoted by $D'(E)$. The operator ${}^t A$ takes convergent sequences to convergent sequences and therefore generates an adjoint operator $D'(E) \rightarrow D'(F)$. The latter coincides with $A$ on the subspace $\Gamma(X, E)$ and is called the extension of the given linear differential operator to the space of generalized sections. One also considers other extensions of linear differential operators: to spaces of generalized sections of infinite order, to the space of hyperfunctions, etc.

A linear differential operator of infinite order is understood to be an operator that acts in some space of analytic functions (sections) and is defined by (1), in which the summation is over an infinite set of indices $i_1, \dots, i_n, \dots$.

The following property characterizes linear differential operators. A sequence $\{f_k\} \subset \Gamma(X, E)$ is said to converge to a section $f$ if $f_k$ tends uniformly to $f$ together with all partial derivatives in any coordinate neighbourhood that has compact closure. A linear operator $A : \Gamma_0(X, E) \rightarrow \Gamma(X, F)$ that takes convergent sequences to convergent sequences is a linear differential operator of order at most $m$ if and only if for any $f, g \in C^\infty(X)$ the function

$$ \tag{2} \exp(-i \lambda g) \, A(f \exp(i \lambda g)) $$

is a polynomial in the parameter $\lambda$ of degree at most $m$. If this condition is replaced by the assumption that (2) is represented by an asymptotic power series, then one obtains the definition of a linear pseudo-differential operator.
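For a concrete operator this characterization is easy to verify symbolically; the sketch below uses $A = d^2/dx^2$ (order 2) with arbitrary smooth $f$ and $g$.

```python
import sympy as sp

x, lam = sp.symbols('x lam', real=True)
f = sp.Function('f')(x)
g = sp.Function('g')(x)

# A = d^2/dx^2 has order 2, so exp(-i*lam*g)*A(f*exp(i*lam*g)) must be
# a polynomial of degree 2 in lam, as in (2).
conj = sp.exp(-sp.I*lam*g) * sp.diff(f * sp.exp(sp.I*lam*g), x, 2)
conj = sp.powsimp(sp.expand(conj))   # cancel the exponential factors

print(sp.degree(conj, lam))   # 2
```

The leading coefficient in $\lambda$ is $-f\,(g')^2$, which is exactly the symbol of $d^2/dx^2$ evaluated at $\xi = dg$, anticipating the definition of the symbol below.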

Suppose that the manifold $X$ and also the bundles $E$ and $F$ are endowed with a $G$-structure, where $G$ is a group. Then the action of this group on any linear differential operator $A : E \rightarrow F$ is defined by the formula

$$ g^*(A) : u \mapsto g^{-1}(A(g u)), \qquad g \in G. $$

A linear differential operator $A$ is said to be invariant with respect to $G$ if $g^*(A) = A$ for all $g \in G$.

A bundle of jets is an object dual to the space of a linear differential operator. Again suppose that $E$ is a vector bundle on a manifold $X$ of class $C^\infty$. The bundle of $m$-jets of sections of $E$ is a vector bundle $J_m(E)$ on $X$ whose fibre over a point $x$ is equal to $\widetilde{E}_x / \widetilde{E}_x(m)$, where $\widetilde{E}_x$ is a fibre of the sheaf $\widetilde{E}$ of germs of sections of $E$ and $\widetilde{E}_x(m)$ is the subspace of this fibre consisting of germs of sections for which all derivatives up to order $m$ inclusive vanish at $x$. The linear differential operator $d_m : E \rightarrow J_m(E)$ that acts according to the rule that the value of the section $d_m(u)$ at $x$ is equal to the image of the section $u$ in the quotient space $\widetilde{E}_x / \widetilde{E}_x(m)$ is said to be universal. Next, suppose that $F$ is a bundle on $X$ and that $a : J_m(E) \rightarrow F$ is a bundle homomorphism, that is, a linear differential operator of order zero. The composite

$$ \tag{3} E \stackrel{d_m}{\rightarrow} J_m(E) \stackrel{a}{\rightarrow} F $$

is a linear differential operator of order at most $m$. Conversely, every linear differential operator of order at most $m$ can be represented uniquely as a composite (3).
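A minimal sketch of the factorization (3), for the illustrative ordinary operator $A = d^2/dx^2 + x\,d/dx$ on the line: the universal operator $d_2$ records the 2-jet, and an order-zero map $a$ contracts it with the coefficients.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x)                      # an illustrative section

# Universal operator d_2: a section goes to its 2-jet (u, u', u'').
def d2(u):
    return (u, sp.diff(u, x), sp.diff(u, x, 2))

# Order-zero bundle homomorphism a: J_2(E) -> F, pointwise linear in
# the jet, encoding the coefficients of A = d^2/dx^2 + x*d/dx.
def a(jet):
    j0, j1, j2 = jet
    return j2 + x * j1

Au = a(d2(u))                      # the composite (3)
print(sp.simplify(Au - (sp.diff(u, x, 2) + x * sp.diff(u, x))))   # 0
```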

The symbol (principal symbol) of a linear differential operator $A : E \rightarrow F$ is the family of linear mappings

$$ \sigma_A(x, \xi) : E_x \rightarrow F_x, $$

depending on a point $(x, \xi)$ of the cotangent bundle $T^*(X)$. They act according to the formula $e \mapsto a(\xi^m e)/m!$, where $a$ is the homomorphism involved in (3), $e \in \widetilde{E}_x$, and $\xi^m e$ is the element of $J_m(E)_x$ equal to the image of $f^m e$, where $f$ is the germ of a function of class $C^\infty$ such that $f(x) = 0$, $df(x) = \xi$. If $A$ has the form (1), then

$$ \sigma_A(x, \xi) = \sum_{i_1 + \dots + i_n = m} a_{i_1 \dots i_n} \xi_1^{i_1} \cdots \xi_n^{i_n}, $$

where $\xi_1, \dots, \xi_n$ are the coordinates in a fibre of the bundle $T^*(U) \cong U \times k^n$; thus, the symbol is a form of degree $m$, homogeneous in $\xi$. In accordance with this construction of the symbol one introduces the concept of a characteristic. A characteristic of a linear differential operator $A$ is a point $(x, \xi) \in T^*(X)$ at which the symbol $\sigma_A(x, \xi)$ has non-zero kernel.
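Computing the symbol from (1) amounts to keeping the top-order coefficients and substituting $\xi_j$ for $\partial/\partial x_j$; the sketch below contrasts the Laplace operator, whose symbol vanishes only at $\xi = 0$, with the wave operator, which has real characteristics (the coefficient tables are illustrative choices).

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2', real=True)

# Principal symbol of sum a_{i1 i2} d^{i1+i2}/dx1^{i1} dx2^{i2}:
# keep only the top-order terms and replace d/dx_j by xi_j.
def symbol(coeffs):
    m = max(i1 + i2 for (i1, i2) in coeffs)
    return sum(a * xi1**i1 * xi2**i2
               for (i1, i2), a in coeffs.items() if i1 + i2 == m)

laplace = {(2, 0): 1, (0, 2): 1}     # u_{x1 x1} + u_{x2 x2}
wave    = {(2, 0): 1, (0, 2): -1}    # u_{x1 x1} - u_{x2 x2}

print(symbol(laplace))   # xi1**2 + xi2**2: zero kernel only at xi = 0 (elliptic)
print(symbol(wave))      # xi1**2 - xi2**2: real characteristics xi1 = ±xi2
```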

The classification adopted in the theory of linear differential operators refers mainly to linear differential operators that act in bundles of the same dimension, in fact to operators of the form (1) whose coefficients are square matrices. A linear differential operator is said to be elliptic if it has no real characteristics $(x, \xi)$ with $\xi \neq 0$ (cf. also Elliptic partial differential equation). This class is characterized by the best local properties of solutions of the equation $Au = w$, and also by the fact that boundary value problems in bounded domains are well-posed. The class of hyperbolic linear differential operators is also distinguished by a condition imposed only on the characteristics (cf. Hyperbolic partial differential equation). The property of being hyperbolic is closely connected with the well-posedness of the Cauchy problem with non-analytic data. The class of linear differential operators of principal type is specified by a condition imposed only on the symbol (cf. Principal type, partial differential operator of). A theory of local solvability and smoothness of solutions has been developed for such operators. The class of parabolic linear differential operators is distinguished by a condition that involves not only the symbol but also some lower-order terms (cf. Parabolic partial differential equation). Typical for parabolic linear differential operators are the mixed problem and the Cauchy problem with conditions at infinity. The class of hypo-elliptic linear differential operators is specified by the following informal condition: every a priori generalized solution of the equation $Au = w$ with right-hand side in $C^\infty$ itself belongs to $C^\infty$. A number of formal conditions on the expression (1) that guarantee hypo-ellipticity are known.

Apart from these fundamental types of linear differential operators, one sometimes speaks of linear differential operators of mixed or variable type (cf. also Mixed-type differential equation), of linear differential operators of composite type, etc. One also considers problems in unbounded domains with conditions at infinity, boundary value problems with a free boundary, problems of spectral theory, problems of optimal control, etc.

A complex of linear differential operators is a sequence of linear differential operators

$$ E^* : \dots \rightarrow E_k \stackrel{A_k}{\rightarrow} E_{k+1} \stackrel{A_{k+1}}{\rightarrow} E_{k+2} \rightarrow \dots $$

in which $A_{k+1} A_k = 0$ for all $k$. The cohomology of a complex of linear differential operators $E^*$ is the cohomology of the complex of vector spaces $\Gamma(X, E^*)$. Let $H^k$ be the cohomology of this complex at the $k$-th term. The sum $\sum (-1)^k \dim H^k$ is called the index of the complex of linear differential operators. Thus, the index of an elliptic complex of linear differential operators (that is, one in which only finitely many $E_k$ are non-zero and the complex formed by the symbols of the linear differential operators $A_k$ is exact at all points $(x, \xi) \in T^*(X)$, $\xi \neq 0$) is finite in the case of compact $X$, and the search for formulas that express the index of such a complex in terms of its symbol is the content of a number of investigations that combine the theory of linear differential operators with algebraic geometry and algebraic topology (see Index formulas).

$$ \tag{4} D(F) \rightarrow D(E) \stackrel{p}{\rightarrow} M(A) \rightarrow 0, $$

and the $O(X)$-submodules $M_k \equiv p(D_k(E))$, $k = 0, 1, \dots,$ form an increasing filtration in $M(A)$. The graded $O(X)$-module

$$ \operatorname{gr} M(A) = \bigoplus_0^\infty M_k / M_{k-1}, \qquad M_{-1} = 0, $$

is called the symbol module of the linear differential operator $A$. Since for any $k$ and $l$ the action of $D_l$ on $M(A)$ takes $M_k$ into $M_{k+l}$, $\operatorname{gr} M(A)$ carries the structure of a graded module over the graded algebra $\operatorname{gr} D \equiv \bigoplus_0^\infty D_k / D_{k-1}$. The annihilator of this module is a homogeneous ideal in $\operatorname{gr} D$. The characteristic manifold of the operator $A$ is the set of zeros of this ideal. Since the algebra $\operatorname{gr} D$ is isomorphic to the symmetric algebra of the tangent bundle $T(X)$, the characteristic manifold is canonically imbedded in $T^*(X)$, and its intersection with every fibre is an algebraic cone.

If the manifold $X$ and the given bundles have a real or complex analytic structure, then the characteristic manifold coincides with the set of roots of the ideal $\operatorname{gr}(\operatorname{ann} M(A))$. In this case it is a closed analytic subset of $T^*(X)$, and if it is not empty its dimension is at least $\dim X$. In the case when this dimension is equal to $\dim X$, the linear differential operator $A$ is said to be maximally overdetermined, or holonomic.

The formal theory of general linear differential operators is concerned with the concepts of formal integrability and the resolvent. The property of formal integrability, formalized in the dual terminology of jets, is equivalent to the condition that the $O(X)$-module $\operatorname{gr} M(A)$ is locally free. The resolvent of a linear differential operator $A$ is understood to be the sequence, extending (4),

$$ \dots \rightarrow D(F_1) \rightarrow D(F) \rightarrow D(E) \rightarrow M(A), $$

in which all the $A_k$, $k = 1, 2, \dots,$ are linear differential operators. In particular, $A_1$ is called the compatibility operator for $A$. Formal integrability ensures the local existence of the resolvent.

In the literature use is made of the terms "overdetermined" and "underdetermined" for systems of differential equations; however, there is no satisfactory general definition. The following could serve as an approximation to such a definition: there is a non-zero linear differential operator $B$ such that $BA = 0$ (overdetermination) or $AB = 0$ (underdetermination). For example, the linear differential operator $d$ equal to the restriction of the operator of exterior differentiation to forms of degree $k$ on a manifold $X$ of dimension $n$ is underdetermined for $k > 0$, overdetermined for $k < n$, and holonomic for $k = 0$.
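The example of exterior differentiation can be made concrete in $\mathbf{R}^3$, where $d$ on functions is the gradient and $d$ on 1-forms is the curl: the identity $\operatorname{curl} \circ \operatorname{grad} = 0$ exhibits curl as a compatibility operator for grad.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

def grad(f):
    return [sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)]

def curl(v):
    vx, vy, vz = v
    return [sp.diff(vz, y) - sp.diff(vy, z),
            sp.diff(vx, z) - sp.diff(vz, x),
            sp.diff(vy, x) - sp.diff(vx, y)]

# B*A = 0 with A = grad, B = curl: equality of mixed partials.
print(curl(grad(f)))   # [0, 0, 0]
```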

The main problems studied for general linear differential operators are the following: the solvability of the equation $Au = w$ with right-hand side satisfying the compatibility condition $A_1 w = 0$; the possibility of extending solutions of the equation $Au = 0$ to a larger domain (an effect connected with overdetermination); and the representation of the general solution in terms of solutions of special form. The last problem can be stated more specifically for invariant operators, for example for linear differential operators in $\mathbf{R}^n$ with constant or periodic coefficients: to describe a representation of a group $G$ in the space of solutions as an integral (in some sense) over all indecomposable subrepresentations. For operators with constant coefficients such a representation is specified by an integral with respect to exponentials (exponential representation), and for operators with periodic coefficients by an integral with respect to generalized Floquet solutions.

Linear differential operators are also defined on arbitrary algebraic structures. Let $R$ be a commutative ring and let $E$ and $F$ be $R$-modules. A mapping of sets $A : E \rightarrow F$ is called a linear differential operator of order at most $m$ if it is additive and if for any element $a \in R$ the mapping $aA - Aa$ is a linear differential operator of order at most $m - 1$; a linear differential operator of order at most $-1$ means the zero mapping. In particular, a linear differential operator of order zero is a homomorphism of $R$-modules, and conversely. Every derivation (cf. Derivation in a ring) $v : R \rightarrow F$ is a linear differential operator of the first order (or equal to zero). If $R$ is an algebra over a field $k$, then a linear differential operator over $R$ is a linear differential operator over the ring $R$ that is a $k$-linear mapping. Such a linear differential operator has a number of the formal properties of ordinary linear differential operators. If $R$ is the algebra of all formal power series over $k$, or the algebra of convergent power series over $k$, and if $E$ and $F$ are free $R$-modules of finite type, then every linear differential operator $A : E \rightarrow F$ of order at most $m$ can be written uniquely in the form (1).
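The inductive definition can be checked directly for $R = k[x]$ and the order-1 operator $A = d/dx$: the mapping $aA - Aa$ is multiplication by $-a'$, an operator of order 0 (the particular $a$ below is an illustrative choice).

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
a = x**2                              # an element of R = k[x]

A  = lambda w: sp.diff(w, x)          # order-1 operator d/dx
Da = lambda w: a*A(w) - A(a*w)        # the mapping aA - Aa

# (aA - Aa)u = a*u' - (a*u)' = -a'*u : multiplication, an order-0 operator
print(sp.simplify(Da(u) + sp.diff(a, x)*u))   # 0
```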

Let $(X, O)$ be a ringed space and let $E$ and $F$ be $O$-modules. A linear differential operator $A : E \rightarrow F$ is any sheaf morphism that acts in the fibres over every point $x \in X$ like a linear differential operator over the ring (algebra) $O_x$. Linear differential operators that act in modules or sheaves of modules have been used in a number of questions in algebraic geometry.


2. Linear differential equations with an unbounded operator.

Suppose that $A_0(t)$ is invertible for every $t$, so that (1) can be solved for the derivative and takes the form

$$ \tag{9} \dot{u} = A(t) u + f(t), $$

and suppose that here $A(t)$ is an unbounded operator in a space $E$, with dense domain of definition $D(A(t))$ in $E$ and with non-empty resolvent set, and suppose that $f(t)$ is a given function and $u(t)$ an unknown function, both with values in $E$.

Even for the simplest equation $\dot{u} = Au$ with an unbounded operator, solutions of the Cauchy problem $u(0) = u_0$ need not exist, they may be non-unique, and they may be non-extendable to the whole semi-axis, so the main investigations are devoted to questions of existence and uniqueness of the solutions. A solution of the equation $\dot{u} = Au$ on the interval $[0, T]$ is understood to be a function that takes values in $D(A)$, is differentiable on $[0, T]$ and satisfies the equation. Sometimes this definition is too rigid, and one introduces the concept of a weak solution as a function that has the same properties on $(0, T]$ and is only continuous at $0$.

Suppose that the operator $A$ has a resolvent

$$ R(\lambda, A) = (A - \lambda I)^{-1} $$

for all sufficiently large positive $\lambda$ and that

$$ \varlimsup_{\lambda \rightarrow \infty} \frac{\ln \| R(\lambda, A) \|}{\lambda} = h < \infty. $$

Then the weak solution of the problem

$$ \tag{10} \dot{u} = Au, \qquad u(0) = u_0, $$

is unique on $[0, T - h]$ and can branch at $t = T - h$. If $h = 0$, then the solution is unique on the whole semi-axis. This assertion is precise as regards the behaviour of $R(\lambda, A)$ as $\lambda \rightarrow \infty$.

If for every $u_0 \in D(A)$ there is a unique solution of the problem (10) that is continuously differentiable on $[0, T]$, then this solution can be extended to the whole semi-axis and can be represented in the form $u(t) = U(t) u_0$, where $U(t)$ is a strongly-continuous semi-group of bounded operators on $[0, \infty)$, $U(0) = I$, for which the estimate $\| U(t) \| \leq M e^{\omega t}$ holds. For the equation to have this property it is necessary and sufficient that

$$ \tag{11} \| (\lambda - \omega)^m R^m(\lambda, A) \| \leq M $$

for all $\lambda > \omega$ and $m = 1, 2, \dots,$ where $M$ does not depend on $\lambda$ and $m$. These conditions are difficult to verify. They are satisfied if $\| (\lambda - \omega) R(\lambda, A) \| \leq 1$, and then $\| U(t) \| \leq e^{\omega t}$. If $\omega = 0$, then $U(t)$ is a contraction semi-group. This is so if and only if $A$ is a maximal dissipative operator. If $u_0 \notin D(A)$, then the function $U(t) u_0$ is not differentiable (in any case at $t = 0$); it is often called the generalized solution of (10). Solutions of the equation $\dot{u} = Au$ can be constructed as the limit, as $n \rightarrow \infty$, of solutions of the equations $\dot{u} = A_n u$ with bounded operators $A_n$, under the same initial conditions. For this it is sufficient that the operators $A_n$ commute, converge strongly to $A$ on $D(A)$ and satisfy $\| e^{t A_n} \| \leq M e^{\omega t}$.

If the conditions (11) are satisfied, then the operators $A_n = -nI - n^2 R(n, A)$ (the Yosida operators) have these properties.
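With the convention $R(\lambda, A) = (A - \lambda I)^{-1}$ used here, the Yosida operators can be exercised on a matrix, i.e. on a bounded operator, where they simply converge to $A$ itself (the matrix is an illustrative choice; the interesting case is of course unbounded $A$).

```python
import numpy as np

# Yosida operators A_n = -n*I - n^2 * R(n, A), with
# R(lambda, A) = (A - lambda*I)^{-1} as in the text.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
I = np.eye(2)

def yosida(A, n):
    R = np.linalg.inv(A - n*I)        # resolvent R(n, A)
    return -n*I - n**2 * R

# For a bounded operator the error decays roughly like 1/n.
err = [np.linalg.norm(yosida(A, n) - A) for n in (10, 100, 1000)]
print(err)
```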

Another method for constructing solutions of the equation $\dot{u} = Au$ is based on the Laplace transform. If the resolvent of $A$ is defined on some contour $\Gamma$, then the function

$$ \tag{12} u(t) = -\frac{1}{2 \pi i} \int_\Gamma e^{\lambda t} R(\lambda, A) u_0 \, d\lambda $$

formally satisfies the equation

$$ \dot{u} = A u + \frac{1}{2 \pi i} \int_\Gamma e^{\lambda t} \, d\lambda \; u_0. $$

If the convergence of the integrals, the validity of differentiation under the integral sign and the vanishing of the last integral are ensured, then $u(t)$ satisfies the equation. The difficulty lies in the fact that the norm of the resolvent cannot decrease faster than $|\lambda|^{-1}$ at infinity. However, on some elements it does decrease faster. For example, if $R(\lambda, A)$ is defined for $\operatorname{Re} \lambda \geq \alpha$ and if

$$ \| R(\lambda, A) \| \leq M |\lambda|^k, \qquad k \geq -1, $$

for sufficiently large $|\lambda|$, then for $\Gamma = (-i\infty, i\infty)$ formula (12) gives a solution for any $u_0 \in D(A^{[k]+3})$. In a "less good" case, when the previous inequality is satisfied only in a domain of the form

$$ \operatorname{Re} \lambda \geq a \ln (1 + |\operatorname{Im} \lambda|) + b $$

(weakly hyperbolic equations), and $\Gamma$ is the boundary of this domain, one obtains a solution only for those $u_0$ belonging to the intersection of the domains of definition of all powers of $A$, with definite behaviour of $\| A^n u_0 \|$ as $n \rightarrow \infty$.

Significantly weaker conditions are obtained in the case when $\Gamma$ goes into the left half-plane and one can use the decrease of the function $|e^{\lambda t}|$ on it. As a rule, the solutions have increased smoothness for $t > 0$. If the resolvent is bounded on the contour $\Gamma : \operatorname{Re} \lambda = -\psi(|\operatorname{Im} \lambda|)$, where $\psi(\tau)$ is a smooth non-decreasing concave function that increases like $\ln \tau$ at $\infty$, then for any $u_0 \in E$ the function (12) is differentiable and satisfies the equation beginning with some $t_0$; as $t$ increases further, its smoothness increases. If $\psi(\tau)$ increases like a power of $\tau$ with exponent less than one, then the function (12) is infinitely differentiable for $t > 0$; if $\psi(\tau)$ increases like $\tau / \ln \tau$, then $u(t)$ belongs to a quasi-analytic class of functions; if it increases like a linear function, then $u(t)$ is analytic. In all these cases it satisfies the equation $\dot{u} = Au$.
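In the bounded toy case of a matrix $A$, formula (12) over a closed contour enclosing the spectrum reproduces $e^{tA} u_0$ by the residue theorem, and trapezoidal quadrature on a circle converges very fast (all numerical parameters below are illustrative choices).

```python
import numpy as np

# Toy model for (12): a matrix A stands in for a bounded operator, with
# R(lambda, A) = (A - lambda*I)^{-1} as in the text.
A  = np.diag([1.0, -2.0])
u0 = np.array([1.0, 1.0])
t  = 0.7

N, radius = 200, 5.0                       # circle enclosing the spectrum {1, -2}
theta = 2 * np.pi * np.arange(N) / N
lams  = radius * np.exp(1j * theta)
dlams = 1j * lams * 2 * np.pi / N          # d(lambda) along the circle

u = np.zeros(2, dtype=complex)
for lam, dlam in zip(lams, dlams):
    R = np.linalg.inv(A - lam * np.eye(2))
    u += -1 / (2j * np.pi) * np.exp(lam * t) * (R @ u0) * dlam

exact = np.array([np.exp(1.0 * t), np.exp(-2.0 * t)])   # exp(tA)u0 for diagonal A
print(np.max(np.abs(u - exact)))           # tiny: the quadrature is spectrally accurate
```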

The existence of the resolvent on contours that go into the left half-plane can be obtained, by using series expansion, from corresponding estimates on vertical lines. If for $\operatorname{Re} \lambda \geq \gamma$,

$$ \tag{13} \| R(\lambda, A) \| \leq M (1 + |\operatorname{Im} \lambda|)^{-\beta}, \qquad 0 < \beta < 1, $$

then for every $u_0 \in D(A)$ there is a solution of problem (10). All these solutions are infinitely differentiable for $t > 0$. They can be represented in the form $u(t) = U(t) u_0$, where $U(t)$ is an infinitely-differentiable semi-group for $t > 0$ having, generally speaking, a singularity at $t = 0$. For its derivatives one has power estimates as $t \rightarrow 0$.

If the estimate (13) is satisfied with $\beta = 1$, then all generalized solutions of the equation $\dot{u} = Au$ are analytic in some sector containing the positive semi-axis.

The equation $\dot{u} = Au$ is called an abstract parabolic equation if for every $u_0 \in E$ there is a unique weak solution on $[0, \infty)$ satisfying the initial condition $u(0) = u_0$. If

$$ \tag{14} \| R(\lambda, A) \| \leq M |\lambda - \omega|^{-1} \quad \textrm{for } \operatorname{Re} \lambda > \omega, $$

then the equation is an abstract parabolic equation. All its generalized solutions are analytic in some sector containing the positive semi-axis, and

$$ \| \dot{u}(t) \| \leq \frac{C}{t} \| u_0 \|, $$

where $C$ does not depend on $u_0$. Conversely, if the equation has the listed properties, then (14) is satisfied for the operator $A$.

If problem (10) has a unique weak solution for any $u_0 \in D(A)$, whose derivative is integrable on every finite interval, then these solutions can be represented in the form $u(t) = U(t) u_0$, where $U(t)$ is a strongly-continuous semi-group on $(0, \infty)$, and every weak solution of the inhomogeneous equation $\dot{v} = Av + f(t)$ with initial condition $v(0) = 0$ can be represented in the form

$$ \tag{15} v(t) = \int_0^t U(t - s) f(s) \, ds. $$

The function $v(t)$ is defined for any continuous $f(t)$; hence it is called a generalized solution of the inhomogeneous equation. To ensure that it is differentiable, one imposes smoothness conditions on $f(t)$, and the "worse" the semi-group $U(t)$, the "higher" these should be. Thus, under the previous conditions, (15) is a weak solution of the inhomogeneous equation if $f(t)$ is twice continuously differentiable; if (11) is satisfied, then (15) is a solution if $f(t)$ is continuously differentiable; if (13) is satisfied with $\beta > 2/3$, then $v(t)$ is a weak solution if $f(t)$ satisfies a Hölder condition with exponent $\gamma > 2(1 - \beta)/\beta$. Instead of smoothness of $f(t)$ with respect to $t$ one can require that the values of $f(t)$ belong to the domain of definition of a suitable power of $A$.
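A scalar toy case makes (15) concrete: for $U(t)u_0 = e^{at}u_0$ and $f \equiv 1$ the generalized solution has the closed form $(e^{at} - 1)/a$, which a simple quadrature reproduces (the parameters are illustrative choices).

```python
import numpy as np

# Scalar toy case of (15): U(t)u0 = exp(a*t)*u0 and f(s) = 1, so
# v(t) = int_0^t exp(a*(t - s)) ds = (exp(a*t) - 1)/a in closed form.
a, t = -1.0, 1.0
s = np.linspace(0.0, t, 10001)
vals = np.exp(a * (t - s))                          # the integrand U(t-s) f(s)
h = s[1] - s[0]
v = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule

print(v, (np.exp(a * t) - 1) / a)                   # the two values agree closely
```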

For an equation with variable operator

$$ \tag{16} \dot{u} = A(t) u, \qquad 0 \leq t \leq T, $$

there are some fundamental existence and uniqueness theorems about solutions (weak solutions) of the Cauchy problem $u(s) = u_0$ on the interval $s \leq t \leq T$. If the domain of definition of $A(t)$ does not depend on $t$,

if the operator $ A ( t) $ is strongly continuous with respect to $ t $ on $ D ( A) $ and if

$$ \| \lambda R(\lambda, A(t)) \| \leq 1 $$

for $\lambda > 0$, then the solution of the Cauchy problem is unique. Moreover, if $A(t)$ is strongly continuously differentiable on $D(A)$, then for every $u_0 \in D(A)$ a solution exists and can be represented in the form

$$ u(t) = U(t, s) u_0, $$

where $ U ( t , s ) $ is an evolution operator with the following properties:

1) $U(t, s)$ is strongly continuous in the triangle $T_\Delta : 0 \leq s \leq t \leq T$;

2) $U(t, s) = U(t, \tau) U(\tau, s)$, $0 \leq s \leq \tau \leq t \leq T$, $U(s, s) = I$;

3) $U(t, s)$ maps $D(A)$ into itself, and the operator $A(t) U(t, s) A^{-1}(s)$ is bounded and strongly continuous in $T_\Delta$;

4) on $D(A)$ the operator $U(t, s)$ is strongly differentiable with respect to $t$ and $s$, with $\partial U / \partial t = A(t) U$ and $\partial U / \partial s = -U A(s)$.

The construction of the operator $U(t, s)$ is carried out by approximating $A(t)$ by bounded operators $A_n(t)$ and replacing the latter by piecewise-constant operators.
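The algebraic properties 2) of an evolution operator are already visible in the scalar equation $\dot{u} = a(t)u$, where $U(t, s) = \exp\int_s^t a(\tau)\,d\tau$ in closed form (the choice $a(\tau) = \cos\tau$ is illustrative).

```python
import numpy as np

# Scalar equation u' = a(t)u with a(tau) = cos(tau):
# U(t, s) = exp(int_s^t cos) = exp(sin(t) - sin(s)) in closed form.
def U(t, s):
    return np.exp(np.sin(t) - np.sin(s))

t, tau, s = 2.0, 1.2, 0.3
print(abs(U(t, s) - U(t, tau) * U(tau, s)))   # composition property 2), up to rounding
print(U(s, s))                                # U(s, s) = 1 plays the role of I
```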

In many important problems the previous conditions on the operator $ A ( t) $ are not satisfied. Suppose that for the operator $ A ( t) $ there are constants $ M $ and $ omega $ such that

$$ \| R(\lambda, A(t_k)) \cdots R(\lambda, A(t_1)) \| \leq M (\lambda - \omega)^{-k} $$

for all $\lambda > \omega$, $0 \leq t_1 \leq \dots \leq t_k \leq T$, $k = 1, 2, \dots$. Suppose that in $E$ there is a densely imbedded Banach space $F$ contained in all the $D(A(t))$ and having the following properties: a) the operator $A(t)$ acts boundedly from $F$ to $E$ and is continuous with respect to $t$ in the norm of bounded operators from $F$ to $E$; and b) there is an isomorphism $S$ of $F$ onto $E$ such that

$$ S A(t) S^{-1} = A(t) + B(t), $$

where $B(t)$ is an operator function that is bounded in $E$ and strongly measurable, and for which $\| B(t) \|$ is integrable on $[0, T]$. Then there is an evolution operator $U(t, s)$ with properties 1), 2) and: 3') $U(t, s) F \subset F$ and $U(t, s)$ is strongly continuous in $F$ on $T_\Delta$; 4') on $F$ the operator $U(t, s)$ is strongly differentiable in the sense of the norm of $E$, and $\partial U / \partial t = A(t) U$, $\partial U / \partial s = -U A(s)$. This assertion makes it possible to obtain existence theorems for the fundamental quasi-linear equations of mathematical physics of hyperbolic type.

The method of frozen coefficients is used in the theory of parabolic equations. Suppose that, for every $t_0 \in [0, T]$, to the equation $\dot{u} = A(t_0) u$ there corresponds an operator semi-group $U_{t_0}(t)$. The unknown evolution operator formally satisfies the integral equations

$$ U(t, s) = U_t(t - s) + \int_s^t U_t(t - \tau) [A(\tau) - A(t)] U(\tau, s) \, d\tau, $$

$$ U(t, s) = U_s(t - s) + \int_s^t U(t, \tau) [A(\tau) - A(s)] U_s(\tau - s) \, d\tau. $$

When the kernels of these equations have weak singularities, one can prove that the equations are solvable and that $U(t, s)$ is an evolution operator. The following statement has the most applications: if

$$ D(A(t)) \equiv D(A), \qquad \| R(\lambda, A(t)) \| < M (1 + |\lambda|)^{-1} $$

for $\operatorname{Re} \lambda \geq 0$, and

$$ \| [A(t) - A(s)] A^{-1}(0) \| \leq C |t - s|^\rho $$

(a Hölder condition), then there is an evolution operator $U(t, s)$ that gives a weak solution $U(t, s) u_0$ of the Cauchy problem for every $u_0 \in E$. Uniqueness of the solution holds under the single condition that the operator $A(t) A^{-1}(0)$ is continuous (in a Hilbert space). An existence theorem similar to the one given above holds for an operator $A(t)$ with a condition of type (13) and a certain relation between $\beta$ and $\rho$.

The assumption that $D(A(t))$ is constant rules out, in applications, boundary value problems with boundary conditions depending on $t$. Suppose that

$$ \| R(\lambda, A(t)) \| \leq M (1 + |\lambda|)^{-1}, \qquad \operatorname{Re} \lambda > 0, $$

$ left | frac 1 ( t) >

- frac 1 ( s) > ight | leq K | t - s | ^ alpha , 0 < alpha < 1 $

$ \left\| \frac{\partial R(\lambda, A(t))}{\partial t} \right\| \leq N|\lambda|^{\rho - 1}, \quad 0 \leq \rho \leq 1 , $

in the sector $|\mathop{\rm arg}\lambda| \leq \pi - \phi$, $\phi < \pi/2$; then there is an evolution operator $U(t, s)$. Here it is not assumed that $D(A(t))$ is constant. There is a version of the last statement adapted to the consideration of parabolic problems in non-cylindrical domains, in which $D(A(t))$ for every $t$ lies in some subspace $E(t)$ of $E$.

The operator $ U ( t , s ) $ for equation (16) formally satisfies the integral equation

$ \tag{17} U(t, s) = I + \int\limits_s^t A(\tau)U(\tau, s)\,d\tau. $

Since $A(t)$ is unbounded, this equation cannot be solved by the method of successive approximation (cf. Sequential approximation, method of). Suppose that there is a family of Banach spaces $E_\alpha$, $0 \leq \alpha \leq 1$, having the property that $E_\beta \subset E_\alpha$ and $\|x\|_\alpha \leq \|x\|_\beta$ for $\alpha < \beta$. Suppose that $A(t)$ is bounded as an operator from $E_\beta$ to $E_\alpha$:

$ \|A(t)x\|_\alpha \leq \frac{C}{\beta - \alpha}\|x\|_\beta , $

and that $A(t)$ is continuous with respect to $t$ in the norm of the space of bounded operators from $E_\beta$ to $E_\alpha$. Then in this space the method of successive approximation for equation (17) converges for $|t - s| \leq (\beta - \alpha)(Ce)^{-1}$. In this way one can locally construct an operator $U(t, s)$ as a bounded operator from $E_\beta$ to $E_\alpha$. In applications this approach gives theorems of Cauchy–Kovalevskaya type (cf. Cauchy–Kovalevskaya theorem).

For the inhomogeneous equation (9), when the evolution operator of the equation $\dot{u} = A(t)u$ is known, the solution of the Cauchy problem is formally written in the form

$ u(t) = U(t, s)u_0 + \int\limits_s^t U(t, \tau)f(\tau)\,d\tau. $

This formula can be justified in various cases under certain smoothness conditions on $ f ( t) $.
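As a concrete sanity check of this variation-of-constants formula, here is a small numerical sketch for the scalar case $\dot{u} = a(t)u + f(t)$, where the evolution operator is simply $U(t, s) = \exp\bigl(\int_s^t a\bigr)$. All concrete choices of $a$, $f$ and the step counts below are illustrative, not from the source:

```python
import numpy as np

# Scalar case u' = a(t) u + f(t): evolution operator U(t, s) = exp(∫_s^t a).
# We evaluate u(t) = U(t,s) u0 + ∫_s^t U(t,τ) f(τ) dτ by quadrature and
# compare with a direct RK4 integration of the ODE.
a = lambda t: np.cos(t)          # illustrative coefficient
f = lambda t: np.sin(t)          # illustrative inhomogeneous term
u0, s, T, n = 1.0, 0.0, 2.0, 400

def trap(ys, xs):
    # composite trapezoid rule
    ys = np.asarray(ys, dtype=float)
    return float(np.sum((ys[1:] + ys[:-1]) * np.diff(xs)) / 2.0)

def U(t, s, steps=400):
    grid = np.linspace(s, t, steps + 1)
    return np.exp(trap(a(grid), grid))

ts = np.linspace(s, T, n + 1)
duhamel = U(T, s) * u0 + trap([U(T, tau) * f(tau) for tau in ts], ts)

# Reference solution by classical RK4 on the same grid.
g = lambda t, u: a(t) * u + f(t)
u, h = u0, (T - s) / n
for t in ts[:-1]:
    k1 = g(t, u); k2 = g(t + h/2, u + h/2*k1)
    k3 = g(t + h/2, u + h/2*k2); k4 = g(t + h, u + h*k3)
    u += h/6 * (k1 + 2*k2 + 2*k3 + k4)

print(abs(duhamel - u) < 1e-3)  # True
```

The two computations agree to quadrature accuracy, which is exactly the justification the text alludes to in the smooth scalar case.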


Differential operator


A generalization of the concept of a differentiation operator. A differential operator (which is generally discontinuous, unbounded and non-linear on its domain) is an operator defined by some differential expression, and acting on a space of (usually vector-valued) functions (or sections of a differentiable vector bundle) on differentiable manifolds or else on a space dual to a space of this type. A differential expression is a mapping $ lambda $ of a set $ Omega $ in the space of sections of a vector bundle $ xi $ with base $ M $ into the space of sections of a vector bundle $ eta $ with the same base such that for any point $ p in M $ and arbitrary sections $ f , g in Omega $ the coincidence of their $ k $- jets (cf. Jet) at $ p $ entails the coincidence of $ lambda f $ and $ lambda g $ at that point. The smallest number $ k $ which meets this condition for all $ p in M $ is said to be the order of the differential expression and the order of the differential operator defined by this expression.

A differential operator on a manifold $M$ without boundary often proves to be an extension of an operator which is defined in a natural manner by a fixed differential expression on some set, open in an appropriate topology, of infinitely (or sufficiently often) differentiable sections of a given vector bundle $\xi$ with base $M$, and thus permits a natural extension to the case of sheaves of germs of sections of differentiable vector bundles. A differential operator $L$ on a manifold $M$ with boundary $\partial M$ is often defined as an extension of an analogous operator which is naturally defined by a differential expression on the set of differentiable functions (or sections of a vector bundle) whose restrictions to $\partial M$ lie in the kernel of some differential operator $l$ on $\partial M$ (or satisfy some other conditions, such as inequalities, determined by requirements imposed, in the range of an operator $l$, on the restrictions of functions from the domain of definition of $L$); the differential operator $l$ is then said to define the boundary conditions for the differential operator $L$. Linear differential operators on spaces dual to spaces of functions (or sections) are defined as operators dual to the differential operators of the above type on these spaces.

Examples.

1) Let $F$ be a real-valued function of $k + 2$ variables $x, y_0, \dots, y_k$, defined in some rectangle $\Delta = I \times J_0 \times \dots \times J_k$; the differential expression

$ Du = F\left(x, u, \frac{du}{dx}, \dots, \frac{d^k u}{dx^k}\right) $

(where $F$ usually satisfies some regularity conditions such as measurability, continuity, differentiability, etc.) defines a differential operator $D$ on the manifold $I$, the domain of definition $\Omega$ of which consists of all functions $u \in C^k(I)$ satisfying the condition $u^{(i)}(x) \in J_i$ for $i = 0, \dots, k$. If $F$ is continuous, $D$ may be considered as an operator on $C(I)$ with domain of definition $\Omega$; the differential operator $D$ is then said to be a general ordinary differential operator. If $F$ depends on $y_k$, the order of $D$ is $k$. $D$ is said to be quasi-linear if it depends linearly on $y_k$; it is linear if $F$ depends linearly on $y_0, \dots, y_k$; it is said to be linear with constant coefficients if $F$ is independent of $x$ and $D$ is a linear differential operator. The remaining differential operators are said to be non-linear. If certain regularity conditions on $F$ are satisfied, a quasi-linear operator may be extended to a differential operator from one Sobolev space into another.
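A general (non-linear) ordinary differential operator of this kind is easy to realize symbolically. The following sketch uses an illustrative choice of $F$ of order 2 (the function names are mine, not from the source):

```python
import sympy as sp

# A general ordinary differential operator D u = F(x, u, u', u'') with the
# illustrative (non-linear) choice F(x, y0, y1, y2) = y2 + x*y1**2 - sin(y0).
x = sp.symbols('x')
u = sp.Function('u')(x)

F = lambda x, y0, y1, y2: y2 + x * y1**2 - sp.sin(y0)
Du = F(x, u, sp.diff(u, x), sp.diff(u, x, 2))

# Applied to a concrete u, the operator returns an ordinary function of x:
val = Du.subs(u, x**2).doit()   # 2 + 4*x**3 - sin(x**2)
```

Because $F$ depends on $y_2$ here, the operator has order 2, and it is non-linear since $F$ is not linear in $y_0, y_1, y_2$.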

2) Let $x = (x^1, \dots, x^n)$ run through a domain $\Omega$ in $\mathbf{R}^n$, and let $F(x, u, D^{(n)}(u))$ be a differential expression defined by a real-valued function $F$ on the product of $\Omega$ and some open rectangle $\omega$, where $D^{(n)}(u)$ is a set of partial derivatives of the type $D^\alpha u = \partial^{\alpha_1 + \dots + \alpha_n} u / (\partial x^1)^{\alpha_1} \dots (\partial x^n)^{\alpha_n}$ with $\alpha_1 + \dots + \alpha_n \leq n$, and, as in example 1), let the function $F$ satisfy certain regularity conditions. The differential operator defined by this expression on the space of sufficiently often differentiable functions on $\Omega$ is known as a general partial differential operator. As in example 1), one defines non-linear, quasi-linear and linear partial differential operators and the order of a partial differential operator; a differential operator is said to be elliptic, hyperbolic or parabolic if it is defined by a differential expression of the respective type. One sometimes considers functions $F$ depending on derivatives of all orders (e.g. as their formal linear combination); such differential expressions, although not defining a differential operator in the ordinary sense, can nevertheless be brought into correspondence with certain operators (e.g. on spaces of germs of analytic functions), and are known as differential operators of infinite order.

3) The previous examples may be extended to include the complex-valued case or the case of functions with values in a locally compact, totally disconnected field and (at least in the case of linear differential operators) even to a more general situation (cf. Differential algebra).

4) Systems of differential expressions define differential operators on spaces of vector functions. For example, the Cauchy–Riemann differential operator, defined by the expression $(\partial u/\partial x - \partial v/\partial y,\ \partial u/\partial y + \partial v/\partial x)$, converts the space of pairs of harmonic functions on the plane into itself.
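This last property is easy to verify symbolically: applying the Cauchy–Riemann operator to a pair of harmonic functions again yields a pair of harmonic functions. A minimal check with sympy (the concrete harmonic functions chosen are illustrative):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Cauchy–Riemann operator on a pair (u, v):
#   (u, v) ↦ (u_x - v_y, u_y + v_x)
CR = lambda u, v: (sp.diff(u, x) - sp.diff(v, y), sp.diff(u, y) + sp.diff(v, x))
lap = lambda f: sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2))

u, v = x**3 - 3*x*y**2, sp.exp(x)*sp.cos(y)   # both harmonic
p, q = CR(u, v)
print([lap(u), lap(v), lap(p), lap(q)])  # [0, 0, 0, 0]
```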

In the definition of a differential operator and of its generalizations one often employs (besides ordinary derivatives) generalized derivatives, which appear in a natural manner when considering extensions of differential operators defined on differentiable functions, and weak derivatives, related to the transition to the adjoint operator. Moreover, derivatives of fractional and negative orders appear when the differentiation is defined by means of a Fourier transform (or some other integral transform) applicable to the domain of definition and range of such a generalized differential operator (cf. Pseudo-differential operator). This is done in order to obtain the simplest possible representation of the differential operator corresponding to the function $F$, and to attain a reasonable generality in the formulation of problems and satisfactory properties of the objects considered. In this way, a functional or operational calculus is obtained, extending the correspondence between the differentiation operator and the operator of multiplication by the independent variable as realized in the Fourier transform.

Problems in the theory of differential equations — such as problems of existence, uniqueness, regularity, continuous dependence of the solutions on the initial data or on the right-hand side, and the explicit form of a solution of a differential equation defined by a given differential expression — are readily interpreted in the theory of operators as problems on the corresponding differential operator defined on suitable function spaces, viz. as problems on kernels, images, the structure of the domain of definition of a given differential operator $L$ or of its extension, continuity of the inverse of the given differential operator and explicit construction of this inverse operator. Problems of the approximation of solutions and of the construction of approximate solutions of differential equations are also readily generalized and improved as problems on the corresponding differential operators, viz. selection of natural topologies in the domain of definition and in the range such that the operator $L$ (if the solutions are unique) realizes a homeomorphism of the domain of definition and the range in these topologies (this theory is connected with the theory of interpolation and scales (grading) of function spaces, in particular for linear and quasi-linear differential operators). Another example is the selection of differential operators close to a given operator in some definite sense, which makes it possible, by using appropriate topologies in the space of differential operators, to justify methods of approximation of equations, such as regularization, the penalty method and iterated regularization methods. The theory of differential operators makes it possible to apply classical methods in the theory of operators, e.g. the theory of compact operators and the method of contraction mappings, in various existence and uniqueness theorems for differential equations, in the theory of bifurcation of solutions and in non-linear eigenvalue problems.
Other applications utilize a natural order structure present in function spaces on which a differential operator is defined (in particular, the theory of monotone operators), or use methods of linear analysis (the theory of duality, convex sets, dual or dissipative operators). Again, variational methods and the theory of extremal problems or the presence of certain supplementary structures (e.g. complex, symplectic, etc.) can be used in order to clarify the structure of the kernel and range of the differential operator, i.e. to obtain information on the solution space of the respective equations. Many problems connected with differential expressions necessitate a study of differential inequalities, which are closely connected with multi-valued differential operators.

Thus, the theory of differential operators makes it possible to eliminate a number of difficulties involved in the classical theory of differential equations. The utilization of various extensions of classical differential operators leads to the concept of generalized solutions of the corresponding differential equations (which in several cases, e.g. in connection with elliptic problems, actually prove to be classical), while the utilization of the linear structure makes it possible to introduce the concept of weak solutions of differential equations. In choosing a suitable extension of a differential operator as defined by a differential expression, a priori estimates of solutions connected with such an expression are of importance, since they permit one to identify function spaces on which the extended operator is continuous or bounded.

Moreover, the theory of differential operators also makes it possible to formulate and solve many new problems, which are qualitatively different from the classical problems in the theory of differential equations. Thus, in the study of a non-linear operator it is of interest to study the structure of the set of its stationary points and the action of the operator in a neighbourhood of them, the classification of these singular points, and the stability of the type of the singular point when the respective differential operator is perturbed. Other subjects of interest in the theory of linear differential operators are the description and the study of the spectrum of a differential operator, the calculation of its index, the structure of invariant subspaces of the differential operator, and the harmonic analysis of a given differential operator (in particular, eigenfunction expansions, which require a preliminary study of the completeness of the system of eigenfunctions and associated functions). There is also the study of linear and non-linear perturbations of a given differential operator. These results are of special interest for elliptic differential operators generated by symmetric differential expressions in the context of the theory of self-adjoint operators on a Hilbert space (in particular, in the spectral theory of these operators and the theory of extensions of symmetric operators). The theory of various hyperbolic and parabolic (not necessarily linear) differential operators is connected with the theory of groups and semi-groups of operators on locally convex spaces.

Next to the linear class, perhaps the most intensively studied differential operators are those which are either invariant, or which vary according to a specific law, when certain transformations constituting a group (or a semi-group) $G$ act on their domain of definition, and hence also on the differential expression. These include, for instance, invariant differential operators connected with the representations of a group $G$; the covariant derivative or, more generally, differential operators on spaces of differentiable tensor fields, where $G$ is the group of all diffeomorphisms (the so-called atomization); many examples of operators in theoretical physics; etc. Such functional-geometric methods are also useful in the study of differential operators with so-called hidden symmetry (see, for example, Korteweg–de Vries equation).



The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include: d/dx, D (where the variable of differentiation is clear from context), D_x, and ∂_x.

When taking higher, nth order derivatives, the operator may be written: dⁿ/dxⁿ, Dⁿ, or D_xⁿ.

The derivative of a function f of an argument x is sometimes given as either of the following: f′(x) or df(x)/dx.

The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form ∑_k c_k D^k in his study of differential equations.

One of the most frequently seen differential operators is the Laplacian operator, defined by Δ = ∑_{k=1}^n ∂²/∂x_k².

Another differential operator is the Θ operator, or theta operator, defined by [1] Θ = z d/dz.

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z: Θ(z^k) = k z^k, k = 0, 1, 2, …

In n variables the homogeneity operator is given by Θ = ∑_{k=1}^n x_k ∂/∂x_k.

As in one variable, the eigenspaces of Θ are the spaces of homogeneous polynomials.
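The eigenvalue relation for the theta operator can be checked symbolically; a minimal sketch with sympy (the exponents chosen are illustrative):

```python
import sympy as sp

# Theta (homogeneity) operator Θ = z d/dz: its eigenfunctions are the
# monomials z^k, with eigenvalue k.
z = sp.symbols('z')
theta = lambda f: z * sp.diff(f, z)

eigs = [sp.simplify(theta(z**k) / z**k) for k in (1, 2, 5)]
print(eigs)  # [1, 2, 5]
```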

In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on the left side of the operator, the result of applying it on the right side, and the difference obtained when applying the differential operator to the functions on both sides are denoted by arrows as follows: $f\overleftarrow{\partial_x}g = g\cdot\partial_x f$; $f\overrightarrow{\partial_x}g = f\cdot\partial_x g$; $f\overleftrightarrow{\partial_x}g = f\cdot\partial_x g - g\cdot\partial_x f$.

Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.

The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as ∇ = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z.

Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
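These operations on concrete fields can be exercised with sympy's vector module; a small sketch, with an illustrative scalar field, checking div(grad f) = Δf and curl(grad f) = 0:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

# Del in Cartesian coordinates: gradient of a scalar field, plus the
# identities div(grad f) = Laplacian of f and curl(grad f) = 0.
C = CoordSys3D('C')
f = C.x**2 * C.y + sp.sin(C.z)   # illustrative scalar field

grad_f = gradient(f)
laplacian = divergence(grad_f)   # should equal 2*y - sin(z)
print(sp.simplify(laplacian - (2*C.y - sp.sin(C.z))))  # 0
print(curl(grad_f))              # the zero vector
```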

Given a linear differential operator T, Tu = ∑_{k=0}^n a_k(x) D^k u,

the adjoint of this operator is defined as the operator T^* such that ⟨Tu, v⟩ = ⟨u, T^*v⟩,

where the notation ⟨ ⋅ , ⋅ ⟩ is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product.

Formal adjoint in one variable Edit

In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by $\langle f, g\rangle = \int_a^b \overline{f(x)}\,g(x)\,dx,$

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as x → a and x → b, one can also define the adjoint of T by $T^*u = \sum_{k=0}^n (-1)^k D^k\left[\overline{a_k(x)}\,u\right].$

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When T ∗ > is defined according to this formula, it is called the formal adjoint of T.
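For a first-order example, the formal adjoint of T = d/dx is −d/dx, provided the boundary terms vanish. A quick symbolic check of ⟨Tf, g⟩ = ⟨f, T*g⟩ with sympy, using illustrative functions vanishing at both endpoints of (0, π):

```python
import sympy as sp

# Formal adjoint of T = d/dx on (0, pi) is T* = -d/dx when boundary
# terms vanish; verify <Tf, g> = <f, T*g> for functions vanishing at 0, pi.
x = sp.symbols('x', real=True)
f = sp.sin(x)        # vanishes at 0 and pi
g = sp.sin(2 * x)    # vanishes at 0 and pi

lhs = sp.integrate(sp.conjugate(sp.diff(f, x)) * g, (x, 0, sp.pi))      # <Tf, g>
rhs = sp.integrate(sp.conjugate(f) * (-sp.diff(g, x)), (x, 0, sp.pi))   # <f, T*g>
print(sp.simplify(lhs - rhs))  # 0
```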

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.

Several variables Edit

If Ω is a domain in R^n, and P a differential operator on Ω, then the adjoint of P is defined in L²(Ω) by duality in the analogous manner: $\langle f, P^*g\rangle_{L^2(\Omega)} = \langle Pf, g\rangle_{L^2(\Omega)}$

for all smooth L² functions f, g. Since smooth functions are dense in L², this defines the adjoint on a dense subset of L²: P^* is a densely defined operator.

Example Edit

The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

$Lu = -(pu')' + qu = -(pu'' + p'u') + qu = -pu'' - p'u' + qu = (-p)D^2u + (-p')Du + (q)u.$

The fact that this operator is formally self-adjoint can be proven using the formal adjoint definition above.

This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
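The term-by-term expansion in the display above is mechanical and can be confirmed symbolically; a minimal sympy sketch:

```python
import sympy as sp

# Expand the Sturm–Liouville operator Lu = -(p u')' + q u and check that
# it equals -p u'' - p' u' + q u, term by term.
x = sp.symbols('x')
p, q, u = (sp.Function(n)(x) for n in ('p', 'q', 'u'))

L_form = -sp.diff(p * sp.diff(u, x), x) + q * u
expanded = -p * sp.diff(u, x, 2) - sp.diff(p, x) * sp.diff(u, x) + q * u
print(sp.simplify(L_form - expanded))  # 0
```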

Differentiation is linear, i.e. D(f + g) = Df + Dg and D(af) = aDf, where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule (D₁ ∘ D₂)(f) = D₁(D₂(f)).

Some care is then required: firstly any function coefficients in the operator D₂ must be differentiable as many times as the application of D₁ requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD isn't the same in general as Dg. For example we have the relation basic in quantum mechanics: Dx − xD = 1.
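This commutation relation is easy to verify on an arbitrary test function; a minimal symbolic check:

```python
import sympy as sp

# Non-commutativity of D and multiplication by x: acting on a test
# function f, (Dx - xD)f = f, i.e. Dx - xD = 1 as operators.
x = sp.symbols('x')
f = sp.Function('f')(x)

Dx_minus_xD = sp.diff(x * f, x) - x * sp.diff(f, x)
print(sp.simplify(Dx_minus_xD - f))  # 0
```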

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.

The differential operators also obey the shift theorem.

The same constructions can be carried out with partial derivatives, differentiation with respect to different variables giving rise to operators that commute (see symmetry of second derivatives).

Ring of univariate polynomial differential operators Edit

If R is a ring, let R⟨D, X⟩ be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring R⟨D, X⟩/I. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form X^a D^b mod I. It supports an analogue of Euclidean division of polynomials.
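The normal form X^a D^b can be computed by repeatedly rewriting with DX = XD + 1. Here is an illustrative sketch (the dict-of-monomials representation and the function name are mine) for words in D and X with integer coefficients:

```python
from collections import defaultdict

# Normal ordering in R<D, X>/(DX - XD - 1): rewrite any word in D, X as a
# combination of monomials X^a D^b using the relation D X = X D + 1.
# An element is a dict {(a, b): coeff}, meaning sum coeff * X^a D^b.
def normal_form(word):
    terms = defaultdict(int)
    terms[word] = 1
    result = defaultdict(int)
    while terms:
        w, c = terms.popitem()
        i = w.find('DX')
        if i == -1:                        # already of the form X^a D^b
            result[(w.count('X'), w.count('D'))] += c
        else:                              # apply DX = XD + 1
            terms[w[:i] + 'XD' + w[i+2:]] += c
            terms[w[:i] + w[i+2:]] += c
    return {k: v for k, v in result.items() if v}

print(normal_form('DX'))   # {(1, 1): 1, (0, 0): 1}   i.e. DX = XD + 1
print(normal_form('DDX'))  # D²X = XD² + 2D
```

Each rewrite strictly reduces the number of D-before-X inversions, so the loop terminates, mirroring the uniqueness statement above.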

Ring of multivariate polynomial differential operators Edit

If R is a ring, let R⟨D₁, …, D_n, X₁, …, X_n⟩ be the non-commutative polynomial ring over R, and I the two-sided ideal generated by D_iX_j − X_jD_i − δ_ij, X_iX_j − X_jX_i and D_iD_j − D_jD_i for all 1 ≤ i, j ≤ n, where δ is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring R⟨D₁, …, D_n, X₁, …, X_n⟩/I.

This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form X₁^{a₁} ⋯ X_n^{a_n} D₁^{b₁} ⋯ D_n^{b_n}.

In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle J^k(E). In other words, there exists a linear mapping of vector bundles

where j^k : Γ(E) → Γ(J^k(E)) is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s at x. In particular this implies that P(s)(x) is determined by the germ of s at x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.

Relation to commutative algebra Edit

An equivalent, but purely algebraic, description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator if, for any k + 1 smooth functions $f_0, \ldots, f_k \in C^\infty(M)$, we have $[f_k, [f_{k-1}, [\cdots[f_0, P]\cdots]]] = 0,$

where the bracket $[f, P] : \Gamma(E) \to \Gamma(F)$ is defined by $[f, P](s) = P(f \cdot s) - f \cdot P(s).$

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
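For intuition, one can check this characterization for the simplest first-order operator P = d/dx: the single bracket [f₀, P] is multiplication by −f₀′ (an order-0 operator), so the double bracket vanishes. A small symbolic sketch:

```python
import sympy as sp

# Commutator characterization of order: for the first-order operator
# P = d/dx, the iterated bracket [f1, [f0, P]] vanishes on any test
# function s, where [f, Q](u) = Q(f*u) - f*Q(u).
x = sp.symbols('x')
f0, f1, s = (sp.Function(n)(x) for n in ('f0', 'f1', 's'))

P = lambda u: sp.diff(u, x)
bracket = lambda f, Q: (lambda u: Q(f * u) - f * Q(u))

inner = bracket(f0, P)          # [f0, P]: multiplication by f0'
double = bracket(f1, inner)     # [f1, [f0, P]]
print(sp.simplify(double(s)))   # 0
```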

  • In applications to the physical sciences, operators such as the Laplace operator play a major role in setting up and solving partial differential equations.
  • In differential topology, the exterior derivative and Lie derivative operators have intrinsic meaning.
  • In abstract algebra, the concept of a derivation allows for generalizations of differential operators, which do not require the use of calculus. Frequently such generalizations are employed in algebraic geometry and commutative algebra. See also Jet (mathematics).
  • In the development of holomorphic functions of a complex variable z = x + i y, sometimes a complex function is considered to be a function of two real variables x and y. Use is made of the Wirtinger derivatives, which are the partial differential operators ∂/∂z = (1/2)(∂/∂x − i ∂/∂y) and ∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y).

The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800. [2]


Classics in Applied Mathematics

Don't let the title fool you! If you are interested in numerical analysis, applied mathematics, or the solution procedures for differential equations, you will find this book useful. Because of Lanczos' unique style of describing mathematical facts in nonmathematical language, Linear Differential Operators also will be helpful to nonmathematicians interested in applying the methods and techniques described.

Originally published in 1961, this Classics edition continues to be appealing because it describes a large number of techniques still useful today. Although the primary focus is on the analytical theory, concrete cases are cited to forge the link between theory and practice. Considerable manipulative skill in the practice of differential equations is to be developed by solving the 350 problems in the text. The problems are intended as stimulating corollaries linking theory with application and providing the reader with the foundation for tackling more difficult problems.

Lanczos begins with three introductory chapters that explore some of the technical tools needed later in the book, and then goes on to discuss interpolation, harmonic analysis, matrix calculus, the concept of the function space, boundary value problems, and the numerical solution of trajectory problems, among other things. The emphasis is constantly on one question: “What are the basic and characteristic properties of linear differential operators?”

In the author's words, this book is written for those “to whom a problem in ordinary or partial differential equations is not a problem of logical acrobatism, but a problem in the exploration of the physical universe. To get an explicit solution of a given boundary value problem is in this age of large electronic computers no longer a basic question. But of what value is the numerical answer if the scientist does not understand the peculiar analytical properties and idiosyncrasies of the given operator? The author hopes that this book will help in this task by telling something about the manifold aspects of a fascinating field.”

In one of the (unfortunately lost) comedies of Aristophanes the Voice of the Mathematician appeared, as it descended from a snow-capped mountain peak, pronouncing in a ponderous sing-song—and words which to the audience sounded like complete gibberish—his eternal Theorems, Lemmas, and Corollaries. The laughter of the listeners was enhanced by the implication that in fifty years' time another Candidate of Eternity would pronounce from the same snow-capped mountain peak exactly the same theorems, although in a modified but scarcely less ponderous and incomprehensible language.

Since the days of antiquity it has been the privilege of the mathematician to engrave his conclusions, expressed in a rarefied and esoteric language, upon the rocks of eternity. While this method is excellent for the codification of mathematical results, it is not so acceptable to the many addicts of mathematics, for whom the science of mathematics is not a logical game, but the language in which the physical universe speaks to us, and whose mastery is inevitable for the comprehension of natural phenomena.

In his previous books the author endeavoured to establish a more discursive manner of presentation in which the esoteric shorthand formulation of mathematical deductions and results was replaced by a more philosophic exposition, putting the emphasis on ideas and concepts and their mutual interrelations, rather than on the mere manipulation of formulae. Our symbolic mechanism is eminently useful and powerful, but the danger is ever-present that we become drowned in a language which has its well-defined grammatical rules but eventually loses all content and becomes a nebulous sham. Hence the author's constant desire to penetrate below the manipulative surface and comprehend the hidden springs of mathematical equations.

To the author's surprise this method (which, of course, is not his monopoly) was well received and made many friends and few enemies. It is thus his hope that the present book, which is devoted to the fundamental aspects of the theory of Linear Differential Operators, will likewise find its adherents. The book is written at advanced level but does not require any specific knowledge which goes beyond the boundaries of the customary introductory courses, since the necessary tools of the subject are developed as the narration proceeds.


Differential and Integral Calculus on Manifolds

5.2.2 Differential operators and point distributions

(I) Differential operators. Let B be a pure q-dimensional manifold that is locally compact and countable at infinity, and M → B, N → B two complex vector bundles of finite ranks m and n, respectively ( section 3.4.1 , Definition 3.22 ). The space Γ(Β, M) of sections of class C ∞ of M is a Fréchet nuclear space, like ℰ U itself whenever U is an open subset of ℝ q ([P2], sections 4.3.1 (I) and 4.3.2 (III)). Hence, this space is separable ([P2], section 3.11.3(I)).

Definition 5.5

A linear differential operator of class C ∞ from M into N is a continuous linear mapping P : f ↦ P.f from Γ(Β, M) into Γ(Β, N) that satisfies the following condition:

(L) For every open subset U of Β and every section f ∈ Γ(Β, M) such that f |U = 0, we have (P.f)|U = 0.

The condition (L) expresses the local nature of the operator P. Write Diff (B; M, N) for the set of these differential operators; this is an ℰ B -module.

The local trivialization condition (V) of the vector bundles M and N ( Definition 3.22 (i)) implies that, for every b ∈ Β, there exists an open neighborhood U of b that is the domain of a chart c = (U, ξ, q) of Β over which these two bundles can be identified with the trivial bundles U × ℂ m and U × ℂ n , respectively. Hence, for every section f ∈ Γ(Β, M), there exist a mapping g V ∈ ℰ V , with V = ξ(U), and a linear differential operator Q : ℰ V m → ℰ V n such that both squares of the following diagram commute (the rows of this diagram are not compositions):

We say that Q is the local expression of P corresponding to the chart c (and the local trivializations specified above). Given the topology of ℰ V ([P2], section 4.3.1 (I)), with the notation of section 1.2.4 (IV) , the operator Q is of the form

where x ↦ Aα(x) (x = ξ(b)) is a mapping of class C ∞ from V = ξ(U) into Hom(ℂ m , ℂ n ) ≅ ℂ m × n (exercise*: see [DIE 93] , Volume 3, (17.13.3)). The order of the differential operator P at the point b is defined as the greatest integer | α | such that Aα ≠ 0.

If M and N are both equal to the trivial bundle B × ℂ , then Γ(Β, M) and Γ(Β, N) can both be identified with ℰ B , in which case Diff (B; M, N) is simply written as Diff(B).

(II) Sheaf of differential operators. For every b ∈ Β and every open neighborhood U of b, let M |U and N |U be the vector bundles induced by M and N, respectively, on U ( section 3.3.1 , Lemma-Definition 3.4 (4)). Let h ∈ ℰ B be a mapping such that supp(h) ⊂ U and h is equal to 1 in a neighborhood W ⊂ U of b (the existence of such a function follows from Theorem 2.13 and Corollary 2.17 ). Let f ∈ Γ(U, M) and P ∈ Diff (B; M, N); then h.f, extended by 0 outside of supp(h), is an element of Γ(Β, M). Hence, we can form P.(h.f). This quantity is independent of h, and f ↦ P.(h.f) is called the restriction P |U ∈ Diff (U; M |U, N |U) of P to U.

Let ℰ be the sheaf of rings U ↦ ℰ U . The mapping U ↦ Diff(U M |U, N |U) is clearly a sheaf of ℰ -Modules ([P2], section 5.3.1 ).

(III) Point distributions. Let P ∈ Diff (B). For every b ∈ Β, f ↦ (P.f)(b) is a distribution with support in {b}, written as P(b). We say that it is a point distribution at b, and so Diff (B) is said to be a field of point distributions. The local expression (see (I)) of a point distribution at b of order p is

The set of point distributions at b is an ℰ B -module, written as T b ∞ (B), and T ∞ (B) = ⊕ b ∈ B T b ∞ (B) is the ℰ B -module of distributions with finite support in B. The above shows that T ∞ (B) = Γ(B, Diff(B)). We have the following result ( [SCH 66] , Chapter 3 , section 10, Theorem 35):

Any distribution on ℝ n whose support is contained in {0} is a finite linear combination of the Dirac distribution and its derivatives.

We can extend the notion of a finitely supported distribution to the case where Β is a Banach K -manifold of class C r ( [BOU 82a] , section 13). It might seem tempting to define a compactly supported distribution more generally as a continuous linear form on ℰ B but this would require us to define a “good” locally convex topology on the latter space, which is surprisingly difficult (see [KRE 76] ).


Book Description

Aims to construct the inverse problem theory for ordinary non-self-adjoint differential operators of arbitrary order on the half-line and on a finite interval. The book consists of two parts: in the first part the author presents a general inverse problem of recovering differential equations with integrable coefficients when the behaviour of the spectrum is arbitrary. The Weyl matrix is introduced and studied as a spectral characteristic. The second part of the book is devoted to solving incomplete inverse problems when a priori information about the operator or its spectrum is available; these problems are significant in applications.


1 Answer 1

The two notions are equivalent in the characteristic zero (smooth! as pointed out by Mariano in the comments) case. The reason they're equivalent basically boils down to the Leibniz rule: $x\partial_x - \partial_x x = 1$ in the ring of differential operators. Here's a sketch of the proof for the case of the $n$-dimensional Weyl algebra, defined by $W_n = k\langle x_1,\dots,x_n, y_1,\dots,y_n\rangle/([x_i,y_i]-1, [x_i,x_j], [y_i,y_j])$, which is the ring of differential operators on $A = k[x_1,\dots,x_n]$:

Let $T \in \mathrm{End}_k(A)$ be such that the $(m+1)$-fold commutator with any $m+1$ elements of $A$ is zero, but some $m$-fold commutator is not. Pick the following basis of $W_n$ as a left $A$-module: $y_1^{i_1} y_2^{i_2} \cdots y_n^{i_n}$. Use the list of basis vectors with $\sum_j i_j = m$ to determine the $A$-coefficients of each of these terms in $T$, by setting the first $i_1$ of the $a_j$ to be $x_1$, and so forth. Subtract the resulting linear combination of differential operators from $T$ to obtain a differential operator of order $\leq m-1$, and repeat. Eventually you have $T$ written as an element of the subalgebra of $\mathrm{End}_k(A)$ generated by $A$ and $\mathrm{Der}(A)$.

