Contraction Mapping
Theorem (Contraction Mapping Principle in [MH]). Let $A \subset \mathbb{R}^n$ be closed and let $f: A \to A$ be a given mapping such that there is a constant $0 \le k < 1$ with $\|f(x) - f(y)\| \le k\,\|x - y\|$ for all $x, y \in A$. Then $f$ (is continuous and) has a unique fixed point; that is, there exists a unique point $x^* \in A$ such that $f(x^*) = x^*$.
Actually, we can replace the domain by any complete metric space.
Theorem (Contraction mapping in [DD] or [R]). If $T$ is a contraction mapping on a complete metric space $(X, d)$, then there is exactly one solution $x \in X$ of $T(x) = x$.
Recall the theorem on p.113 of [MH]: the space of bounded continuous functions with the sup norm is a Banach space. Thus, it is a complete metric space.
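The proof is constructive: iterating $x_{n+1} = f(x_n)$ from any starting point converges to the unique fixed point. A minimal numerical sketch (the map $\cos$ on $[0,1]$ is an illustrative choice, not from the text; it is a contraction there since $|\sin x| \le \sin 1 < 1$):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos maps [0, 1] into itself and |cos'(x)| = |sin x| <= sin(1) < 1 there,
# so it is a contraction; the unique fixed point is the Dottie number.
x_star = fixed_point(math.cos, 0.5)
print(x_star)                                   # ~0.7390851332151607
print(abs(math.cos(x_star) - x_star) < 1e-10)   # True
```

The convergence is geometric with ratio $k$, which is exactly the error estimate used in the proof.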
Now, we take a closer look at the contraction mapping theorem.
Converse. There exists a set $X \subset \mathbb{R}$ that is non-closed in $\mathbb{R}$, yet every contraction mapping $f: X \to X$ has a unique fixed point; so the fixed-point property does not force completeness. Please see Elekes (2011) for more detail.
Weak. Consider $f: [1, \infty) \to [1, \infty)$ given by $f(x) = x + \frac{1}{x}$. Since $f'(x) = 1 - 1/x^2 \in [0, 1)$, by the mean value theorem $|f(x) - f(y)| < |x - y|$ whenever $x \ne y$, but no single constant $k < 1$ works for all pairs. But clearly $f$ has no fixed points, since $f(x) = x$ would force $1/x = 0$.
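This failure can be checked numerically for the map $f(x) = x + 1/x$ on $[1, \infty)$, a standard instance of a distance-shrinking map without a fixed point (assumed here; the text's example may differ): the iterates drift to infinity instead of converging.

```python
def f(x):
    # f maps [1, inf) to itself; |f(x) - f(y)| < |x - y| for x != y
    # since f'(x) = 1 - 1/x**2 lies in [0, 1), yet f(x) = x would need 1/x = 0.
    return x + 1.0 / x

x = 1.0
for n in range(10000):
    x = f(x)
print(x)   # roughly sqrt(2 * 10000) ~ 141: the iterates escape to infinity
```

Since $x_{n+1}^2 = x_n^2 + 2 + 1/x_n^2$, the iterates satisfy $x_n^2 > 2n$, confirming there is no fixed point to converge to.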
Application of Contraction Mapping
Solve linear equations (Jacobi & Gauss–Seidel methods).
Consider the linear system $Ax = b$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. We write $A = D - L - U$, where $D$ is diagonal, $L$ is strictly lower triangular, and $U$ is strictly upper triangular. Then, assuming no diagonal entry of $A$ vanishes, $Ax = b \iff Dx = (L + U)x + b \iff x = D^{-1}\big((L + U)x + b\big)$. Thus, it suffices to convert $Ax = b$ to the problem of finding a fixed point of $T(x) = D^{-1}\big((L + U)x + b\big)$.
On the other hand, an $n \times n$ matrix $A = (a_{ij})$ is said to be diagonally dominant if for each row the sum of the absolute values of the off-diagonal terms is less than the absolute value of the diagonal term: $\sum_{j \ne i} |a_{ij}| < |a_{ii}|$ for every $i$.
Theorem. If $A$ is diagonally dominant, then the Jacobi scheme $x^{(m+1)} = D^{-1}\big((L + U)x^{(m)} + b\big)$ converges to a solution of $Ax = b$ for any initial guess.
Proof. Define $T(x) = D^{-1}\big((L + U)x + b\big)$ and $k = \max_i \sum_{j \ne i} |a_{ij}| / |a_{ii}|$; by diagonal dominance, we know $k < 1$. Focus on the $i$-th component of $T(x) - T(y)$:
$$|T(x)_i - T(y)_i| = \Big|\sum_{j \ne i} \frac{a_{ij}}{a_{ii}} (x_j - y_j)\Big| \le \Big(\sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|}\Big) \|x - y\|_\infty \le k\, \|x - y\|_\infty.$$
Hence $\|T(x) - T(y)\|_\infty \le k\, \|x - y\|_\infty$, so $T$ is a contraction on $(\mathbb{R}^n, \|\cdot\|_\infty)$, and the scheme converges to its unique fixed point, which solves $Ax = b$.
Theorem. If $A$ is diagonally dominant, the Gauss–Seidel scheme $x^{(m+1)} = (D - L)^{-1}\big(U x^{(m)} + b\big)$ converges to a solution of $Ax = b$.
Proof. Prove it by yourself.
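The Jacobi scheme is easy to test numerically. The sketch below (the matrix and right-hand side are made up for illustration) runs the iteration on a diagonally dominant system:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{m+1} = D^{-1}((L + U) x_m + b), writing A = D - L - U."""
    D = np.diag(A)                  # diagonal entries of A
    R = A - np.diag(D)              # off-diagonal part; note R = -(L + U)
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# A made-up diagonally dominant system (each |a_ii| beats its off-diagonal row sum),
# chosen so that the exact solution is x = (1, 1, 1).
A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])
x = jacobi(A, b)
print(x)                            # ~[1. 1. 1.]
```

Gauss–Seidel differs only in using already-updated components within each sweep, i.e., solving the lower-triangular system $(D - L)x^{(m+1)} = Ux^{(m)} + b$ at every step.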
Volterra integral equation
Consider the Volterra integral equation $f(x) = \phi(x) + \lambda \int_a^x K(x, y)\, f(y)\, dy$, where the kernel $K$ and the function $\phi$ are given and continuous, and $f$ is the unknown.
Concrete example (p.117 in [MH]). Show that the method of successive approximations applied to $f(x) = 1 + \int_0^x f(y)\, dy$ leads to the usual power-series formula for $e^x$. Hence, this sequence converges to $e^x$.
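The successive approximations can be carried out exactly for polynomial iterates. The sketch below applies $T(f)(x) = 1 + \int_0^x f(y)\, dy$ (the standard example whose fixed point is $e^x$, assumed to match the text's) and recovers the coefficients $1/k!$:

```python
import math

def picard_step(coeffs):
    """Apply T(f)(x) = 1 + integral_0^x f(y) dy to the polynomial
    f(y) = sum(coeffs[k] * y**k), returning the coefficients of T(f)."""
    antiderivative = [c / (k + 1) for k, c in enumerate(coeffs)]
    return [1.0] + antiderivative          # prepend the constant term 1

f = [1.0]                                  # f_0(x) = 1
for _ in range(10):
    f = picard_step(f)

print(f[:5])                # coefficients 1/k!: [1.0, 1.0, 0.5, 0.1666..., 0.04166...]
print(abs(sum(f) - math.e))  # evaluating at x = 1 approximates e
```

Each iterate is the next partial sum of the exponential series, exactly as the hand computation shows.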
Theorem. If $|\lambda|\, M (b - a) < 1$, where $M = \max_{a \le x, y \le b} |K(x, y)|$, then the Volterra equation has a unique solution on $[a, b]$.
Proof. See the blackboard.
Example (Ex. 5.6.5 in [MH]). Convert the initial value problem to an integral equation and set up an iteration scheme to solve it. Writing the problem as $f(x) = f(x_0) + \int_{x_0}^x F\big(y, f(y)\big)\, dy$ defines an operator $T$, and starting from a suitable initial function $f_0$, the successive approximations $f_{n+1} = T(f_n)$ converge to the solution.
More on integral equations
Theorem (Ex. 5.26 in [MH]). Let $k$ be a continuous real-valued function on the square $[0,1] \times [0,1]$, and assume $|k(x, y)| < 1$ for each $(x, y)$. Let $g: [0,1] \to \mathbb{R}$ be continuous. Prove that there is a unique continuous real-valued function $f$ on $[0,1]$ such that $f(x) = g(x) + \int_0^1 k(x, y)\, f(y)\, dy$ (a Fredholm integral equation).
Proof. Prove it by yourself.
Proposition. Let $f: [0,1] \to \mathbb{R}$ be a continuous function. The unique solution of the boundary value problem $-u''(x) = f(x)$, $u(0) = u(1) = 0$, is given by $u(x) = \int_0^1 G(x, y)\, f(y)\, dy$, where $G(x, y) = y(1 - x)$ for $0 \le y \le x$ and $G(x, y) = x(1 - y)$ for $x \le y \le 1$.
Proof. Skip!
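For the classical model problem $-u'' = f$, $u(0) = u(1) = 0$ (assumed here), the Green's-function formula can be spot-checked numerically; with $f \equiv 1$ the exact solution is $u(x) = x(1 - x)/2$:

```python
def G(x, y):
    # Classical Green's function for -u'' = f, u(0) = u(1) = 0 (assumed model problem)
    return y * (1.0 - x) if y <= x else x * (1.0 - y)

def solve_bvp(f, x, n=10000):
    """Evaluate u(x) = integral_0^1 G(x, y) f(y) dy by the midpoint rule."""
    h = 1.0 / n
    return h * sum(G(x, (j + 0.5) * h) * f((j + 0.5) * h) for j in range(n))

# For f == 1 the exact solution of the boundary value problem is u(x) = x(1 - x)/2.
u = solve_bvp(lambda y: 1.0, 0.3)
print(abs(u - 0.3 * 0.7 / 2) < 1e-6)   # True
```

The kink of $G$ along $y = x$ encodes the jump in $u'$ produced by a point source, which is why the two branches agree on the diagonal.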
Let's summarize the integral equations we have mentioned.
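In a common notation (kernel $K$, given function $\phi$, unknown $f$), the two families above differ only in the upper limit of integration:

```latex
% Volterra: the upper limit of integration is the variable x
f(x) \;=\; \phi(x) + \lambda \int_a^{x} K(x,y)\, f(y)\, dy
% Fredholm (as in Ex. 5.26): the limits of integration are fixed
f(x) \;=\; \phi(x) + \lambda \int_a^{b} K(x,y)\, f(y)\, dy
```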
Concrete example. (Ex. 7.5.3 in [MH]) Consider the IVP for a first-order ODE (the classical non-uniqueness example) $\dot{x} = 3x^{2/3}$, $x(0) = 0$. Solve it directly by separation of variables, $\int x^{-2/3}\, dx = \int 3\, dt$, and get $x(t) = (t + c)^3$. By the IC, we have $x(t) = t^3$ for all $t$. However, $x(t) \equiv 0$ is also a solution for all $t$.
Since the right-hand side is not good enough (it is not Lipschitz near the initial point), the solution is not unique. If $f$ is continuous, then by the Peano existence theorem there exists at least one solution for each initial condition, but uniqueness may fail.
Back to the IVP $\dot{x}(t) = F(t, x(t))$, $x(t_0) = x_0$, define an operator $T(f)(t) = x_0 + \int_{t_0}^{t} F\big(s, f(s)\big)\, ds$. If $f$ is a solution to the IVP, then $f$ is a fixed point of $T$.
The problem of finding a solution to the IVP transforms into the problem of finding a fixed point of $T$.
Concrete example. (Ex. 7.5.5 in [MH]) Find a function $x(t)$ satisfying $\dot{x} = x$ with initial value $x(0) = 1$. Define $T(f)(t) = 1 + \int_0^t f(s)\, ds$. Then $f_0 \equiv 1$, $f_1(t) = 1 + t$, $f_2(t) = 1 + t + t^2/2$. By induction, we have $f_n(t) = \sum_{k=0}^{n} \frac{t^k}{k!}$, which converges to $e^t$.
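The same iteration can be run numerically for a general right-hand side. This sketch discretizes $T(f)(t) = x_0 + \int_0^t F(s, f(s))\, ds$ with the trapezoidal rule and applies it to $\dot{x} = x$, $x(0) = 1$ (the grid size and iteration count are illustrative choices):

```python
import math

def picard_iterate(F, t, x0, f):
    """One Picard step (T f)(t_i) = x0 + integral_0^{t_i} F(s, f(s)) ds,
    with the integral accumulated by the trapezoidal rule on the grid t."""
    g = [x0]
    for i in range(1, len(t)):
        h = t[i] - t[i - 1]
        g.append(g[-1] + 0.5 * h * (F(t[i - 1], f[i - 1]) + F(t[i], f[i])))
    return g

N = 1000
t = [i / N for i in range(N + 1)]    # uniform grid on [0, 1]
f = [1.0] * (N + 1)                  # initial guess f_0 == 1
for _ in range(25):
    f = picard_iterate(lambda s, x: x, t, 1.0, f)   # right-hand side F(s, x) = x

print(abs(f[-1] - math.e))           # f(1) ~ e, up to O(h^2) quadrature error
```

After enough iterations the remaining error is dominated by the quadrature, not by the fixed-point iteration, mirroring the factorial decay of the Picard remainder.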
Existence and uniqueness of IVP for ODE
Theorem (local existence for ODEs). Let $t_0 \in \mathbb{R}$ and $x_0 \in \mathbb{R}^n$. Let $F: [t_0 - a, t_0 + a] \times \overline{B}(x_0, r) \to \mathbb{R}^n$ be a given continuous mapping, and let $M = \max \|F(t, x)\|$ over this set. Suppose there is a constant $L$ such that $\|F(t, x) - F(t, y)\| \le L\, \|x - y\|$ for all $x, y \in \overline{B}(x_0, r)$ and $t \in [t_0 - a, t_0 + a]$. Let $\delta = \min(a, r/M)$. Then there is a unique continuously differentiable map $x: [t_0 - \delta, t_0 + \delta] \to \overline{B}(x_0, r)$ such that $\dot{x}(t) = F(t, x(t))$ and $x(t_0) = x_0$.
Note that $F$ only needs to be locally Lipschitz continuous in $x$. Moreover, $\|\cdot\|$ here is the Euclidean norm on $\mathbb{R}^n$ instead of the sup norm.
Proof. See the blackboard.
Example
Example (p.221 in [MH] and p.310 in [DD]) Consider the IVP for the ODE $\dot{x} = x^2$, $x(0) = 1$. Let $r > 0$ be undetermined. Now $M = \max_{|x - 1| \le r} x^2 = (1 + r)^2$. Thus, $r/M = r/(1 + r)^2$. Also, $|x^2 - y^2| = |x + y|\,|x - y| \le 2(1 + r)|x - y|$ on $\overline{B}(1, r)$, so $L = 2(1 + r)$.
Since $t$ is not involved we can just choose $a$ large enough so that it does not interfere, say, $a = 1$. Then, by the theorem, we must choose $\delta = \min\!\big(a, \tfrac{r}{(1 + r)^2}\big) = \tfrac{r}{(1 + r)^2}$.
This will work for any choice of $r > 0$. For example, if we let $r = 1$ we get a time of existence $\delta = 1/4$, i.e., a unique solution on $[-1/4, 1/4]$.
On the other hand, solve by separation of variables, $\int x^{-2}\, dx = \int dt$, and we obtain $x(t) = \frac{1}{1 - t}$ on $(-\infty, 1)$. Therefore, we can re-consider the new IVP $\dot{x} = x^2$, $x(1/4) = 4/3$ for $t \ge 1/4$. Again, let $r > 0$ be undetermined, so $\delta = \frac{r}{(4/3 + r)^2}$. To maximize $\delta$, let $r = 4/3$, which yields $\delta = 3/16$. Hence, the solution now exists up to $t = 1/4 + 3/16 = 7/16$. In general, restarting at $(t_n, x_n)$ with $x_n = 1/(1 - t_n)$, the optimal radius $r = x_n$ gives $\delta_n = \frac{1}{4 x_n} = \frac{1 - t_n}{4}$. Continuing this process, we get $t_{n+1} = t_n + \frac{1 - t_n}{4}$, i.e., $1 - t_n = (3/4)^n$, and $t_n \to 1$ as $n \to \infty$: the local solutions accumulate at the blow-up time $t = 1$.
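A short computation confirms these continuation times (assuming the example is $\dot{x} = x^2$, $x(0) = 1$, with exact solution $x(t) = 1/(1 - t)$): each restart at $(t_n, x_n)$ extends existence by $\delta_n = 1/(4 x_n) = (1 - t_n)/4$.

```python
# Continuation of the local solutions of x' = x**2, x(0) = 1 (assumed example):
# at the restart time t_n the solution value is x_n = 1/(1 - t_n), and the
# optimal ball radius r = x_n gives delta_n = r/M = x_n/(2 x_n)**2 = 1/(4 x_n).
t = 0.0
times = []
for n in range(100):
    x = 1.0 / (1.0 - t)    # exact solution value at the current restart time
    t += 1.0 / (4.0 * x)   # delta_n = (1 - t_n)/4
    times.append(t)
print(times[:3])           # ~[0.25, 0.4375, 0.578125]
print(1.0 - times[-1])     # about (3/4)**100 ~ 3e-13: t_n -> 1, the blow-up time
```

So the contraction-based theorem, applied repeatedly, recovers the full maximal interval of existence $(-\infty, 1)$ in the forward direction.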
Example (Cont'd Ex. 7.5.3 in [MH] and p.312 in [DD]). Finally, we show $f(x) = 3x^{2/3}$ is not Lipschitz near $x = 0$: for $x > 0$, $\frac{|f(x) - f(0)|}{|x - 0|} = \frac{3 x^{2/3}}{x} = \frac{3}{x^{1/3}} \to \infty$ as $x \to 0^+$, so no Lipschitz constant works on any neighborhood of $0$, and the uniqueness part of the theorem does not apply there.