
Convergence analysis of conjugate gradient method.

Running the procedure ( Conjugate gradients ) for all $n$ steps is not always feasible. In addition, numerical errors are likely to destroy the orthogonality of the residuals $r_{0},r_{1},\dots,r_{n-1}$. For these reasons it makes sense to study the convergence of the procedure ( Conjugate gradients ).
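Before analyzing convergence, it helps to see the procedure itself. The following is a minimal pure-Python sketch of the standard conjugate gradient iteration; the $3\times3$ positive definite matrix and right-hand side are illustrative choices, not taken from the text. In exact arithmetic the iteration terminates in at most $n$ steps; in floating point the residual after $n$ steps is merely at rounding-error level.

```python
def matvec(A, x):
    """Multiply a dense matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, steps):
    """Run `steps` conjugate gradient iterations starting from x0 = 0."""
    x = [0.0] * len(b)
    r = b[:]                 # residual r = b - A x
    d = r[:]                 # first search direction
    for _ in range(steps):
        rr = dot(r, r)
        if rr < 1e-30:       # already converged; avoid division by zero
            break
        Ad = matvec(A, d)
        alpha = rr / dot(d, Ad)               # exact line search along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r_new = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r_new, r_new) / rr         # keeps directions A-conjugate
        d = [rn + beta * di for rn, di in zip(r_new, d)]
        r = r_new
    return x

# Illustrative symmetric positive definite system.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

x3 = conjugate_gradient(A, b, 3)
residual = [bi - axi for bi, axi in zip(b, matvec(A, x3))]
print(max(abs(ri) for ri in residual))   # at rounding-error level after n = 3 steps
```

The update coefficients `alpha` and `beta` are the Hestenes–Stiefel choices; they are what makes the successive residuals mutually orthogonal, which is exactly the property the text warns is fragile under numerical error.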

According to the formula ( Conjugate gradient residue selection ), $$x_{k}\in x_{0}+\mathrm{span}\left\{ r_{0},Ar_{0},\dots,A^{k-1}r_{0}\right\} .$$ According to the formula ( Error and residual 2 ) we then have $$e_{k}=p_{k}\left( A\right) e_{0}$$ for some $k$-th degree polynomial $p_{k}.$ Furthermore, by tracing the recipe ( Conjugate gradients ) directly we see that $p_{k}\left( 0\right) =1$. The conjugate gradient procedure acts to minimize $\left\Vert e_{k}\right\Vert _{A}$ by manipulating the coefficients of $p_{k}$. Thus, using the notation of the definition ( Positive definite inner product ), $$\left\Vert e_{k}\right\Vert _{A}^{2}=\min_{p_{k},~p_{k}\left( 0\right) =1}\left\Vert p_{k}\left( A\right) e_{0}\right\Vert _{A}^{2}.$$ Since $A$ is a positive definite symmetric matrix, there exists a set of orthonormal eigenvectors $\left\{ v_{j}\right\} $ and positive eigenvalues $\left\{ \lambda_{j}\right\} $. Then $$e_{0}=\sum_{j}\xi_{j}v_{j}$$ for some numbers $\left\{ \xi_{j}\right\} $. We have $$p_{k}\left( A\right) e_{0}=\sum_{j}p_{k}\left( \lambda_{j}\right) \xi_{j}v_{j},$$ $$\left\Vert p_{k}\left( A\right) e_{0}\right\Vert _{A}^{2}=\sum_{j}\lambda_{j}p_{k}\left( \lambda_{j}\right) ^{2}\xi_{j}^{2}.$$ Therefore $$\left\Vert e_{k}\right\Vert _{A}^{2}\leq\min_{p_{k}}\max_{\lambda\in\left[ \lambda_{\min},\lambda_{\max}\right] }p_{k}\left( \lambda\right) ^{2}\sum_{j}\lambda_{j}\xi_{j}^{2}=\min_{p_{k}}\max_{\lambda\in\left[ \lambda_{\min},\lambda_{\max}\right] }p_{k}\left( \lambda\right) ^{2}\left\Vert e_{0}\right\Vert _{A}^{2}.$$

To evaluate the $\min\max$ combination we aim to apply the proposition ( Minimum norm optimality of Chebyshev polynomials ). We are dealing with minimization over $\left\{ p_{k}:p_{k}\left( 0\right) =1\right\} $, while the proposition ( Minimum norm optimality of Chebyshev polynomials ) is formulated for a different normalization. However, a review of the proof of that proposition shows that the normalization is irrelevant to the argument: we would still have optimality at a polynomial that differs from a Chebyshev polynomial by a multiplicative constant. We do, however, need to transform from $\max_{\lambda\in\left[ \lambda_{\min},\lambda_{\max}\right] }$ to $\max_{x\in\left[ -1,1\right] }$. We substitute $$\lambda=a+bx,$$ where $a=\frac{\lambda_{\max}+\lambda_{\min}}{2}$, $b=\frac{\lambda_{\max}-\lambda_{\min}}{2}$ and $x\in\left[ -1,1\right] $. Next, we map the constraint $p_{k}\left( 0\right) =1$ to the new variable: $$q_{k}\left( x\right) =p_{k}\left( a+bx\right) ,$$ $$q_{k}\left( x_{0}\right) =1,\quad x_{0}=-\frac{a}{b}=-\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}.$$ We now apply the proposition ( Minimum norm optimality of Chebyshev polynomials ): $$\min_{q_{k},~q_{k}\left( x_{0}\right) =1}\max_{x\in\left[ -1,1\right] }\left\vert q_{k}\left( x\right) \right\vert =\max_{x\in\left[ -1,1\right] }\left\vert cT_{k}\left( x\right) \right\vert ,$$ where the constant $c$ is the scale that ensures $$cT_{k}\left( x_{0}\right) =1.$$ The $T_{k}^{0}$ is a scale of $T_{k}$ given by $$T_{k}^{0}\left( \lambda\right) =cT_{k}\left( \frac{\lambda_{\max}+\lambda_{\min}-2\lambda}{\lambda_{\max}-\lambda_{\min}}\right) $$ with the consequences $$T_{k}^{0}\left( 0\right) =1,\quad\max_{\lambda\in\left[ \lambda_{\min},\lambda_{\max}\right] }\left\vert T_{k}^{0}\left( \lambda\right) \right\vert =\left\vert c\right\vert .$$ Hence, we rewrite $$\left\Vert e_{k}\right\Vert _{A}^{2}\leq c^{2}\left\Vert e_{0}\right\Vert _{A}^{2}$$ or, by taking the square root and replacing $c$, $$\left\Vert e_{k}\right\Vert _{A}\leq T_{k}\left( \frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}\right) ^{-1}\left\Vert e_{0}\right\Vert _{A}.$$ We introduce the quantity

$$\kappa=\frac{\lambda_{\max}}{\lambda_{\min}}$$ (Condition number)
then $$\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}=\frac{\kappa+1}{\kappa-1}.$$ We use the proposition ( Chebyshev polynomials calculation ): $$T_{k}\left( x\right) =\frac{1}{2}\left[ \left( x+\sqrt{x^{2}-1}\right) ^{k}+\left( x-\sqrt{x^{2}-1}\right) ^{k}\right] .$$ For $x=\frac{\kappa+1}{\kappa-1}$ we calculate $$\sqrt{x^{2}-1}=\sqrt{\frac{\left( \kappa+1\right) ^{2}-\left( \kappa-1\right) ^{2}}{\left( \kappa-1\right) ^{2}}}=\frac{2\sqrt{\kappa}}{\kappa-1},$$ $$x+\sqrt{x^{2}-1}=\frac{\kappa+2\sqrt{\kappa}+1}{\kappa-1}=\frac{\left( \sqrt{\kappa}+1\right) ^{2}}{\left( \sqrt{\kappa}-1\right) \left( \sqrt{\kappa}+1\right) }=\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}.$$ Therefore $$T_{k}\left( \frac{\kappa+1}{\kappa-1}\right) =\frac{1}{2}\left[ \left( \frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}\right) ^{k}+\left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right) ^{k}\right] .$$ As $k$ grows the first term grows and the second term vanishes: $$T_{k}\left( \frac{\kappa+1}{\kappa-1}\right) \geq\frac{1}{2}\left( \frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}\right) ^{k}.$$ Then $$\left\Vert e_{k}\right\Vert _{A}\leq2\left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right) ^{k}\left\Vert e_{0}\right\Vert _{A}.$$
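The two ingredients of this calculation, the closed-form expression for $T_{k}$ and the simplification $x+\sqrt{x^{2}-1}=\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}$ at $x=\frac{\kappa+1}{\kappa-1}$, can be checked numerically. The sketch below compares the closed form against the defining three-term recurrence $T_{k+1}\left( x\right) =2xT_{k}\left( x\right) -T_{k-1}\left( x\right) $; the value $\kappa=10$ is an arbitrary illustrative choice.

```python
import math

def cheb_recurrence(k, x):
    """Chebyshev polynomial T_k(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    if k == 0:
        return t0
    for _ in range(k - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

def cheb_closed_form(k, x):
    """T_k(x) = ((x + sqrt(x^2-1))^k + (x - sqrt(x^2-1))^k) / 2 for |x| >= 1."""
    s = math.sqrt(x * x - 1)
    return ((x + s) ** k + (x - s) ** k) / 2

kappa = 10.0
x = (kappa + 1) / (kappa - 1)   # the argument arising in the convergence bound

# Closed form agrees with the recurrence for the first few degrees.
for k in range(6):
    assert abs(cheb_recurrence(k, x) - cheb_closed_form(k, x)) < 1e-9

# The simplification used in the derivation:
# x + sqrt(x^2 - 1) = (sqrt(kappa) + 1) / (sqrt(kappa) - 1).
lhs = x + math.sqrt(x * x - 1)
rhs = (math.sqrt(kappa) + 1) / (math.sqrt(kappa) - 1)
assert abs(lhs - rhs) < 1e-12
print("closed form and simplification verified for kappa =", kappa)
```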


(Convergence of conjugate gradient method) Let $A$ be a positive definite symmetric matrix, let $\kappa$ be given by the formula ( Condition number ) and let $e_{k}$ be given by the formula ( Error and residual ). Then the procedure of the summary ( Conjugate gradients ) has the convergence rate $$\left\Vert e_{k}\right\Vert _{A}\leq2\left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right) ^{k}\left\Vert e_{0}\right\Vert _{A}.$$
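The stated rate can be observed directly. The sketch below runs conjugate gradient on a diagonal positive definite matrix (an illustrative choice, made so that $\kappa=\lambda_{\max}/\lambda_{\min}$ is known exactly) and checks that the $A$-norm error stays below $2\left( \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right) ^{k}\left\Vert e_{0}\right\Vert _{A}$ at every step.

```python
import math

# Diagonal SPD test matrix: the eigenvalues are the diagonal entries,
# so kappa = lambda_max / lambda_min is known exactly.
diag = [1.0, 4.0, 9.0, 16.0]
b = [1.0, 1.0, 1.0, 1.0]
x_true = [bi / di for bi, di in zip(b, diag)]
kappa = max(diag) / min(diag)
rho = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)   # rate in the theorem

def a_norm(e):
    """The A-norm sqrt(e^T A e) for the diagonal A above."""
    return math.sqrt(sum(d * ei * ei for d, ei in zip(diag, e)))

def cg_error(steps):
    """A-norm error of conjugate gradient after `steps` iterations, x0 = 0."""
    x = [0.0] * len(b)
    r = b[:]
    d = r[:]
    for _ in range(steps):
        rr = sum(ri * ri for ri in r)
        Ad = [di * vi for di, vi in zip(diag, d)]
        alpha = rr / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = sum(ri * ri for ri in r) / rr
        d = [ri + beta * di for ri, di in zip(r, d)]
    return a_norm([xi - xt for xi, xt in zip(x, x_true)])

e0 = a_norm([-xt for xt in x_true])
for k in range(1, 4):
    bound = 2 * rho ** k * e0
    assert cg_error(k) <= bound + 1e-12
print("convergence bound holds for k = 1..3")
```

In practice the observed error is usually far below the bound, since the bound only uses the extreme eigenvalues while conjugate gradient exploits the whole spectrum.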


Copyright 2007