
## Convergence analysis of conjugate gradient method.

Running the procedure ( Conjugate gradients ) for all $n$ steps is not always feasible. In addition, numerical errors are likely to destroy $A$-orthogonality of the search directions $\{d_k\}$. For these reasons it makes sense to study the convergence rate of the procedure ( Conjugate gradients ).
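For reference, the procedure ( Conjugate gradients ) can be sketched as the standard loop below. This is my own minimal rendering, not the summary's exact notation; the function name and signature are assumptions made for illustration.

```python
import numpy as np

def conjugate_gradients(A, b, x0, n_steps, tol=1e-12):
    """Minimal conjugate gradient loop for a symmetric positive definite A.

    Returns the list of iterates x_0, x_1, ... so convergence can be inspected.
    """
    x = x0.astype(float).copy()
    r = b - A @ x                       # residual r_k = b - A x_k
    d = r.copy()                        # first search direction d_0 = r_0
    iterates = [x.copy()]
    for _ in range(n_steps):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)      # step length along d_k
        x = x + alpha * d
        r_new = r - alpha * Ad          # updated residual
        iterates.append(x.copy())
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # makes d_{k+1} A-orthogonal to d_k
        d = r_new + beta * d
        r = r_new
    return iterates
```

In exact arithmetic the loop terminates in at most $n$ steps; the `tol` cutoff reflects the point made above that running all steps is rarely practical.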

According to the formula ( Conjugate gradient residue selection ), the iterate $x_k$ lies in $x_0+\operatorname{span}\left\{r_0,Ar_0,\dots,A^{k-1}r_0\right\}$. According to the formula ( Error and residual 2 ) we then have $$e_k=P_k(A)\,e_0$$ for some $k$-th degree polynomial $P_k$. Furthermore, by tracing the recipe ( Conjugate gradients ) directly we see that $P_k(0)=1$. The conjugate gradient acts to minimize $\left\Vert e_k\right\Vert_A$ by manipulating the coefficients of $P_k$. Thus, using the notation of the definition ( Positive definite inner product ), $$\left\Vert e_k\right\Vert_A^2=\min_{P_k:\,P_k(0)=1}\left\Vert P_k(A)\,e_0\right\Vert_A^2.$$ Since $A$ is a positive definite symmetric matrix, there exists an orthonormal set of eigenvectors $\left\{v_j\right\}_{j=1}^n$ with positive eigenvalues $\left\{\lambda_j\right\}_{j=1}^n$. Then $$e_0=\sum_{j=1}^n\xi_j v_j$$ for some numbers $\xi_j$. We have $$P_k(A)\,e_0=\sum_{j=1}^n\xi_j P_k(\lambda_j)\,v_j.$$ Therefore $$\left\Vert e_k\right\Vert_A^2=\min_{P_k:\,P_k(0)=1}\sum_{j=1}^n\xi_j^2\lambda_j P_k^2(\lambda_j)\leq\min_{P_k:\,P_k(0)=1}\ \max_{\lambda\in[\lambda_{\min},\lambda_{\max}]}P_k^2(\lambda)\cdot\left\Vert e_0\right\Vert_A^2,$$ since $\left\Vert e_0\right\Vert_A^2=\sum_{j=1}^n\xi_j^2\lambda_j$.

To evaluate the combination $\min_{P_k:\,P_k(0)=1}\max_{\lambda\in[\lambda_{\min},\lambda_{\max}]}\left|P_k(\lambda)\right|$ we aim to apply the proposition ( Minimum norm optimality of Chebyshev polynomials ). We are dealing with the minimization over $\left\{P_k:P_k(0)=1\right\}$, while the proposition ( Minimum norm optimality of Chebyshev polynomials ) is formulated for polynomials with unit leading coefficient. However, a review of the proof of that proposition shows that the normalization is irrelevant to the argument: we would still have optimality at a polynomial that differs from a Chebyshev polynomial by a multiplicative constant. We do, however, need to transform from $[\lambda_{\min},\lambda_{\max}]$ to $[-1,1]$: $$\mu=\frac{2\lambda-\lambda_{\max}-\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}},$$ where $\mu\in[-1,1]$ for $\lambda\in[\lambda_{\min},\lambda_{\max}]$, and $\lambda=\frac{1}{2}\left(\left(\lambda_{\max}-\lambda_{\min}\right)\mu+\lambda_{\max}+\lambda_{\min}\right)$. Next, we map $\lambda=0$ to $\mu_0$: $$\mu_0=-\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}.$$ We now apply the proposition ( Minimum norm optimality of Chebyshev polynomials ): the minimum is delivered by $$P_k(\lambda)=c\,T_k\!\left(\mu(\lambda)\right),$$ where the constant $c$ is the scale to insure $P_k(0)=1$: $$c=\frac{1}{T_k(\mu_0)}.$$ The $T_k(\mu_0)$ is evaluated at $\left|\mu_0\right|=\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}>1$, while $\left|T_k(\mu)\right|\leq 1$ for $\mu\in[-1,1]$, with the consequence $$\max_{\lambda\in[\lambda_{\min},\lambda_{\max}]}P_k^2(\lambda)=T_k^{-2}\!\left(\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}\right).$$ Hence, we rewrite $$\left\Vert e_k\right\Vert_A^2\leq T_k^{-2}\!\left(\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}\right)\left\Vert e_0\right\Vert_A^2$$ or, by taking the square root, $$\left\Vert e_k\right\Vert_A\leq T_k^{-1}\!\left(\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}\right)\left\Vert e_0\right\Vert_A.$$ We introduce the quantity $$\kappa=\frac{\lambda_{\max}}{\lambda_{\min}}\quad\text{(Condition number)}$$ then $$\frac{\lambda_{\max}+\lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}=\frac{\kappa+1}{\kappa-1}.$$ We use the proposition ( Chebyshev polynomials calculation ): $$T_k\!\left(\frac{\kappa+1}{\kappa-1}\right)=\frac{1}{2}\left[\left(\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}\right)^k+\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k\right].$$ Therefore $$\left\Vert e_k\right\Vert_A\leq 2\left[\left(\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1}\right)^k+\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k\right]^{-1}\left\Vert e_0\right\Vert_A.$$ As $k$ grows the first term grows and the second term vanishes. Then $$\left\Vert e_k\right\Vert_A\leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k\left\Vert e_0\right\Vert_A.$$
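The closed-form value of $T_k$ at $\frac{\kappa+1}{\kappa-1}$ from the proposition ( Chebyshev polynomials calculation ) can be checked numerically against the three-term recurrence. The helper name `cheb_T` is my own, introduced only for this sketch.

```python
import numpy as np

def cheb_T(k, x):
    """Evaluate the Chebyshev polynomial T_k at x via T_{j+1} = 2x T_j - T_{j-1}."""
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

kappa = 10.0
x = (kappa + 1.0) / (kappa - 1.0)
s = np.sqrt(kappa)
for k in range(1, 8):
    closed_form = 0.5 * (((s + 1) / (s - 1)) ** k + ((s - 1) / (s + 1)) ** k)
    # The recurrence and the closed form agree to rounding error.
    assert abs(cheb_T(k, x) - closed_form) < 1e-9 * closed_form
```

Since $\frac{\kappa+1}{\kappa-1}>1$, the value grows geometrically in $k$, which is exactly what drives the convergence rate below.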

Proposition

(Convergence of conjugate gradient method) Let $A$ be a positive definite symmetric matrix, $\kappa$ be given by the formula ( Condition number ) and $e_k$ be given by the formula ( Error and residual ). Then the procedure of the summary ( Conjugate gradients ) has the convergence rate $$\left\Vert e_k\right\Vert_A\leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k\left\Vert e_0\right\Vert_A.$$
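The proposition can be exercised numerically: the self-contained sketch below builds a symmetric positive definite matrix with a known spectrum, runs a plain conjugate gradient loop (my own minimal rendering, not the summary's notation), and checks the $A$-norm of the error against the stated bound at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric positive definite test matrix with prescribed eigenvalues.
n = 30
eigvals = np.linspace(1.0, 50.0, n)           # lambda_min = 1, lambda_max = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigvals) @ Q.T
kappa = eigvals[-1] / eigvals[0]              # condition number

b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def a_norm(e):
    """A-norm of a vector, ||e||_A = sqrt(e^T A e)."""
    return np.sqrt(e @ A @ e)

# Plain conjugate gradient loop, recording ||e_k||_A at each step.
x = np.zeros(n)
r = b - A @ x
d = r.copy()
errors = [a_norm(x - x_star)]
for _ in range(15):
    Ad = A @ d
    alpha = (r @ r) / (d @ Ad)
    x = x + alpha * d
    r_new = r - alpha * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new
    errors.append(a_norm(x - x_star))

# The proposition: ||e_k||_A <= 2 ((sqrt(kappa)-1)/(sqrt(kappa)+1))^k ||e_0||_A.
s = np.sqrt(kappa)
for k, err in enumerate(errors):
    bound = 2.0 * ((s - 1) / (s + 1)) ** k * errors[0]
    assert err <= bound + 1e-10
```

In practice the observed error typically sits well below the bound, since the bound only uses the extreme eigenvalues while conjugate gradients exploits the whole spectrum.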
 Copyright 2007