
## Parallel subspace preconditioner.

The reference is [Xu].

Let $V = \sum_{i=1}^{J} V_i$ be a decomposition of the Hilbert space $V$: every $v \in V$ can be written (not necessarily uniquely) as $v = \sum_{i=1}^{J} v_i$ with $v_i \in V_i$. Let $A : V \to V$ be a symmetric positive definite operator.

We define operators $Q_i, P_i : V \to V_i$ and $A_i : V_i \to V_i$ via the relationships

$(Q_i u, v_i) = (u, v_i), \quad u \in V, \; v_i \in V_i,$  (Definition of Q_i)

$(A P_i u, v_i) = (A u, v_i), \quad u \in V, \; v_i \in V_i,$  (Definition of P_i)

$(A_i u_i, v_i) = (A u_i, v_i), \quad u_i, v_i \in V_i.$  (Definition of A_i)

For $u \in V$ and $v_i \in V_i$ we have $(A_i P_i u, v_i) = (A P_i u, v_i) = (A u, v_i) = (Q_i A u, v_i)$. Thus

$A_i P_i = Q_i A.$  (Projection permutation)
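In finite dimensions the three operators and the identity ( Projection permutation ) can be checked directly. The following sketch (our own illustration, not taken from [Xu]; the matrix $A$ and the basis matrix $W$ are arbitrary choices) represents $V_i$ as the column span of $W$:

```python
import numpy as np

# V = R^n, subspace V_i = range(W) for a random basis matrix W.
rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite A
W = rng.standard_normal((n, m))      # columns span V_i

# Q_i: orthogonal projection onto V_i,   (Q_i u, v_i) = (u, v_i)
Q = W @ np.linalg.solve(W.T @ W, W.T)
# P_i: A-orthogonal projection onto V_i, (A P_i u, v_i) = (A u, v_i)
P = W @ np.linalg.solve(W.T @ A @ W, W.T @ A)
# A_i: restriction of A to V_i, represented on all of V as Q_i A Q_i
Ai = Q @ A @ Q

# (Projection permutation): A_i P_i = Q_i A
print(np.allclose(Ai @ P, Q @ A))    # True
```

Since $P_i$ maps into $V_i$ and $Q_i$ fixes $V_i$, we also have $Q_i P_i = P_i$, which the same matrices confirm.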

Let $R_i : V_i \to V_i$ be a symmetric positive definite operator that (on a motivational level) almost inverts $A_i$: $R_i \approx A_i^{-1}$. In the context of the section ( Preconditioning ) we introduce the operator

$B = \sum_{i=1}^{J} R_i Q_i.$  (Parallel subspace preconditioner)
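As a concrete illustration (our own choice of decomposition, not from [Xu]): if $V = \mathbb{R}^n$, the $V_i$ are non-overlapping coordinate blocks, and we take the exact subspace solvers $R_i = A_i^{-1}$, then $B$ is the familiar block-Jacobi preconditioner. A minimal sketch for the 1D Laplacian:

```python
import numpy as np

# 1D Laplacian split into two non-overlapping coordinate blocks.
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD
blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]

# B = sum_i R_i Q_i with R_i = A_i^{-1}; Q_i restricts coordinates.
B = np.zeros((n, n))
for idx in blocks:
    Ai = A[np.ix_(idx, idx)]          # A_i on the coordinate subspace
    B[np.ix_(idx, idx)] += np.linalg.inv(Ai)

# B A is similar to the SPD matrix B^{1/2} A B^{1/2}, so its
# eigenvalues are real and positive.
eig_BA = np.sort(np.linalg.eigvals(B @ A).real)
eig_A = np.sort(np.linalg.eigvalsh(A))
print(eig_A[-1] / eig_A[0], eig_BA[-1] / eig_BA[0])    # condition numbers
```

The preconditioned condition number is far smaller than that of $A$ itself; this is the effect the estimates below quantify.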

We introduce the numbers $K_0$ and $K_1$ as follows.

Proposition

There exists a smallest number $K_0$ such that for any $v \in V$ there is some decomposition $v = \sum_{i=1}^{J} v_i$, $v_i \in V_i$, satisfying the estimate

$\sum_{i=1}^{J} (R_i^{-1} v_i, v_i) \leq K_0 (A v, v).$  (Definition of K0)

Proof

The statement is a consequence of the positive definiteness of $A$: $(A v, v) > 0$ for $v \neq 0$. Thus, for the formula ( Definition of K0 ) to fail for every decomposition of some $v$, at least one operator $R_i^{-1}$ would have to be unbounded. But this is impossible because all $R_i$ are symmetric positive definite.

Proposition

There exists a smallest number $K_1$ such that for any index set $S \subseteq \{1, \dots, J\}$ and any finite collections $u_i \in V$, $v_j \in V$, $i, j \in S$, we have

$\sum_{i \in S} \sum_{j \in S} (T_i u_i, T_j v_j)_A \leq K_1 \Big( \sum_{i \in S} (T_i u_i, u_i)_A \Big)^{1/2} \Big( \sum_{j \in S} (T_j v_j, v_j)_A \Big)^{1/2},$  (Definition of K1)

where we use the convenience notation

$T_i = R_i Q_i A = R_i A_i P_i, \quad (u, v)_A = (A u, v).$

Proof

The statement is a consequence of the positive definiteness of each $R_i$: the quadratic forms $(T_i u, u)_A = (R_i Q_i A u, Q_i A u)$ are nonnegative, and the boundedness of the operators $T_i$ makes the smallest admissible constant finite.

We introduce the notation $T = \sum_{i=1}^{J} T_i$ and note $T = \sum_{i=1}^{J} R_i Q_i A = B A$.

Proposition

(Parallel subspace preconditioner property) For an operator given by the formula ( Parallel subspace preconditioner ) we have See the formula ( Condition number ) for notation .

Proof

According to the formula ( Definition of K1 ) with $S = \{1, \dots, J\}$ and $u_i = v_i = v$,

$\|T v\|_A^2 = \sum_{i,j} (T_i v, T_j v)_A \leq K_1 (T v, v)_A \leq K_1 \|T v\|_A \|v\|_A,$

or $\|T v\|_A \leq K_1 \|v\|_A$. Hence, $\lambda_{\max}(B A) \leq K_1$.

We now estimate $\lambda_{\min}(B A)$. Fix $v \in V$ and take a decomposition $v = \sum_i v_i$ from the proposition ( Definition of K0 ). We use the proposition ( Cauchy inequality for scalar product 2 ):

$(v, v)_A = \sum_i (v_i, A v) = \sum_i (v_i, Q_i A v) = \sum_i (R_i^{-1/2} v_i, R_i^{1/2} Q_i A v) \leq \sum_i (R_i^{-1} v_i, v_i)^{1/2} (R_i Q_i A v, Q_i A v)^{1/2}.$

The above sum is a scalar product in $\mathbb{R}^J$. We use the proposition ( Cauchy inequality for scalar product 1 ):

$\sum_i (R_i^{-1} v_i, v_i)^{1/2} (R_i Q_i A v, Q_i A v)^{1/2} \leq \Big( \sum_i (R_i^{-1} v_i, v_i) \Big)^{1/2} \Big( \sum_i (R_i Q_i A v, Q_i A v) \Big)^{1/2}.$

We use the proposition ( Definition of K0 ) and the formula ( Definition of P_i ): $\sum_i (R_i^{-1} v_i, v_i) \leq K_0 (A v, v)$ and $(R_i Q_i A v, Q_i A v) = (T_i v, v)_A$. Thus

$\|v\|_A^2 \leq K_0^{1/2} \|v\|_A (T v, v)_A^{1/2},$

or $\|v\|_A^2 \leq K_0 (T v, v)_A$. Therefore, $\lambda_{\min}(B A) \geq K_0^{-1}$. The statement now follows from $\lambda_{\max}(B A) \leq K_1$ and $\lambda_{\min}(B A) \geq K_0^{-1}$.

Proposition

(Estimate for K0)

1. Assume that for any $v \in V$ there is a decomposition $v = \sum_i v_i$, $v_i \in V_i$, such that $\sum_i (A v_i, v_i) \leq C_0 (A v, v)$ for some constant $C_0$. Then $K_0 \leq C_0 / \min_i \lambda_{\min}(R_i A_i)$.

2. Assume that for any $v \in V$ there is a decomposition $v = \sum_i v_i$, $v_i \in V_i$, such that $\sum_i (v_i, v_i) \leq C_1 (A v, v)$ for some constant $C_1$. Then $K_0 \leq C_1 \max_i \lambda_{\max}(R_i^{-1})$.

Proof

We estimate $(R_i^{-1} v_i, v_i) \leq \frac{1}{\lambda_{\min}(R_i A_i)} (A_i v_i, v_i)$. Hence,

$\sum_i (R_i^{-1} v_i, v_i) \leq \frac{1}{\min_i \lambda_{\min}(R_i A_i)} \sum_i (A_i v_i, v_i) \leq \frac{C_0}{\min_i \lambda_{\min}(R_i A_i)} (A v, v).$

We use the formula ( Definition of A_i ): $(A_i v_i, v_i) = (A v_i, v_i)$. The proof of the statement 2 is very similar.
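For a non-overlapping coordinate decomposition the splitting $v = \sum_i v_i$ is unique ($v_i = Q_i v$), so $K_0$ can be computed directly as the largest eigenvalue of the pencil $(M, A)$ with $M = \operatorname{blockdiag}(R_i^{-1})$. A hypothetical check in Python (our two-block Laplacian example, exact solvers $R_i = A_i^{-1}$):

```python
import numpy as np

# K0 = sup_v (sum_i (R_i^{-1} v_i, v_i)) / (A v, v) with v_i = Q_i v,
# i.e. the largest eigenvalue of A^{-1} M, M = blockdiag(R_i^{-1}).
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]

M = np.zeros((n, n))
for idx in blocks:
    M[np.ix_(idx, idx)] = A[np.ix_(idx, idx)]   # R_i^{-1} = A_i

K0 = np.max(np.linalg.eigvals(np.linalg.solve(A, M)).real)

# Sanity check against the proof of (Parallel subspace preconditioner
# property): lambda_min(B A) >= 1 / K0 with B = blockdiag(A_i^{-1}).
B = np.zeros((n, n))
for idx in blocks:
    B[np.ix_(idx, idx)] = np.linalg.inv(A[np.ix_(idx, idx)])
lam_min = np.min(np.linalg.eigvals(B @ A).real)
print(K0, 1.0 / lam_min)
```

For this example the bound $\lambda_{\min}(B A) \geq K_0^{-1}$ is attained with equality, since the decomposition is unique.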

Definition

(Strengthened Cauchy-Schwartz inequality) We define the matrix $\mathcal{E} = ( \varepsilon_{ij} )_{i,j=1,\dots,J}$ where $\varepsilon_{ij}$ is the smallest number such that

$| (A u_i, v_j) | \leq \varepsilon_{ij} (A u_i, u_i)^{1/2} (A v_j, v_j)^{1/2}, \quad u_i \in V_i, \; v_j \in V_j.$  (Definition of Epsilon)

Proposition

(Magnitude of matrix Epsilon)

1. $\varepsilon_{ij} \leq 1$.

2. If $(A u_i, v_j) = 0$ for all $u_i \in V_i$, $v_j \in V_j$ (the subspaces $V_i$ and $V_j$ are $A$-orthogonal) then $\varepsilon_{ij} = 0$.

Proof

. Hence, if then and are -orthogonal and . Hence (2).

To prove (1) we use the proposition ( Cauchy inequality for scalar product 2 ): hence
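Numerically, when $V_i$ and $V_j$ are the column spans of matrices $W_i$, $W_j$, the number $\varepsilon_{ij}$ is the largest singular value of $G_i^{-1/2} W_i^T A W_j G_j^{-1/2}$ with $G_i = W_i^T A W_i$. A sketch checking both statements (the matrices are arbitrary test data of ours):

```python
import numpy as np

def inv_sqrt(G):
    # symmetric inverse square root of an SPD matrix via eigendecomposition
    w, U = np.linalg.eigh(G)
    return U @ np.diag(w ** -0.5) @ U.T

def epsilon(A, Wi, Wj):
    # smallest eps with |(A u_i, v_j)| <= eps ||u_i||_A ||v_j||_A:
    # largest singular value of G_i^{-1/2} W_i^T A W_j G_j^{-1/2}
    Gi, Gj = Wi.T @ A @ Wi, Wj.T @ A @ Wj
    return np.linalg.norm(inv_sqrt(Gi) @ Wi.T @ A @ Wj @ inv_sqrt(Gj), 2)

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
W1 = rng.standard_normal((n, 2))
W2 = rng.standard_normal((n, 2))

e1 = epsilon(A, W1, W2)              # statement 1: always <= 1
# statement 2: make V_2 A-orthogonal to V_1; epsilon drops to 0
P1 = W1 @ np.linalg.solve(W1.T @ A @ W1, W1.T @ A)   # A-projection onto V_1
e2 = epsilon(A, W1, W2 - P1 @ W2)
print(e1, e2)
```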

Proposition

(Estimate for K1 one) Set $\omega = \max_i \lambda_{\max}(R_i A_i)$ and let $\rho(\mathcal{E})$ be the spectral radius of the matrix $\mathcal{E}$.

1. $K_1 \leq \omega \rho(\mathcal{E})$.

2. $K_1 \leq \omega J$.

3. If $\varepsilon_{ij} \leq \gamma^{|i-j|}$ for some $\gamma \in (0, 1)$ then $K_1 \leq \omega \frac{1 + \gamma}{1 - \gamma}$.

Proof

We compare the formula ( Definition of K1 ) with the formula ( Definition of Epsilon ): since $T_i u_i \in V_i$ and $T_j v_j \in V_j$,

$(T_i u_i, T_j v_j)_A \leq \varepsilon_{ij} \|T_i u_i\|_A \|T_j v_j\|_A.$

We sum the last inequality in $i, j \in S$:

$\sum_{i,j \in S} (T_i u_i, T_j v_j)_A \leq \sum_{i,j \in S} \varepsilon_{ij} \|T_i u_i\|_A \|T_j v_j\|_A.$

The RHS looks like a scalar product $(\mathcal{E} x, y)$ for $x = ( \|T_i u_i\|_A )_i$, $y = ( \|T_j v_j\|_A )_j$ and is therefore bounded by $\rho(\mathcal{E}) \|x\| \|y\|$. Since $\|T_i u_i\|_A^2 \leq \lambda_{\max}(R_i A_i) (T_i u_i, u_i)_A \leq \omega (T_i u_i, u_i)_A$, we have $\|x\|^2 \leq \omega \sum_i (T_i u_i, u_i)_A$ and similarly for $\|y\|^2$. Hence, (1).

The statements (2),(3) are consequences of the proposition ( Gershgorin circle theorem ) and the proposition ( Magnitude of matrix Epsilon )-1. The (3) requires some calculation:

$\sum_{j=1}^{J} \varepsilon_{ij} \leq \sum_{j=1}^{J} \gamma^{|i-j|} \leq 1 + 2 \sum_{k=1}^{\infty} \gamma^k = \frac{1 + \gamma}{1 - \gamma}.$
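The chain $\lambda_{\max}(B A) \leq K_1 \leq \omega \rho(\mathcal{E})$ can be checked on the two-block Laplacian example (a construction of ours, not from [Xu]); with exact subspace solvers, $\omega = \max_i \lambda_{\max}(R_i A_i) = 1$:

```python
import numpy as np

def inv_sqrt(G):
    # symmetric inverse square root of an SPD matrix
    w, U = np.linalg.eigh(G)
    return U @ np.diag(w ** -0.5) @ U.T

n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]

# Epsilon matrix of the two coordinate subspaces
W = [np.eye(n)[:, idx] for idx in blocks]
E = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        Gi, Gj = W[i].T @ A @ W[i], W[j].T @ A @ W[j]
        E[i, j] = np.linalg.norm(
            inv_sqrt(Gi) @ W[i].T @ A @ W[j] @ inv_sqrt(Gj), 2)
rho = np.max(np.abs(np.linalg.eigvals(E)))

# lambda_max(B A) with exact solvers R_i = A_i^{-1} (so omega = 1)
B = np.zeros((n, n))
for idx in blocks:
    B[np.ix_(idx, idx)] = np.linalg.inv(A[np.ix_(idx, idx)])
lam_max = np.max(np.linalg.eigvals(B @ A).real)
print(lam_max, rho)                  # lam_max <= omega * rho
```

For two subspaces the bound is attained with equality, $\rho(\mathcal{E}) = 1 + \varepsilon_{12}$.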

Proposition

(Estimate for K1 two) For any index set $S \subseteq \{1, \dots, J\}$ and any $v \in V$,

$\Big\| \sum_{i \in S} T_i v \Big\|_A^2 \leq K_1 \Big( \sum_{i \in S} T_i v, v \Big)_A.$

Proof

The proof is tedious and straightforward: substitute $u_i = v_i = v$ into the formula ( Definition of K1 ) and proceed as in the proof of the proposition ( Parallel subspace preconditioner property ).
