Content of present website is being moved to www.lukoe.com/finance . Registration of www.opentradingsystem.com will be discontinued on 2020-08-14.

Parallel subspace preconditioner.


The reference is [Xu].

Let $X_{1},...,X_{J}$ be a decomposition of $\mathcal{R}^{n}$: $\mathcal{R}^{n}=\sum_{j=1}^{J}X_{j}$ (the sum need not be direct). Let $A$ be a symmetric positive definite operator.

We define operators $Q_{i},P_{i}:\mathcal{R}^{n}\rightarrow X_{i}$ and $A_{i}:X_{i}\rightarrow X_{i}$ via the relationships

$\left( Q_{i}x,y\right) =\left( x,y\right) ,\quad x\in \mathcal{R}^{n},\ y\in X_{i},$ (Definition of Q_i)
$\left( AP_{i}x,y\right) =\left( Ax,y\right) ,\quad x\in \mathcal{R}^{n},\ y\in X_{i},$ (Definition of P_i)
$\left( A_{i}x,y\right) =\left( Ax,y\right) ,\quad x,y\in X_{i}.$ (Definition of A_i)
For $x\in \mathcal{R}^{n}$, $y\in X_{i}$ we have $\left( Q_{i}Ax,y\right) =\left( Ax,y\right) =\left( AP_{i}x,y\right) =\left( A_{i}P_{i}x,y\right) .$ Thus
$Q_{i}A=A_{i}P_{i}.$ (Projection permutation)
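The permutation identity can be checked numerically. The sketch below is an illustration, not from the source; it assumes the standard definitions ($Q_i$ the orthogonal projection onto $X_i$, $P_i$ the $A$-orthogonal projection onto $X_i$, $A_i$ the restriction of $A$ to $X_i$) and a coordinate subspace $X_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # symmetric positive definite

V = np.eye(n)[:, :2]                   # columns: basis of the subspace X_i

Qi = V @ V.T                           # orthogonal projection onto X_i
Ai = V.T @ A @ V                       # restriction of A to X_i (2x2)
Pi = V @ np.linalg.solve(Ai, V.T @ A)  # A-orthogonal projection onto X_i

# Projection permutation: Q_i A = A_i P_i (A_i embedded back into R^n)
lhs = Qi @ A
rhs = (V @ Ai @ V.T) @ Pi
print(np.allclose(lhs, rhs))           # True
```

The identity holds exactly because $A_iP_i$ embedded is $VA_iA_i^{-1}V^{T}A=VV^{T}A=Q_iA$.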

Let $R_{j}:X_{j}\rightarrow X_{j}$ be a symmetric positive definite operator that (on a motivational level) almost inverts $A_{j}$: $R_{j}\approx A_{j}^{-1}$. In the context of the section ( Preconditioning ) we introduce the operator

$B=\sum_{j=1}^{J}R_{j}Q_{j}.$ (Parallel subspace preconditioner)
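For intuition, here is a minimal numeric sketch (an illustration, not from the source), assuming the standard additive form $B=\sum_{j}R_{j}Q_{j}$: with disjoint coordinate blocks $X_{j}$ and exact subspace solvers $R_{j}=A_{j}^{-1}$, $B$ reduces to the block-Jacobi preconditioner, and it repairs the conditioning of a badly scaled SPD matrix:

```python
import numpy as np

# 1-D Laplacian, then a harsh diagonal scaling to ruin the conditioning.
n = 8
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
S = np.diag(10.0 ** np.arange(n))
A = S @ M @ S                          # SPD with a huge condition number

# Disjoint coordinate blocks X_j; Q_j selects the block coordinates.
blocks = [np.arange(j, j + 2) for j in range(0, n, 2)]

B = np.zeros((n, n))                   # B = sum_j R_j Q_j
for idx in blocks:
    Aj = A[np.ix_(idx, idx)]           # A_j: restriction of A to X_j
    B[np.ix_(idx, idx)] += np.linalg.inv(Aj)   # R_j = A_j^{-1}

def cond(T):
    ev = np.linalg.eigvals(T).real     # BA is A-selfadjoint: real spectrum
    return ev.max() / ev.min()

print(cond(A), cond(B @ A))            # conditioning improves dramatically
```

The diagonal scaling inflates $\kappa (A)$ arbitrarily, while $\kappa (BA)$ stays at the moderate level of the unscaled model problem.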

We introduce the numbers $K_{0},K_{1}$ as follows.

Proposition

There exists a smallest number $K_{0}$ such that for any $x$ there is some decomposition $x=\sum_{j=1}^{J}x_{j}$, $x_{j}\in X_{j}$, satisfying the estimate

$\sum_{j=1}^{J}\left( R_{j}^{-1}x_{j},x_{j}\right) \leq K_{0}\left( Ax,x\right) .$ (Definition of K0)

Proof

The statement is a consequence of positive definiteness of $A$: $\left( Ax,x\right) \geq c\left( x,x\right) $ for some $c>0$. Thus, for the formula ( Definition of K0 ) to fail for every decomposition $x=\sum_{j}x_{j}$, at least one operator $R_{j}^{-1}$ would have to be unbounded. But this is impossible because all $R_{j}$ are symmetric positive definite.

Proposition

There exists a smallest number $K_{1}$ such that for any index set $S\subseteq \left\{ 1,...,J\right\} $ and any finite collections $\left\{ x_{j}\right\} _{j\in S}$, $\left\{ y_{j}\right\} _{j\in S}$, $x_{j}\in X_{j}$, $y_{j}\in X_{j}$ we have

$\sum_{i,j\in S}\left( T_{i}x_{i},T_{j}y_{j}\right) _{A}\leq K_{1}\left( \sum_{i\in S}\left( T_{i}x_{i},x_{i}\right) _{A}\right) ^{1/2}\left( \sum_{j\in S}\left( T_{j}y_{j},y_{j}\right) _{A}\right) ^{1/2},$ (Definition of K1)
where we use the convenience notation $T_{i}=R_{i}Q_{i}A=R_{i}A_{i}P_{i}$, $\left( x,y\right) _{A}=\left( Ax,y\right) .$

Proof

The statement is a consequence of positive semidefiniteness and $A$-selfadjointness of each $T_{i}$: $\left( T_{i}x,x\right) _{A}=\left( R_{i}Q_{i}Ax,Ax\right) =\left( R_{i}Q_{i}Ax,Q_{i}Ax\right) \geq 0,$ $\left( T_{i}x,y\right) _{A}=\left( R_{i}Q_{i}Ax,Q_{i}Ay\right) =\left( x,T_{i}y\right) _{A}.$

We introduce the notation $T=\sum_{i=1}^{J}T_{i}$ and note $T=\sum_{i=1}^{J}R_{i}Q_{i}A=BA.$

Proposition

(Parallel subspace preconditioner property) For an operator $B$ given by the formula ( Parallel subspace preconditioner ) we have $\kappa \left( BA\right) \leq K_{0}K_{1}.$ See the formula ( Condition number ) for the notation $\kappa $.

Proof

According to the formula ( Definition of K1 ) applied with $x_{i}=y_{i}=x$, $\left( Tx,Tx\right) _{A}\leq K_{1}\left( Tx,x\right) _{A}\leq K_{1}\left( Tx,Tx\right) _{A}^{1/2}\left( x,x\right) _{A}^{1/2},$ or $\left( Tx,Tx\right) _{A}^{1/2}\leq K_{1}\left( x,x\right) _{A}^{1/2}.$ Hence, $\left( BAx,x\right) _{A}\leq \left( Tx,Tx\right) _{A}^{1/2}\left( x,x\right) _{A}^{1/2}\leq K_{1}\left( x,x\right) _{A}.\quad \left( \$\right) $

We now estimate $\lambda _{\min }\left( BA\right) $. Take the decomposition $x=\sum_{i}x_{i}$, $x_{i}\in X_{i}$, delivered by the proposition ( Definition of K0 ). Then $\left( x,x\right) _{A}=\sum_{i}\left( Ax,x_{i}\right) =\sum_{i}\left( Q_{i}Ax,x_{i}\right) =\sum_{i}\left( R_{i}^{1/2}Q_{i}Ax,R_{i}^{-1/2}x_{i}\right) .$ We use the proposition ( Cauchy inequality for scalar product 2 ): $\left( x,x\right) _{A}\leq \sum_{i}\left( R_{i}Q_{i}Ax,Q_{i}Ax\right) ^{1/2}\left( R_{i}^{-1}x_{i},x_{i}\right) ^{1/2}.$ The above sum is a scalar product. We use the proposition ( Cauchy inequality for scalar product 1 ): $\left( x,x\right) _{A}\leq \left( \sum_{i}\left( R_{i}Q_{i}Ax,Q_{i}Ax\right) \right) ^{1/2}\left( \sum_{i}\left( R_{i}^{-1}x_{i},x_{i}\right) \right) ^{1/2}.$ We use the proposition ( Definition of K0 ) and the formula ( Definition of P_i ): $\left( R_{i}Q_{i}Ax,Q_{i}Ax\right) =\left( T_{i}x,x\right) _{A},\quad \sum_{i}\left( R_{i}^{-1}x_{i},x_{i}\right) \leq K_{0}\left( x,x\right) _{A}.$ Thus $\left( x,x\right) _{A}\leq \left( Tx,x\right) _{A}^{1/2}K_{0}^{1/2}\left( x,x\right) _{A}^{1/2},$ or $\left( x,x\right) _{A}\leq K_{0}\left( Tx,x\right) _{A}.$ Therefore, $\left( BAx,x\right) _{A}\geq K_{0}^{-1}\left( x,x\right) _{A}.$ The statement now follows from $\left( \$\right) $ and $\kappa \left( BA\right) =\lambda _{\max }\left( BA\right) /\lambda _{\min }\left( BA\right) .$

Proposition

(Estimate for K0)

1. Assume that for any $x\in \mathcal{R}^{n}$ there is a decomposition $x=\sum_{j}x_{j}$, $x_{j}\in X_{j}$, such that $\sum_{j}\left( Ax_{j},x_{j}\right) \leq C_{0}\left( Ax,x\right) $ for some constant $C_{0}$. Then $K_{0}\leq C_{0}\max_{j}\lambda _{\max }\left( \left( R_{j}A_{j}\right) ^{-1}\right) .$

2. Assume that for any $x\in \mathcal{R}^{n}$ there is a decomposition $x=\sum_{j}x_{j}$, $x_{j}\in X_{j}$, such that $\sum_{j}\left( x_{j},x_{j}\right) \leq C_{0}\left( Ax,x\right) $ for some constant $C_{0}$. Then $K_{0}\leq C_{0}\max_{j}\lambda _{\max }\left( R_{j}^{-1}\right) .$

Proof

We estimate $\left( R_{j}^{-1}x_{j},x_{j}\right) \leq \lambda _{\max }\left( \left( R_{j}A_{j}\right) ^{-1}\right) \left( A_{j}x_{j},x_{j}\right) .$ Hence $\sum_{j}\left( R_{j}^{-1}x_{j},x_{j}\right) \leq \max_{j}\lambda _{\max }\left( \left( R_{j}A_{j}\right) ^{-1}\right) \sum_{j}\left( A_{j}x_{j},x_{j}\right) .$ We use the formula ( Definition of A_i ): $\left( A_{j}x_{j},x_{j}\right) =\left( Ax_{j},x_{j}\right) $, hence $\sum_{j}\left( R_{j}^{-1}x_{j},x_{j}\right) \leq C_{0}\max_{j}\lambda _{\max }\left( \left( R_{j}A_{j}\right) ^{-1}\right) \left( Ax,x\right) .$ The proof of the statement 2 is very similar.
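A numeric sanity check (an illustration with names of our choosing): for disjoint coordinate blocks the decomposition of $x$ is unique and $\sum_{j}\left( Ax_{j},x_{j}\right) =\left( Dx,x\right) $ with $D$ the block diagonal of $A$, so the best $C_{0}$ is the largest generalized eigenvalue of $Dx=cAx$; assuming the statement-1 estimate with exact subspace solvers $R_{j}=A_{j}^{-1}$ (so the $\lambda_{\max}$ factor is $1$), we expect $\lambda _{\min }\left( BA\right) \geq 1/C_{0}$:

```python
import numpy as np

n = 6
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = M @ M + np.eye(n)                  # a convenient SPD matrix
blocks = [np.arange(j, j + 2) for j in range(0, n, 2)]

D = np.zeros((n, n))                   # D = blockdiag(A_j)
B = np.zeros((n, n))                   # B = sum_j R_j Q_j with R_j = A_j^{-1}
for idx in blocks:
    Aj = A[np.ix_(idx, idx)]
    D[np.ix_(idx, idx)] = Aj
    B[np.ix_(idx, idx)] = np.linalg.inv(Aj)

# Best C_0 for the unique block decomposition: largest eigenvalue of A^{-1} D.
C0 = np.linalg.eigvals(np.linalg.inv(A) @ D).real.max()

lam_min = np.linalg.eigvals(B @ A).real.min()
print(lam_min, 1.0 / C0)               # lambda_min(BA) >= 1/C_0
```

For this block-Jacobi case $BA=D^{-1}A$, so the bound is attained with equality.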

Definition

(Strengthened Cauchy-Schwarz inequality) We define the matrix $\mathcal{E}=\left( \varepsilon _{ij}\right) _{i,j=1}^{J}$ where

$\varepsilon _{ij}=\sup \left\{ \frac{\left\vert \left( x_{i},y_{j}\right) _{A}\right\vert }{\left( x_{i},x_{i}\right) _{A}^{1/2}\left( y_{j},y_{j}\right) _{A}^{1/2}}:x_{i}\in X_{i}\setminus \left\{ 0\right\} ,\ y_{j}\in X_{j}\setminus \left\{ 0\right\} \right\} .$ (Definition of Epsilon)

Proposition

(Magnitude of matrix Epsilon)

1. $0\leq \varepsilon _{ij}\leq 1$ .

2. If $P_{i}P_{j}=0$ then $\varepsilon_{ij}=0$ .

Proof

For $x_{i}\in X_{i}$, $y_{j}\in X_{j}$ we have $\left( x_{i},y_{j}\right) _{A}=\left( P_{i}x_{i},P_{j}y_{j}\right) _{A}=\left( x_{i},P_{i}P_{j}y_{j}\right) _{A}$ by the $A$-selfadjointness of $P_{i}$. Hence, if $P_{i}P_{j}=0$ then $X_{i}$ and $X_{j}$ are $\left( \cdot ,\cdot \right) _{A}$-orthogonal and $\varepsilon _{ij}=0$. Hence (2).

To prove (1) we use the proposition ( Cauchy inequality for scalar product 2 ): $\left\vert \left( x_{i},y_{j}\right) _{A}\right\vert \leq \left( x_{i},x_{i}\right) _{A}^{1/2}\left( y_{j},y_{j}\right) _{A}^{1/2},$ hence $\varepsilon _{ij}\leq 1$. The bound $\varepsilon _{ij}\geq 0$ holds because the supremum is taken over absolute values.
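The constant $\varepsilon _{ij}$ is the cosine of the minimal $A$-angle between $X_{i}$ and $X_{j}$, which makes it computable: $A$-orthonormalize bases of the two subspaces and take the largest singular value of their cross-Gram matrix. A sketch (illustration only; the setup is ours, not the source's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD

w, U = np.linalg.eigh(A)
Ah = U @ np.diag(np.sqrt(w)) @ U.T     # A^{1/2}

def a_orthonormal(V):
    # Columns orthonormal in the A-inner product: QR of A^{1/2} V.
    Q, _ = np.linalg.qr(Ah @ V)
    return Q

Vi = np.eye(n)[:, [0, 1, 2]]           # basis of X_i
Vj = np.eye(n)[:, [2, 3, 4]]           # basis of X_j (overlaps X_i in e_2)

eps = np.linalg.svd(a_orthonormal(Vi).T @ a_orthonormal(Vj),
                    compute_uv=False).max()
print(eps)   # the subspaces share the direction e_2, so eps equals 1
```

Disjoint, weakly coupled subspaces would give $\varepsilon _{ij}$ strictly below $1$; any shared direction forces $\varepsilon _{ij}=1$.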

Proposition

(Estimate for K1 one)

1. $K_{1}\leq \omega \rho \left( \mathcal{E}\right) $, where $\omega =\max_{i}\lambda _{\max }\left( R_{i}A_{i}\right) $ and $\rho \left( \mathcal{E}\right) $ is the spectral radius of $\mathcal{E}$.

2. $K_{1}\leq \omega J$.

3. If $\varepsilon _{ij}\leq \gamma ^{\left\vert i-j\right\vert }$ for some $\gamma \in \left( 0,1\right) $ then $K_{1}\leq \frac{2\omega }{1-\gamma }$.

Proof

We compare the formula ( Definition of K1 ) with the formula ( Definition of Epsilon ): $\left( T_{i}x_{i},T_{j}y_{j}\right) _{A}\leq \varepsilon _{ij}\left\Vert T_{i}x_{i}\right\Vert _{A}\left\Vert T_{j}y_{j}\right\Vert _{A}.$ We sum the last inequality in $i,j$: $\sum_{i,j}\left( T_{i}x_{i},T_{j}y_{j}\right) _{A}\leq \sum_{i,j}\varepsilon _{ij}\left\Vert T_{i}x_{i}\right\Vert _{A}\left\Vert T_{j}y_{j}\right\Vert _{A}.$ The RHS looks like a scalar product $\left( \mathcal{E}u,v\right) $ with $u_{i}=\left\Vert T_{i}x_{i}\right\Vert _{A}$, $v_{j}=\left\Vert T_{j}y_{j}\right\Vert _{A}$: $\left( \mathcal{E}u,v\right) \leq \rho \left( \mathcal{E}\right) \left\Vert u\right\Vert \left\Vert v\right\Vert ,\quad \left\Vert u\right\Vert ^{2}=\sum_{i}\left( T_{i}x_{i},T_{i}x_{i}\right) _{A}\leq \omega \sum_{i}\left( T_{i}x_{i},x_{i}\right) _{A}.$ Hence, (1).

The statements (2),(3) are consequences of the proposition ( Gershgorin circle theorem ) and the proposition ( Magnitude of matrix Epsilon )-1. The (3) requires some calculation: $\rho \left( \mathcal{E}\right) \leq \max_{i}\sum_{j}\varepsilon _{ij}\leq \max_{i}\sum_{j}\gamma ^{\left\vert i-j\right\vert }\leq 1+2\sum_{k=1}^{\infty }\gamma ^{k}=\frac{1+\gamma }{1-\gamma }\leq \frac{2}{1-\gamma }.$
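A numeric illustration (the setup and names are ours): with exact subspace solvers $R_{i}=A_{i}^{-1}$ each $T_{i}$ is the $A$-orthogonal projection $P_{i}$ and the $\omega $-factor equals $1$, so together with the bound $\lambda _{\max }\left( BA\right) \leq K_{1}$ from the preceding proof we expect $\lambda _{\max }\left( BA\right) \leq \rho \left( \mathcal{E}\right) $, which can be checked directly on overlapping blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                    # SPD

# Overlapping coordinate blocks.
blocks = [np.array([0, 1, 2]), np.array([2, 3, 4]), np.array([4, 5])]

w, U = np.linalg.eigh(A)
Ah = U @ np.diag(np.sqrt(w)) @ U.T             # A^{1/2}

bases = []
B = np.zeros((n, n))
for idx in blocks:
    V = np.eye(n)[:, idx]
    Q, _ = np.linalg.qr(Ah @ V)                # A-orthonormal basis of X_j
    bases.append(Q)
    B[np.ix_(idx, idx)] += np.linalg.inv(A[np.ix_(idx, idx)])  # R_j = A_j^{-1}

J = len(blocks)
E = np.array([[np.linalg.svd(bases[i].T @ bases[j], compute_uv=False).max()
               for j in range(J)] for i in range(J)])
rho = np.abs(np.linalg.eigvals(E)).max()       # spectral radius of E

lam_max = np.linalg.eigvals(B @ A).real.max()
print(lam_max, rho)                            # lam_max <= rho
```

Here $\rho \left( \mathcal{E}\right) \geq 1$ always (the diagonal of $\mathcal{E}$ is $1$), and the overlap between blocks is what pushes it above $1$.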

Proposition

(Estimate for K1 two) For any index set MATH , MATH

Proof

The proof is tedious and straightforward.





Copyright 2007