
## Mix of backward induction and penalty term approaches II.

We continue the research of the previous section (Mix of backward induction and penalty term approaches I. Implementation and results).

We start from a function and aim to construct

For an initial , form the set s.t.

Calculate

Find

Set , where the normalization parameter is derived from the requirements . Thus

We calculate the components. Let then

We apply the operation to and obtain . Let , then , where is the -th row of the matrix , transposed into a column.

### Summary

(Constructing the maximum) We start from columns and s.t. and construct a column s.t. via the following procedure.

Choose an initial scale s.t. the distance between any two singular roots of is greater than . Form the set s.t. where

1. Calculate where

2. Find and exclude from all s.t. and exclude . If is empty then set and

3. Calculate where

4. Set , where and is the -th row of the matrix , transposed into a column.

5. Exit or go to 1.
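The formulas of this section did not survive extraction, so the following is only a hedged sketch of the iterative structure of steps 1-5: a greedy, matching-pursuit-style loop that repeatedly finds the candidate with the largest component in the residual, subtracts a correspondingly normalized localized function, and repeats. The `hat` bump, the discretized scalar product, and every parameter name are illustrative assumptions, not the text's actual probing functions.

```python
import numpy as np

def hat(x, center, scale):
    # Localized "hat" bump of the given scale, centered at `center`;
    # an assumed stand-in for the probing functions of the text.
    return np.maximum(0.0, 1.0 - np.abs(x - center) / scale)

def greedy_subtract(f_vals, x, centers, scale, n_steps=20):
    # Greedy residual-subtraction loop mirroring the shape of steps 1-5.
    dx = x[1] - x[0]
    inner = lambda a, b: dx * np.dot(a, b)   # discretized scalar product
    residual = f_vals.astype(float)
    for _ in range(n_steps):                 # step 5: exit or go to 1
        # step 1: components of the residual against every candidate
        coeffs = np.array([inner(residual, hat(x, c, scale)) for c in centers])
        # step 2: index of the largest component
        k = int(np.argmax(np.abs(coeffs)))
        if abs(coeffs[k]) < 1e-12:
            break
        g = hat(x, centers[k], scale)
        # steps 3-4: normalization chosen so the subtracted bump carries
        # exactly the residual's component along g
        residual = residual - (coeffs[k] / inner(g, g)) * g
    return f_vals - residual                 # the constructed approximation
```

Each pass removes the residual's projection along the chosen bump, so the residual's norm decreases monotonically; the scale plays the role of the initial scale chosen above the singular-root separation.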

The procedure is well suited to parallel architectures because one can subtract several functions with non-overlapping supports simultaneously. The most computationally intensive pieces may be pre-calculated.
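The parallelism remark can be illustrated by partitioning candidate centers into groups whose supports are disjoint; subtractions within one group act on disjoint intervals and therefore commute, so they may run concurrently. The support width `2 * scale` and the greedy first-fit assignment below are assumptions for illustration only.

```python
def nonoverlapping_groups(centers, scale):
    # Partition centers into groups whose bumps, with assumed support
    # [c - scale, c + scale], are pairwise disjoint within each group.
    groups = []   # list of lists of centers
    last = []     # rightmost center already placed in each group
    for c in sorted(centers):
        for i, end in enumerate(last):
            if c - end >= 2 * scale:    # supports are disjoint
                groups[i].append(c)
                last[i] = c
                break
        else:                           # no compatible group: open a new one
            groups.append([c])
            last.append(c)
    return groups
```

Within each returned group, the subtractions of the procedure are independent and can be dispatched to separate workers.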

An adaptive extension of the procedure would select and from two different classes. Indeed, should be adapted to subtract the biggest piece from the solution, while the functions should be designed not to permit a change of sign.