I. Wavelet calculations.
 II. Calculation of approximation spaces in one dimension.
 III. Calculation of approximation spaces in one dimension II.
 IV. One dimensional problems.
 V. Stochastic optimization in one dimension.
 1 Review of variational inequalities in maximization case.
 2 Penalized problem for mean reverting equation.
 3 Impossibility of backward induction.
 4 Stochastic optimization over wavelet basis.
 A. Choosing probing functions.
 B. Time discretization of penalty term.
 C. Implicit formulation of penalty term.
 D. Smooth version of penalty term.
 E. Solving equation with implicit penalty term.
 F. Removing stiffness from penalized equation.
 G. Mix of backward induction and penalty term approaches I.
 H. Mix of backward induction and penalty term approaches I. Implementation and results.
 I. Mix of backward induction and penalty term approaches II.
 J. Mix of backward induction and penalty term approaches II. Implementation and results.
 K. Review. How does it extend to multiple dimensions?
 VI. Scalar product in N-dimensions.
 VII. Wavelet transform of payoff function in N-dimensions.
 VIII. Solving N-dimensional PDEs.

## Time discretization of penalty term.

In this section we consider the time discretization of the penalty term. The penalty term depends on the unknown function that comes from the very equation in which the penalty term participates. The theory of numerical methods for ODEs faces a similar problem and has an effective solution. We review that theory.

Consider the ODE
$$\frac{dy}{dt}(t) = f\left( t, y(t) \right).$$
The time evolution happens backwards from $t = T$ to $t = 0$. Taylor decomposition applies to $f$ in all arguments. The terminal value $y(T)$ is an input value.

We introduce a time mesh $t_n = T - n \Delta t$, $n = 0, 1, \dots, N$, $N \Delta t = T$, and a mesh function $\left\{ y_n \right\}$ defined recursively:
$$y_{n+1} = y_n - \Delta t\, f\left( t_n, y_n \right), \quad y_0 = y(T).$$
Thus, $y_n$ is meant to approximate $y\left( t_n \right)$. We proceed to estimate the magnitude of the difference $\varepsilon_n = y\left( t_n \right) - y_n$ for all $n$. We calculate, by Taylor decomposition,
$$y\left( t_{n+1} \right) = y\left( t_n \right) - \Delta t\, f\left( t_n, y\left( t_n \right) \right) + O\left( \Delta t^2 \right).$$
We subtract the recursive definition of $y_{n+1}$ and use smoothness of $f$:
$$\varepsilon_{n+1} = \varepsilon_n - \Delta t \left( f\left( t_n, y\left( t_n \right) \right) - f\left( t_n, y_n \right) \right) + O\left( \Delta t^2 \right) = \varepsilon_n \left( 1 - \Delta t\, \partial_y f \right) + O\left( \Delta t^2 \right).$$
At the initial time step $\varepsilon_0 = 0$; for a general time step we calculate
$$\varepsilon_n = \sum_{k < n} O\left( \Delta t^2 \right) \left( 1 + O\left( \Delta t \right) \right)^{n-k},$$
thus
$$\varepsilon_n = n \cdot O\left( \Delta t^2 \right) = O\left( \Delta t \right),$$
assuming that all the $O\left( \Delta t^2 \right)$ terms are of generally the same magnitude.
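The first-order behaviour of this scheme is easy to confirm numerically. Below is a minimal sketch, assuming the scheme above is the standard explicit Euler step run backwards in time; the test equation $dy/dt = y$ and all function names are illustrative, not taken from the text:

```python
import math

def euler_backward_in_time(f, yT, T, N):
    """March y_{n+1} = y_n - dt*f(t_n, y_n) from t = T down to t = 0."""
    dt = T / N
    t, y = T, yT
    for _ in range(N):
        y = y - dt * f(t, y)
        t = t - dt
    return y

# Test equation dy/dt = y, exact solution y(t) = e^t, marched from t = 1 to t = 0.
f = lambda t, y: y
err = lambda N: abs(euler_backward_in_time(f, math.e, 1.0, N) - 1.0)

# Halving the step roughly halves the error: the global order is O(dt).
print(err(100), err(200))
```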

Following the recipes of the Runge-Kutta technique, we introduce a better approximation $\left\{ u_n \right\}$ as follows:
$$u_{n+1} = u_n - \Delta t \left( \alpha\, f\left( t_n, u_n \right) + \beta\, f\left( t_n - \gamma \Delta t,\; u_n - \delta \Delta t\, f\left( t_n, u_n \right) \right) \right), \quad u_0 = y(T).$$
We seek parameters $\alpha, \beta, \gamma, \delta$ that deliver the smallest difference $y\left( t_n \right) - u_n$.

We calculate the evolution equation for $y$:
$$y\left( t_{n+1} \right) = y\left( t_n \right) - \Delta t\, \frac{dy}{dt}\left( t_n \right) + \frac{\Delta t^2}{2}\, \frac{d^2 y}{dt^2}\left( t_n \right) + O\left( \Delta t^3 \right).$$
Therefore, substituting the time derivatives from the defining equation for $y$, $\frac{dy}{dt} = f$ and $\frac{d^2 y}{dt^2} = \partial_t f + \partial_y f \cdot f$,
$$y\left( t_{n+1} \right) = y\left( t_n \right) - \Delta t\, f + \frac{\Delta t^2}{2} \left( \partial_t f + \partial_y f \cdot f \right) + O\left( \Delta t^3 \right). \quad \text{(Evolution of } y \text{)}$$
We use Taylor expansion for the inner occurrence of $f$ in the definition of $u_{n+1}$:
$$f\left( t_n - \gamma \Delta t,\; u_n - \delta \Delta t\, f\left( t_n, u_n \right) \right) = f - \gamma \Delta t\, \partial_t f - \delta \Delta t\, f\, \partial_y f + O\left( \Delta t^2 \right),$$
where $f$ and its derivatives are evaluated at $\left( t_n, u_n \right)$, and put all together:
$$u_{n+1} = u_n - \Delta t \left( \alpha + \beta \right) f + \Delta t^2 \left( \beta \gamma\, \partial_t f + \beta \delta\, f\, \partial_y f \right) + O\left( \Delta t^3 \right). \quad \text{(Evolution of } u \text{)}$$
By comparing the formulas (Evolution of $y$) and (Evolution of $u$), we require
$$\alpha + \beta = 1, \quad \beta \gamma = \frac{1}{2}, \quad \beta \delta = \frac{1}{2}.$$
We set $\beta = \frac{1}{2}$, then
$$\alpha = \frac{1}{2}, \quad \gamma = \delta = 1.$$
At the initial time step we get $u_0 = y\left( t_0 \right)$ exactly, hence $y\left( t_1 \right) - u_1 = O\left( \Delta t^3 \right)$, and then the higher order propagates: $y\left( t_n \right) - u_n = n \cdot O\left( \Delta t^3 \right) = O\left( \Delta t^2 \right)$.
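With the parameter choice $\alpha = \beta = \frac12$, $\gamma = \delta = 1$ the scheme is the classical second-order Runge-Kutta (Heun) step run backwards in time. A minimal sketch (the test equation and all names are illustrative, not from the text) showing the global error dropping by a factor of about four when the step is halved:

```python
import math

def rk2_backward_in_time(f, yT, T, N, a=0.5, b=0.5, g=1.0, d=1.0):
    """u_{n+1} = u_n - dt*(a*k1 + b*k2), k1 = f(t_n, u_n), k2 = f(t_n - g*dt, u_n - d*dt*k1)."""
    dt = T / N
    t, u = T, yT
    for _ in range(N):
        k1 = f(t, u)
        k2 = f(t - g * dt, u - d * dt * k1)
        u = u - dt * (a * k1 + b * k2)
        t = t - dt
    return u

# Same smooth test equation dy/dt = y as for the Euler scheme; exact value at t = 0 is 1.
f = lambda t, y: y
err = lambda N: abs(rk2_backward_in_time(f, math.e, 1.0, N) - 1.0)

# Halving the step divides the error by about four: the global order is O(dt^2).
print(err(100), err(200))
```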

The crucial requirement is smoothness of $f$. In our case, see the expression for the penalty term, the function $f$ would jump. Hence, the difference $f\left( t_n, y\left( t_n \right) \right) - f\left( t_n, y_n \right)$ is not small for those values of $y\left( t_n \right)$ and $y_n$ that fall on opposite sides of the jump. Given the nature of the problem, such a situation would be typical.
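The loss of accuracy at a jump can be seen numerically. A minimal sketch, assuming a second-order Runge-Kutta step and an illustrative right-hand side that jumps at an arbitrary point $c$ (all names and the choice $c = 1/\pi$ are hypothetical, not from the text):

```python
import math

# f jumps at t = c; away from the jump it is constant, so any error
# comes from the discontinuity alone.  c = 1/pi avoids mesh points.
c = 1.0 / math.pi

def f(t, y):
    return 1.0 if t < c else 0.0

def rk2_backward(N, T=1.0, yT=0.0):
    """Heun step u_{n+1} = u_n - dt*(k1 + k2)/2 marched from t = T to t = 0."""
    dt = T / N
    t, u = T, yT
    for _ in range(N):
        k1 = f(t, u)
        k2 = f(t - dt, u - dt * k1)
        u = u - dt * (k1 + k2) / 2.0
        t = t - dt
    return u

exact = -c  # y(0) = y(1) - integral of f over [0, 1] = -c
err = lambda N: abs(rk2_backward(N) - exact)

# The error stays of order dt rather than dt^2: the jump destroys the higher order.
print(err(100), err(200))
```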

One might try to replace the non-smooth operation within the function $f$ with some smooth function with similar properties. The replacement must be zero where the original operation is zero. But then, to be smooth, it must increase gently on the other side of the transition area. One could argue that this would hurt the purpose of the penalty term, dampening convergence of the procedure and requiring a higher penalty coefficient. But one could also argue that this is exactly what we want: we are not certain about the desired strength of the penalty term, thus we would like to employ a gradual procedure of high order.
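One common smooth replacement of the positive part $x^{+} = \max(x, 0)$ is the softplus family $\frac{1}{\lambda} \log\left( 1 + e^{\lambda x} \right)$: it vanishes asymptotically where $x^{+}$ is zero, grows gently near the boundary, and converges to $x^{+}$ as the sharpness $\lambda$ grows. The text does not name the operation, so treating it as a positive part is an assumption here; a minimal sketch:

```python
import math

def smooth_positive_part(x, lam=50.0):
    """Softplus approximation of max(x, 0); larger lam means a sharper corner.

    Written to avoid overflow of exp(lam*x) for large positive x, using
    log(1 + e^z) = z + log(1 + e^-z).
    """
    z = lam * x
    if z > 30.0:
        return x + math.log1p(math.exp(-z)) / lam
    return math.log1p(math.exp(z)) / lam

# Far from the kink the approximation is tight; near x = 0 it is smooth
# and slightly above max(x, 0), i.e. it "increases gently" on the other side.
for x in (-2.0, -0.02, 0.0, 0.02, 2.0):
    print(x, max(x, 0.0), smooth_positive_part(x))
```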

In the following chapter (Smooth version of penalty term) we will point out a reason why, in fact, we must make such a modification.