Quantitative Analysis

I. Wavelet calculations.
II. Calculation of approximation spaces in one dimension.
III. Calculation of approximation spaces in one dimension II.
IV. One dimensional problems.
V. Stochastic optimization in one dimension.
1. Review of variational inequalities in maximization case.
2. Penalized problem for mean reverting equation.
3. Impossibility of backward induction.
4. Stochastic optimization over wavelet basis.
VI. Scalar product in N-dimensions.
VII. Wavelet transform of payoff function in N-dimensions.
VIII. Solving N-dimensional PDEs.

Review of variational inequalities in maximization case.

The chapter ( Variational inequalities ) contains a recipe for stochastic minimization of a cost function; such was the convention of the original source. In financial problems we would like to maximize income. Hence, we briefly review the logic, tracing the changes from $\min$ to $\max$ and from plus to minus.

We perform the calculation in the context and with the notation of the section ( Optimal stopping time problem ). The definitions of $X_{t}$ , $\tau$ , $\theta$ , MATH , MATH are exactly the same. We introduce MATH differently: MATH We have the same calculation leading to the equation MATH inside the non-stopping area. Exactly one of the equalities $u=\psi$ or $0=f+u_{t}+Lu$ holds at all times. Since we are maximizing the payoff, we would not stop unless MATH Thus, MATH
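For orientation, the complementarity conditions just described can be spelled out. The following display is the standard textbook form for the maximization case; it is an assumed reconstruction, not a transcription of the elided formulas:

```latex
% Standard complementarity system, maximization case (assumed form):
% the value u dominates the payoff psi, the evolution operator is
% non-positive, and at every point exactly one constraint is tight.
\begin{gather*}
u \ge \psi , \qquad f + u_{t} + Lu \le 0 , \\
\left( u - \psi \right)\left( f + u_{t} + Lu \right) = 0 .
\end{gather*}
```

In the stopping area $u=\psi$ and the evolution inequality may be strict; in the non-stopping area $f+u_{t}+Lu=0$ and $u$ may exceed $\psi$.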

If the stopping does not occur, then MATH, thus MATH and MATH


(Free boundary problem 2 summary) The function MATH satisfies on MATH the conditions MATH where exactly one of the inequalities is strict at all times, thus MATH

Let MATH then

MATH (Free boundary problem 2)

The weak formulation for the equation MATH is MATH where the minus comes from integration by parts and the expression for $B$ is given in the definition ( Bilinear form B 2 ).
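To illustrate where the minus comes from, take the one-dimensional generator $Lu=\frac{1}{2}\sigma^{2}u_{xx}$ with constant $\sigma$; this is an assumed special case, not the general form behind the definition ( Bilinear form B 2 ). For test functions $v$ vanishing at the boundary, integration by parts gives

```latex
% Integration by parts for Lu = (1/2) sigma^2 u_xx against a test
% function v vanishing on the boundary (assumed special case):
\int \frac{1}{2}\sigma^{2}\, u_{xx}\, v \, dx
  = -\int \frac{1}{2}\sigma^{2}\, u_{x}\, v_{x} \, dx .
```

Hence the second-order part of $L$ enters the weak formulation with a minus sign, which is absorbed into the bilinear form $B$.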

At the beginning of the chapter ( Variational inequalities ) we remarked that we need variational inequalities to conduct numerical stochastic optimization. Later within that chapter, in the section ( Penalized evolutionary problem ), we introduce a practical recipe for the minimization case: MATH Precisely, we solve the problem ( Strong formulation of evolutionary problem ) as the $\varepsilon$ -limit of the problem ( Evolutionary penalized problem ). Note carefully the role of the penalty term MATH in the equation MATH If $u_{\varepsilon}$ is greater than $\psi$ then the penalty term comes into play: MATH This is a backward equation. Hence, a positive penalty term means a positive time derivative, and the solution decreases as we go backward in time. The penalty term pushes the solution toward the desired property MATH
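To make the sign bookkeeping explicit, one common way to write the penalized equation in the minimization case is the following; it is an assumed form consistent with the description above, not a transcription of the elided formulas:

```latex
% Penalized backward equation, minimization case (assumed form).
% The penalty (1/eps)(u_eps - psi)^+ activates when u_eps > psi:
0 = f + \partial_{t} u_{\varepsilon} + L u_{\varepsilon}
    - \frac{1}{\varepsilon}\left( u_{\varepsilon} - \psi \right)^{+} ,
\qquad \text{equivalently} \qquad
\partial_{t} u_{\varepsilon}
  = \frac{1}{\varepsilon}\left( u_{\varepsilon} - \psi \right)^{+}
    - f - L u_{\varepsilon} .
```

A positive penalty contributes a positive component to $\partial_{t}u_{\varepsilon}$; since the equation is solved backward in time, the solution decreases toward $\psi$.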

We are presently considering the maximization case; see formula $\left( \#\right) $ . We need to keep the solution in the area $C$ where MATH Following the results of the section ( Evolutionary variational inequalities ), we insert a penalty term that comes into effect when the opposite happens: $u<\psi$ . Thus, the penalty term is MATH It needs to push the solution upward when going backward in time. Hence, the derivative $u_{t}$ should acquire a negative component: MATH


(Variational inequalities in maximization case) In the context of the section ( Variational inequalities ), the function MATH may be evaluated using the recipes of the section ( Evolutionary variational inequalities ) modified by the substitution MATH

In particular, the time derivative and the penalty term should have matching signs according to the rule MATH
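A minimal numerical sketch may help fix the signs in the maximization case. The generator, payoff, and parameters below are illustrative assumptions (a constant-coefficient heat generator $Lu=\frac{1}{2}u_{xx}$, $f=0$, and a tent payoff), not the mean-reverting setting of this chapter:

```python
import numpy as np

# Penalized backward scheme for the maximization-case obstacle problem
#   u_t + (1/2) u_xx + (1/eps) * max(psi - u, 0) = 0,  u(T, x) = psi(x),
# marched backward in time with an explicit finite-difference step.
# Generator, payoff, and parameters are illustrative assumptions.

def solve_penalized(psi, x, T=1.0, nt=4000, eps=1e-3):
    dx = x[1] - x[0]
    dt = T / nt
    assert dt <= dx * dx, "explicit diffusion step must be stable"
    u = psi.copy()                        # terminal condition u(T) = psi
    for _ in range(nt):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        penalty = np.maximum(psi - u, 0.0) / eps
        # backward step: u(t - dt) = u(t) + dt * ((1/2) u_xx + penalty);
        # the penalty gives u_t a negative component, lifting u toward psi
        u = u + dt * (0.5 * lap + penalty)
        u[0], u[-1] = psi[0], psi[-1]     # hold the boundary at the payoff
    return u

x = np.linspace(-2.0, 2.0, 81)
psi = np.maximum(1.0 - np.abs(x), 0.0)    # hypothetical tent payoff
u = solve_penalized(psi, x)
print(float(np.min(u - psi)))             # u >= psi up to an O(eps) deficit
```

Going backward from the terminal payoff, the penalty term $\frac{1}{\varepsilon}\left( \psi-u\right) ^{+}$ contributes a negative component to $u_{t}$, which in the backward march pushes the solution upward and keeps $u\geq\psi$ up to an error of order $\varepsilon$.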


Copyright 2007