I. Basic math.
 II. Pricing and Hedging.
 III. Explicit techniques.
 IV. Data Analysis.
 V. Implementation tools.
 1 Finite differences.
 2 Gauss-Hermite Integration.
 3 Asymptotic expansions.
 4 Monte-Carlo.
 A. Generation of random samples.
 B. Acceleration of convergence.
 C. Longstaff-Schwartz technique.
 D. Calculation of sensitivities.
 a. Pathwise differentiation.
 b. Calculation of sensitivities for Monte-Carlo with optimal control.
 5 Convex Analysis.
 VI. Basic Math II.
 VII. Implementation tools II.
 VIII. Bibliography
 Notation. Index. Contents.

## Calculation of sensitivities for Monte-Carlo with optimal control.

We are interested in evaluation of $\partial_{\theta}V$, where $V=\sup_{\tau}E\,\Pi\left(X_{\tau}\right)$ and $\tau$ is the optimal stopping strategy. The sup is taken over all functional forms of $\tau$. In the section on backward induction ( Backward induction ) and the Bellman equation ( Bellman equation section ) we saw that $\tau$ is a function of the state variables and the final condition: $\tau=\tau\left(t,X_{t};T,\Pi\right)$. Here $t$ and $X_{t}$ represent the time and the process state variables, $T$ is the final time and $\Pi$ is the payoff. In particular, the optimal stopping rule does not depend on the initial condition. Since we recover $\tau$ when evaluating $V$ itself, when differentiating with respect to the initial condition we simply use the $\tau$ that we already have.

For valuation of Vega (or a similar sensitivity) we need a different argument but arrive at the same result. Assuming that the sup is attained on some $\tau^{*}$, we have

$$\left.\frac{\delta}{\delta\tau}E\,\Pi\left(X_{\tau}\right)\right|_{\tau=\tau^{*}}\delta\tau=0\quad\text{(Optimal stopping)}$$

for any variation $\delta\tau$. Hence, when evaluating Vega,

$$\partial_{\sigma}V=\left.\partial_{\sigma}E\,\Pi\left(X_{\tau}\right)\right|_{\tau=\tau^{*}}+\left.\frac{\delta}{\delta\tau}E\,\Pi\left(X_{\tau}\right)\right|_{\tau=\tau^{*}}\partial_{\sigma}\tau^{*},$$

where we abuse the notation slightly: $\partial_{\sigma}\tau^{*}$ is the derivative with respect to any direction in $\sigma$-space or a derivative with respect to any parameterization of $\tau^{*}$. In any case, by the ( Optimal stopping ), the second term vanishes. Hence, again, we may assume that the optimal stopping rule does not change.
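The Vega argument above can be sketched numerically: a minimal, hedged illustration (not the book's code; all names such as `fit_rule`, `value`, the polynomial basis, and the market parameters are illustrative assumptions) of bump-and-revalue Vega for a Bermudan put in which the exercise rule is fitted once at the base volatility and then frozen for both bumped valuations, as the optimality condition permits.

```python
# Sketch: Vega of a Bermudan put by central bump with a FROZEN exercise rule.
# Assumptions: GBM dynamics, Longstaff-Schwartz-style regression on (1, S, S^2),
# common random numbers across the bumped valuations.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T, sigma = 100.0, 100.0, 0.05, 1.0, 0.2
n_steps, n_paths = 50, 20000
dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))        # common random numbers
payoff = lambda s: np.maximum(K - s, 0.0)

def gbm_paths(sig):
    """GBM paths driven by the fixed normals z; column t is time (t+1)*dt."""
    log_s = np.log(S0) + np.cumsum((r - 0.5 * sig**2) * dt
                                   + sig * np.sqrt(dt) * z, axis=1)
    return np.exp(log_s)

def fit_rule(paths):
    """Backward induction: per-date regression of continuation value."""
    cf = payoff(paths[:, -1])
    coeffs = [None] * n_steps
    for t in range(n_steps - 2, -1, -1):
        cf *= np.exp(-r * dt)                      # discount one step back
        itm = payoff(paths[:, t]) > 0
        if itm.sum() > 10:
            c = np.polyfit(paths[itm, t], cf[itm], 2)
            coeffs[t] = c
            ex = itm & (payoff(paths[:, t]) > np.polyval(c, paths[:, t]))
            cf[ex] = payoff(paths[ex, t])          # exercise where optimal
    return coeffs

def value(paths, coeffs):
    """Value the option using a PRE-COMPUTED (frozen) exercise rule."""
    alive = np.ones(n_paths, dtype=bool)
    pv = np.zeros(n_paths)
    for t in range(n_steps - 1):
        if coeffs[t] is None:
            continue
        p = payoff(paths[:, t])
        ex = alive & (p > 0) & (p > np.polyval(coeffs[t], paths[:, t]))
        pv[ex] = np.exp(-r * (t + 1) * dt) * p[ex]
        alive &= ~ex
    pv[alive] = np.exp(-r * T) * payoff(paths[alive, -1])
    return pv.mean()

h = 0.01
rule = fit_rule(gbm_paths(sigma))                  # fitted once, at base sigma
vega = (value(gbm_paths(sigma + h), rule)
        - value(gbm_paths(sigma - h), rule)) / (2 * h)
print(f"frozen-rule Vega estimate: {vega:.2f}")
```

Because the rule is frozen, the bumped valuations are cheap re-runs of `value` rather than full re-optimizations, and the bias introduced is second order by the ( Optimal stopping ) condition.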

The rest of the calculation may follow the section ( Pathwise differentiation ).
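As a small hedged sketch of that pathwise step (again not the book's code; the threshold exercise rule and the parameters are illustrative assumptions): for GBM the pathwise derivative is $\partial S_{t}/\partial S_{0}=S_{t}/S_{0}$, so with the stopping time frozen, the put Delta is $E\left[-\mathbf{1}_{\{S_{\tau}<K\}}\,e^{-r\tau}\,S_{\tau}/S_{0}\right]$.

```python
# Sketch: pathwise Delta of a put under a FROZEN stopping rule.
# Assumptions: GBM dynamics; a simple frozen rule "exercise the first time
# S drops below b, otherwise hold to T" stands in for the fitted rule.
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sig, T, n, m, b = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 40000, 90.0
dt = T / n

z = rng.standard_normal((m, n))
paths = S0 * np.exp(np.cumsum((r - 0.5 * sig**2) * dt
                              + sig * np.sqrt(dt) * z, axis=1))

# frozen stopping rule: first column where S < b, else the final column
hit = paths < b
tau_idx = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
s_tau = paths[np.arange(m), tau_idx]
tau = (tau_idx + 1) * dt                           # column t is time (t+1)*dt

# put payoff Pi(s) = (K - s)+ has Pi'(s) = -1 on {s < K};
# pathwise derivative of S_tau in S0 is S_tau / S0 for GBM
itm = (s_tau < K).astype(float)
delta_pw = np.mean(-itm * np.exp(-r * tau) * s_tau / S0)
print(f"pathwise Delta estimate: {delta_pw:.4f}")
```

The estimator differentiates only the payoff along each path; the stopping index itself is held fixed, which the preceding argument justifies.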
