Longstaff-Schwartz technique.

This is a short summary of the technique presented in [Longstaff].

We would like to calculate the quantity $$\sup_{\tau}E\left[ \exp\left( -\int_{0}^{\tau}r\left( X_{s}\right) ds\right) h\left( X_{s},s\leq\tau\right) \right] ,$$ where $X_{t}$ is a stochastic process in $R^{N}$ holding all the state variables, $r\left( x\right) $ is a deterministic function representing the interest rate term structure, and $h$ is the known payoff function depending on the path $\left\{ X_{s},s\leq\tau\right\} $ up to the moment of exercise $\tau$. The functional dependence of the moment of exercise $\tau$ on the state variables $\left\{ X_{s},s\leq\tau\right\} $ is the subject of optimization.

Suppose $X_{k,\omega,n}$ is the result of Monte-Carlo simulation of the stochastic process $X_{t}$, with $k=0,...,K$ being the time index, $\omega$ being the simulation index and $n$ being the dimension index; $h_{k,\omega}$ is the result of immediate exercise at time $k$ on path $\omega$, $h_{k,\omega}=h\left( X_{s,\omega},s\leq k\right) $; and $d_{k,\omega}$ is the discount factor between the neighbor indexes $k$ and $k+1$.
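A minimal sketch of producing the arrays $X_{k,\omega}$, $h_{k,\omega}$, $d_{k,\omega}$, assuming (these choices are illustrative, not from the text) a one-dimensional geometric Brownian motion for $X_{t}$, a constant short rate $r$, and an American put payoff:

```python
import numpy as np

def simulate_paths(X0=100.0, r=0.05, sigma=0.2, T=1.0, K_steps=50,
                   n_paths=10000, strike=100.0, seed=0):
    """Simulate X_{k,omega} on a uniform time grid, with h_{k,omega}
    the immediate-exercise value and d_{k,omega} the one-step discount."""
    rng = np.random.default_rng(seed)
    dt = T / K_steps
    # X[k, omega]: state variable at time index k on path omega
    X = np.empty((K_steps + 1, n_paths))
    X[0] = X0
    for k in range(K_steps):
        z = rng.standard_normal(n_paths)
        X[k + 1] = X[k] * np.exp((r - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z)
    h = np.maximum(strike - X, 0.0)       # immediate exercise value h_{k,omega}
    d = np.full_like(X, np.exp(-r * dt))  # discount factor d_{k,omega}
    return X, h, d
```

With a state-dependent rate $r\left( x\right) $ the discount array would simply become path dependent, `d = np.exp(-r_func(X) * dt)`.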

Introduce an array $Y_{k,\omega}$ and a set of functions $\left\{ f_{j}\right\} _{j=1,...,J}$ acting $f_{j}:R^{N}\rightarrow R$.

Start with $$Y_{K,\omega}=h_{K,\omega}.$$

For $k=K-1,K-2,...,1$ do the following:

(a) $\tilde{Y}_{k,\omega}=d_{k,\omega}Y_{k+1,\omega}$,

(b) find the coefficients $\left\{ c_{j}\right\} $ that minimize $\sum_{\omega}\left( \tilde{Y}_{k,\omega}-\sum_{j}c_{j}f_{j}\left( X_{k,\omega}\right) \right) ^{2}$ and set $C_{k,\omega}=\sum_{j}c_{j}f_{j}\left( X_{k,\omega}\right) $,

(c) $Y_{k,\omega}=h_{k,\omega}$ if $h_{k,\omega}\geq C_{k,\omega}$,

(d) $Y_{k,\omega}=\tilde{Y}_{k,\omega}$ otherwise.

The answer is $\frac{1}{\left\vert \Omega\right\vert }\sum_{\omega}d_{0,\omega}Y_{1,\omega}$. We do not proceed to the step $k=0$ because the cross sectional information $\left\{ X_{k,\omega}\right\} _{\omega}$ collapses to a point at this step. The value obtained this way is biased high because this is a forward looking procedure. If we continue the MC simulation with the obtained strategy $\tau\left( \omega\right) $ then we get a biased-low value because the strategy is suboptimal.
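The backward induction above can be sketched in Python with NumPy. The basis functions $f_{j}$ are taken to be the monomials $1,x,x^{2}$ and the state is assumed one-dimensional; both are illustrative assumptions, not prescribed by the text. The arrays `X`, `h`, `d` hold $X_{k,\omega}$, $h_{k,\omega}$, $d_{k,\omega}$ as defined above.

```python
import numpy as np

def longstaff_schwartz(X, h, d):
    """Backward induction, steps (a)-(d); X, h, d are (K+1, n_paths) arrays."""
    K = X.shape[0] - 1
    Y = h[K].copy()                        # start: Y_{K,omega} = h_{K,omega}
    for k in range(K - 1, 0, -1):          # k = K-1, K-2, ..., 1
        Y = d[k] * Y                       # (a) discount Y_{k+1} one step back
        A = np.vander(X[k], 3)             # (b) basis f_j: columns x^2, x, 1
        c, *_ = np.linalg.lstsq(A, Y, rcond=None)
        C = A @ c                          # continuation estimate C_{k,omega}
        # (c)/(d): exercise where the immediate payoff beats continuation.
        # [Longstaff] restricts the comparison to in-the-money paths.
        exercise = (h[k] > 0.0) & (h[k] >= C)
        Y = np.where(exercise, h[k], Y)
    return float(np.mean(d[0] * Y))        # answer: (1/|Omega|) sum_omega d_0 Y_1
```

Here the least-squares fit of step (b) is done with `np.linalg.lstsq`; it regresses over all paths, whereas [Longstaff] restricts the regression itself to in-the-money paths as well.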

The motivation for the steps above is the following. $Y_{k,\omega}$ is the value of the quantity in question at time $k$ given the information $\left\{ X_{s,\omega},s\leq k\right\} $. Hence, the starting condition is obvious. The sum $\sum_{j}c_{j}f_{j}\left( X_{k,\omega}\right) $ is used to construct a function that depends only on the available information and best approximates $Y$. Hence, we discount the $Y$ from the previous step in (a), find the best approximation in (b), choose the better of exercise and continuation, and calculate the new $Y$ in (c) and (d).

The step (b) may be performed using the Normal Equations technique presented in the next section.

The step (b) is unstable if the time step is small and $k$ is close to the origin. In such a situation $\left\{ X_{k,\omega}\right\} _{\omega}$ does not contain much cross-$\omega$ information because all the $X_{k,\omega}$ originate from a single point $X_{0}$ and have not had time to evolve. For this reason the procedure is not effective if exercise is possible immediately.

a. Normal Equations technique.


Copyright 2007