I. Basic math.
 II. Pricing and Hedging.
 III. Explicit techniques.
 IV. Data Analysis.
 V. Implementation tools.
 1 Finite differences.
 2 Gauss-Hermite Integration.
 3 Asymptotic expansions.
 4 Monte-Carlo.
 5 Convex Analysis.
 A. Basic concepts of convex analysis.
 B. Caratheodory's theorem.
 C. Relative interior.
 D. Recession cone.
 E. Intersection of nested convex sets.
 F. Preservation of closeness under linear transformation.
 G. Weierstrass Theorem.
 H. Local minima of convex function.
 I. Projection on convex set.
 J. Existence of solution of convex optimization problem.
 K. Partial minimization of convex functions.
 L. Hyperplanes and separation.
 M. Nonvertical separation.
 N. Minimal common and maximal crossing points.
 O. Minimax theory.
 Q. Polar cones.
 R. Polyhedral cones.
 S. Extreme points.
 T. Directional derivative and subdifferential.
 U. Feasible direction cone, tangent cone and normal cone.
 V. Optimality conditions.
 W. Lagrange multipliers for equality constraints.
 X. Fritz John optimality conditions.
 Y. Pseudonormality.
 Z. Lagrangian duality.
 AA. Conjugate duality.
 VI. Basic Math II.
 VII. Implementation tools II.
 VIII. Bibliography
 Notation. Index. Contents.

## Pseudonormality.

We use the notation of the problem ( Smooth optimization problem ).

Definition

(Pseudonormality). The feasible vector $x^*$ is called "pseudonormal" if one cannot find the vectors $\lambda\in\mathbb{R}^m$, $\mu\in\mathbb{R}^r$ and a sequence $\{x_k\}\subset X$ such that

1. $-\left(\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)+\sum_{j=1}^{r}\mu_j\nabla g_j\left(x^*\right)\right)\in N_X\left(x^*\right)$,

2. $\mu_j\geq0$, $j=1,\dots,r$, and $\mu_j g_j\left(x^*\right)=0$, $j=1,\dots,r$,

3. $x_k\rightarrow x^*$ and $\sum_{i=1}^{m}\lambda_i h_i\left(x_k\right)+\sum_{j=1}^{r}\mu_j g_j\left(x_k\right)>0$ for all $k$.

Note that (1) implies that the proposition ( Fritz John conditions ) cannot take place with $\mu_0=0$: the multipliers and the sequence delivered by that proposition would then satisfy the conditions (1)-(3) above. The conditions (2),(3) imply that the components of $\mu$ are "informative" in the sense that the set $\left\{j\mid\mu_j>0\right\}$ of the proposition ( Fritz John conditions ) is nonempty and the non-zero components of $\mu$ mark those conditions $g_j(x)\leq0$ that are "active" ( the sequence $\{x_k\}$ of the proposition ( Fritz John conditions )'s proof violates these conditions and the $x^*$ lies on the boundary set by such conditions).

We introduce the notation $$A\left(x^*\right)=\left\{j\in\{1,\dots,r\}\mid g_j\left(x^*\right)=0\right\}.$$ The condition 2 of the above definition may be equivalently written as $$\mu_j\geq0,\ j\in A\left(x^*\right),\quad\mu_j=0,\ j\notin A\left(x^*\right).$$
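As a concrete illustration of a failure of pseudonormality (a hypothetical example, not from the text): for the single equality constraint $h(x)=x^2$ on $X=\mathbb{R}$ the feasible point $x^*=0$ is not pseudonormal, since $\lambda=1$ and $x_k=1/k$ satisfy the conditions 1-3. A minimal numeric sketch of this check:

```python
# Toy check that x* = 0 is NOT pseudonormal for the single equality
# constraint h(x) = x^2 on X = R (hypothetical example, not from the
# text).  We exhibit lambda = 1 and x_k = 1/k satisfying conditions 1-3.

def h(x):          # equality constraint, h(x*) = 0 at x* = 0
    return x * x

def grad_h(x):     # gradient of h
    return 2.0 * x

x_star = 0.0
lam = 1.0

# Condition 1 (X = R, so N_X(x*) = {0}): lambda * grad h(x*) = 0.
cond1 = (lam * grad_h(x_star) == 0.0)

# Condition 3: x_k -> x* and lambda * h(x_k) > 0 for all k.
xs = [1.0 / k for k in range(1, 50)]
cond3 = all(lam * h(xk) > 0.0 for xk in xs)

print(cond1 and cond3)   # True -> both conditions hold, not pseudonormal
```

The failure comes from the vanishing gradient $\nabla h(0)=0$: the condition 1 carries no information, while $h>0$ everywhere off the feasible set.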

Proposition

(Constraint qualification 1). If $X=\mathbb{R}^n$ and the vectors $\nabla h_i\left(x^*\right)$, $i=1,\dots,m$, $\nabla g_j\left(x^*\right)$, $j\in A\left(x^*\right)$, are linearly independent then the vector $x^*$ is pseudonormal.

Proof

Since $X=\mathbb{R}^n$ we have $N_X\left(x^*\right)=\{0\}$. Hence, the conditions 1 and 2 of the definition ( Pseudonormality ), if true, would imply the linear dependence $$\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)+\sum_{j\in A\left(x^*\right)}\mu_j\nabla g_j\left(x^*\right)=0$$ with $(\lambda,\mu)\neq0$ (the condition 3 excludes $\lambda=0$, $\mu=0$). Therefore, such $\lambda$ and $\mu$, as in the definition ( Pseudonormality ), cannot exist.
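The linear-independence test of the proposition can be verified numerically. The sketch below assumes hypothetical constraints (not from the text): the equality $h(x)=x_1+x_2-2$ and the active inequality $g(x)=x_1^2-x_2$ at $x^*=(1,1)$:

```python
# Numeric check of Constraint qualification 1 at the hypothetical
# point x* = (1, 1) with
#   h(x) = x1 + x2 - 2      equality,   grad h(x*)  = (1, 1)
#   g(x) = x1^2 - x2        inequality, g(x*) = 0 (active),
#                                       grad g(x*)  = (2, -1)
# x* is pseudonormal if the active gradients are linearly independent.

def rank(rows, eps=1e-12):
    """Rank of a small matrix by Gaussian elimination."""
    m = [list(r) for r in rows]
    rk, ncols = 0, len(m[0])
    for col in range(ncols):
        piv = next((i for i in range(rk, len(m)) if abs(m[i][col]) > eps), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]          # move pivot row up
        for i in range(len(m)):
            if i != rk and abs(m[i][col]) > eps:
                f = m[i][col] / m[rk][col]     # eliminate the column
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

grads = [(1.0, 1.0),    # grad h(x*)
         (2.0, -1.0)]   # grad g(x*), g is active at x*
print(rank(grads) == len(grads))  # True -> the qualification holds
```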

Proposition

(Constraint qualification 2). If $X=\mathbb{R}^n$, the vectors $\nabla h_i\left(x^*\right)$, $i=1,\dots,m$, are linearly independent, and there exists a $d\in\mathbb{R}^n$ such that $$\left(\nabla h_i\left(x^*\right),d\right)=0,\ i=1,\dots,m,\quad\left(\nabla g_j\left(x^*\right),d\right)<0,\ j\in A\left(x^*\right),$$ then the vector $x^*$ is pseudonormal.

Here the $\left(\cdot,d\right)$-sign after the brackets indicates that the summation of the scalar product is applied to the $n$ components of the gradients $\nabla h_i\left(x^*\right)$, $\nabla g_j\left(x^*\right)$, not to the indexes $i,j$.

Proof

In the condition 1 of the definition ( Pseudonormality ) the LHS is a vector with the $n$ components of the gradients $\nabla h_i\left(x^*\right)$, $\nabla g_j\left(x^*\right)$; the summation applies to the indexes $i$ and $j$ of $\lambda_i$ and $\mu_j$. We apply the scalar product with $d$ with respect to the components of the gradients and write the following consequence of the condition 1: $$\sum_{i=1}^{m}\lambda_i\left(\nabla h_i\left(x^*\right),d\right)+\sum_{j\in A\left(x^*\right)}\mu_j\left(\nabla g_j\left(x^*\right),d\right)=0.$$ Here we used that $\mu_j=0$ for $j\notin A\left(x^*\right)$. We rearrange the terms as follows: $$\sum_{i=1}^{m}\lambda_i\left(\nabla h_i\left(x^*\right),d\right)=-\sum_{j\in A\left(x^*\right)}\mu_j\left(\nabla g_j\left(x^*\right),d\right).$$ Therefore, the $\lambda$ and $\mu$ as in the definition ( Pseudonormality ) cannot exist: the first sum is zero by the condition $\left(\nabla h_i\left(x^*\right),d\right)=0$ of the proposition, while the second sum is strictly negative by the condition $\left(\nabla g_j\left(x^*\right),d\right)<0$ of the proposition and the condition 2 of the definition ( Pseudonormality ) whenever $\mu\neq0$. In the remaining case $\mu=0$ the condition 1 reads $\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)=0$ and the linear independence gives $\lambda=0$, which contradicts the condition 3.
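The direction $d$ of the proposition can be exhibited explicitly in small cases. A sketch with hypothetical constraints (not from the text) $h(x)=x_2$, $g_1(x)=x_1$, $g_2(x)=x_1+x_2^2$ at $x^*=(0,0)$, where both inequalities are active and $d=(-1,0)$:

```python
# Numeric check of Constraint qualification 2 at the hypothetical
# point x* = (0, 0) with
#   h(x)  = x2            grad h(x*)  = (0, 1)  (linearly independent)
#   g1(x) = x1            grad g1(x*) = (1, 0)  active at x*
#   g2(x) = x1 + x2^2     grad g2(x*) = (1, 0)  active at x*
# The direction d = (-1, 0) satisfies <grad h, d> = 0 and
# <grad gj, d> < 0 for the active j, so x* is pseudonormal.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

grad_h = (0.0, 1.0)
grad_g = [(1.0, 0.0), (1.0, 0.0)]   # gradients of active inequalities
d      = (-1.0, 0.0)

ok = dot(grad_h, d) == 0.0 and all(dot(g, d) < 0.0 for g in grad_g)
print(ok)  # True -> the direction d certifies the qualification
```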

Proposition

(Constraint qualification 3). If $X=\mathbb{R}^n$, the functions $h_i$, $i=1,\dots,m$, are affine and the functions $g_j$, $j=1,\dots,r$, are concave then the vector $x^*$ is pseudonormal.

Proof

By the conditions on $h_i$ and $g_j$ we have $$h_i(x)=h_i\left(x^*\right)+\left(\nabla h_i\left(x^*\right),x-x^*\right),\quad g_j(x)\leq g_j\left(x^*\right)+\left(\nabla g_j\left(x^*\right),x-x^*\right)$$ for any $x\in\mathbb{R}^n$. Therefore, for any $\lambda$, $\mu\geq0$ and any $x$, $$\sum_{i=1}^{m}\lambda_i h_i(x)+\sum_{j=1}^{r}\mu_j g_j(x)\leq\left(\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)+\sum_{j=1}^{r}\mu_j\nabla g_j\left(x^*\right),x-x^*\right)+\sum_{i=1}^{m}\lambda_i h_i\left(x^*\right)+\sum_{j=1}^{r}\mu_j g_j\left(x^*\right).$$ By the inclusion $x^*\in C$ (feasibility: $h_i\left(x^*\right)=0$, $g_j\left(x^*\right)\leq0$), the first sum is zero and the second sum is non-positive. Hence, if $\lambda$ and $\mu$ satisfy the condition 1 of the definition ( Pseudonormality ): $$\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)+\sum_{j=1}^{r}\mu_j\nabla g_j\left(x^*\right)=0,$$ then $\sum_{i=1}^{m}\lambda_i h_i(x)+\sum_{j=1}^{r}\mu_j g_j(x)\leq0$ for every $x$ and the condition 3 of the definition ( Pseudonormality ) must fail.
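The linearization bound used in this proof can be checked numerically on a toy instance (hypothetical data, not from the text): the affine $h(x)=3x$ and the concave $g(x)=-x^2$ on $X=\mathbb{R}$ with $x^*=0$:

```python
# Numeric illustration of the key inequality in the proof of
# Constraint qualification 3 with hypothetical one-dimensional data:
#   h(x) = 3*x       (affine,  h(x*) = 0 at x* = 0)
#   g(x) = -x*x      (concave, g(x*) = 0, active at x* = 0)
# For every x, lambda, and mu >= 0:
#   lambda*h(x) + mu*g(x)
#     <= (lambda*h'(x*) + mu*g'(x*)) * (x - x*) + mu*g(x*),
# so once condition 1 makes the gradient term vanish, the strictly
# positive sum of condition 3 is impossible.

h  = lambda x: 3.0 * x
g  = lambda x: -x * x
dh = 3.0          # h'(0)
dg = 0.0          # g'(0)
x_star = 0.0

ok = True
for lam in (-2.0, 0.0, 1.5):
    for mu in (0.0, 1.0, 4.0):          # mu must be nonnegative
        for x in [k / 10.0 for k in range(-30, 31)]:
            lhs = lam * h(x) + mu * g(x)
            rhs = (lam * dh + mu * dg) * (x - x_star) + mu * g(x_star)
            ok = ok and lhs <= rhs + 1e-12
print(ok)  # True -> the linearization bounds the constraint sum above
```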

Proposition

(Constraint qualification 4). Let the vector $x^*$ be pseudonormal for the set

$$C=\left\{x\in X\mid h_i(x)=0,\ i=1,\dots,m\right\}$$

and $g_j\left(x^*\right)=0$ for some $j\in\{1,\dots,r\}$. Furthermore, there exists a $d\in T_X\left(x^*\right)$ such that

$$\left(\nabla h_i\left(x^*\right),d\right)=0,\ i=1,\dots,m,\quad\left(\nabla g_j\left(x^*\right),d\right)<0,\ j\in A\left(x^*\right).$$

Then the vector $x^*$ is pseudonormal.

Proof

Note that $\mu\neq0$ because if $\mu=0$ then the same $\lambda$ and $\left\{x_k\right\}$ satisfy the conditions 1,2,3 of the definition ( Pseudonormality ) for the set $C$, contradicting the pseudonormality of $x^*$ for $C$. The rest of the proof is a repetition of the proof of the proposition ( Constraint qualification 2 ): we take the scalar product of the condition 1 with $d$ and use $\left(z,d\right)\leq0$ for $z\in N_X\left(x^*\right)$, $d\in T_X\left(x^*\right)$.

Proposition

(Constraint qualification 5). Assume that the following conditions are satisfied.

1. The functions $h_i$, $i=\bar m+1,\dots,m$, are linear for some $\bar m$, $0\leq\bar m\leq m$.

2. There does not exist a $\lambda\in\mathbb{R}^m$ such that $$-\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)\in N_X\left(x^*\right)$$ and not all $\lambda_i$, $i=1,\dots,\bar m$, are zero.

3. Let $V=\left\{x\mid h_i(x)=0,\ i=\bar m+1,\dots,m\right\}$. Either $\operatorname{Interior}(X)\cap V\neq\emptyset$ or $X$ is convex and $\operatorname{ri}(X)\cap V\neq\emptyset$.

4. There exists a $d\in T_X\left(x^*\right)$ such that $$\left(\nabla h_i\left(x^*\right),d\right)=0,\ i=1,\dots,m,\quad\left(\nabla g_j\left(x^*\right),d\right)<0,\ j\in A\left(x^*\right).$$

Then the vector $x^*$ is pseudonormal.

Proof

We assume that all the conditions of the definition ( Pseudonormality ) hold and reach a contradiction.

We introduce the notation $$z=-\left(\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)+\sum_{j=1}^{r}\mu_j\nabla g_j\left(x^*\right)\right).$$ According to the condition 4 of this proposition and the condition 2 of the definition ( Pseudonormality ), there exists a $d\in T_X\left(x^*\right)$ such that $$\sum_{i=1}^{m}\lambda_i\left(\nabla h_i\left(x^*\right),d\right)+\sum_{j=1}^{r}\mu_j\left(\nabla g_j\left(x^*\right),d\right)=\sum_{j\in A\left(x^*\right)}\mu_j\left(\nabla g_j\left(x^*\right),d\right)<0\quad\text{if }\mu\neq0.$$ The condition 1 of the definition ( Pseudonormality ) requires that $z\in N_X\left(x^*\right)$, thus $$\left(z,d\right)\leq0\quad\text{and}\quad\sum_{i=1}^{m}\lambda_i\left(\nabla h_i\left(x^*\right),d\right)+\sum_{j=1}^{r}\mu_j\left(\nabla g_j\left(x^*\right),d\right)=-\left(z,d\right)\geq0.$$ Hence, we have already proven the statement for the case $\mu\neq0$.

It remains to consider the case $\mu=0$ under the assumption that the conditions 1,2,3 of the definition ( Pseudonormality ) and the conditions 1,2,3,4 of this proposition are true, and to arrive at a contradiction. By the assumption $\mu=0$, we have $z=-\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)$ and by the condition 3 of the definition ( Pseudonormality ) we have $$\sum_{i=1}^{m}\lambda_i h_i\left(x_k\right)>0,\quad x_k\rightarrow x^*.$$ The condition 1 of the definition ( Pseudonormality ) implies $$-\sum_{i=1}^{m}\lambda_i\nabla h_i\left(x^*\right)\in N_X\left(x^*\right).$$ Hence, by the condition 2 of the proposition, all $\lambda_i$, $i=1,\dots,\bar m$, are zero: $$z=-\sum_{i=\bar m+1}^{m}\lambda_i\nabla h_i\left(x^*\right),\quad\sum_{i=\bar m+1}^{m}\lambda_i h_i\left(x_k\right)>0.$$ In particular, since the $h_i$, $i>\bar m$, are linear and $h_i\left(x^*\right)=0$, $$\left(-z,x_k-x^*\right)=\sum_{i=\bar m+1}^{m}\lambda_i h_i\left(x_k\right)>0,$$ so $z\neq0$. By the condition 3 there is a $\bar x$ from the interior of $X$ (or from $\operatorname{ri}(X)$ in the convex case) such that $$h_i\left(\bar x\right)=0,\ i=\bar m+1,\dots,m.$$ Hence, $$\left(z,\bar x-x^*\right)=-\sum_{i=\bar m+1}^{m}\lambda_i h_i\left(\bar x\right)=0.$$ Hence, we have found a non-zero $z\in N_X\left(x^*\right)$ and an interior point $\bar x$ of $X$ such that $$\left(z,\bar x-x^*\right)=0.$$ This is a contradiction. For an interior point $\bar x$ of $X$ and a non-zero $z$ from the cone $N_X\left(x^*\right)$ we must have $$\left(z,\bar x-x^*\right)<0.$$
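The final cone argument can be illustrated numerically (hypothetical data, not from the text): for $X=\left\{x\in\mathbb{R}^2\mid x_1\leq0\right\}$ and $x^*=0$ the normal cone is $N_X\left(x^*\right)=\{(t,0)\mid t\geq0\}$, and any non-zero $z$ from it satisfies $\left(z,\bar x-x^*\right)<0$ at every interior $\bar x$:

```python
# Numeric illustration of the final step in the proof of Constraint
# qualification 5: for a nonzero z in the normal cone N_X(x*) and an
# interior point xbar of X, the scalar product (z, xbar - x*) is
# strictly negative.  Hypothetical data:
#   X = { x in R^2 : x1 <= 0 },  x* = (0, 0),
#   N_X(x*) = { (t, 0) : t >= 0 }.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x_star = (0.0, 0.0)
z = (2.0, 0.0)                      # nonzero element of N_X(x*)

# a few interior points of X (x1 < 0) -- the inequality is strict
interior_points = [(-1.0, 0.5), (-0.3, -2.0), (-5.0, 1.0)]
ok = all(dot(z, (p[0] - x_star[0], p[1] - x_star[1])) < 0.0
         for p in interior_points)
print(ok)  # True: equality (z, xbar - x*) = 0 would force z = 0
```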
