
Optimization notes, by Joe

July 11, 2012

1 Notation, Matrices, etc.

1.1 First derivative vectors/matrices and notation
For one function of n variables, f(x) = f(x_1, ..., x_n), the gradient is the column vector of partials:

    \nabla f(x) = \begin{bmatrix} \frac{\partial f(x)}{\partial x_1} \\ \vdots \\ \frac{\partial f(x)}{\partial x_n} \end{bmatrix}

Df(x) = [\nabla f(x)]^T if there is only one function. For K functions of n variables, Df(x) stacks the transposed gradients as rows:

    Df(x) = \begin{bmatrix} [\nabla f^1(x)]^T \\ [\nabla f^2(x)]^T \\ \vdots \\ [\nabla f^K(x)]^T \end{bmatrix} = \begin{bmatrix} \frac{\partial f^1(x)}{\partial x_1} & \cdots & \frac{\partial f^1(x)}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f^K(x)}{\partial x_1} & \cdots & \frac{\partial f^K(x)}{\partial x_n} \end{bmatrix}

which is a K x n matrix.

Importantly, rank[Df(x)] = k when the k functions' gradients are linearly independent (which requires there be at least as many variables as functions).
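These objects are easy to sanity-check numerically. Below is a minimal sketch (my own construction, not from the course) that builds Df(x) by central finite differences and checks the rank condition; the helper name and the example system are assumptions for illustration.

    # Minimal sketch (my own): build Df(x) row-by-row by central finite
    # differences and check rank(Df) = K. The example system is made up.
    import numpy as np

    def jacobian(fs, x, h=1e-6):
        # Row k is [grad f^k(x)]^T, so the result is the K x n matrix Df(x).
        K, n = len(fs), len(x)
        J = np.zeros((K, n))
        for k, f in enumerate(fs):
            for i in range(n):
                e = np.zeros(n); e[i] = h
                J[k, i] = (f(x + e) - f(x - e)) / (2 * h)
        return J

    # K = 2 functions of n = 2 variables with linearly independent gradients.
    fs = [lambda x: x[0]**2 + x[1], lambda x: x[0] + x[1]**2]
    J = jacobian(fs, np.array([1.0, 2.0]))
    print(np.round(J, 4))                  # [[2, 1], [1, 4]]
    print(np.linalg.matrix_rank(J))        # 2, i.e. rank = K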

1.2 Those confusing bordered matrices


An entire bordered Hessian of a single function of n variables looks like this:

    B^2(x) = \begin{bmatrix} 0 & [\nabla f(x)]^T \\ \nabla f(x) & H^2(x) \end{bmatrix} = \begin{bmatrix} 0 & \frac{\partial f(x)}{\partial x_1} & \cdots & \frac{\partial f(x)}{\partial x_n} \\ \frac{\partial f(x)}{\partial x_1} & \frac{\partial^2 f(x)}{\partial x_1^2} & \cdots & \frac{\partial^2 f(x)}{\partial x_1 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f(x)}{\partial x_n} & \frac{\partial^2 f(x)}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f(x)}{\partial x_n^2} \end{bmatrix}

which is (n+1) x (n+1).

In this context C_k is the bordered (k+1) x (k+1) leading principal submatrix, usually used to establish some form of definiteness and/or concavity:

    C_{k=n} = B^2(x), \qquad C_{k=1} = \begin{bmatrix} 0 & \frac{\partial f(x)}{\partial x_1} \\ \frac{\partial f(x)}{\partial x_1} & \frac{\partial^2 f(x)}{\partial x_1^2} \end{bmatrix}

where C_{k=1} is the smallest submatrix that we care about the sign of.

Definiteness: usually follows the same pattern as for a normal matrix, but you exclude C_0 = [0]. So "negative definite" is (-1)^r |C_r| > 0 for r = 1...k, i.e.

    \det \begin{bmatrix} 0 & \frac{\partial f(x)}{\partial x_1} \\ \frac{\partial f(x)}{\partial x_1} & \frac{\partial^2 f(x)}{\partial x_1^2} \end{bmatrix} < 0, \quad \text{then} \quad \det \begin{bmatrix} 0 & \frac{\partial f(x)}{\partial x_1} & \frac{\partial f(x)}{\partial x_2} \\ \frac{\partial f(x)}{\partial x_1} & \frac{\partial^2 f(x)}{\partial x_1^2} & \frac{\partial^2 f(x)}{\partial x_1 \partial x_2} \\ \frac{\partial f(x)}{\partial x_2} & \frac{\partial^2 f(x)}{\partial x_2 \partial x_1} & \frac{\partial^2 f(x)}{\partial x_2^2} \end{bmatrix} > 0, \quad \text{etc.}
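As a concrete check of this sign pattern, here is a small sketch (my own, with assumed helper names) that builds B^2(x) for f(x, y) = x^{1/2} y^{1/2} at a point in R^2_{++} and verifies (-1)^r |C_r| > 0 for r = 1, 2, using the analytic gradient and Hessian.

    # Sketch: verify the sign pattern (-1)^r |C_r| > 0 for the bordered Hessian
    # of f(x, y) = x^(1/2) y^(1/2) at (1, 2). Helper names are my own.
    import numpy as np

    def bordered_hessian(grad, hess):
        n = len(grad)
        B = np.zeros((n + 1, n + 1))
        B[0, 1:] = grad      # top border:  [0, grad f(x)^T]
        B[1:, 0] = grad      # left border
        B[1:, 1:] = hess     # lower-right block: D^2 f(x)
        return B

    x, y = 1.0, 2.0
    grad = np.array([0.5 * (y / x)**0.5, 0.5 * (x / y)**0.5])
    hess = 0.25 * np.array([[-y**0.5 / x**1.5, 1 / (x * y)**0.5],
                            [1 / (x * y)**0.5, -x**0.5 / y**1.5]])
    B = bordered_hessian(grad, hess)
    for r in (1, 2):   # C_r is the leading (r+1) x (r+1) submatrix
        minor = np.linalg.det(B[:r + 1, :r + 1])
        print(r, (-1)**r * minor > 0)      # True, True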

Other things: see the equality constraint notes for definiteness on a set, and the quasi-concavity notes for applications of this stuff.

Generally, what is up with bordered matrices and definiteness? It's never quite thoroughly explained, but I see it as the representation of definiteness of a matrix restricted to a set involving 0, i.e. definiteness restricted to Z = \{z \in R^n : Bz = 0\}. In the bordered Hessian case this is like the FOC \nabla f(x) = 0 (so B is the gradient vector and z is any real vector). For quasi-concavity this appears to come from the first derivative definition of it, which involves Df(x)(y - x) \geq 0 (so Df is B and y - x = z?).

2 Weierstrass

2.1 Formal statement

D \subseteq R^n, D compact and nonempty, f : D \to R^1 continuous \implies \exists x_1, x_2 \in D : f(x_1) \leq f(x) \leq f(x_2) \ \forall x \in D

2.2 Elements

2.2.1 Nonempty

D \neq \emptyset

2.2.2 Closed domain

Sequential definition: every \{x_k\} \subseteq D with x_k \to x satisfies x \in D. True if the domain is defined by weak inequalities (limits preserve weak inequalities).

2.2.3 Bounded domain

\exists r > 0 : D \subseteq B(0, r), where B(0, r) = \{x \in R^n : ||x|| < r\}, i.e. find some positive real number that creates a ball around the origin containing your domain. This is Kim's preferred definition.

2.2.4 Continuous function

Sequential definition: \{x_k\} \subseteq D with x_k \to x implies f(x_k) \to f(x). All polynomial functions are continuous (limit laws).

2.3 Notes

Weierstrass guarantees existence of a solution to the problem max / min \{f(x) \mid x \in D\}. Jen's notes 1-20: f : S \to R, S compact, f continuous \implies \exists x^* \in S : f(x) \leq f(x^*) \ \forall x \in S (use to prove Weierstrass). ||x|| = \sqrt{x \cdot x} = \sqrt{\sum_{i=1}^n x_i^2}. Limit laws: limits preserve weak inequalities, and limits preserve all normal operations (exponents, multiplication, addition, subtraction). Jen's problem set 7 has good practice questions and solutions.

2.4 Applications

My hopefully now correct solution to midterm 1, question 1. Show there is a solution to: min (x_1 + 2x_2) s.t. (x_1, x_2) \in V, where V = \{(x_1, x_2) \in R^2_+ : x_1^a x_2^b \geq 1\}, a, b \in R_{++}.

Nonempty: 1^a \cdot 1^b = 1 \geq 1, so (1, 1) \in V.

Bounded: at an optimum, x_1 + 2x_2 \leq f(1, 1) = 3 (minimization problem), so we can restrict attention to V' = \{(x_1, x_2) \in R^2_+ : x_1^a x_2^b \geq 1, x_1 + 2x_2 \leq 3\} without changing the solution. On V', 0 \leq x_1 \leq 3 and 0 \leq x_2 \leq 3/2, so ||x|| = (x_1^2 + x_2^2)^{1/2} \leq (9 + 9/4)^{1/2} < 4. Therefore \exists r = 4 > 0 : V' \subseteq B(0, r), where B(0, r) = \{x \in R^2 : ||x|| < r\}.

Closed: weak inequalities are preserved by limits. Let \{x_k\} \subseteq V' with x_k = (x_{1,k}, x_{2,k}) \to (x_1, x_2) = x. x_k \in V' \implies x_{1,k}^a x_{2,k}^b \geq 1 and x_{1,k} + 2x_{2,k} \leq 3. Taking limits: x_1^a x_2^b \geq 1 and x_1 + 2x_2 \leq 3 (limit laws, limits preserve weak inequalities). So x \in V' and V' is closed.

By Weierstrass a solution exists.
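Weierstrass only tells us a minimizer exists; purely as an illustration (not part of the midterm answer), the problem can also be solved numerically. The sketch below uses the special case a = b = 1, a parameter choice of mine, where the Lagrange conditions give x = (\sqrt{2}, 1/\sqrt{2}).

    # Illustration only: solve min x1 + 2*x2 s.t. x1*x2 >= 1, x >= 0 for the
    # special case a = b = 1 (my choice of parameters, not from the exam).
    import numpy as np
    from scipy.optimize import minimize

    res = minimize(lambda x: x[0] + 2 * x[1],
                   x0=np.array([1.0, 1.0]),
                   bounds=[(0, None), (0, None)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: x[0] * x[1] - 1}])
    print(res.x, res.fun)   # approx (1.414, 0.707), f* = 2*sqrt(2) ~ 2.828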

3 Concavity notes and theorems

3.1 Definitions
D \subseteq R^n, f : D \to R; x, y \in D, \lambda \in (0, 1)

Traditional definition:
f concave \iff f(\lambda x + (1 - \lambda)y) \geq \lambda f(x) + (1 - \lambda) f(y)
f strictly concave \iff f(\lambda x + (1 - \lambda)y) > \lambda f(x) + (1 - \lambda) f(y)

C^1 definition (D open and convex):
f concave \iff Df(x)(y - x) \geq f(y) - f(x) \ \forall x, y \in D

C^2 definition:
f concave \iff D^2 f(x) is nsd \forall x \in D (necessary and sufficient)
D^2 f(x) is nd \forall x \in D \implies f strictly concave (sufficient but not necessary)
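The C^2 condition is easy to probe numerically. A small sketch (my own; the tolerance and sampling scheme are arbitrary choices) that checks the Hessian of f(x, y) = x^{1/2} y^{1/2} for negative semi-definiteness at sampled points of R^2_{++} via its eigenvalues:

    # Sketch: test D^2 f(x) nsd pointwise (all eigenvalues <= 0) at random
    # sample points. Sampling only suggests concavity; it is not a proof.
    import numpy as np

    def hessian_nsd_at(hess_fn, p, tol=1e-9):
        return bool(np.all(np.linalg.eigvalsh(hess_fn(p)) <= tol))

    # Analytic Hessian of f(x, y) = x^(1/2) y^(1/2) on R^2_++ (singular but nsd).
    hess = lambda p: 0.25 * np.array(
        [[-p[1]**0.5 / p[0]**1.5, 1 / (p[0] * p[1])**0.5],
         [1 / (p[0] * p[1])**0.5, -p[0]**0.5 / p[1]**1.5]])

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.1, 5.0, size=(100, 2))
    print(all(hessian_nsd_at(hess, p) for p in pts))   # True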

3.2 Useful (?) theorems

D \subseteq R^n, f : D \to R, D convex

Openness and continuity
D open, f concave \implies f continuous on D
f concave \implies f continuous on interior(D)

Local maxes are global
D \subseteq R^n, f : D \to R, D convex, f concave: any local max of f is a global max of f

Uniqueness of global max
D \subseteq R^n, f : D \to R, D convex, f strictly concave: there is either one unique max or no solution

Dealing with R^n_+ not being an open set
If a function is continuous on R^n_+ and concave on R^n_{++}, it is concave on R^n_+

3.3 Applications/Notes

These come up below, but here are some highlights. Unconstrained optimization: D \subseteq R^n, f : D \to R, D convex, f concave \implies [x* is a max \iff Df(x*) = 0], i.e. FOCs guarantee an unconstrained max with concave functions over convex domains. Inequality constrained optimization: concavity allows us to use Slater's condition (with some other conditions) to replace the rank condition. Kim uses convexity/concavity in all his notes, never (I think) rank like Jen and Sundaram do initially.

4 Quasi-concavity notes and theorems

4.1 Definitions

D \subseteq R^n, f : D \to R; x, y \in D, \lambda \in (0, 1)

Traditional definition:
f quasi-concave \iff f(\lambda x + (1 - \lambda)y) \geq \min(f(x), f(y))
f strictly quasi-concave \iff f(\lambda x + (1 - \lambda)y) > \min(f(x), f(y))

C^1 definition (D open and convex):
f quasi-concave \iff [f(y) \geq f(x) \implies Df(x)(y - x) \geq 0] \ \forall x, y \in D

C^2 definition:

    B^2(x) = \begin{bmatrix} 0 & [\nabla f(x)]^T \\ \nabla f(x) & H^2(x) \end{bmatrix}

where C_k(x) is the (k+1) x (k+1) leading principal submatrix of B^2(x).

f quasi-concave \implies (-1)^k |C_k(x)| \geq 0 \ \forall x \in D (necessary)

This condition is identical to B^2(x) being negative semi-definite excluding the 1x1 matrix [0]. For the proof, note that this is clearly true for all x s.t. \nabla f(x) = 0, because all the determinants are 0! For the rest of the proof we would use the C^1 definition as a constraint of the problem max f(x) : Df(x)(y - x) \geq 0 \ \forall x, y \in D, with some combination of the KT second order conditions, which imply the rest of the bordered Hessian definition. See Sundaram p. 217 for this proof.

(-1)^k |C_k(x)| > 0 \ \forall x \in D \implies f is quasi-concave (sufficient but not necessary). This condition is identical to B^2(x) being negative definite excluding the 1x1 matrix [0]. Note that Jen's (8-15) and Kim's (Theorem 22) notes appear to list this wrong and say that it implies strict quasi-concavity, which contradicts Sundaram and De La Fuente. To prove strict quasi-concavity for sure, use the following: if D \subseteq R^n_{++} and f is monotonically increasing AND (-1)^k |C_k(x)| > 0 \ \forall x \in D, then f is STRICTLY quasi-concave (sufficient but not necessary).

Note the difference from the SOCs for regular concavity: here the NSD condition is just necessary but not sufficient, and the ND condition is sufficient but only for establishing quasi-concavity, not STRICT quasi-concavity. You need the additional monotonicity and domain restriction to establish strictness.

4.2 Useful (?) theorems

D \subseteq R^n, f : D \to R, D convex

Concavity and quasi-concavity
D open, f concave on D \implies f quasi-concave on D
D open, f strictly concave on D \implies f strictly quasi-concave on D

Non-decreasing functions and quasi-concavity 1
D \subseteq R^n, f : D \to R, D convex, f quasi-concave. Let \phi : R \to R be a monotone non-decreasing function. Then h = \phi(f(\cdot)) is quasi-concave (sufficient but not necessary). Not sure of the use of this theorem except that I think it implies the following, which is useful:

Non-decreasing functions and quasi-concavity 2
Any monotone transform of a concave function results in a quasi-concave function (sufficient, but not necessary). E.g. g(x, y) = xy = (f(x, y))^2 = (x^{1/2} y^{1/2})^2; since f(x, y) is concave, g(x, y) is quasi-concave. An easy way to establish quasi-concavity, then, is to show that a function is a monotone transformation of a concave function! (See the numerical spot-check at the end of this subsection.)

Uniqueness of global max
D \subseteq R^n, f : D \to R, D convex, f strictly quasi-concave: all local maxes are global maxes, and there is either one unique max or no solution

Dealing with R^n_+ not being an open set
If a function is continuous on R^n_+ and quasi-concave on R^n_{++}, it is quasi-concave on R^n_+
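A quick numerical spot-check of the g(x, y) = xy example against the traditional definition (random sampling, my own construction; a failed check would disprove quasi-concavity, while a passed check merely supports it):

    # Spot-check f(lam*x + (1-lam)*y) >= min(f(x), f(y)) for g(x, y) = x*y
    # on random pairs in R^2_+. No counterexample should appear.
    import numpy as np

    g = lambda p: p[0] * p[1]
    rng = np.random.default_rng(1)
    ok = True
    for _ in range(1000):
        x, y = rng.uniform(0, 5, 2), rng.uniform(0, 5, 2)
        lam = rng.uniform(0, 1)
        ok &= g(lam * x + (1 - lam) * y) >= min(g(x), g(y)) - 1e-12
    print(bool(ok))   # True: no counterexample found (a check, not a proof)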

4.3 Applications/Notes

5 Unconstrained Optimization

5.1 Goal

D \subseteq R^n, f : D \to R; we want points x* in D such that:
Local max of f on D: \exists \epsilon > 0 : f(x*) \geq f(x) \ \forall x \in B(x*, \epsilon) \cap D
Global max of f on D: f(x*) \geq f(x) \ \forall x \in D
Strict local/global max if the above inequalities are strict for x \neq x*

5.2 Necessary Conditions for local (global) max

If x* does not satisfy the following it is not a local (and therefore not a global) max.

5.2.1 Setup

D is an open set (excludes corner solutions); f is continuous and (twice) differentiable on D

5.2.2 Necessary FOC for local max

\nabla f(x*) = 0 (i.e. \frac{\partial f}{\partial x_i}(x*) = 0 \ \forall i)

5.2.3 Necessary SOC for local max

Hf(x*) is negative semi-definite, i.e. the matrix of cross-partials evaluated at x* is nsd

5.3 Sufficiency Conditions for local max

If x* satisfies the following it is a local max.

5.3.1 Setup

D is an open set (excludes corner solutions); f is continuous and twice differentiable on D

5.3.2 FOC for local max (+ SOC is sufficient)

\nabla f(x*) = 0

5.3.3 SOC for local max (+ FOC is sufficient)

Hf(x*) is negative definite, i.e. the matrix of cross-partials evaluated at x* is nd
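Putting 5.3 together as a computation, here is a sketch (the example function and names are mine) that verifies the FOC at a candidate point and then tests the Hessian for negative definiteness via its eigenvalues:

    # Sketch of the local sufficiency recipe for a made-up example:
    # f(x, y) = -(x - 1)^2 - 2*(y + 3)^2, with critical point (1, -3).
    import numpy as np

    grad = lambda p: np.array([-2 * (p[0] - 1), -4 * (p[1] + 3)])
    hess = lambda p: np.array([[-2.0, 0.0], [0.0, -4.0]])

    x_star = np.array([1.0, -3.0])
    print(np.allclose(grad(x_star), 0))                        # FOC holds
    print(bool(np.all(np.linalg.eigvalsh(hess(x_star)) < 0)))  # nd => strict local max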

5.4 Sufficiency Conditions for global max

If x* satisfies the following it is a global max.

5.4.1 Setup

D is an open and convex set (excludes corner solutions); f is continuous and twice differentiable on D

5.4.2 f is concave

x* is guaranteed to be a global max if: \nabla f(x*) = 0 and f is concave on D

5.4.3 Hessian is NSD

x* is guaranteed to be a global max if: \nabla f(x*) = 0 and Hf(x) is NSD \forall x \in D (entire domain, not just a neighborhood around the critical point)

5.5 Notes

x* is a vector in R^n: [x_1, x_2, ..., x_n]. All of these functions map into R^1. All conditions (necessary and sufficient) require D to be open. Global max sufficiency only requires the Hessian to be NSD (on the whole domain), while local max sufficiency requires ND (at the critical point).

5.6 Applications

6 Equality Constraints

6.1 Goal and Setup

D \subseteq R^n, f : D \to R, g : D \to R^k. Constraint set C = \{x \in D : g_i(x) = 0 \ \forall i\}, k constraints. Solving the problem: max f(x) : x \in C. Local/global maxima meet the previous definitions.

6.2 Lagrange: Necessary conditions for equality constrained local max

If x* does not satisfy the following (given the setup and constraint qualification) it is not a local (and therefore not a global) max.

6.2.1 Setup

D open; f, g continuous and differentiable on D

6.2.2 Constraint qualification

rank(Dg(x*)) = k (for a single constraint, \nabla g(x*) \neq 0); this ensures there is a nonsingular k x k submatrix to find

6.2.3 Lagrange first order necessity conditions

If the constraint qualification is satisfied: \exists \lambda* \in R^k : Df(x*) + \sum_{i=1}^k \lambda_i^* Dg_i(x*) = 0, i.e. the FOCs of the normal Lagrangian: L = f(x_1, x_2, ..., x_n) + \lambda_1 g_1(x_1, x_2, ..., x_n) + ... + \lambda_k g_k(x_1, x_2, ..., x_n)
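As a worked computation of these FOCs (a made-up example solved symbolically; nothing here is specific to the course notes): max xy subject to 1 - x - y = 0, which gives x = y = \lambda = 1/2.

    # Solve the Lagrange FOCs symbolically for max x*y s.t. 1 - x - y = 0.
    import sympy as sp

    x, y, lam = sp.symbols("x y lam", real=True)
    L = x * y + lam * (1 - x - y)
    foc = [sp.diff(L, v) for v in (x, y, lam)]
    print(sp.solve(foc, [x, y, lam], dict=True))
    # [{x: 1/2, y: 1/2, lam: 1/2}]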

6.2.4 Lagrange second order necessity conditions

Define D^2 L as the Hessian of the above L: D^2 L(x*, \lambda*) = D^2 f(x*) + \sum_{i=1}^k \lambda_i^* D^2 g_i(x*). Define Z(x*) = \{z \in R^n : Dg(x*)z = 0\}. If the constraint qualification and FOCs are satisfied: if f has a local max on D at x*, then D^2 L(x*, \lambda*) is NSD on Z(x*), or equivalently z^T D^2 L(x*, \lambda*) z \leq 0 \ \forall z \in Z(x*). x* is a strict local max of f on D if D^2 L(x*, \lambda*) is ND on Z(x*) for z \neq 0, or equivalently z^T D^2 L(x*, \lambda*) z < 0 \ \forall z \in Z(x*), z \neq 0. (Note this last one is a sufficiency condition! Included here because this is where it appears in Sundaram/Jen.)
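The SOC on Z(x*) can be tested directly by restricting D^2 L to a basis of the null space of Dg(x*). A sketch (mine) continuing the max xy s.t. 1 - x - y = 0 example; null_space is scipy's helper for the null-space basis:

    # Check z^T D^2L z < 0 on Z(x*) = {z : Dg(x*) z = 0} for the example
    # max x*y s.t. 1 - x - y = 0 at x* = (1/2, 1/2), lam* = 1/2.
    import numpy as np
    from scipy.linalg import null_space

    Dg = np.array([[-1.0, -1.0]])             # Dg(x*) for g = 1 - x - y
    D2L = np.array([[0.0, 1.0], [1.0, 0.0]])  # Hessian of L = x*y + lam*(1-x-y)
    N = null_space(Dg)                        # columns span Z(x*)
    H_restr = N.T @ D2L @ N                   # D^2L restricted to Z(x*)
    print(bool(np.all(np.linalg.eigvalsh(H_restr) < 0)))   # True: strict local max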

6.3 Sufficient conditions for equality constrained local max

If x* satisfies the following it is a local max.

6.3.1 Setup

D open; f, g continuous and differentiable on D

6.3.2 Conditions and conclusion

Lagrange FOCs satisfied; CQ satisfied at x*; SOCs satisfied: z^T D^2 L(x*, \lambda*) z < 0 \ \forall z \neq 0 : Dg(x*)z = 0, or equivalently D^2 L(x*, \lambda*) is ND on Z(x*) for z \neq 0. If the above hold then each x* found by Lagrange is a strict local max.

6.4 Sufficient conditions for equality constrained global max

If x* satisfies the following it is a global max.

6.4.1 Setup

D open and convex; f, g continuous and differentiable on D

6.4.2 Conditions and conclusion

Lagrange FOCs satisfied; CQ satisfied at x*; SOCs satisfied: z^T D^2 L(x*, \lambda*) z < 0 \ \forall z \neq 0 : Dg(x*)z = 0, or equivalently D^2 L(x*, \lambda*) is ND on Z(x*) for z \neq 0. If the above hold then all x* found by Lagrange are strict global maxima.

6.5 Notes

If there is a global max x* and the constraint qualification is met at x*, then (x*, \lambda*) is a critical point of L (i.e. Lagrange will find that x*!). The method fails when: \exists x*, but the CQ is not met at x*; or the CQ is met over the whole domain, but no x* exists!

Definiteness on a set, to establish SOCs (see Jen 6-25; 13-11): let B be an m x n matrix and B_{mk} be the m x k matrix keeping only the first k columns of B; let A be an n x n matrix and A_k its k x k leading principal submatrix (first k rows and columns). The bordered matrix to test definiteness is

    C_k = \begin{bmatrix} 0_{m \times m} & B_{mk} \\ (B_{mk})^T & A_k \end{bmatrix}

Let Z = \{z \in R^n : z \neq 0, Bz = 0\}. Definiteness of A on Z follows the normal definiteness rules applied to the submatrices C_k that are not all 0 (i.e. leading principal minors for regular definiteness, and the weird permutation-based thing for semi-definiteness). For SOCs with one constraint we have something like

    C_k = \begin{bmatrix} 0 & [\nabla g(x)]^T \\ \nabla g(x) & D^2 L(x, \lambda) \end{bmatrix}

which needs to satisfy the definiteness pattern excluding the [0] submatrix.

6.6 Applications

7 Kuhn-Tucker conditions

Inequality and mixed constraint necessity conditions. Only with the restrictions established in later sections do these mean anything.

7.1 Setup

D \subseteq R^n; f : D \to R^1, h : D \to R^l, g : D \to R^k, all continuous functions; D open and convex; n variables, l inequality constraints, k equality constraints. Context of the problem: max \{f(x) : g(x) = 0, h(x) \geq 0\}. We are looking for a vector \lambda* \in R^{l+k} and x* \in R^n that satisfy KT-1 and KT-2. The conditions under which a pair (x*, \lambda*) \in R^n \times R^{l+k} satisfying KT-1 and KT-2 is necessary and/or sufficient for an optimum are detailed in later sections.

7.2 KT-1: Complementary Slackness

\lambda_i^* \geq 0 and \lambda_i^* h_i(x*) = 0 for i = k+1, k+2, ..., k+l. I.e., when dealing with the inequality constraints (the h functions): the sign matters (the multiplier must be non-negative), and either the constraint must bind (h_i(x*) = 0) or the multiplier doesn't matter (\lambda_i^* = 0). For the equality constraints these hold trivially because g_j(x*) = 0 for j \in \{1...k\}! The sign of an equality multiplier doesn't matter, but in practice it is probably better to construct constraints like g(x) \geq 0 to avoid finding a contradiction if your multiplier is negative.

7.3 KT-2: FOCs

    Df(x*) + \sum_{j=1}^{k} \lambda_j^* Dg_j(x*) + \sum_{i=k+1}^{k+l} \lambda_i^* Dh_i(x*) = 0

i.e. these are the normal Lagrange FOCs, divided among the equality constraints and inequality constraints.
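A sketch (my own) of checking KT-1 and KT-2 mechanically at a candidate point, for a made-up problem with one inequality constraint, max x1*x2 s.t. 2 - x1 - x2 >= 0; the candidate (x*, lam*) = ((1, 1), 1) is an assumption to be verified:

    # Verify KT-1 (complementary slackness) and KT-2 (stationarity) at a
    # candidate point for: max x1*x2 s.t. h(x) = 2 - x1 - x2 >= 0.
    import numpy as np

    x_star = np.array([1.0, 1.0])
    lam = 1.0                                  # candidate multiplier
    h_val = 2 - x_star.sum()                   # constraint value at x*
    Df = np.array([x_star[1], x_star[0]])      # gradient of f = x1*x2
    Dh = np.array([-1.0, -1.0])                # gradient of h

    kt1 = (lam >= 0) and np.isclose(lam * h_val, 0.0)
    kt2 = np.allclose(Df + lam * Dh, 0.0)
    print(kt1, kt2)                            # True True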

8 Inequality constraint necessity theorems

These are the conditions under which KT will find ALL local maxes. However, they do not guarantee that points satisfying KT are actual local maxes.

8.1 Setup

D \subseteq R^n; f : D \to R^1, h : D \to R^l, g : D \to R^k, all continuous functions; D open and convex; n variables, l inequality constraints, k equality constraints. Context of the problem: max \{f(x) : g(x) = 0, h(x) \geq 0\}. We are looking for a vector \lambda* \in R^{l+k} and x* \in R^n that satisfy KT-1 and KT-2.

8.2 Binding constraint notation

8.2.1 Index

Let E(x*) \subseteq \{1, ..., k+l\} be all constraints effective/binding at x*, i.e.: g_j(x*) = 0 \ \forall j \in E(x*) (trivially true) and h_i(x*) = 0 \ \forall i \in E(x*) (only the inequality constraints that bind at the optimum). |E(x*)| = the number of elements in E, i.e. the number of binding constraints.

8.2.2 Binding functions (all constraints, equality stacked on inequality)

Let

    \phi(x) = \begin{bmatrix} g(x) \\ h(x) \end{bmatrix} = \begin{bmatrix} g_1(x) \\ \vdots \\ g_k(x) \\ h_{k+1}(x) \\ \vdots \\ h_{k+l}(x) \end{bmatrix}

Let \phi_E(x) = [\phi_i(x)]_{i \in E(x*)} (all binding constraints).

8.2.3 Slater's condition

\exists \bar{x} \in D : \phi(\bar{x}) \gg 0, i.e. there is at least one interior point in the constraint set! E.g. for \bar{x}^T = [\bar{x}_1, \bar{x}_2, ..., \bar{x}_n], \phi_i(\bar{x}) > 0 for i = 1...k+l.

8.3 Rank necessity theorem

Rank condition: rank(D\phi_E(x*)) = |E(x*)|. If the rank condition is met and x* is a local constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2.


8.4 KT necessary and sufficient

f concave, \phi concave, SC holds: \exists \bar{x} \in D : \phi_i(\bar{x}) > 0 for i = 1...l+k (this is used to prove necessity, not sufficiency). Then: x* is a constrained global max \iff \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2 (necessary and sufficient!). So if f and \phi are concave and SC holds, then all optima will be found via KT. We don't need SC for sufficiency; basically, concavity guarantees that all KT points are global maxima! But if SC isn't satisfied then we can't guarantee that KT will find any solution. Sundaram pp. 189-190 summarizes these things nicely enough.

8.5 AHU necessity theorems

8.5.1 AHU Necessity 1 (Theorem 37)

AHU (a): E(x*) = \emptyset, i.e. there are no binding constraints at x*; this is effectively an unconstrained optimization problem. If AHU (a) is met and x* is a local constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2.

8.5.2 AHU Necessity 2 (Theorem 37)

AHU (b): D is convex and all functions in \phi_E(x*) are convex, i.e. the functions for all constraints that bind at x* are convex. If AHU (b) is met and x* is a local constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2.

8.5.3 AHU Necessity Corollary 1 (Theorem 38)

\phi_i(x) is convex for i = 1...l+k. If the above is met and x* is a local constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2.

8.5.4 AHU Necessity Corollary 2 (Theorem 38)

\phi_i(x) is concave for i = 1...l+k, and SC holds: \exists \bar{x} \in D : \phi_i(\bar{x}) > 0 for i = 1...l+k. If the above are met and x* is a local constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2. Note: not sure how this is different from KT necessity (8.4), except f is not restricted to be concave or quasi-concave!

8.6 Arrow-Enthoven Necessity Theorem (quasi-concavity)

R^n_+ \subseteq D; \phi_i(x) is quasi-concave for i = 1...l+k; SC holds: \exists \bar{x} \in D : \phi_i(\bar{x}) > 0 for i = 1...l+k; and \nabla \phi_i(x*) \neq 0 for i = 1...l+k. If the above are met and x* is a global constrained maximum, then \exists (x*, \lambda*) \in R^n \times R^{l+k} that satisfy KT-1 and KT-2.

8.7 To confirm a local max is a global max

If \phi_i(x) is concave for i = 1...l+k, then any constrained local max is a constrained global max. So if we add this condition to any of the AHU theorems, their necessity conditions become necessary for a global max.

9 Inequality constraint sufficiency theorems

Setup and notation follow the previous section. These theorems provide conditions under which a point found by KT is guaranteed to be a global max.

9.1 KT theorems (Sundaram)

9.1.1 Concave functions

If f and \phi are concave on D, and (x*, \lambda*) satisfies KT-1 and KT-2, then x* is guaranteed to be a constrained global max. If we add SC (see previous section) this becomes a necessity condition too!

9.1.2 Quasi-concave functions 1

If f and \phi are quasi-concave on D, (x*, \lambda*) satisfies KT-1 and KT-2, and \nabla f(x*) \neq 0, then x* is guaranteed to be a constrained global max.

9.1.3 Quasi-concave functions 2 (concavifiable)

If f and \phi are quasi-concave on D; f(x) = g[\tilde{f}(x)] where g is a strictly increasing transformation and \tilde{f} is concave; and (x*, \lambda*) satisfies KT-1 and KT-2 using \tilde{f} as the objective function; then x* is guaranteed to be a constrained global max for the problem using \tilde{f}!

9.2 Arrow-Enthoven Sufficiency Theorems (Kim)

Given that (x*, \lambda*) satisfies KT-1 and KT-2; need R^n_+ \subseteq D.

9.2.1 AE 1

f and \phi are quasi-concave on R^n_+, and f is concave on D. If the above are satisfied then x* is guaranteed to be a constrained global max.


9.2.2 AE 2

Setup: constraint set C = \{x \in R^n_+ : \phi_i(x) \geq 0 \ \forall i = 1...l+k\}. Let \bar{x}^T = [\bar{x}_1, \bar{x}_2, ..., \bar{x}_n] \in C. Define I as: j \in I if \exists \bar{x} \in C : \bar{x}_j > 0, i.e. I is a set of integers indexing the variables that take at least one positive value somewhere in the constraint set.

Theorem: f and \phi are quasi-concave on R^n_+, and one of these two holds (call them AEa and AEb):
AEa: D_i f(x*) < 0 for some i \in \{1, ..., n\} (i.e. the derivative at the critical point is negative in at least one argument)
AEb: D_i f(x*) > 0 for some i \in I (i.e. the derivative at the critical point is positive in at least one argument that can be positive while satisfying C)
Note: if \nabla f(x*) \neq 0, at least one of the above holds; this is what Sundaram uses. The corollaries below add additional things along with \nabla f(x*) \neq 0, not sure why. If the above are satisfied then x* is guaranteed to be a constrained global max.

9.2.3 AE Corollary 1

f and \phi are quasi-concave on R^n_+; \exists \bar{x} \gg 0 : \phi_i(\bar{x}) \geq 0 \ \forall i. This is NOT the same as Slater's condition (\exists \bar{x} \in D : \phi(\bar{x}) \gg 0): here we are saying there is at least one strictly positive vector that satisfies all of the constraints, i.e. I = \{1, ..., n\}. Also \nabla f(x*) \neq 0. If the above are satisfied then x* is guaranteed to be a constrained global max.

9.2.4 AE Corollary 2

f and \phi are quasi-concave on R^n_+; SC satisfied: \exists \bar{x} \in D : \phi(\bar{x}) \gg 0; \nabla f(x*) \neq 0. If the above are satisfied then x* is guaranteed to be a constrained global max.


10 Table of Necessity and Sufficiency Conditions for inequality constrained optimization

Necessity: if all the conditions listed for a theorem hold, all (local and therefore global) maxima MUST satisfy KT-1 and KT-2.
Sufficiency: if all the conditions listed for a theorem hold, all critical points found by KT-1 and KT-2 are guaranteed to be global maxima.
All theorems assume the setup of sections 7-8 (D open and convex).

Sufficiency (every theorem also requires that (x*, \lambda*) satisfy KT-1 and KT-2):
KT-C: f concave on D; \phi concave on D
KT-QC1: f quasi-concave; \phi quasi-concave; \nabla f(x*) \neq 0
KT-QC2 (see note): f quasi-concave; \phi quasi-concave; f(x) = g[\tilde{f}(x)]
AE 1: R^n_+ \subseteq D; f quasi-concave on R^n_+; \phi quasi-concave on R^n_+; f concave on D
AE 2: R^n_+ \subseteq D; f quasi-concave on R^n_+; \phi quasi-concave on R^n_+; AEa or AEb
AEC 1: R^n_+ \subseteq D; f quasi-concave on R^n_+; \phi quasi-concave on R^n_+; \exists \bar{x} \gg 0 : \phi_i(\bar{x}) \geq 0; \nabla f(x*) \neq 0
AEC 2: R^n_+ \subseteq D; f quasi-concave on R^n_+; \phi quasi-concave on R^n_+; SC: \exists \bar{x} \in D : \phi(\bar{x}) \gg 0; \nabla f(x*) \neq 0

Necessity:
Rank: rank(D\phi_E(x*)) = |E(x*)|
KT-N: f concave on D; \phi concave on D; SC: \exists \bar{x} \in D : \phi(\bar{x}) \gg 0
AHU 1: E(x*) = \emptyset
AHU 2: D convex; functions in \phi_E(x*) are convex
AHUC 1: each \phi_i(x) is convex
AHUC 2: each \phi_i(x) is concave; SC: \exists \bar{x} \in D : \phi(\bar{x}) \gg 0
AE-N: R^n_+ \subseteq D; each \phi_i quasi-concave; SC: \exists \bar{x} \in D : \phi(\bar{x}) \gg 0; each \nabla \phi_i(x*) \neq 0

Note: for a problem that has been rewritten because f(x) = g[\tilde{f}(x)], where g is a strictly increasing transformation and \tilde{f} is concave, the KT conditions we use here solve the problem using \tilde{f}.


11 Applying Kuhn-Tucker

11.1 Necessity

11.1.1 If a solution exists (Weierstrass)

If the necessity qualifications are met, KT is guaranteed to find the solution. However, not all points that satisfy KT are guaranteed to be solutions! So, when using a necessity theorem, one needs to find all KT critical points and check them to see which is the optimal solution.

11.1.2 Not sure if a solution exists

If the necessity conditions are met, KT is not guaranteed to find a solution. But if there are no KT critical points then you have proved there is no solution.

11.2 Sufficiency

11.2.1 If a solution exists (Weierstrass)

If the sufficiency conditions are met, any critical point of KT is guaranteed to be a solution. However, there are not guaranteed to be any KT critical points.

11.2.2 Not sure if a solution exists

If the sufficiency conditions are met, any critical point of KT is guaranteed to be a solution. But if there are no critical points of KT, you have NOT proved there is no solution.

11.3 Uniqueness (assuming a solution exists)

11.3.1 If necessity conditions are met

All solutions must satisfy KT. You must evaluate all KT critical points to make sure you have a global max. If there is only one global max among the KT critical points then it is unique.

11.3.2 If sufficient and necessary conditions are met

All solutions must satisfy KT, and all critical points of KT are solutions. If you work through all the cases and find only one solution then it is unique.


11.3.3 If sufficiency conditions are met

Concavity: if all constraint functions are concave and your objective function is STRICTLY concave, then there is at most one solution to KT, so you can stop once you find a single solution! Quasi-concavity? Not sure, but I assume that if you can recover a strictly concave function through an increasing transformation and solve that problem, then you are good.

