
Delft University of Technology

Faculty of Electrical Engineering, Mathematics, and Computer Science


Circuits and Systems Group

EE 4530 Applied Convex Optimization


25 January 2017, 9:00–12:00

This is an open-book exam. Textbooks and slides are allowed. However,
solution manuals and handwritten notes/solutions are not allowed.

This exam has four questions (40 points).


Question 1 (10 points)
Consider a stochastic process with covariance matrix

$$C(\rho) = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}, \qquad -1 \le \rho \le 1,$$

which is a positive semidefinite matrix.

Hint: To answer this question, you may want to use the following matrix
identities. The eigenvalues $\lambda_1$ and $\lambda_2$ of the $2 \times 2$ matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

are given by

$$\lambda_1 = \frac{a+d}{2} + \frac{1}{2}\sqrt{(a+d)^2 - 4(ad - bc)}$$

and

$$\lambda_2 = \frac{a+d}{2} - \frac{1}{2}\sqrt{(a+d)^2 - 4(ad - bc)}.$$

The trace of $A^{-1}$ is related to the eigenvalues of $A$ as

$$\operatorname{trace}\{A^{-1}\} = \lambda_1^{-1} + \lambda_2^{-1}.$$
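For instance, substituting $a = d = 1$ and $b = c = \rho$ from $C(\rho)$ into these identities gives

$$\lambda_1 = 1 + \rho, \qquad \lambda_2 = 1 - \rho, \qquad \operatorname{trace}\{C^{-1}(\rho)\} = \frac{1}{1+\rho} + \frac{1}{1-\rho} = \frac{2}{1-\rho^2}.$$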

(a) Is the function

$$f_0(\rho) = \operatorname{trace}\{C^{-1}(\rho)\}$$

concave or convex? Why? Also, sketch $f_0(\rho)$ for $-1 < \rho < 1$.

(b) Let $\lambda_{\max}(C)$ denote the maximum eigenvalue of $C$. Is the function

$$f_1(\rho) = \lambda_{\max}\{C(\rho)\}$$

concave or convex? Why? Also, sketch the maximum eigenvalue function $f_1(\rho)$ for $-1 \le \rho \le 1$.

(c) Is the function

$$f_2(\rho) = \log \det\{C(\rho)\}$$

convex or concave? Why? Also, plot the log-determinant function $f_2(\rho)$ for $-1 < \rho < 1$.

(d) Sketch the function $f_3(\rho) = \operatorname{rank}\{C(\rho)\}$ for $-1 \le \rho \le 1$ to show that the function $f_3(\rho)$ is nonconvex.
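The sketches in parts (a)–(d) can be cross-checked numerically. A minimal sketch, assuming numpy and matplotlib are available, that evaluates all four functions on a grid of $\rho$ values:

```python
import numpy as np
import matplotlib.pyplot as plt

def C(rho):
    """Covariance matrix C(rho) from the question."""
    return np.array([[1.0, rho], [rho, 1.0]])

# C(rho) is singular at rho = -1 and rho = 1, so f0 and f2 are
# evaluated just inside the endpoints.
rhos = np.linspace(-0.99, 0.99, 199)

f0 = [np.trace(np.linalg.inv(C(r))) for r in rhos]    # trace of the inverse
f1 = [np.linalg.eigvalsh(C(r)).max() for r in rhos]   # maximum eigenvalue
f2 = [np.linalg.slogdet(C(r))[1] for r in rhos]       # log-determinant
f3 = [np.linalg.matrix_rank(C(r)) for r in rhos]      # rank

for vals, name in zip((f0, f1, f2, f3), ("f0", "f1", "f2", "f3")):
    plt.plot(rhos, vals, label=name)
plt.xlabel("rho")
plt.legend()
plt.show()
```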

Question 2 (10 points)
Consider the quadratically constrained quadratic program (QCQP)

$$\begin{array}{ll}
\text{minimize}   & x_1^2 + x_2^2 \\
\text{subject to} & (x_1 - 1)^2 + (x_2 - 1)^2 \le 1 \\
                  & (x_1 - 1)^2 + (x_2 + 1)^2 \le 1
\end{array}$$

with variable $x = [x_1, x_2]^T \in \mathbb{R}^2$.
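The problem can also be checked numerically. A minimal sketch, assuming the cvxpy and numpy packages are available:

```python
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x))           # x1^2 + x2^2
constraints = [
    cp.sum_squares(x - np.array([1.0, 1.0])) <= 1,   # disk centered at (1, 1)
    cp.sum_squares(x - np.array([1.0, -1.0])) <= 1,  # disk centered at (1, -1)
]
prob = cp.Problem(objective, constraints)
prob.solve()
print("x* ≈", x.value, " p* ≈", prob.value)
```

The returned point and value can be compared with the sketch in part (a).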

(a) Sketch the feasible set of the problem and level sets of the objective.
From this sketch, find the optimal point x∗ and optimal value p∗ .

(b) Give the KKT conditions. Do there exist Lagrange multipliers $\lambda_1^*$ and $\lambda_2^*$ that prove $x^*$ is optimal?

(c) Derive the Lagrange dual problem (the Lagrangian is written out below).

(d) Solve the Lagrange dual problem. Is the dual optimum d∗ attained?
Does strong duality hold?
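For parts (b) and (c), the Lagrangian of this QCQP follows directly from the definition, with multipliers $\lambda_1, \lambda_2 \ge 0$:

$$L(x, \lambda) = x_1^2 + x_2^2 + \lambda_1\left((x_1 - 1)^2 + (x_2 - 1)^2 - 1\right) + \lambda_2\left((x_1 - 1)^2 + (x_2 + 1)^2 - 1\right).$$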

Question 3 (10 points)
Consider the problem of minimizing the function $f : \mathbb{R}^2 \to \mathbb{R}$, defined as

$$f(x) = (x_1 + x_2^2)^2$$

with $x = [x_1, x_2]^T$.

(a) For the starting point $x^{(0)} = [0, 1]^T$, show that the search direction $\Delta x^{(0)} = [1, -1]^T$ is a valid descent direction for $f$ at $x^{(0)}$.

(b) Compute the step size $t$ using exact line search:

$$t = \arg\min_{s \ge 0} f(x^{(0)} + s\,\Delta x^{(0)}).$$

(c) Apply the gradient method with fixed step size $t = 1/2$, starting at $x^{(0)} = [0, 1]^T$, for one iteration. Does $f(x^{(k)})$ converge to the optimal value $p^*$?

(d) Derive the Hessian of $f$. Also, apply Newton's method with fixed step size $t = 1$, starting at $x^{(0)} = [0, 1]^T$, for one iteration. Does $f(x^{(k)})$ converge to the optimal value $p^*$?
Hint: You may want to use the following matrix identity:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
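A minimal numerical sketch of parts (c) and (d), assuming numpy; the gradient and Hessian used here follow from differentiating $f(x) = (x_1 + x_2^2)^2$ directly:

```python
import numpy as np

def f(x):
    return (x[0] + x[1] ** 2) ** 2

def grad(x):
    g = x[0] + x[1] ** 2            # inner term of f
    return np.array([2.0 * g, 4.0 * x[1] * g])

def hess(x):
    g = x[0] + x[1] ** 2
    return np.array([[2.0,        4.0 * x[1]],
                     [4.0 * x[1], 4.0 * g + 8.0 * x[1] ** 2]])

x0 = np.array([0.0, 1.0])

# Part (c): one gradient step with fixed step size t = 1/2.
x_grad = x0 - 0.5 * grad(x0)

# Part (d): one Newton step with fixed step size t = 1.
x_newton = x0 - np.linalg.solve(hess(x0), grad(x0))

print("after gradient step:", x_grad, " f =", f(x_grad))
print("after Newton step:  ", x_newton, " f =", f(x_newton))
```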

Question 4 (10 points)
For $\alpha \in (0, 1)$, define the quantile loss function $h_\alpha : \mathbb{R}^N \to \mathbb{R}$ as

$$h_\alpha(x) = \alpha \mathbf{1}^T x_+ + (1 - \alpha) \mathbf{1}^T x_-,$$

where $x_+ = \max\{x, 0\}$ and $x_- = \max\{-x, 0\}$. Here, the maximum is taken elementwise.
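Equivalently, since $x_+$ and $x_-$ are elementwise, $h_\alpha$ can be written as a sum of elementwise pinball losses:

$$h_\alpha(x) = \sum_{i=1}^{N} \max\{\alpha x_i,\; -(1 - \alpha) x_i\}.$$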

(a) Find the subdifferential of $h_\alpha$ at $x$. For $\alpha = 0.2$, plot the quantile function and its subdifferential $\partial h_\alpha(x)$ as a function of $x$, assuming that $N = 1$. Explain what happens when $\alpha = 0.5$.

(b) The quantile regression problem is

$$\text{minimize} \quad h_\alpha(Ax - b)$$

with variable $x \in \mathbb{R}^N$, and parameters $A \in \mathbb{R}^{M \times N}$ and $b \in \mathbb{R}^M$.
Explain how to write the quantile regression problem as a Linear Program (LP).

(c) Explain how to use the Alternating Direction Method of Multipliers (ADMM) algorithm to solve the quantile regression problem. Give the detailed update equations (not just the update optimization problems).
Hint: Introduce a new variable (and constraint) $z = Ax - b$. Use the scaled form of ADMM by scaling the dual variable.
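A minimal sketch of the resulting scaled-form ADMM iteration, assuming numpy, that $A$ has full column rank, and a fixed iteration count in place of a proper stopping criterion; the z-update is the elementwise proximal operator of $(1/\rho)\,h_\alpha$ (a shifted soft threshold):

```python
import numpy as np

def quantile_admm(A, b, alpha, rho=1.0, iters=200):
    """Scaled-form ADMM for minimize h_alpha(Ax - b), splitting z = Ax - b.

    Assumes A has full column rank; rho and iters are illustrative choices.
    """
    M, N = A.shape
    x, z, u = np.zeros(N), np.zeros(M), np.zeros(M)  # u is the scaled dual
    pinv = np.linalg.solve(A.T @ A, A.T)             # precomputed for the x-update
    for _ in range(iters):
        # x-update: x = argmin ||Ax - (b + z - u)||^2 (plain least squares)
        x = pinv @ (b + z - u)
        # z-update: elementwise prox of (1/rho) h_alpha at v = Ax - b + u
        v = A @ x - b + u
        z = np.maximum(v - alpha / rho, 0.0) - np.maximum(-v - (1.0 - alpha) / rho, 0.0)
        # u-update: scaled dual ascent on the constraint z = Ax - b
        u = u + A @ x - b - z
    return x
```

Precomputing the least-squares factor makes each x-update a single matrix multiply; the penalty parameter rho only enters through the prox thresholds.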
