
L. Vandenberghe
EE236B Winter 2016

Homework assignment #1

1. Least-squares fit of a circle to points. In this problem we use least-squares to fit a circle to given points (u_i, v_i) in a plane, as shown in the figure.

We use (u_c, v_c) to denote the center of the circle and R for its radius. A point (u, v) is on the circle if (u − u_c)^2 + (v − v_c)^2 = R^2. The fitting problem can be formulated as the optimization problem

    minimize  ∑_{i=1}^m ((u_i − u_c)^2 + (v_i − v_c)^2 − R^2)^2

with three variables u_c, v_c, R.

(a) Show that this optimization problem can be written as a linear least-squares problem

        minimize  ||Ax − b||_2^2        (1)

    if we make a change of variables and use as variables

        x_1 = u_c,    x_2 = v_c,    x_3 = u_c^2 + v_c^2 − R^2.

(b) Use the normal equations A^T A x = A^T b of the least-squares problem to show that the optimal solution x of the least-squares problem satisfies

        x_1^2 + x_2^2 − x_3 ≥ 0.

    This is necessary to compute R = sqrt(x_1^2 + x_2^2 − x_3) from x.

(c) Test your formulation on the problem data in the file circlefit.m on the course website. The commands

        circlefit;
        plot(u, v, 'o');
        axis equal

    will create a plot of the m = 50 points (u_i, v_i) in the figure. Use the MATLAB command x = A \ b to solve the least-squares problem (1).
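As a sketch of parts (a)–(c) outside MATLAB, the change of variables can be checked in a few lines of Python on synthetic points (the course data in circlefit.m is not reproduced here, so the circle below is made up):

```python
import numpy as np

# Synthetic stand-in for circlefit.m: noisy points on a circle with
# center (2, -1) and radius 1.5.
rng = np.random.default_rng(0)
m = 50
theta = rng.uniform(0, 2 * np.pi, m)
u = 2.0 + 1.5 * np.cos(theta) + 0.01 * rng.standard_normal(m)
v = -1.0 + 1.5 * np.sin(theta) + 0.01 * rng.standard_normal(m)

# Residual: (u - uc)^2 + (v - vc)^2 - R^2
#         = u^2 + v^2 - 2*u*uc - 2*v*vc + (uc^2 + vc^2 - R^2),
# so with x = (uc, vc, uc^2 + vc^2 - R^2) it equals b - A x, where:
A = np.column_stack([2 * u, 2 * v, -np.ones(m)])
b = u**2 + v**2

x, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimize ||Ax - b||_2^2
uc, vc = x[0], x[1]
R = np.sqrt(x[0]**2 + x[1]**2 - x[2])       # part (b) guarantees the argument is >= 0
print(uc, vc, R)
```

The recovered center and radius should be close to the values used to generate the points.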
2. This problem is an introduction to the MATLAB software package CVX that will be
used in the course. CVX can be downloaded from www.cvxr.com.
Python users are welcome to use CVXPY (cvxpy.org) instead of CVX and MATLAB.
Although the problem assignment is written for MATLAB and CVX, the modifications
should be straightforward if you are familiar with Python.
We consider the illumination problem of lecture 1. We take I_des = 1 and p_max = 1, so the problem is

    minimize   f_0(p) = max_{k=1,...,n} |log(a_k^T p)|        (2)
    subject to 0 ≤ p_j ≤ 1,  j = 1, ..., m,

with variable p ∈ R^m. As mentioned in the lecture, the problem is equivalent to

    minimize   max_{k=1,...,n} h(a_k^T p)                     (3)
    subject to 0 ≤ p_j ≤ 1,  j = 1, ..., m,

where h(u) = max{u, 1/u} for u > 0. The function h, shown in the figure below, is nonlinear, nondifferentiable, and convex.
[Figure: graph of h(u) = max{u, 1/u}.]
To see the equivalence between (2) and (3), we note that

    f_0(p) = max_{k=1,...,n} |log(a_k^T p)|
           = max_{k=1,...,n} max{log(a_k^T p), log(1/(a_k^T p))}
           = log max_{k=1,...,n} max{a_k^T p, 1/(a_k^T p)}
           = log max_{k=1,...,n} h(a_k^T p),

and since the logarithm is a monotonically increasing function, minimizing f_0 is equivalent to minimizing max_{k=1,...,n} h(a_k^T p).
The specific problem data are given in the file illum_data.m posted on the course website. Executing this file in MATLAB creates the n × m matrix A (which has rows a_k^T). There are 10 lamps (m = 10) and 20 patches (n = 20).
Use the following methods to compute five approximate solutions and the exact solution, and compare the answers (the vectors p and the corresponding values of f0 (p)).
(a) Equal lamp powers. Take p_j = γ for j = 1, ..., m. Plot f_0(p) versus γ over the interval [0, 1]. Graphically determine the optimal value of γ, and the associated objective value. The objective function f_0(p) can be evaluated in MATLAB as max(abs(log(A*p))).
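A Python sketch of this sweep, using a random positive matrix as a stand-in for the A in illum_data.m (the grid starts just above γ = 0, where the logarithm blows up):

```python
import numpy as np

# Hypothetical illumination matrix with positive entries; the real one
# comes from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

def f0(p):
    """Objective of problem (2): max_k |log(a_k^T p)|."""
    return np.max(np.abs(np.log(A @ p)))

gammas = np.linspace(0.01, 1.0, 200)
vals = [f0(g * np.ones(m)) for g in gammas]
g_best = gammas[int(np.argmin(vals))]
print(g_best, min(vals))
```

In MATLAB the same sweep would loop over gamma and call max(abs(log(A*(gamma*ones(m,1))))).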
(b) Least-squares with saturation. Solve the least-squares problem

        minimize  ∑_{k=1}^n (a_k^T p − 1)^2 = ||Ap − 1||_2^2.

    If the solution has negative coefficients, set them to zero; if some coefficients are greater than 1, set them to 1. Use the MATLAB command x = A \ b to solve a least-squares problem (minimize ||Ax − b||_2^2).
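In Python the same fit-then-clip step is two lines of NumPy (again with a random stand-in for A):

```python
import numpy as np

# Random stand-in for the illumination matrix from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

# Unconstrained least-squares fit: minimize ||Ap - 1||_2^2.
p_ls, *_ = np.linalg.lstsq(A, np.ones(n), rcond=None)
# Saturation: clip any out-of-range coefficients into [0, 1].
p_sat = np.clip(p_ls, 0.0, 1.0)
print(p_sat)
```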
(c) Regularized least-squares. Solve the regularized least-squares problem

        minimize  ∑_{k=1}^n (a_k^T p − 1)^2 + ρ ∑_{j=1}^m (p_j − 0.5)^2 = ||Ap − 1||_2^2 + ρ||p − (1/2)1||_2^2,

    where ρ > 0 is a parameter. Increase ρ until all coefficients of p are in the interval [0, 1].
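The regularized problem can be solved as one stacked least-squares problem, since ||Ap − 1||_2^2 + ρ||p − (1/2)1||_2^2 = ||[A; √ρ I]p − [1; √ρ (1/2)1]||_2^2. A sketch (random stand-in for A; the loop terminates because p → (1/2)1 as ρ → ∞):

```python
import numpy as np

# Random stand-in for the illumination matrix from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

def reg_ls(rho):
    # Stack A on top of sqrt(rho)*I so one lstsq call solves the
    # regularized problem ||Ap - 1||^2 + rho ||p - 0.5||^2.
    Astack = np.vstack([A, np.sqrt(rho) * np.eye(m)])
    bstack = np.concatenate([np.ones(n), np.sqrt(rho) * 0.5 * np.ones(m)])
    p, *_ = np.linalg.lstsq(Astack, bstack, rcond=None)
    return p

rho = 0.1
while True:
    p = reg_ls(rho)
    if p.min() >= 0.0 and p.max() <= 1.0:
        break
    rho *= 2          # increase rho until all coefficients land in [0, 1]
print(rho, p)
```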
(d) Chebyshev approximation. Solve the problem

        minimize   max_{k=1,...,n} |a_k^T p − 1| = ||Ap − 1||_∞
        subject to 0 ≤ p_j ≤ 1,  j = 1, ..., m.

    We can think of this problem as obtained by approximating the nonlinear function h(u) by the piecewise-linear function |u − 1| + 1. As shown in the figure below, this is a good approximation around u = 1.
[Figure: h(u) and the piecewise-linear approximation |u − 1| + 1.]
This problem can be converted to a linear program and solved using the MATLAB
function linprog. It can also be solved directly in CVX, using the expression
norm(A*p - 1, inf) to specify the cost function.
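The LP conversion uses the usual epigraph trick: introduce a scalar t and minimize t subject to −t ≤ a_k^T p − 1 ≤ t. A sketch with SciPy's linprog (random stand-in for A; the variable vector is z = (p, t)):

```python
import numpy as np
from scipy.optimize import linprog

# Random stand-in for the illumination matrix from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

# Variables z = (p, t); minimize t.
c = np.concatenate([np.zeros(m), [1.0]])
# a_k^T p - 1 <= t   ->   a_k^T p - t <= 1
# 1 - a_k^T p <= t   ->  -a_k^T p - t <= -1
A_ub = np.block([[A, -np.ones((n, 1))],
                 [-A, -np.ones((n, 1))]])
b_ub = np.concatenate([np.ones(n), -np.ones(n)])
bounds = [(0, 1)] * m + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
p_cheb, t = res.x[:m], res.x[m]
print(t, p_cheb)
```

The optimal t equals ||Ap − 1||_∞ at the optimal p.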
(e) Piecewise-linear approximation. We can improve the accuracy of the previous method by using a piecewise-linear approximation of h with more than two segments. To construct a piecewise-linear approximation of 1/u, we take the pointwise maximum of the first-order approximations

        1/u ≈ 1/û − (1/û^2)(u − û) = 2/û − u/û^2,

    at a number of different points û. This is shown below, for û = 0.5, 0.8, 1. In other words,

        h_pwl(u) = max{ u,  2/0.5 − u/0.5^2,  2/0.8 − u/0.8^2,  2 − u }.

[Figure: h_pwl(u), the piecewise-linear approximation of h.]
    Solve the problem

        minimize   max_{k=1,...,n} h_pwl(a_k^T p)
        subject to 0 ≤ p_j ≤ 1,  j = 1, ..., m,

    using linprog or CVX.
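Since h_pwl is a maximum of affine functions αu + β (the segment u, plus one tangent 2/û − u/û² per point û), the epigraph trick again gives an LP: minimize t subject to α a_k^T p + β ≤ t for every segment and every k. A sketch with linprog (random stand-in for A):

```python
import numpy as np
from scipy.optimize import linprog

# Random stand-in for the illumination matrix from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

# Segments of h_pwl as (alpha, beta) pairs: the line u, then the
# tangents to 1/u at uhat = 0.5, 0.8, 1.0, i.e. 2/uhat - u/uhat^2.
segments = [(1.0, 0.0)] + [(-1.0 / uh**2, 2.0 / uh) for uh in (0.5, 0.8, 1.0)]

# Variables z = (p, t); each segment contributes n rows:
# alpha * a_k^T p - t <= -beta.
rows, rhs = [], []
for alpha, beta in segments:
    rows.append(np.hstack([alpha * A, -np.ones((n, 1))]))
    rhs.append(-beta * np.ones(n))
c = np.concatenate([np.zeros(m), [1.0]])
res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
              bounds=[(0, 1)] * m + [(None, None)])
p_pwl, t = res.x[:m], res.x[m]
print(t)
```

Because h_pwl(u) ≥ 1 for all u (the segments u and 2 − u meet at u = 1), the optimal t is at least 1.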


(f) Exact solution. Finally, use CVX to solve

        minimize   max_{k=1,...,n} max{a_k^T p, 1/(a_k^T p)}
        subject to 0 ≤ p_j ≤ 1,  j = 1, ..., m.

    Use the CVX function inv_pos() to express the function f(x) = 1/x with domain R_++.
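CVX (or CVXPY, with cvxpy.inv_pos) handles this problem directly. As a cross-check that needs no modeling package: for fixed t ≥ 1 the condition max_k max{a_k^T p, 1/(a_k^T p)} ≤ t is the pair of linear constraints 1/t ≤ a_k^T p ≤ t, so the optimal value can be found by bisection on t over LP feasibility problems. This is an alternative method, not what the assignment asks for; a sketch with a random stand-in for A:

```python
import numpy as np
from scipy.optimize import linprog

# Random stand-in for the illumination matrix from illum_data.m.
rng = np.random.default_rng(1)
n, m = 20, 10
A = rng.uniform(0.05, 0.3, (n, m))

def feasible(t):
    # For fixed t, check whether 1/t <= a_k^T p <= t, 0 <= p <= 1 is
    # feasible via a zero-objective LP.
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([t * np.ones(n), -(1.0 / t) * np.ones(n)])
    res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * m)
    return res.success

lo, hi = 1.0, 100.0   # h >= 1 always; assume t = 100 is achievable for this A
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
print(hi)             # optimal value of problem (3), up to bisection tolerance
```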
3. Let X be a symmetric matrix partitioned as

    X = [ A    B ]
        [ B^T  C ]        (4)

If A is nonsingular, the matrix S = C − B^T A^{-1} B is called the Schur complement of A in X. It can be shown that if A is positive definite, then X ⪰ 0 (X is positive semidefinite) if and only if S ⪰ 0 (see page 650 of the textbook). In this exercise we prove the extension of this result to singular A mentioned on page 651 of the textbook.
(a) Suppose A = 0 in (4). Show that X ⪰ 0 if and only if B = 0 and C ⪰ 0.
(b) Let A be a symmetric n × n matrix with eigenvalue decomposition

        A = Q Λ Q^T,

    where Q is orthogonal (Q^T Q = Q Q^T = I) and Λ = diag(λ_1, λ_2, ..., λ_n). Assume the first r eigenvalues λ_i are nonzero and λ_{r+1} = ··· = λ_n = 0. Partition Q and Λ as

        Q = [ Q_1  Q_2 ],    Λ = [ Λ_1  0 ]
                                 [ 0    0 ]

    with Q_1 of size n × r, Q_2 of size n × (n − r), and Λ_1 = diag(λ_1, ..., λ_r). The matrix

        A† = Q_1 Λ_1^{-1} Q_1^T

    is called the pseudo-inverse of A. Verify that

        A A† = A† A = Q_1 Q_1^T,        I − A A† = I − A† A = Q_2 Q_2^T.

    The matrix-vector product A A† x = Q_1 Q_1^T x is the orthogonal projection of the vector x on the range of A. The matrix-vector product (I − A A†)x = Q_2 Q_2^T x is the projection on the nullspace.
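These identities are easy to confirm numerically on a random singular symmetric matrix built from its eigenvalue decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3
# Random orthogonal Q via QR, and eigenvalues with the last n - r equal to zero.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([rng.uniform(1, 2, r), np.zeros(n - r)])
Amat = Q @ np.diag(lam) @ Q.T                 # rank-r symmetric A

Q1, Q2 = Q[:, :r], Q[:, r:]
Apinv = Q1 @ np.diag(1.0 / lam[:r]) @ Q1.T    # A† = Q1 Λ1^{-1} Q1^T
P_range = Q1 @ Q1.T                           # projection on range(A)

print(np.allclose(Amat @ Apinv, P_range),
      np.allclose(Apinv @ Amat, P_range),
      np.allclose(np.eye(n) - Amat @ Apinv, Q2 @ Q2.T))
```

The same A† is returned by np.linalg.pinv(Amat), since the pseudo-inverse is unique.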

(c) Show that the block matrix X in (4) is positive semidefinite if and only if

        A ⪰ 0,        (I − A A†)B = 0,        C − B^T A† B ⪰ 0.

    (The second condition means that the columns of B are in the range of A.)
    Hint. Let A = Q Λ Q^T be the eigenvalue decomposition of A. The matrix X in (4) is positive semidefinite if and only if the matrix

        [ Q^T  0 ] [ A    B ] [ Q  0 ]   =   [ Λ      Q^T B ]
        [ 0    I ] [ B^T  C ] [ 0  I ]       [ B^T Q  C     ]

    is positive semidefinite. Use the observation in part (a) and the Schur complement characterization for positive definite 2 × 2 block matrices to show the result.
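As a numerical sanity check of part (c) (not a proof), one can construct an X that satisfies the three conditions by design and confirm directly that its eigenvalues are nonnegative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 5, 3, 3
# A >= 0 with rank r < n (so A is singular), via A = M M^T.
M = rng.standard_normal((n, r))
Amat = M @ M.T
Apinv = np.linalg.pinv(Amat)                  # pseudo-inverse A†
B = Amat @ rng.standard_normal((n, p))        # columns of B in range(A)
C = B.T @ Apinv @ B + np.eye(p)               # makes C - B^T A† B = I >= 0
X = np.block([[Amat, B], [B.T, C]])

cond1 = np.min(np.linalg.eigvalsh(Amat)) >= -1e-8            # A >= 0
cond2 = np.allclose((np.eye(n) - Amat @ Apinv) @ B, 0)       # (I - A A†)B = 0
cond3 = np.min(np.linalg.eigvalsh(C - B.T @ Apinv @ B)) >= -1e-8
psd = np.min(np.linalg.eigvalsh(X)) >= -1e-8                 # X >= 0 directly
print(cond1, cond2, cond3, psd)
```

With B = AW the check cond2 reduces to (A − A A† A)W = 0, which is the defining property A A† A = A of the pseudo-inverse.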