
ECON 711 - Microeconomic Theory.

Solution to Homework #4.

By Enrique Martínez García.†

Due Date: 10/01/2004. (Fall 2004).

† Enrique Martínez García, Ph.D. Student, Department of Economics, University of Wisconsin-Madison. 7439 Social Science Building, 1180 Observatory Drive, Madison, WI 53706. Tel.: (608) 262-4389. E-mail: emartinezgar@wisc.edu. Webpage: http://mywebspace.wisc.edu/emartinezgar/web/.

Problem 1 (JR, 3.32) The Translog Cost Function. It has been shown that the translog cost function is a (local) second-order approximation to an arbitrary cost function of the form c(w_1, ..., w_n, y) = c(w_1, ..., w_n) y. It is given implicitly by the linear-in-logs form:

$$ \ln c = \alpha_0 + \sum_{i=1}^{n} \alpha_i \ln w_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{ij} \ln w_i \ln w_j + \ln y. $$

If γ_ij = γ_ji for all i, j = 1, ..., n, the substitution matrix is symmetric, as required.

Solution 1 (a) Show that the translog cost function is in fact a (local) second-order approximation to a cost function c(w_1, ..., w_n) y.
Take logarithms of this expression,
$$ \ln c(w_1, ..., w_n, y) = \ln c(w_1, ..., w_n) + \ln y = \ln c\left(e^{\ln w_1}, ..., e^{\ln w_n}\right) + \ln y, $$

and take a second-order Taylor approximation of ln c(e^{ln w_1}, ..., e^{ln w_n}) with respect to (ln w_1, ..., ln w_n) around the point (ln w_1^*, ..., ln w_n^*) = (0, ..., 0) (alternatively, (w_1^*, ..., w_n^*) = (1, ..., 1)). Since the expansion point is ln w_i^* = 0 for every i, the deviations (ln w_i - ln w_i^*) reduce to ln w_i, and the approximation reads:

$$ \ln c(w_1, ..., w_n, y) \approx \ln c(1, ..., 1) + \sum_{i=1}^{n} \frac{\partial \ln c(1, ..., 1)}{\partial \ln w_i} \ln w_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 \ln c(1, ..., 1)}{\partial \ln w_i \, \partial \ln w_j} \ln w_i \ln w_j + \ln y. $$

Denote the constant terms in the above expression as α_0 = ln c(1, ..., 1), α_i = ∂ ln c(1, ..., 1)/∂ ln w_i for all i, and γ_ij = ∂² ln c(1, ..., 1)/(∂ ln w_i ∂ ln w_j) for all i, j. If we substitute these terms back into the equation, we get the translog cost function:
$$ \ln c(w_1, ..., w_n, y) = \alpha_0 + \sum_{i=1}^{n} \alpha_i \ln w_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{ij} \ln w_i \ln w_j + \ln y. $$
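As a quick worked illustration (the specific cost function here is chosen purely for illustration and is not part of the exercise), take n = 2 and c(w_1, w_2, y) = ((w_1 + w_2)/2) y. Direct computation of the derivatives at w = (1, 1) gives

$$ \alpha_0 = \ln c(1, 1) = 0, \qquad \alpha_i = \left.\frac{w_i}{w_1 + w_2}\right|_{(1,1)} = \frac{1}{2}, \qquad \gamma_{ii} = \left.\frac{w_1 w_2}{(w_1 + w_2)^2}\right|_{(1,1)} = \frac{1}{4}, \qquad \gamma_{12} = \gamma_{21} = -\frac{1}{4}, $$

so that, locally around w = (1, 1),

$$ \ln c(w_1, w_2, y) \approx \frac{1}{2} \ln w_1 + \frac{1}{2} \ln w_2 + \frac{1}{8} \left( \ln w_1 - \ln w_2 \right)^2 + \ln y. $$

Note that these coefficients already satisfy α_1 + α_2 = 1 and γ_{i1} + γ_{i2} = 0 for each i, anticipating the restrictions derived in part (b).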

(b) What restrictions on the parameters are required to ensure homogeneity of degree one in input prices?
Take the logarithm of the cost function evaluated at the new vector of input prices (tw_1, ..., tw_n),

$$ \ln c(t w_1, ..., t w_n, y) = \alpha_0 + \sum_{i=1}^{n} \alpha_i \ln(t w_i) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{ij} \ln(t w_i) \ln(t w_j) + \ln y, $$

and apply the properties of the logarithm, ln(t w_i) = ln t + ln w_i, together with the symmetry assumption γ_ij = γ_ji, to rearrange this expression as follows:

$$ \ln c(t w_1, ..., t w_n, y) = \ln c(w_1, ..., w_n, y) + \left( \sum_{i=1}^{n} \alpha_i \right) \ln t + \left( \sum_{i=1}^{n} \Bigl( \sum_{j=1}^{n} \gamma_{ij} \Bigr) \ln w_i \right) \ln t + \frac{1}{2} \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{ij} \right) (\ln t)^2. $$

Let us suppose that the parameter restrictions we need to impose to ensure homogeneity in prices are:

$$ \sum_{i=1}^{n} \alpha_i = 1, \qquad \sum_{j=1}^{n} \gamma_{ij} = 0 \quad \text{for every } i = 1, ..., n. $$

It is easy to check that these conditions indeed suffice to ensure homogeneity: under them the first ln t term reduces to ln t, while the second ln t term and the (ln t)^2 term vanish. Substituting back into the equation for ln c(t w_1, ..., t w_n, y), we obtain:

$$ \ln c(t w_1, ..., t w_n, y) = \ln c(w_1, ..., w_n, y) + \ln t = \ln\left[t \, c(w_1, ..., w_n, y)\right]. $$

Applying the exponential function to both sides preserves the equality,

$$ c(t w_1, ..., t w_n, y) = t \, c(w_1, ..., w_n, y), $$

which is enough to prove homogeneity of degree one in input prices, as required.
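As a quick check on these restrictions, the illustrative two-input coefficients computed after part (a) satisfy α_1 + α_2 = 1/2 + 1/2 = 1 and γ_{i1} + γ_{i2} = 1/4 - 1/4 = 0 for each i, so that example's translog approximation satisfies

$$ \ln c(t w_1, t w_2, y) = \ln c(w_1, w_2, y) + (1) \ln t + 0 \cdot \ln t + \frac{1}{2} \cdot 0 \cdot (\ln t)^2 = \ln c(w_1, w_2, y) + \ln t, $$

exactly as required for homogeneity of degree one in input prices.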


(c) For what values of the parameters does the translog reduce to the Cobb-Douglas form?
Recall that the Cobb-Douglas cost function is:

$$ c(w_1, ..., w_n, y) = K \prod_{i=1}^{n} w_i^{a_i} \, y, \qquad \sum_{i=1}^{n} a_i = 1, \qquad K = \prod_{i=1}^{n} a_i^{-a_i}. $$

Take the logarithm of this expression,

$$ \ln c(w_1, ..., w_n, y) = \ln K + \sum_{i=1}^{n} a_i \ln w_i + \ln y. $$

By the method of matching coefficients we can argue easily that the translog reduces to a simple Cobb-Douglas cost function whenever:

$$ \alpha_0 = \ln K, \qquad \alpha_i = a_i \ \ \forall i = 1, ..., n, \qquad \gamma_{ij} = 0 \ \ \forall i, j = 1, ..., n. $$
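For instance (an illustrative two-input case), with a_1 = a_2 = 1/2 we have K = (1/2)^{-1/2} (1/2)^{-1/2} = 2, so c(w_1, w_2, y) = 2 \sqrt{w_1 w_2} \, y and

$$ \ln c(w_1, w_2, y) = \ln 2 + \frac{1}{2} \ln w_1 + \frac{1}{2} \ln w_2 + \ln y, $$

which is exactly a translog cost function with α_0 = ln 2, α_1 = α_2 = 1/2, and γ_ij = 0 for all i, j; in this case the "approximation" is exact.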

(d) Show that input shares s_i(w_1, ..., w_n, y) = w_i x_i(w_1, ..., w_n, y) / c(w_1, ..., w_n, y) in the translog cost function are linear in the logs of input prices.

First, notice that an easy way of computing s_i(w_1, ..., w_n, y) uses the input-price elasticity of the cost function:

$$ \frac{\partial \ln c(w_1, ..., w_n, y)}{\partial \ln w_i} = \frac{\partial c(w_1, ..., w_n, y)}{\partial w_i} \cdot \frac{w_i}{c(w_1, ..., w_n, y)}, $$

so by Shephard's lemma (i.e., ∂c(w_1, ..., w_n, y)/∂w_i = x_i(w_1, ..., w_n, y)) we can argue:

$$ \frac{\partial \ln c(w_1, ..., w_n, y)}{\partial \ln w_i} = \frac{w_i \, x_i(w_1, ..., w_n, y)}{c(w_1, ..., w_n, y)} = s_i(w_1, ..., w_n, y). $$
Then, once we have shown this result, it is easy to prove that, under the assumption of symmetry of the substitution matrix (i.e., γ_ij = γ_ji, ∀i, j = 1, ..., n),

$$ s_i(w_1, ..., w_n, y) = \frac{\partial \ln c(w_1, ..., w_n, y)}{\partial \ln w_i} = \alpha_i + \frac{1}{2} \sum_{j=1}^{n} \gamma_{ij} \ln w_j + \frac{1}{2} \sum_{j=1}^{n} \gamma_{ji} \ln w_j = \alpha_i + \sum_{j=1}^{n} \gamma_{ij} \ln w_j. $$

Indeed, input shares in the translog cost function are linear in the logs of input prices.
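As a quick check, under the homogeneity restrictions from part (b) (together with symmetry) the shares also add up to one:

$$ \sum_{i=1}^{n} s_i = \sum_{i=1}^{n} \alpha_i + \sum_{j=1}^{n} \left( \sum_{i=1}^{n} \gamma_{ij} \right) \ln w_j = 1 + 0 = 1. $$

In the illustrative two-input example above, s_1 = 1/2 + (1/4) ln(w_1/w_2) and s_2 = 1/2 - (1/4) ln(w_1/w_2), which are indeed linear in the logs of input prices and sum to one.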
Remark: If instead of approximating a cost function of the form c(w_1, ..., w_n) y, we took a second-order approximation of any well-behaved cost function c(w_1, ..., w_n, y) (in the logs of input prices and output), we would still find similar results. In that case, it is easy to show that input shares would be linear in the logs of input prices and output.

Problem 2 (JR, 3.32). Let u(x) represent some consumer's monotonic preferences over x ∈ ℝ^n_+. For each of the functions f(x) that follow, state whether or not f also represents the preferences of this consumer. In each case, be sure to justify your answer with either an argument or a counterexample.

Solution 2 Recall that f represents the same preferences as u if and only if there exists a (strictly) monotonic transformation v : ℝ → ℝ (v is (strictly) increasing on the range of u) such that f = v ∘ u, i.e., for all x ∈ ℝ^n_+, f(x) = v(u(x)). Denote the range of u by R(u).
(a) f(x) = u(x) + (u(x))^3. Hence, f = v ∘ u where, for all y ∈ R(u), v(y) = y + y^3. Note that ∂v(y)/∂y = 1 + 3y^2 > 0 for all y ∈ ℝ. Thus, v is strictly increasing on R(u) and f represents the same preferences as u.


(b) f(x) = u(x) − (u(x))^2. Hence, f = v ∘ u where, for all y ∈ R(u), v(y) = y − y^2. Note that ∂v(y)/∂y = 1 − 2y ≤ 0 for all y ≥ 1/2, so monotonicity of v is violated on that part of the range. Hence, if u(x) < 1/2 for all x ∈ ℝ^n_+, then f represents the same preferences as u. If instead there exists x ∈ ℝ^n_+ such that u(x) ≥ 1/2, then f cannot represent the same preferences as u: by monotonicity of the preferences we can find x′ ≫ x″ with u(x′) > u(x″) ≥ 1/2, and since v is maximized at 1/2 and strictly decreasing above it, f(x′) < f(x″), reversing the consumer's ranking.
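For a concrete illustration (the particular u is hypothetical), let u(x_1, x_2) = x_1 + x_2, which is monotonic, and compare x′ = (1, 1) with x″ = (1/2, 1/2). Then u(x′) = 2 > 1 = u(x″), but

$$ f(x') = 2 - 2^2 = -2 < 0 = 1 - 1^2 = f(x''), $$

so f reverses the consumer's ranking and does not represent the same preferences.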
(c) f(x) = u(x) + Σ_{i=1}^n x_i. In this case, we do not have enough information to find a transformation v such that f = v ∘ u, because the term Σ_{i=1}^n x_i generally cannot be expressed in terms of u(x). When could we find a transformation such that f = v ∘ u? If and only if, for all bundles x′, x″ ∈ ℝ^n_+,

$$ u(x') \geq u(x'') \iff f(x') \geq f(x'') \iff u(x') + \sum_{i=1}^{n} x_i' \geq u(x'') + \sum_{i=1}^{n} x_i''. $$
If u(x) = h(Σ_{i=1}^n x_i), where h : ℝ_+ → ℝ is a strictly increasing function, then the transformation v exists: v(y) = y + h^{-1}(y) for all y ∈ R(u) = h(ℝ_+) is in fact a strictly monotonic transformation such that f = v ∘ u. In this case, f represents the same preferences as u. Otherwise, f will generally not represent the same preferences.
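For a concrete illustration (again with a hypothetical u), let u(x_1, x_2) = x_1 x_2 and compare x′ = (4, 1) with x″ = (2, 2). Then u(x′) = u(x″) = 4, so the consumer is indifferent, but

$$ f(x') = 4 + (4 + 1) = 9 > 8 = 4 + (2 + 2) = f(x''), $$

so f ranks x′ strictly above x″ and therefore does not represent the same preferences; here u is not a function of x_1 + x_2 alone.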

Problem 3 (JR, 3.32). A consumer of two goods faces positive prices and has a positive income. His utility function is:

$$ u(x_1, x_2) = \max[a x_1, a x_2] + \min[x_1, x_2], \qquad \text{where } 0 < a < 1. $$

Derive the Marshallian demand functions.

Solution 3 First, consider the case where x_1 ≥ x_2. Then it follows that u(x) = a x_1 + x_2. Second, consider the case where x_1 ≤ x_2. Then it follows that u(x) = a x_2 + x_1. (Obviously, for x_1 = x_2 both expressions coincide: u(x) = a x_1 + x_2 = a x_2 + x_1.) It is evident, then, that we are dealing with piecewise-linear preferences whose slope changes after crossing the 45°-line. In other (more technical) words, the indifference curves are piecewise linear with slope −a (if x_1 > x_2) and −1/a (if x_1 < x_2). Furthermore, the utility function is not differentiable at points where x_1 = x_2. Note that −1/a < −a or, equivalently, 1/a > a, given that we assume a ∈ (0, 1). For the Marshallian demands, this implies that five different cases must be considered. (A graphical representation helps to gain further intuition on these results.)
Case 1. If p_1/p_2 < a, it must follow that:

$$ x(p, y) = \left( \frac{y}{p_1}, 0 \right). $$

Case 2. If p_1/p_2 = a, it must follow that:

$$ x(p, y) = \left\{ \alpha \left( \frac{y}{p_1 + p_2}, \frac{y}{p_1 + p_2} \right) + (1 - \alpha) \left( \frac{y}{p_1}, 0 \right) : \alpha \in [0, 1] \right\} = \left\{ (x_1, x_2) : x_1 \geq x_2 \ \text{and} \ x_2 = \frac{y}{p_2} - \frac{p_1}{p_2} x_1 = \frac{y}{p_2} - a x_1 \right\}. $$
Case 3. If a < p_1/p_2 < 1/a, it must follow that:

$$ x(p, y) = \left( \frac{y}{p_1 + p_2}, \frac{y}{p_1 + p_2} \right). $$

Case 4. If p_1/p_2 = 1/a, it must follow that:

$$ x(p, y) = \left\{ \alpha \left( \frac{y}{p_1 + p_2}, \frac{y}{p_1 + p_2} \right) + (1 - \alpha) \left( 0, \frac{y}{p_2} \right) : \alpha \in [0, 1] \right\} = \left\{ (x_1, x_2) : x_1 \leq x_2 \ \text{and} \ x_1 = \frac{y}{p_1} - \frac{p_2}{p_1} x_2 = \frac{y}{p_1} - a x_2 \right\}. $$

Case 5. If p_1/p_2 > 1/a, it must follow that:

$$ x(p, y) = \left( 0, \frac{y}{p_2} \right). $$
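As a quick numerical check (illustrative numbers), let a = 1/2, p = (1, 1), and y = 10. Then p_1/p_2 = 1 ∈ (a, 1/a) = (1/2, 2), so Case 3 applies and x(p, y) = (5, 5), giving u = max[5/2, 5/2] + min[5, 5] = 15/2; spending the entire income on a single good would give only u = a·10 = 5. If instead p = (1, 4), then p_1/p_2 = 1/4 < a, so Case 1 applies and x(p, y) = (10, 0).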
