
Multinomial Logit

Professor: Juan Manuel Rivas Castillo


The data come from:
J. A. Herriges and C. L. Kling, "Nonlinear Income Effects in
Random Utility Models", Review of Economics and Statistics,
81 (1999): 62-72.

The model describes each household's choice among four fishing alternatives (the categories of the variable mode in the output below): i) beach, ii) pier, iii) private boat, iv) charter boat.
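As a minimal preliminary sketch (assuming the dataset is already in memory), the dependent variable can be inspected before estimation; the coding 1 = beach, 2 = pier, 3 = private, 4 = charter is taken from the mlogit output below:

. tabulate mode
. tabulate mode, nolabel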
. mlogit mode ydiv1000, b(1)

Iteration 0: log likelihood = -1497.7229


Iteration 1: log likelihood = -1477.5265
Iteration 2: log likelihood = -1477.1514
Iteration 3: log likelihood = -1477.1506
Iteration 4: log likelihood = -1477.1506

Multinomial logistic regression Number of obs = 1182


LR chi2(3) = 41.14
Prob > chi2 = 0.0000
Log likelihood = -1477.1506 Pseudo R2 = 0.0137

mode Coef. Std. Err. z P>|z| [95% Conf. Interval]

beach (base outcome)

pier
ydiv1000 -.1434029 .0532884 -2.69 0.007 -.2478463 -.0389595
_cons .8141503 .228632 3.56 0.000 .3660399 1.262261

private
ydiv1000 .0919064 .0406637 2.26 0.024 .0122069 .1716058
_cons .7389208 .1967309 3.76 0.000 .3533352 1.124506

charter
ydiv1000 -.0316399 .0418463 -0.76 0.450 -.1136571 .0503774
_cons 1.341291 .1945167 6.90 0.000 .9600457 1.722537
. sum ydiv1000

Variable Obs Mean Std. Dev. Min Max

ydiv1000 1182 4.099337 2.461964 .4166667 12.5
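The scalars below evaluate the fitted multinomial logit probabilities at the mean of ydiv1000 (4.099337). With beach (outcome 1) as the base category, z1 = 0 and, for each equation j = 2, 3, 4,

zj = _b[_cons] + _b[ydiv1000]*4.099337   (coefficients of equation j)
Pj = exp(zj)/(1+exp(z2)+exp(z3)+exp(z4)),   P1 = 1/(1+exp(z2)+exp(z3)+exp(z4))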

. scalar z2 = [#2]_b[_cons]+[#2]_b[ydiv1000]*4.099337

. scalar z3 = [#3]_b[_cons]+[#3]_b[ydiv1000]*4.099337

. scalar z4 = [#4]_b[_cons]+[#4]_b[ydiv1000]*4.099337

.
. scalar p2 = exp(z2)/(1+exp(z2)+exp(z3)+exp(z4))

. scalar p3 = exp(z3)/(1+exp(z2)+exp(z3)+exp(z4))

. scalar p4 = exp(z4)/(1+exp(z2)+exp(z3)+exp(z4))

. scalar p1= 1/(1+exp(z2)+exp(z3)+exp(z4))

.
. scalar list p1 p2 p3 p4
p1 = .11541492
p2 = .14472379
p3 = .35220365
p4 = .38765763
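The marginal effect of ydiv1000 on each probability follows the multinomial logit formula

dPj/dx = Pj*(bj - sum_k Pk*bk),   with b1 = 0 for the base outcome (beach).

The scalar ef2 below is this expression for the pier outcome written out term by term; the mfx command that follows confirms the same value.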
. scalar ef2 = p2*(p1*[#2]_b[ydiv1000]+p3*([#2]_b[ydiv1000]-[#3]_b[ydiv1000])+p4*([#2]_b[ydiv1000]-[#4]_b[ydiv1000]))

. dis ef2
-.02065982

. mfx , predict(outcome(2))

Marginal effects after mlogit


y = Pr(mode==pier) (predict, outcome(2))
= .14472379

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 -.0206598 .00487 -4.24 0.000 -.030212 -.011108 4.09934

. scalar ef3 = p3*(p1*[#3]_b[ydiv1000]+p2*([#3]_b[ydiv1000]-[#2]_b[ydiv1000])+p4*([#3]_b[ydiv1000]-[#4]_b[ydiv1000]))

. dis ef3
.03259851

.
. mfx , predict(outcome(3))

Marginal effects after mlogit


y = Pr(mode==private) (predict, outcome(3))
= .35220366

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .0325985 .00569 5.73 0.000 .021442 .043755 4.09934


. scalar ef4 = p4*(p1*[#4]_b[ydiv1000]+p2*([#4]_b[ydiv1000]-[#2]_b[ydiv1000])+p3*([#4]_b[ydiv1000]-[#3]_b[ydiv1000]))

. dis ef4
-.01201366

. mfx , predict(outcome(4))

Marginal effects after mlogit


y = Pr(mode==charter) (predict, outcome(4))
= .38765763

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 -.0120137 .00608 -1.98 0.048 -.023922 -.000106 4.09934

. scalar ef1 = -p1*(p2*[#2]_b[ydiv1000]+p3*[#3]_b[ydiv1000]+p4*[#4]_b[ydiv1000])

. dis ef1
.00007496

. mfx , predict(outcome(1))

Marginal effects after mlogit


y = Pr(mode==beach) (predict, outcome(1))
= .11541492

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .000075 .00393 0.02 0.985 -.007635 .007785 4.09934
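As a quick consistency check (not part of the original log), the marginal effects of a multinomial logit sum to zero across outcomes, because the probabilities sum to one:

. dis ef1 + ef2 + ef3 + ef4

With the values above, .00007496 - .02065982 + .03259851 - .01201366 is approximately zero (up to rounding).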


Price Effect
. mlogit mode price, b(1)

Iteration 0: log likelihood = -1497.7229


Iteration 1: log likelihood = -1416.6426
Iteration 2: log likelihood = -1413.6715
Iteration 3: log likelihood = -1413.6194
Iteration 4: log likelihood = -1413.6194

Multinomial logistic regression Number of obs = 1182


LR chi2(3) = 168.21
Prob > chi2 = 0.0000
Log likelihood = -1413.6194 Pseudo R2 = 0.0562

mode Coef. Std. Err. z P>|z| [95% Conf. Interval]

beach (base outcome)

pier
price -.0066313 .0040997 -1.62 0.106 -.0146666 .001404
_cons .502799 .1777871 2.83 0.005 .1543428 .8512553

private
price .0055643 .0032 1.74 0.082 -.0007076 .0118362
_cons .9234521 .1542538 5.99 0.000 .6211203 1.225784

charter
price .0194108 .0031263 6.21 0.000 .0132834 .0255382
_cons .245676 .1600961 1.53 0.125 -.0681067 .5594587
e11
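The scalar eep2 below is the elasticity of Pr(mode==pier) with respect to price at the mean price (52.08197): e2 = x*(b2 - sum_k Pk*bk). For this to reproduce the mfx, eyex result, the probability scalars p1-p4 must first be re-evaluated with the price model's coefficients, a step not shown in the log (the Pr(mode==pier) = .12935859 reported by mfx below indicates it was done). A minimal sketch of that intermediate step:

. scalar z2 = [#2]_b[_cons]+[#2]_b[price]*52.08197
. scalar z3 = [#3]_b[_cons]+[#3]_b[price]*52.08197
. scalar z4 = [#4]_b[_cons]+[#4]_b[price]*52.08197
. scalar p1 = 1/(1+exp(z2)+exp(z3)+exp(z4))
. scalar p2 = exp(z2)*p1
. scalar p3 = exp(z3)*p1
. scalar p4 = exp(z4)*p1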

. scalar eep2 = (p1*[#2]_b[price]+p3*([#2]_b[price]-[#3]_b[price])+p4*([#2]_b[price]-[#4]_b[price]))*52.08197

. scalar list eep2


eep2 = -.80100553

. mfx , predict(outcome(2)) eyex

Elasticities after mlogit


y = Pr(mode==pier) (predict, outcome(2))
= .12935859

variable ey/ex Std. Err. z P>|z| [ 95% C.I. ] X

price -.8010056 .14748 -5.43 0.000 -1.09007 -.511943 52.082

. sum price if mode==2

Variable Obs Mean Std. Dev. Min Max

price 178 30.57133 35.58442 1.29 224.296

.
. dis (p1+p3+p4)*[#2]_b[price]*30.57133
-.1765028
e12

. sum price if mode==4

Variable Obs Mean Std. Dev. Min Max

price 452 75.09694 52.51942 27.29 387.208

.
. dis -p4*[#4]_b[price]*75.09694
-.5660282
IIA
. mlogtest, smhsiao

**** Small-Hsiao tests of IIA assumption (N=1182)

Ho: Odds(Outcome-J vs Outcome-K) are independent of other alternatives.

Omitted lnL(full) lnL(omit) chi2 df P>chi2 evidence

pier -484.432 -483.160 2.544 4 0.637 for Ho


private -322.805 -319.474 6.662 4 0.155 for Ho
charter -357.935 -354.735 6.401 4 0.171 for Ho

. mlogtest, hausman

**** Hausman tests of IIA assumption (N=1182)

Ho: Odds(Outcome-J vs Outcome-K) are independent of other alternatives.

Omitted chi2 df P>chi2 evidence

pier 8.828 4 0.066 for Ho


private 38.397 4 0.000 against Ho
charter 357.091 4 0.000 against Ho
Ordered Logit

Professor: Juan Manuel Rivas Castillo


. ologit mode ydiv1000 price

Iteration 0: log likelihood = -1497.7229


Iteration 1: log likelihood = -1414.5184
Iteration 2: log likelihood = -1413.2373
Iteration 3: log likelihood = -1413.2358
Iteration 4: log likelihood = -1413.2358

Ordered logistic regression Number of obs = 1182


LR chi2(2) = 168.97
Prob > chi2 = 0.0000
Log likelihood = -1413.2358 Pseudo R2 = 0.0564

mode Coef. Std. Err. z P>|z| [95% Conf. Interval]

ydiv1000 -.0990102 .0229347 -4.32 0.000 -.1439615 -.0540589


price .0175381 .0015895 11.03 0.000 .0144226 .0206535

/cut1 -1.798865 .1301158 -2.053887 -1.543843


/cut2 -.7321992 .1156622 -.958893 -.5055054
/cut3 .9560611 .1185288 .7237489 1.188373

. sum ydiv1000 price

Variable Obs Mean Std. Dev. Min Max

ydiv1000 1182 4.099337 2.461964 .4166667 12.5


price 1182 52.08197 53.82997 1.29 666.11
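The hand computation below uses the ordered logit cumulative probabilities. With Z0 = x'b evaluated at the sample means,

Pr(mode <= j) = 1/(1 + exp(-(cutj - Z0))),

so p1 is the first cumulative probability, p2 and p3 are differences of successive cumulative probabilities, and p4 is the remainder 1 - p1 - p2 - p3.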
. scalar Z0 = _b[ydiv1000]*4.099337+_b[price]*52.08197

. scalar list Z0
Z0 = .50754202

. scalar p1 = 1/(1+exp(-_b[/cut1]+Z0))

. scalar list p1
p1 = .09059371

. mfx, predict(outcome(1))

Marginal effects after ologit


y = Pr(mode==1) (predict, outcome(1))
= .09059371

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .0081571 .00194 4.19 0.000 .004345 .011969 4.09934


price -.0014449 .00015 -9.95 0.000 -.00173 -.00116 52.082
. scalar p2 = 1/(1+exp(-_b[/cut2]+Z0))-p1

. scalar list p2
p2 = .13388732

. mfx, predict(outcome(2))

Marginal effects after ologit


y = Pr(mode==2) (predict, outcome(2))
= .13388731

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .0090795 .00215 4.22 0.000 .00486 .013299 4.09934


price -.0016083 .00017 -9.45 0.000 -.001942 -.001275 52.082

. scalar p3 = 1/(1+exp(-_b[/cut3]+Z0))-p2-p1

. scalar list p3
p3 = .38580603

. mfx, predict(outcome(3))

Marginal effects after ologit


y = Pr(mode==3) (predict, outcome(3))
= .38580603

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .0063116 .00179 3.52 0.000 .002797 .009826 4.09934


price -.001118 .00021 -5.23 0.000 -.001537 -.000699 52.082
. scalar p4 = 1-p2-p3-p1

. scalar list p4
p4 = .38971293

. mfx, predict(outcome(4))

Marginal effects after ologit


y = Pr(mode==4) (predict, outcome(4))
= .38971295

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 -.0235483 .00548 -4.30 0.000 -.034286 -.012811 4.09934


price .0041712 .00039 10.70 0.000 .003407 .004936 52.082
MFX
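The scalar mf1 below reproduces analytically the mfx marginal effect of price on Pr(mode==1). For the lowest outcome, dP1/dx = -b*lambda(cut1 - Z0), where lambda is the logistic density; since lambda(z) = Lambda(z)*(1 - Lambda(z)) and p1 = Lambda(cut1 - Z0), this equals -_b[price]*p1*(1-p1), which is exactly what the scalar computes.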
. scalar mf1 = -p1*exp(-_b[/cut1]+Z0)/(1+exp(-_b[/cut1]+Z0))*_b[price]

. scalar list mf1


mf1 = -.0014449

.
. mfx, predict(outcome(1))

Marginal effects after ologit


y = Pr(mode==1) (predict, outcome(1))
= .09059371

variable dy/dx Std. Err. z P>|z| [ 95% C.I. ] X

ydiv1000 .0081571 .00194 4.19 0.000 .004345 .011969 4.09934


price -.0014449 .00015 -9.95 0.000 -.00173 -.00116 52.082
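The same logic extends to the remaining outcomes, dPj/dx = b*[lambda(cut(j-1) - Z0) - lambda(cutj - Z0)]. A minimal sketch (not part of the original log; the scalar names L1-L3 and mf2-mf4 are illustrative) that reproduces the mfx values for outcomes 2, 3 and 4:

. scalar L1 = 1/(1+exp(-_b[/cut1]+Z0))
. scalar L2 = 1/(1+exp(-_b[/cut2]+Z0))
. scalar L3 = 1/(1+exp(-_b[/cut3]+Z0))
. scalar mf2 = _b[price]*(L1*(1-L1) - L2*(1-L2))
. scalar mf3 = _b[price]*(L2*(1-L2) - L3*(1-L3))
. scalar mf4 = _b[price]*L3*(1-L3)
. scalar list mf2 mf3 mf4

These should match the dy/dx values for price reported by mfx for outcomes 2, 3 and 4 above (-.0016083, -.001118 and .0041712).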
