17/02/2010

Notation

y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \qquad (1)

column vector containing the n sample observations on the dependent variable y.

x_k = \begin{pmatrix} x_{1k} \\ x_{2k} \\ \vdots \\ x_{nk} \end{pmatrix} \qquad (2)

column vector containing the n sample observations on the independent variable x_k, with k = 1, 2, \ldots, K.

X = \begin{pmatrix} x_1 & x_2 & \cdots & x_K \end{pmatrix} \qquad (3)

n \times K data matrix containing the n sample observations on the K independent variables. Usually the vector x_1 is assumed to be a column of 1s (constant).

Assumption 1: linearity

Observed data are generated by the following linear model:

y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \beta_K x_{iK} + \varepsilon_i = x_i'\beta + \varepsilon_i, \qquad i = 1, 2, \ldots, n. \qquad (5)

The K unknown parameters of the model can be collected in a column vector, \beta = (\beta_1, \beta_2, \ldots, \beta_K)', and the model can be rewritten in compact form:

y = x_1\beta_1 + x_2\beta_2 + \ldots + x_K\beta_K + \varepsilon \qquad (6)

y = X\beta + \varepsilon \qquad (7)
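To make the notation concrete, here is a minimal NumPy sketch that simulates a sample from the model y = X\beta + \varepsilon. The sample size, number of regressors and coefficient values are illustrative assumptions, not taken from the notes.

    import numpy as np

    rng = np.random.default_rng(0)

    n, K = 200, 3                      # illustrative sample size and number of regressors
    beta = np.array([1.0, 0.5, -2.0])  # hypothetical "true" coefficients

    # Data matrix X: first column is a column of 1s (constant),
    # the remaining columns are draws from a standard normal.
    X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])

    eps = rng.standard_normal(n)       # disturbances with zero conditional mean and unit variance
    y = X @ beta + eps                 # observed dependent variable

The later snippets in these notes reuse X, y, beta, n and K from this simulation.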

The expected value of each disturbance, \varepsilon_i, conditional on all observations is zero:

E(\varepsilon_i \mid X) = 0, \qquad i = 1, 2, \ldots, n \qquad (8)

In compact form:

E(\varepsilon \mid X) = 0 \qquad (9)

First implication of strict exogeneity: the unconditional mean is also zero. In fact, by the Law of Total Expectations:

E(\varepsilon_i) = E_X[E(\varepsilon_i \mid X)] = 0, \qquad i = 1, 2, \ldots, n \qquad (10)

Second implication of strict exogeneity: the regressors are orthogonal to the error term for all observations:

E(\varepsilon_i x_{jk}) = 0, \qquad i, j = 1, 2, \ldots, n; \quad k = 1, 2, \ldots, K \qquad (11)

Third implication of strict exogeneity: the orthogonality conditions are equivalent to zero-correlation conditions:

Cov(x_{jk}, \varepsilon_i) = E(x_{jk}\varepsilon_i) - E(x_{jk})E(\varepsilon_i) = 0 \qquad (12)

The rank of the n \times K matrix X is K with probability 1. This implies that X has full column rank: the columns of X are linearly independent and there are at least K observations (n \geq K).

Homoskedasticity assumption: the conditional second moment of each disturbance, \varepsilon_i, is constant:

E(\varepsilon_i^2 \mid X) = \sigma^2, \qquad i = 1, \ldots, n \qquad (13)

No-correlation assumption: the conditional second cross-moment between \varepsilon_i and \varepsilon_j is zero for all i \neq j:

E(\varepsilon_i \varepsilon_j \mid X) = 0 \quad \text{for } i \neq j \qquad (14)

Since:

Var(\varepsilon_i \mid X) = E(\varepsilon_i^2 \mid X) - [E(\varepsilon_i \mid X)]^2 = E(\varepsilon_i^2 \mid X) = \sigma^2

and

Cov(\varepsilon_i, \varepsilon_j \mid X) = E(\varepsilon_i \varepsilon_j \mid X) - E(\varepsilon_i \mid X)E(\varepsilon_j \mid X) = E(\varepsilon_i \varepsilon_j \mid X) = 0

the two assumptions can be written as:

E(\varepsilon\varepsilon' \mid X) = Var(\varepsilon \mid X) = \sigma^2 I

where I is an n \times n identity matrix.

The parameters \beta of the linear regression model, and the common variance of the error terms, \sigma^2, are unknown quantities. By using available sample data, estimation methods provide estimates of these unknown quantities. Although we do not observe the vector of disturbances, \varepsilon = y - X\beta, we can compute the vector of residuals implied by any hypothetical value \tilde{\beta} of \beta:

\tilde{\varepsilon} = y - X\tilde{\beta} \qquad (18)

Fitting criterion: the least squares method chooses the value of \tilde{\beta} which minimizes the sum of squared residuals.

Given an arbitrary choice \tilde{\beta} for the coefficient vector, the minimization problem is to choose \tilde{\beta} to minimize the function S(\tilde{\beta}), where:

S(\tilde{\beta}) = (y - X\tilde{\beta})'(y - X\tilde{\beta}) \qquad (19)

Note that

S(\tilde{\beta}) = \sum_i (y_i - x_i'\tilde{\beta})^2 = \sum_i \tilde{\varepsilon}_i^2 \qquad (20)

The set of first order conditions is:

\frac{\partial S(\tilde{\beta})}{\partial \tilde{\beta}} = -2X'y + 2X'X\tilde{\beta} = 0 \qquad (21)
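As a quick numerical check of (19)–(21), the sketch below (continuing the simulated X and y from the earlier snippet) evaluates S(\tilde{\beta}) for an arbitrary \tilde{\beta} and compares the analytical gradient -2X'y + 2X'X\tilde{\beta} with a finite-difference approximation; the particular \tilde{\beta} used is just an illustrative guess.

    # Continuing with X, y and K from the simulation sketch above.
    def S(b_tilde):
        # Sum of squared residuals S(beta~) = (y - X beta~)'(y - X beta~)
        u = y - X @ b_tilde
        return u @ u

    b_tilde = np.zeros(K)                          # arbitrary hypothetical coefficient vector
    grad_analytic = -2 * X.T @ y + 2 * X.T @ X @ b_tilde

    # Central finite-difference approximation of the gradient, one coordinate at a time.
    h = 1e-6
    grad_numeric = np.array([
        (S(b_tilde + h * np.eye(K)[k]) - S(b_tilde - h * np.eye(K)[k])) / (2 * h)
        for k in range(K)
    ])
    print(np.allclose(grad_analytic, grad_numeric, rtol=1e-4))   # True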

Let b be the solution. Then b satisfies the least squares normal equations:

X'Xb = X'y \qquad (22)

If the inverse of X'X exists (which follows from the full column rank assumption), the solution is:

b = (X'X)^{-1}X'y

and the least squares residuals can be written as:

e = y - Xb

For this solution to minimize the sum of squares,

\frac{\partial^2 S(b)}{\partial b \, \partial b'} = 2X'X

must be a positive definite matrix (which holds because X has full column rank).
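A minimal sketch of the closed-form solution, again on the simulated X and y; in practice one would rely on np.linalg.lstsq or a regression library, but solving the normal equations directly mirrors the formula b = (X'X)^{-1}X'y.

    # OLS coefficients from the normal equations X'X b = X'y.
    b = np.linalg.solve(X.T @ X, X.T @ y)

    # The same solution via a numerically more stable least-squares routine.
    b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.allclose(b, b_lstsq))   # True

    e = y - X @ b                    # least squares residuals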

The normal equations X'Xb = X'y imply that

X'(y - Xb) = X'e = 0

Therefore, for every column x_k of X,

x_k'e = 0

and, if the first column of X is a column of 1s,

x_1'e = \sum_i e_i = 0

In words, the least squares residuals sum to zero.
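These implications can be verified numerically on the simulated data: X'e should be zero up to rounding error, and since the first column of X is a column of 1s, the residuals should sum to zero.

    print(np.allclose(X.T @ e, 0.0, atol=1e-8))   # X'e = 0 (up to rounding)
    print(abs(e.sum()) < 1e-8)                    # residuals sum to zero (constant included)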

b = (X'X)^{-1}X'y can be rewritten as:

b = \left(\frac{1}{n}X'X\right)^{-1}\frac{1}{n}X'y = S_{XX}^{-1}s_{XY} \qquad (30)

where:

S_{XX} = \frac{1}{n}X'X = \frac{1}{n}\sum_i x_i x_i' \qquad (31)

s_{XY} = \frac{1}{n}X'y = \frac{1}{n}\sum_i x_i y_i \qquad (32)

Intuition: S_{XX} and s_{XY} can be thought of as sample averages of x_i x_i' and x_i y_i respectively. This form is utilized in large sample theory.
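A short sketch of the sample-average form (30)–(32) on the simulated data; it simply confirms that S_{XX}^{-1} s_{XY} reproduces b.

    S_XX = (X.T @ X) / n          # sample average of x_i x_i'
    s_XY = (X.T @ y) / n          # sample average of x_i y_i
    b_avg_form = np.linalg.solve(S_XX, s_XY)
    print(np.allclose(b_avg_form, b))   # True: identical to (X'X)^{-1} X'y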

Vector of fitted values:

\hat{y} = Xb \qquad (33)

Vector of residuals:

e = y - Xb = y - \hat{y} \qquad (34)

Projection matrix:

P = X(X'X)^{-1}X' \qquad (35)

Annihilator matrix:

M = I - P \qquad (36)

Both P and M are n \times n, symmetric and idempotent:

PX = X, \qquad MX = 0 \qquad (37)

Sum of Squared Residuals (RSS):

RSS = e'e = \varepsilon'M\varepsilon

In fact

e = y - Xb = y - X(X'X)^{-1}X'y = My = M(X\beta + \varepsilon) = MX\beta + M\varepsilon = M\varepsilon

and, after squaring both terms, we obtain e'e = \varepsilon'M\varepsilon.
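The properties (35)–(37) and the identity RSS = e'e can be checked directly on the simulated sample; forming the n \times n matrices explicitly is fine for a small example, though it is never done this way in production code.

    P = X @ np.linalg.solve(X.T @ X, X.T)   # projection matrix X (X'X)^{-1} X'
    M = np.eye(n) - P                       # annihilator matrix

    print(np.allclose(P, P.T), np.allclose(M, M.T))        # both symmetric
    print(np.allclose(P @ P, P), np.allclose(M @ M, M))    # both idempotent
    print(np.allclose(P @ X, X), np.allclose(M @ X, 0.0))  # PX = X, MX = 0

    rss = e @ e
    print(np.allclose(rss, y @ M @ y))      # e'e = y'My, since My = e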

Estimate of the Variance of the Error Term: the OLS estimate of \sigma^2, denoted s^2, is the sum of squared residuals divided by n - K:

s^2 = \frac{e'e}{n - K} \qquad (41)

Standard Error of the Regression (SER): the square root of s^2, s, is called the standard error of the regression.
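Continuing the example, the variance estimate (41) and the standard error of the regression are one line each:

    s2 = (e @ e) / (n - K)     # OLS estimate of sigma^2
    ser = np.sqrt(s2)          # standard error of the regression
    print(s2, ser)             # close to the unit error variance used in the simulation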

Sampling Error:

b - \beta = (X'X)^{-1}X'y - \beta = (X'X)^{-1}X'(X\beta + \varepsilon) - \beta = \beta + (X'X)^{-1}X'\varepsilon - \beta = (X'X)^{-1}X'\varepsilon \qquad (42)
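Equation (42) says that the sampling error b - \beta equals (X'X)^{-1}X'\varepsilon, so under strict exogeneity its mean is zero. A small Monte Carlo sketch (keeping X fixed and redrawing \varepsilon, with a purely illustrative number of replications) makes this concrete:

    # Monte Carlo check of (42): the average sampling error should be close to zero.
    n_reps = 2000
    errors = np.empty((n_reps, K))
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)   # (X'X)^{-1} X', fixed across replications
    for r in range(n_reps):
        eps_r = rng.standard_normal(n)
        y_r = X @ beta + eps_r
        b_r = XtX_inv_Xt @ y_r
        errors[r] = b_r - beta                   # equals (X'X)^{-1} X' eps_r
    print(errors.mean(axis=0))                   # each entry close to 0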

The least squares method chooses the vector of coefficients to minimize the sum of squared residuals. However, the fact that this sum is minimized does not tell us whether it is big or small. To circumvent this limitation it is necessary to compare the sum of squared residuals with an appropriate benchmark. This benchmark is given by the sum of squared deviations of the dependent variable from its sample mean (TSS, Total Sum of Squares).

Starting from

y - \bar{y}\,\iota = M^0 y \qquad (43)

(where \iota is a column of 1s) we obtain

\sum_i (y_i - \bar{y})^2 = (M^0 y)'(M^0 y) = y'M^{0\prime}M^0 y = y'M^0 y \qquad (44)

where M^0 is an n \times n symmetric idempotent matrix that transforms observations into deviations from sample means. Its diagonal elements are all 1 - \frac{1}{n} and its off-diagonal elements are -\frac{1}{n}.

Derivation of the coefficient of determination. Starting from

y = \hat{y} + e

after subtracting \bar{y} from both sides, we obtain

y - \bar{y}\,\iota = \hat{y} - \bar{y}\,\iota + e

that can be rewritten as

M^0 y = M^0 \hat{y} + M^0 e = M^0 Xb + e

(since M^0 e = e when the regression includes a constant). The total sum of squares is

(M^0 y)'(M^0 y) = (M^0 Xb + e)'(M^0 Xb + e)

which, because the cross-product term vanishes (X'e = 0), can be rewritten as:

y'M^0 y = b'X'M^0 Xb + e'e

The total sum of squares (TSS) can be decomposed into two parts, measuring respectively the proportion of the TSS that is accounted for by variation in the regressors (ESS) and the proportion that is not (RSS):

TSS = ESS + RSS \qquad (50)

The standard measure of the goodness of fit of a regression is simply the ratio between ESS and TSS. This measure, called the coefficient of determination, R^2, is bounded between zero and one:

R^2 = \frac{ESS}{TSS} = \frac{b'X'M^0 Xb}{y'M^0 y} = 1 - \frac{e'e}{y'M^0 y} \qquad (51)

Problems with the use of the coefficient of determination: it never decreases when an additional explanatory variable is added to the regression. For this reason alternative measures have been implemented, including the so-called adjusted R^2 (adjusted for the degrees of freedom), which is computed as follows:

\bar{R}^2 = 1 - \frac{e'e/(n - K)}{y'M^0 y/(n - 1)} \qquad (52)

The adjusted R^2 can decrease if the contribution of the additional variable to the fit of the regression is relatively low. If the constant term is not included in the model, the coefficient of determination is not bounded between zero and one and can indeed turn negative.
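Finally, the goodness-of-fit measures (50)–(52) can be computed from the same simulated regression; the decomposition below assumes the constant term is included, as in the notes.

    y_dev = y - y.mean()                 # M0 y: deviations from the sample mean
    tss = y_dev @ y_dev                  # total sum of squares
    rss = e @ e                          # residual sum of squares
    ess = tss - rss                      # explained sum of squares

    r2 = ess / tss                                   # coefficient of determination, (51)
    adj_r2 = 1 - (rss / (n - K)) / (tss / (n - 1))   # adjusted R^2, (52)
    print(r2, adj_r2)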
