Karanveer Mohan
Keegan Go
Stephen Boyd
EE103, Stanford University
Outline
Introduction
Linear operations
Least-squares
Prediction
Introduction
Melbourne temperature

[plot: daily temperature (0–25) over days 0–4000]
[plot: zoomed view, days 0–350]
log10 of Apple daily share price, over 30 years, 250 trading days/year
[plot: full series, trading days 0–9000]
[plot: zoomed view, trading days 6000–6250]
[plot: zoomed to 1 month]
Linear operations
Down-sampling

- can be written as y = Ax
- for 2× down-sampling, T even,

A = \begin{bmatrix}
1 & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & & & & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}
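A quick way to sanity-check this structure (a sketch, not part of the slides; `downsample_matrix` is my own name) is to build A in NumPy and compare y = Ax against simple slicing:

```python
import numpy as np

def downsample_matrix(T):
    """(T/2) x T selection matrix for 2x down-sampling (T even)."""
    A = np.zeros((T // 2, T))
    for i in range(T // 2):
        A[i, 2 * i] = 1.0   # row i picks out every other entry of x
    return A

T = 8
x = np.arange(1.0, T + 1)        # x = (1, 2, ..., 8)
A = downsample_matrix(T)
print(A @ x)                     # [1. 3. 5. 7.], same as x[0::2]
```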
[plot: original and 2× down-sampled series, samples 5100–5120]
Up-sampling

- can be written as y = Ax
- for 2× up-sampling,

A = \begin{bmatrix}
1 & & & \\
1/2 & 1/2 & & \\
 & 1 & & \\
 & 1/2 & 1/2 & \\
 & & \ddots & \\
 & & 1/2 & 1/2 \\
 & & & 1
\end{bmatrix}
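The matrix pattern above alternates rows that copy a sample with rows that average two neighbors, i.e., linear interpolation. As a sketch (my own construction and names, not from the slides):

```python
import numpy as np

def upsample_matrix(T):
    """(2T-1) x T matrix for 2x up-sampling by linear interpolation."""
    A = np.zeros((2 * T - 1, T))
    for i in range(T):
        A[2 * i, i] = 1.0            # even output rows copy the original samples
    for i in range(T - 1):
        A[2 * i + 1, i] = 0.5        # odd output rows average two neighbors
        A[2 * i + 1, i + 1] = 0.5
    return A

x = np.array([0.0, 2.0, 4.0])
print(upsample_matrix(3) @ x)        # [0. 1. 2. 3. 4.]
```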
[plot: original and 2× up-sampled series, samples 5100–5120]
Smoothing

- y_i = \frac{1}{k}\,(x_i + x_{i+1} + \cdots + x_{i+k-1}), \quad i = 1, \ldots, T - k + 1
- e.g., for k = 3,

A = \begin{bmatrix}
1/3 & 1/3 & 1/3 & & & \\
 & 1/3 & 1/3 & 1/3 & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & 1/3 & 1/3 & 1/3
\end{bmatrix}
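The moving-average matrix is easy to construct and check on a short vector (a sketch; `smoothing_matrix` is my own name):

```python
import numpy as np

def smoothing_matrix(T, k):
    """(T-k+1) x T moving-average matrix: y_i = (x_i + ... + x_{i+k-1}) / k."""
    A = np.zeros((T - k + 1, T))
    for i in range(T - k + 1):
        A[i, i:i + k] = 1.0 / k      # each row averages a length-k window
    return A

x = np.array([3.0, 0.0, 3.0, 6.0, 3.0])
print(smoothing_matrix(5, 3) @ x)    # [2. 3. 4.]
```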
[plot: temperature series and smoothed version, days 0–4000]
First-order differences

- y = Dx, with y_i = x_{i+1} − x_i

D = \begin{bmatrix}
-1 & 1 & & & \\
 & -1 & 1 & & \\
 & & \ddots & \ddots & \\
 & & & -1 & 1
\end{bmatrix}
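The difference matrix can be checked against NumPy's built-in differencing (a sketch; `diff_matrix` is my own name):

```python
import numpy as np

def diff_matrix(T):
    """(T-1) x T first-order difference matrix: (Dx)_i = x_{i+1} - x_i."""
    D = np.zeros((T - 1, T))
    for i in range(T - 1):
        D[i, i] = -1.0
        D[i, i + 1] = 1.0
    return D

x = np.array([1.0, 4.0, 9.0, 16.0])
print(diff_matrix(4) @ x)            # [3. 5. 7.], same as np.diff(x)
```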
Least-squares
De-meaning

- x̃ = x − avg(x)·1 is the de-meaned version of x
- rms(x̃) = std(x)
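The identity rms(x̃) = std(x) is easy to verify numerically (a sketch with a made-up vector):

```python
import numpy as np

def rms(v):
    """Root-mean-square value of a vector."""
    return np.sqrt(np.mean(v ** 2))

x = np.array([1.0, 2.0, 6.0, 3.0])
x_tilde = x - np.mean(x)             # de-meaned version of x
print(rms(x_tilde), np.std(x))       # the two values agree
```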
Trend

[plot: series and straight-line trend fit, days 0–9000]

Residual

[plot: de-trended residual, days 0–9000]
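A straight-line trend fit and its residual can be computed by least squares. This is a sketch on synthetic data; the series and all names are mine, not the slide's data:

```python
import numpy as np

T = 200
t = np.arange(T)
rng = np.random.default_rng(0)
x = 0.5 + 0.01 * t + 0.1 * rng.standard_normal(T)   # synthetic series with a trend

# least-squares fit of x_t ~ a + b t
A = np.column_stack([np.ones(T), t])
(a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
trend = a + b * t
residual = x - trend                                # the de-trended series
print(a, b)                                         # close to 0.5 and 0.01
```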
- A consists of k = T/P copies of the P × P identity matrix I_P, stacked vertically:

A = \begin{bmatrix} I_P \\ I_P \\ \vdots \\ I_P \end{bmatrix}, \qquad k = T/P
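Because A is a stack of identity matrices, the least-squares periodic fit reduces to averaging over the periods. A sketch with my own small example:

```python
import numpy as np

P, k = 4, 3                      # period and number of periods; T = k * P
A = np.vstack([np.eye(P)] * k)   # k copies of I_P stacked vertically

rng = np.random.default_rng(1)
x = np.tile([1.0, 2.0, 3.0, 4.0], k) + 0.1 * rng.standard_normal(k * P)

# least-squares fit of the P-periodic component
z, *_ = np.linalg.lstsq(A, x, rcond=None)
print(z)                                     # close to [1, 2, 3, 4]
print(x.reshape(k, P).mean(axis=0))          # identical: the per-phase averages
```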
[plot: temperature series and periodic fit, days 0–4000]
D = \begin{bmatrix}
-1 & 1 & & & \\
 & -1 & 1 & & \\
 & & \ddots & \ddots & \\
 & & & -1 & 1 \\
 1 & & & & -1
\end{bmatrix}
- split data into train and test sets, e.g., test set is last period (P entries)
[plot: RMS error (2.6–2.95) versus a parameter on a logarithmic horizontal axis]
[plot: series and fit, days 2900–3600]
Prediction
- K = 1 is one-step-ahead prediction
- x̂_{t+K} − x_{t+K} is the prediction error
- applications: predict
  - asset price
  - product demand
  - electricity usage
  - economic activity
  - position of vehicle
Simple predictors

- constant: x̂_{t+K} = a
- current value: x̂_{t+K} = x_t
- average to date: x̂_{t+K} = avg(x_{1:t})
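These baseline predictors are easy to compare on a synthetic series. A sketch (the data and names are mine; taking the constant a as the mean of the whole series is purely for illustration):

```python
import numpy as np

def rms(v):
    return np.sqrt(np.mean(v ** 2))

rng = np.random.default_rng(2)
x = 5.0 + np.cumsum(0.1 * rng.standard_normal(1000))   # synthetic series
K = 1                                                   # one-step-ahead

a = np.mean(x)                       # constant predictor (illustrative choice)
e_const, e_curr, e_avg = [], [], []
for t in range(len(x) - K):
    e_const.append(a - x[t + K])                        # constant
    e_curr.append(x[t] - x[t + K])                      # current value
    e_avg.append(np.mean(x[: t + 1]) - x[t + K])        # average to date

for name, e in [("constant", e_const), ("current", e_curr), ("avg to date", e_avg)]:
    print(name, rms(np.array(e)))
```

For a slowly drifting series like this one, the current-value predictor beats the constant one by a wide margin.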
Auto-regressive predictor

- x̂_{t+K} = (x_t, x_{t−1}, . . . , x_{t−M})^T β + v
- M is the memory length
- the (M + 1)-vector β gives the predictor weights; v is an offset
- the prediction x̂_{t+K} is an affine function of the past window x_{t−M:t}
- (which of the simple predictors above have this form?)
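The weights β and offset v can be fit by least squares over past windows. This is a sketch on synthetic data with my own variable names and an 80/20 train/test split; it does not reproduce the slides' data or exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(0.1 * rng.standard_normal(2000))   # synthetic series
M, K = 10, 1                                     # memory length, horizon

# one row per time t: the window (x_t, x_{t-1}, ..., x_{t-M}) plus a 1 for the offset v
rows, targets = [], []
for t in range(M, len(x) - K):
    rows.append(np.concatenate([x[t - M : t + 1][::-1], [1.0]]))
    targets.append(x[t + K])
A, b = np.array(rows), np.array(targets)

n_train = int(0.8 * len(b))                      # train on the first 80%
theta, *_ = np.linalg.lstsq(A[:n_train], b[:n_train], rcond=None)
beta, v = theta[:-1], theta[-1]                  # predictor weights and offset

pred = A[n_train:] @ theta                       # test-set predictions
err = pred - b[n_train:]
print("test RMS error:", np.sqrt(np.mean(err ** 2)))
```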
Example
Coefficients

- using M = 100
- β_0 is the coefficient for today

[plot: coefficients β_i versus i, for i = 0, . . . , 100]
[plots: predictions and prediction errors, horizontal axis 3.0–3.5 × 10^4]
predictor                   RMS error
average (constant)          1.20
current value               0.119
auto-regressive (M = 10)    0.073
auto-regressive (M = 100)   0.051
Example

[plots: series, predictions, and prediction errors, samples 3000–3100]
RMS error (per predictor): 4.12, 2.57, 2.71, 2.62, 2.44, 2.27, 2.22