Solutions 4 (STAT673)
(3.1) Recall the AR(2) models considered in Exercise (2.4). Now we want to derive their
ACFs.
(i) (a) Obtain the ACF corresponding to
Xt = (7/3)Xt−1 − (2/3)Xt−2 + εt,
where {εt } are iid random variables with mean zero and variance σ 2 .
The roots are 3 and 1/2. This means the solution is non-causal.
However the autocovariance function will be the same as that of the corresponding causal AR process with characteristic function φ(z) = (1 − z/2)(1 − z/3) = 1 − 5z/6 + z²/6, i.e. Yt = (5/6)Yt−1 − (1/6)Yt−2 + εt, which has the ACF (see equation (3.6) in the notes)

c(k) = C1 (1/2)^k + C2 (1/3)^k.
In order to find the constants C1 and C2 we can consider the variance of Xt and the covariance at lag one using the solution

Xt = −(6/5) Σ_{j=0}^∞ (1/2^{j+1}) εt+j+1 − (1/5) Σ_{j=0}^∞ (1/3^j) εt−j,

and then solve for C1 and C2.
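A small numerical sketch of this last step (my own illustration, not from the notes; it assumes σ² = 1 and truncates the infinite sums at J terms): compute c(0) and c(1) from the two-sided moving-average representation above, then solve a 2 × 2 linear system for C1 and C2.

## Sketch: recover C1 and C2 in c(k) = C1 (1/2)^k + C2 (1/3)^k numerically.
sigma2 <- 1
J <- 200
psi_future <- -(6/5) * (1/2)^(1:J)      # coefficients of eps_{t+j}, j = 1, ..., J
psi_past   <- -(1/5) * (1/3)^(0:J)      # coefficients of eps_{t-j}, j = 0, ..., J
psi <- c(rev(psi_future), psi_past)     # lags k = -J, ..., 0, ..., J
c0 <- sigma2 * sum(psi^2)                        # c(0) = var(X_t)
c1 <- sigma2 * sum(psi[-1] * psi[-length(psi)])  # c(1) = cov(X_t, X_{t+1})
A <- rbind(c(1, 1), c(1/2, 1/3))        # c(0) = C1 + C2, c(1) = C1/2 + C2/3
solve(A, c(c0, c1))                     # the constants C1 and C2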
(b) Obtain the ACF corresponding to
Xt = (4√3/5) Xt−1 − (4²/5²) Xt−2 + εt,
where {εt } are iid random variables with mean zero and variance σ 2 .
The roots are (5/4) exp(±iπ/6), hence the solution has the form

c(k) = C1 (4/5)^k exp(iπk/6) + C̄1 (4/5)^k exp(−iπk/6) = a (4/5)^k cos(πk/6 + b).   (1)
To find C1 we solve a system of equations. First,

E[Xt²] = (4√3/5) E[Xt−1 Xt] − (4²/5²) E[Xt−2 Xt] + E[εt Xt],

which implies

c(0) = (4√3/5) c(1) − (4²/5²) c(2) + σ².   (2)
Similarly we premultiply by Xt−1 and Xt−2, take expectations and use causality to give

c(1) = (4√3/5) c(0) − (4²/5²) c(1)
c(2) = (4√3/5) c(1) − (4²/5²) c(0).

We substitute (1) into the above three equations and solve for C1 and σ².
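As a numerical check (my own sketch, not from the notes; it takes σ² = 1), the three equations above can be solved as a linear system in c(0), c(1), c(2), and the implied autocorrelations compared with ARMAacf:

## Sketch: solve the Yule-Walker-type equations for c(0), c(1), c(2) with sigma^2 = 1.
phi1 <- 4 * sqrt(3) / 5
phi2 <- -(4/5)^2
A <- rbind(c(1,     -phi1,    -phi2),    # c(0) = phi1 c(1) + phi2 c(2) + sigma^2
           c(-phi1,  1 - phi2, 0),       # c(1) = phi1 c(0) + phi2 c(1)
           c(-phi2, -phi1,     1))       # c(2) = phi1 c(1) + phi2 c(0)
cc <- solve(A, c(1, 0, 0))
cc[2:3] / cc[1]                          # autocorrelations at lags 1 and 2
ARMAacf(ar = c(phi1, phi2), lag.max = 2) # agrees, since this model is causal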
(c) Obtain the ACF corresponding to
Xt = Xt−1 − 4Xt−2 + εt ,
where {εt } are iid random variables with mean zero and variance σ 2 .
The characteristic polynomial is φ(z) = 1 − z + 4z² = (1 − 2e^{−iθ}z)(1 − 2e^{iθ}z), where θ = tan^{−1}(√15), so the roots (1/2)e^{±iθ} lie inside the unit circle. This is non-causal, thus we find the ACF of the causal process with characteristic function (1 − 0.5e^{−iθ}z)(1 − 0.5e^{iθ}z).
(ii) For all these models plot the true ACF in R. You will need to use the function
ARMAacf. BEWARE of the ACF it gives for non-causal solutions. Find a method
of plotting a causal solution in the non-causal case.
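One possible method is sketched below (my own suggestion, not necessarily the intended one): flip any characteristic root lying inside the unit circle to its reciprocal and pass the resulting causal AR coefficients to ARMAacf. For model (a) the flipped roots are 2 and 3, giving coefficients (5/6, −1/6); for model (c) they are 2exp(±iθ), giving coefficients (cos θ, −1/4).

## Sketch: plot the ACF of the causal counterparts of the non-causal models.
## Model (a): causal coefficients (5/6, -1/6) from roots 2 and 3.
acf_a <- ARMAacf(ar = c(5/6, -1/6), lag.max = 30)
plot(0:30, acf_a, type = "h", xlab = "lag", ylab = "ACF",
     main = "Causal counterpart of (a)")
## Model (c): roots 2*exp(+/- i*theta) give coefficients (cos(theta), -1/4).
theta <- atan(sqrt(15))
acf_c <- ARMAacf(ar = c(cos(theta), -1/4), lag.max = 30)
plot(0:30, acf_c, type = "h", xlab = "lag", ylab = "ACF",
     main = "Causal counterpart of (c)")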
(3.2) In Exercise 2.5 you constructed a causal AR(2) process with period 17.
Load Shumway and Stoffer's package astsa into R (use the command install.packages("astsa")
and then library("astsa")).
Use the command arma.spec to make a plot of the corresponding spectral density
function. How does your periodogram compare with the ‘true’ spectral density function?
To summarize Solution 2.5 (in HW3), we simulate the model

Xt = 2r cos(2π/17) Xt−1 − r² Xt−2 + εt

for r = 0.8. The periodogram of one realisation together with the true spectral density is given in Figure 1. We observe that the periodogram is very rough. This is because the expectation of the periodogram is close to the spectral density, however its variance will never converge to zero (it is known as an inconsistent estimator of the spectral density function).
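A sketch of the computations behind Figure 1 (an assumed reconstruction; the seed, the sample size n = 256 and the raw-periodogram recipe are my own choices, not taken from the notes):

## Sketch: simulate the AR(2) with period 17, then plot the periodogram and the
## true spectral density via astsa::arma.spec.
library(astsa)
set.seed(1)
r   <- 0.8
phi <- c(2 * r * cos(2 * pi / 17), -r^2)
x   <- arima.sim(n = 256, model = list(ar = phi))
par(mfrow = c(2, 1))
n    <- length(x)
freq <- 2 * pi * (0:(n - 1)) / n
P    <- Mod(fft(x))^2 / n               # raw periodogram on [0, 2*pi)
plot(freq, P, type = "o", xlab = "frequency", ylab = "Periodogram")
arma.spec(ar = phi)                     # true spectral density on [0, 0.5]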
[Figure 1 here: top panel, the periodogram of one realisation (y-axis "Periodogram", x-axis "frequency" running from 0 to 2π); bottom panel, the true spectral density from arma.spec (title "Autoregressive", y-axis "spectrum", x-axis "frequency" running from 0 to 0.5).]
Figure 1: The periodogram and the true spectral density (remember to multiply the x-axis of the true spectral density by 2π to get the x-axis of the periodogram).
(3.3) By using the decomposition

var(X) = Σ = [ Σ1    c1
               c1′   Σ−(1) ],   (3)

where Σ1 = var(X1), c1 = E[X1 X′−1] and Σ−(1) = var[X−1], prove that

Xi = Σ_{j≠i} βi,j Xj + εi,

where {βi,j} are the coefficients of the best linear predictor, with

βi,j = −Σ^{ij}/Σ^{ii}   and   Σ^{ii} = 1/E[Xi − Σ_{j≠i} βi,j Xj]²

(here Σ^{ij} denotes the (i, j)th entry of Σ^{−1}).
By using the inverse of block matrices we have

Σ^{−1} = [ P^{−1}                      −P^{−1} c1 Σ−(1)^{−1}
           −Σ−(1)^{−1} c1′ P^{−1}      Σ−(1)^{−1} + Σ−(1)^{−1} c1′ P^{−1} c1 Σ−(1)^{−1} ],   (4)

where

P = var(X1) − E[X1 X′−1] var(X−1)^{−1} E[X1 X−1].   (5)
We know that the vector of coefficients of the best linear predictor of X1 given X−1 is β = Σ−(1)^{−1} c1′, therefore the mean squared error is var(X1) − E[X1 X′−1] var(X−1)^{−1} E[X1 X−1] = P. Comparing this with the top row of Σ^{−1} we see that Σ^{11} = P^{−1}, where P is the mean squared error, and that the rest of the row is

−P^{−1} c1 Σ−(1)^{−1} = −P^{−1} β′,

which gives the desired result.
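A quick numerical illustration of the result (my own sketch on an arbitrary covariance matrix; it is not part of the proof): the negated, rescaled entries of a row of Σ^{−1} coincide with the best linear predictor coefficients, and 1/Σ^{ii} is the prediction mean squared error.

## Sketch: check beta_{i,j} = -Sigma^{ij}/Sigma^{ii} against the direct regression
## coefficients var(X_{-i})^{-1} cov(X_{-i}, X_i).
Sigma <- toeplitz(0.7^(0:3))            # an arbitrary 4x4 covariance matrix
K <- solve(Sigma)                       # precision matrix, entries Sigma^{ij}
i <- 1
beta_from_inverse <- -K[i, -i] / K[i, i]
beta_direct <- solve(Sigma[-i, -i], Sigma[-i, i])
cbind(beta_from_inverse, beta_direct)   # the two columns agree
1 / K[i, i]                             # the mean squared prediction error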
(3.4) Let φt,t denote the partial correlation between Xt+1 and X1 . It is well known (this is
the Levinson-Durbin algorithm, which we cover in Chapter 5) that φt,t can be deduced
recursively from the autocovariance function using the algorithm
Step 1 φ1,1 = c(1)/c(0) and r(2) = E[X2 − X2|1]² = E[X2 − φ1,1 X1]² = c(0) − φ1,1 c(1).
Step 2 For j = t,

φt,t = (c(t) − Σ_{j=1}^{t−1} φt−1,j c(t − j)) / r(t),

for 1 ≤ j ≤ t − 1,

φt,j = φt−1,j − φt,t φt−1,t−j,

and r(t + 1) = r(t)(1 − φt,t²).
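For concreteness, here is a direct implementation sketch of the recursion above (my own code, written only to make the steps explicit; the argument cvec holds the autocovariances c(0), c(1), . . . and the function returns the partial correlations φ1,1, . . . , φT,T for T = t_max).

## Sketch: the Durbin-Levinson recursion exactly as stated above.
durbin_levinson <- function(cvec, t_max) {
  phi <- matrix(0, t_max, t_max)        # phi[t, j] stores phi_{t,j}
  r <- numeric(t_max + 1)               # r[t] stores r(t)
  phi[1, 1] <- cvec[2] / cvec[1]        # Step 1: phi_{1,1} = c(1)/c(0)
  r[2] <- cvec[1] - phi[1, 1] * cvec[2] # r(2) = c(0) - phi_{1,1} c(1)
  for (t in 2:t_max) {
    phi[t, t] <- (cvec[t + 1] - sum(phi[t - 1, 1:(t - 1)] * cvec[t:2])) / r[t]
    phi[t, 1:(t - 1)] <- phi[t - 1, 1:(t - 1)] - phi[t, t] * phi[t - 1, (t - 1):1]
    r[t + 1] <- r[t] * (1 - phi[t, t]^2)
  }
  diag(phi)                             # the partial autocorrelations phi_{t,t}
}
## Example: MA(1) with theta = 0.5, so c(0) = 1.25, c(1) = 0.5 and c(r) = 0 otherwise.
durbin_levinson(c(1.25, 0.5, rep(0, 9)), 10)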
(i) Using this algorithm show that the PACF of the MA(1) process Xt = εt + θεt−1 ,
where |θ| < 1 (so it is invertible) is
φt,t = (−1)^{t+1} θ^t (1 − θ²) / (1 − θ^{2(t+1)}).
Since the process is an MA(1) the autocovariance is c(0) = 1 + θ², c(1) = θ and c(r) = 0 for r ≥ 2. Substituting this into the initial values given above we have

φ1,1 = θ/(1 + θ²)
r(2) = (1 + θ²) − θ²/(1 + θ²) = (1 + θ² + θ⁴)/(1 + θ²) = (1 − θ⁶)/(1 − θ⁴).
Using the Durbin-Levinson scheme we have the recursion

φt+1,t+1 = −θ φt,t / r(t + 1),   r(t + 1) = r(t)(1 − φt,t²).
We prove the result by induction. We see that it holds for the case t = 1 (see (6) below). Let us assume that

φj,j = (−1)^{j+1} θ^j (1 − θ²) / (1 − θ^{2(j+1)})   (6)

is true for j = 1, . . . , t. We now prove the result for t + 1. If the above formula is true, then for 1 ≤ j ≤ t we have

1 − φj,j² = 1 − θ^{2j}(1 − θ²)² / (1 − θ^{2(j+1)})²
          = [(1 − θ^{2(j+1)})² − θ^{2j}(1 − θ²)²] / (1 − θ^{2(j+1)})²
          = [1 − 2θ^{2(j+1)} + θ^{4(j+1)} − θ^{2j} + 2θ^{2j+2} − θ^{2j+4}] / (1 − θ^{2(j+1)})²
          = (1 − θ^{2(j+2)})(1 − θ^{2j}) / (1 − θ^{2(j+1)})².
This implies that

r(t + 1) = c(0) Π_{j=1}^{t} (1 − φj,j²) = (1 − θ^{2(t+2)}) / (1 − θ^{2(t+1)}).   (7)

Thus we have an expression for r(t + 1) based on our inductive hypothesis.
Now we use (6) and (7) to derive an expression for φt+1,t+1. Using (6) we have

φt+1,t+1 = −θ φt,t / r(t + 1)
         = (−θ) · [(−1)^{t+1} θ^t (1 − θ²) / (1 − θ^{2(t+1)})] · [(1 − θ^{2(t+1)}) / (1 − θ^{2(t+2)})]
         = (−1)^{t+2} θ^{t+1} (1 − θ²) / (1 − θ^{2(t+2)}),

thus we have proved that the expression for φt+1,t+1 holds, which proves the result for all t ∈ Z+.
We observe that

E[Xt+1 − P_{X1,...,Xt}(Xt+1)]² = r(t + 1) = (1 − θ^{2(t+2)}) / (1 − θ^{2(t+1)}),

thus as t grows (the number of predictors in the prediction) the mean squared error gets closer to one (which we will show in Chapter 5 is the mean squared error of the best linear predictor of Xt+1 given the infinite past).
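A numerical check of the closed-form PACF (not required by the exercise; θ = 0.5 below is an arbitrary choice): compare it with the values returned by ARMAacf with pacf = TRUE.

## Sketch: closed-form PACF of the MA(1) against R's numerical values.
theta <- 0.5
tt <- 1:10
closed_form  <- (-1)^(tt + 1) * theta^tt * (1 - theta^2) / (1 - theta^(2 * (tt + 1)))
numeric_pacf <- ARMAacf(ma = theta, lag.max = 10, pacf = TRUE)
round(cbind(closed_form, numeric_pacf), 6)   # the two columns coincide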
(ii) Explain how this partial correlation is similar to the ACF of the AR(1) process
Xt = −θXt−1 + εt .
The partial correlation of the MA process is (−1)^{t+1} θ^t (1 − θ²) / (1 − θ^{2(t+1)}). Thus for large t the partial correlation is approximately (−1)^{t+1} θ^t (1 − θ²). Comparing this to the ACF of the AR(1) process, which is ρ(t) = (−θ)^t, we see that they are asymptotically the same up to a multiplicative constant. In other words, the partial correlation of an MA process is very similar to the autocorrelation of the corresponding AR process.
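A short sketch illustrating this comparison (again with the arbitrary choice θ = 0.5): the ratio of the MA(1) PACF to the AR(1) ACF settles down to the constant −(1 − θ²).

## Sketch: PACF of X_t = eps_t + theta*eps_{t-1} versus ACF of X_t = -theta*X_{t-1} + eps_t.
theta <- 0.5
pacf_ma <- ARMAacf(ma = theta, lag.max = 10, pacf = TRUE)
acf_ar  <- ARMAacf(ar = -theta, lag.max = 10)[-1]           # drop lag 0
round(cbind(pacf_ma, acf_ar, ratio = pacf_ma / acf_ar), 4)  # ratio tends to -(1 - theta^2)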
(3.5) Compare the plots below:
(i) Compare the ACF and PACF of the AR(2) model Xt = 1.5Xt−1 − 0.75Xt−2 + εt using ARMAacf(ar=c(1.5,-0.75),ma=0,30) and ARMAacf(ar=c(1.5,-0.75),ma=0,pacf=T,30).
(ii) Compare the ACF and PACF of the MA(1) model Xt = εt − 0.5εt−1 using ARMAacf(ar=0,ma=c(-1.5),30) and ARMAacf(ar=0,ma=c(-1.5),pacf=T,30).
(iii) Compare the ACF and PACF of the ARMA(2, 1) model Xt − 1.5Xt−1 + 0.75Xt−2 = εt − 0.5εt−1 using ARMAacf(ar=c(1.5,-0.75),ma=c(-1.5),30) and ARMAacf(ar=c(1.5,-0.75),ma=c(-1.5),pacf=T,30).
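A plotting sketch for part (i) (my own layout; parts (ii) and (iii) follow the same pattern with the coefficients changed):

## Sketch: true ACF and PACF of the AR(2) model in (i), side by side.
par(mfrow = c(1, 2))
acf_vals  <- ARMAacf(ar = c(1.5, -0.75), ma = 0, lag.max = 30)
pacf_vals <- ARMAacf(ar = c(1.5, -0.75), ma = 0, lag.max = 30, pacf = TRUE)
plot(0:30, acf_vals,  type = "h", xlab = "lag", ylab = "ACF",  main = "AR(2) ACF")
plot(1:30, pacf_vals, type = "h", xlab = "lag", ylab = "PACF", main = "AR(2) PACF")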
(3.6) Compare the ACF and PACF plots of the monthly temperature data from 1996-2014.
Would you fit an AR, MA or ARMA model to this data?
We simply do an exploratory analysis here. In Figure 2 we give the ACF and the PACF. Some care should be taken because we have not done any formal diagnostics (and using the error bars when there is dependence is very tentative). However, we see what appear to be correlations at all lags in the ACF, though there are only four large partial correlations. This suggests, possibly, that an AR(4) model may be appropriate. We use the function ar.yw to fit an AR process to this time series. It uses the AIC to select the order and chooses an autoregressive model of order four. However, remember we have not done a formal analysis.
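The commands behind this exploratory analysis, sketched under the assumption that the series is stored in a vector called monthlytemp (the object name is taken from the figure labels; the data-loading step is not shown in the notes):

## Sketch: sample ACF/PACF and a Yule-Walker AR fit with the order chosen by AIC.
par(mfrow = c(2, 1))
acf(monthlytemp,  lag.max = 20)   # sample ACF (top panel of Figure 2)
pacf(monthlytemp, lag.max = 20)   # sample PACF (bottom panel of Figure 2)
fit <- ar.yw(monthlytemp)         # Yule-Walker estimation, AIC order selection
fit$order                         # the notes report that order four is selected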
[Figure 2 here: two panels, each titled "Series monthlytemp"; top panel y-axis "ACF", bottom panel y-axis "Partial ACF", x-axis "Lag" running from 0 to 20.]
Figure 2: Top is the ACF of the monthly temperatures; bottom is the PACF of the monthly temperatures.